# Antiadherent and Antibiofilm Activity of Humulus lupulus L. Derived Products: New Pharmacological Properties
**Authors:** Marcin Rozalski; Bartlomiej Micota; Beata Sadowska; Anna Stochmal; Dariusz Jedrejek; Marzena Wieckowska-Szakiel; Barbara Rozalska
**Journal:** BioMed Research International (2013)
**Publisher:** Hindawi Publishing Corporation
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2013/101089
---
## Abstract
New antimicrobial properties of products derived from Humulus lupulus L., such as antiadherent and antibiofilm activities, were evaluated. The growth of gram-positive but not gram-negative bacteria was inhibited to different extents by these compounds. An extract of hop cones containing 51% xanthohumol was slightly less active against S. aureus strains (MIC range 31.2–125.0 μg/mL) than pure xanthohumol (MIC range 15.6–62.5 μg/mL). The spent hop extract, free of xanthohumol, exhibited lower but still relevant activity (MIC range 1–2 mg/mL). There were positive coactions of the hop cone extract, the spent hop extract, and xanthohumol with oxacillin against MSSA and with linezolid against MSSA and MRSA. Plant compounds present in the culture medium at sub-MIC concentrations decreased the adhesion of staphylococci to abiotic surfaces, which in turn inhibited biofilm formation. The rate of mature biofilm eradication by these products was significant: at MIC, the spent hop extract reduced biofilm viability by 42.8%, the hop cone extract by 74.8%, and pure xanthohumol by 86.5%. When the hop cone extract or xanthohumol concentration was increased, almost complete biofilm eradication was achieved (97–99%). This study reveals the potent antibiofilm activity of hop-derived compounds for the first time.
---
## Body
## 1. Introduction
Hops, the resinous female inflorescences of Humulus lupulus L. (Cannabaceae), called hop cones or strobiles, are used primarily in the brewing industry because of their bitter and aromatic properties. However, hop extracts and/or compounds such as polyphenols and acylphloroglucides are also reported to have antioxidant, estrogenic, sedative, and potential cancer-chemopreventive activities. Xanthohumol is the most abundant prenylated flavonoid in fresh hops, with properties overlapping those mentioned above [1–4]. Most interesting for our research is the antimicrobial activity of xanthohumol and other hop extract compounds, which could find new applications beyond the brewing industry as natural antimicrobial therapeutic substances. Research on the use of hop products to combat human pathogens has been conducted all over the world, and knowledge in this area from in vitro studies is already quite extensive, though further studies are still required [5–9]. Here, we propose an investigation into an entirely new potential use of hop products, as antibiofilm compounds and enhancers of antibiotic action. Biofilm formation has substantial implications for a variety of industries such as oil drilling, paper production, and food processing. It is also well known that bacterial and fungal pathogens that form biofilms are responsible for serious infections, which are usually very difficult to treat. This is due to the high resistance of biofilms (100–1000 times higher than that of a planktonic culture) to antibiotics, antiseptics, disinfectants, and host defense mechanisms [10]. This justifies the search for new therapeutic options, and plant-derived products are in the spotlight as promising sources or templates for new drugs [11, 12].

The aim of our study was to establish the antibacterial activities of a purified extract from hop cones containing 51% xanthohumol and of a spent hop extract depleted of xanthohumol, and to compare them with commercially available xanthohumol. The first stage of the study comprised MIC/MBC evaluation of the above products and of their synergy with antibiotics. We then examined the effects of these phytochemicals against staphylococcal biofilms, which have not been evaluated previously. Owing to the high resistance of biofilms, the ideal way of avoiding their engagement in in vivo pathogenesis or in other biofouling processes in the environment and industry would be to prevent their development. Therefore, one of our objectives was to evaluate the adhesion of S. aureus strains to glass/plastic surfaces and biofilm formation when hop constituents were continually present. We also assessed the capacity of these phytocompounds to eradicate an already-established biofilm.
## 2. Materials and Methods
### 2.1. Extraction, Isolation, and Chemical Analysis of Phytocompounds
Hop cones var. Marynka were grown at the experimental farm of the Institute of Soil Science and Plant Cultivation, State Research Institute of Pulawy, Poland. Plant material was collected during the 2010 season. Hop cones were dried at 55°C and kept in a cooler (4°C) pending extraction. Hop cones (100 g) were powdered and extracted with 2 L of 70% ethanol (EtOH) by boiling for 60 min. The extract was filtered and evaporated at 40°C to remove the organic phase, and the crude extract was then left at room temperature for 2 h until the sediment separated from the liquid phase. This process was accelerated by centrifuging the extract (15 min, 5,000 ×g). The precipitate, which contained most of the xanthohumol, was freeze-dried, suspended in 30% EtOH, and applied to a C18 preparative column (45 × 160 mm, 40–63 μm LiChroprep, Merck) preconditioned with 30% EtOH in 1% acetic acid (AcOH). The column was washed with linearly increasing concentrations of EtOH (from 30% to 100%) in 1% AcOH. Ten-mL fractions were collected and monitored by HPLC. Fractions containing xanthohumol were combined and freeze-dried. After this stage, the quantity of xanthohumol in the extract was measured by HPLC. The final xanthohumol content amounted to 51% of the dry matter, as determined on an HPLC system (Waters, USA) comprising a Waters 600 controller, a 616 pump with an in-line degasser AF, and a model 717 plus autosampler, as described previously [2]. A calibration curve was prepared for xanthohumol (Sigma, USA) at λ = 370 nm.

Spent hops, remaining after extraction of the hop cones with supercritical CO2, were supplied by the Fertilizer Research Institute of Pulawy, Poland. A dried sample was ground to a fine powder and suspended in acetone-water (70:30, v/v) at a solid-to-liquid ratio of 1:10, mixed at room temperature for 30 min, and then centrifuged for 15 min (4,000 rpm). The pellet was reextracted three times with 70% aqueous acetone at room temperature, with stirring, and the extracts were then filtered and concentrated to remove the organic solvent. Lipophilic compounds were removed from the extract using chloroform and dichloromethane. The defatted aqueous extract was then concentrated under vacuum to remove any residual solvent and freeze-dried. Before analysis, the dried extract was reconstituted at 2 mg/mL in 10% aqueous dimethyl sulfoxide (DMSO). Total phenols, flavanols, and proanthocyanidins were determined using the methods described, respectively, by Bordonaba and Terry [13], Swain and Hillis [14], and Rösch et al. [15]. Polyphenols in the extracts were identified using an Acquity Ultra Performance LC™ (UPLC™) system with a binary solvent manager (Waters Co., Milford, USA) and a Micromass Q-TOF micro mass spectrometer (Waters, Manchester, UK) equipped with an electrospray ionization (ESI) source operating in negative and positive modes. The individual components were characterized via their retention times and accurate molecular masses. The data obtained from UPLC/MS were analyzed with MassLynx 4.0 ChromaLynx™ Application Manager software.
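The quantification step behind the 51% figure is a standard linear calibration: peak areas of xanthohumol standards at 370 nm are fitted against known concentrations, and the extract's peak area is interpolated. A minimal sketch of that calculation is shown below; all numeric values (standard concentrations, peak areas, sample preparation) are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical calibration standards: xanthohumol concentration (mg/mL)
# versus HPLC peak area at 370 nm (arbitrary units).
std_conc = np.array([0.01, 0.025, 0.05, 0.1, 0.2])
std_area = np.array([124.0, 310.0, 622.0, 1240.0, 2479.0])

# Linear calibration curve: area = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def conc_from_area(area):
    """Invert the calibration curve to get concentration in mg/mL."""
    return (area - intercept) / slope

# Hypothetical sample: extract dissolved at 0.1 mg dry matter per mL.
prepared_conc = 0.1    # mg dry extract per mL of injected solution (assumed)
sample_area = 633.0    # measured peak area of the extract solution (assumed)

xn_in_sample = conc_from_area(sample_area)               # mg xanthohumol/mL
percent_dry_matter = 100.0 * xn_in_sample / prepared_conc
print(f"xanthohumol content: {percent_dry_matter:.0f}% of dry matter")
```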
### 2.2. Evaluation of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) of the Phytocompounds
The reference strains Staphylococcus aureus ATCC 29213, Enterococcus faecalis ATCC 29212, Escherichia coli NCTC 8196, and Pseudomonas aeruginosa NCTC 6749, as well as the clinical S. aureus strains A7 and D5, were used. Bacteria were grown for 24 h at 37°C on Mueller-Hinton Agar (MHA; BTL, Poland), and microbial suspensions (5 × 10^5 CFU/mL) were prepared in Mueller-Hinton Broth (MHB; BTL, Poland). MIC values were determined by a microdilution broth assay according to Clinical and Laboratory Standards Institute (CLSI) recommendations [16, 17]. Stock solutions of the hop cone extract, the spent hop extract, and xanthohumol (XH; Sigma, USA) were prepared in 50% (for the spent hop extract) or 100% DMSO. The concentration ranges used in the tests (in a twofold dilution series) were 0.0039–0.5 mg/mL for XH and 0.0078–2.0 mg/mL for the extracts. These ranges were based on general assumptions underpinning research on natural products for medical use, which set the upper limit for a biostatic/biocidal concentration at about 1 mg/mL for complex preparations and 0.1 mg/mL for pure chemical compounds.

Bacterial suspensions (100 μL) were mixed 1:1 with the serial dilutions of the phytocompounds under test. Microplate wells containing no extract but inoculated with the test strains served as positive controls; negative control wells contained the serial dilution of the phytocompound only. The final highest DMSO concentration was 1.25%, which did not affect bacterial growth. Plates were incubated at 37°C for 18 h, and the lowest concentration (highest dilution) showing no turbidity was recorded as the MIC. Since the color of the extracts at higher concentrations made turbidimetry difficult, bacterial growth on MHA (10 μL from each well after vigorous stirring; linear culture incubated for a further 18 h at 37°C) was tested concurrently. The concentrations bactericidal to ≥99.9% of the inoculum (MBC) were determined using the same solid-culture method (starting from four wells below the suspected MIC value). In each case, experiments were carried out in quadruplicate in two independent experiments. To test whether the compounds induced cell aggregation, 25 μL of bacterial suspension, treated as described above, was placed on a glass microscope slide and gently smeared. After air drying, heat fixation, and Gram staining, the slides were examined by light microscopy. The size and number of clusters were compared with those of controls (nontreated bacteria) according to the score established by Cushnie et al. [18].
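In bookkeeping terms, the broth-microdilution readout reduces to picking the lowest concentration in the twofold series whose well stays clear (MIC), and the lowest concentration achieving a ≥99.9% kill of the starting inoculum on solid medium (MBC). A minimal Python sketch of this logic is given below; the growth flags and CFU counts are invented for illustration and are not data from this study.

```python
def mic_from_series(concs, growth):
    """Lowest concentration with no visible growth (MIC).

    concs  -- concentrations in mg/mL, in any order
    growth -- dict mapping concentration -> True if the well was turbid
    """
    inhibitory = [c for c in concs if not growth[c]]
    return min(inhibitory) if inhibitory else None

def mbc_from_counts(concs, cfu_counts, inoculum_cfu):
    """Lowest concentration killing >= 99.9% of the inoculum (MBC)."""
    bactericidal = [c for c in concs
                    if cfu_counts[c] <= 0.001 * inoculum_cfu]
    return min(bactericidal) if bactericidal else None

# Twofold dilution series for xanthohumol, 0.0039-0.5 mg/mL (as in the text).
concs = [0.5 / 2**i for i in range(8)]

# Hypothetical readings for one strain: turbid only below 0.015 mg/mL,
# complete kill on solid medium at 0.25 mg/mL and above.
growth = {c: c < 0.015 for c in concs}
cfu = {c: (0 if c >= 0.25 else 5e4) for c in concs}

print("MIC:", mic_from_series(concs, growth), "mg/mL")
print("MBC:", mbc_from_counts(concs, cfu, inoculum_cfu=5e5), "mg/mL")
```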
### 2.3. Determination of Antibiotic Synergy with Hop-Derived Products, Assessed by the E-Test Strip/Agar Dilution Method
Prepared inocula of S. aureus ATCC 29213 (MSSA) and the clinical S. aureus MRSA strains D5 and A7 (1 × 10^8 CFU/mL) were spread with a sterile cotton swab on (a) control MHA or (b) MHA containing the hop cone extract, the spent hop extract, or xanthohumol (at a final concentration of 1/2 MIC or 1/4 MIC). In the first stage, a standard disk-diffusion test was performed according to CLSI recommendations [16, 17], using the following antibiotic set: oxacillin (1 μg/disc), cefoxitin (30 μg/disc), clindamycin (2 μg/disc), vancomycin (30 μg/disc), and erythromycin (15 μg/disc) (Mast Diagnostics, UK). Antibiotic gradient strips (E-test, bioMérieux, France) containing oxacillin, vancomycin, or linezolid (concentration range 0.016–256 mg/L) were then used; the MHA plates with the overlaid strips were incubated at 37°C for 24 h, and the growth inhibition zones were measured. Differences in MIC values between the control and test plates were recorded (end points were determined according to the manufacturer's instructions).
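The readout of this assay is simply the ratio of the antibiotic's MIC on control agar to its MIC on agar supplemented with the hop product; a ratio above 1 indicates potentiation. A tiny helper to make that explicit (the function is our illustrative sketch, not the authors' analysis code; the example values are taken from Table 2):

```python
def potentiation(mic_control, mic_with_phyto):
    """Fold-reduction of an antibiotic's MIC caused by the phytocompound."""
    return mic_control / mic_with_phyto

# Oxacillin against S. aureus ATCC 29213 (values from Table 2, in ug/mL):
print(potentiation(0.125, 0.064))  # with 1/2 MIC xanthohumol -> ~1.95-fold
print(potentiation(1.0, 1.0))      # vancomycin: no effect -> 1.0-fold
```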
### 2.4. S. aureus Adhesion, Biofilm Formation, and Biofilm Eradication under the Influence of Hop Constituents
A suspension of S. aureus ATCC 29213 (OD = 0.6, corresponding to a density of about 1 × 10^7 CFU/mL), prepared from a fresh overnight culture in tryptic soy broth (TSB, Difco, USA) supplemented with 0.25% glucose (TSB/Glc), was added (100 μL) to the wells of a 96-well tissue culture polystyrene microplate (Nunc, Denmark). To estimate bacterial adhesion, standardized glass carriers (5 mm diameter; Thermo Scientific, Germany) were placed in the wells, followed by 100 μL of the phytochemicals under test at final concentrations of 1/2 MIC, 1/4 MIC, and 1/8 MIC (in quadruplicate for each concentration). Glass carriers placed in bacterial culture alone (without hop constituents) were used as positive controls; negative control wells contained glass carriers in phytocompounds (1/2 MIC) and TSB/Glc only. After 2 h of incubation at 37°C, the glass carriers were removed, vortexed (3 min), serially diluted (10-fold dilution series in 0.85% NaCl), and cultured on MHA plates (100 μL/plate; 24 h, 37°C). The percentage of bacterial adhesion in the presence of the phytocompounds was compared with that in the control culture on the basis of CFU counts.

To evaluate biofilm formation, a LIVE/DEAD BacLight Bacterial Viability kit (Molecular Probes, USA) was used as recommended by the manufacturer. Bacterial suspensions (OD = 0.6) were cultured in microplates (100 μL/well) at 37°C in the absence (control) or constant presence of the phytocompounds (1:1 ratio with bacteria) at final concentrations of 1/2, 1/4, and 1/8 MIC. After 24 h of incubation, free-floating bacterial cells were gently removed from the wells, and the remaining biofilm was stained with Syto9 and propidium iodide (PI) for 15 min in the dark. Finally, the dyes were replaced with water (200 μL/well), and the fluorescence of the wells was measured (excitation 485 nm/emission 535 nm for green Syto9; excitation 485 nm/emission 620 nm for red PI). The results are presented as the percentage of biofilm biomass, calculated from the mean fluorescence values ± S.D. of the control (taken as 100%) and test wells. Another set of plates was used to investigate the influence of the MIC or 2× MIC of each phytocompound on preformed S. aureus ATCC 29213 biofilms (24 h old), starting from the same bacterial suspension. After a subsequent 24 h incubation of the staphylococcal biofilm at 37°C with or without (control) the hop constituents under test, the degree of biofilm survival (%) was assessed as described above, using the LIVE/DEAD Bacterial Viability staining.
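Both readouts are simple normalizations against the untreated control: adhesion as CFU recovered from a treated carrier relative to the control carrier, and biofilm biomass/viability as well fluorescence relative to control fluorescence (control = 100%). A minimal sketch under invented replicate readings (none of the numbers below are from the study):

```python
import statistics

def percent_of_control(test_values, control_values):
    """Mean of test readings as a percentage of the mean control reading,
    plus the standard deviation of the per-replicate percentages."""
    control_mean = statistics.mean(control_values)
    percents = [100.0 * v / control_mean for v in test_values]
    return statistics.mean(percents), statistics.stdev(percents)

# Hypothetical CFU counts recovered from glass carriers (quadruplicate).
control_cfu = [9.5e5, 1.1e6, 1.0e6, 9.8e5]
treated_cfu = [2.1e5, 1.8e5, 2.4e5, 2.0e5]   # e.g. 1/2 MIC xanthohumol
adhesion, sd = percent_of_control(treated_cfu, control_cfu)
print(f"adhesion: {adhesion:.0f}% +/- {sd:.0f}% of control")

# Hypothetical Syto9 (green, live) fluorescence of biofilm wells.
control_fluo = [35200, 34100, 36800, 35500]
treated_fluo = [4600, 5100, 4400, 4900]
viability, sd = percent_of_control(treated_fluo, control_fluo)
print(f"biofilm viability: {viability:.0f}% +/- {sd:.0f}% of control")
```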
### 2.5. Statistical Analysis
Where appropriate, differences in parameters were tested for significance using the Mann-Whitney U test in Statistica 5.0 (StatSoft Inc.).
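For readers reproducing the analysis with current tools, the equivalent of the test used here is a two-sided Mann-Whitney U comparison of control and treated replicate readings, for example with SciPy. The numbers below are invented for illustration (the original analysis was run in Statistica 5.0):

```python
from scipy.stats import mannwhitneyu

# Hypothetical biofilm viability readings (% of control) for two groups.
control = [100.0, 97.5, 102.0, 99.1, 101.3, 98.6]
treated = [14.2, 12.9, 15.6, 13.4, 12.1, 14.8]   # e.g. xanthohumol at MIC

stat, p_value = mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```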
## 3. Results and Discussion
Separation of the precipitate from the ethanolic extract of the powdered hops on a preparative C18 column yielded a fraction containing 51% xanthohumol by dry weight. The HPLC profile showed one predominant compound; the remaining small peaks belonged to isoxanthohumol and alpha-acids, but their peak areas indicated that they did not exceed 2% of the sample dry mass, as described previously [2].

The total phenol content of the spent hop extract was about 24%, half of the phenols being flavanols. A total of 10 flavan-3-ols were identified using ultraperformance liquid chromatography-mass spectrometry: two monomers ((+)-catechin and (−)-epicatechin), four dimers, and four trimers. The spent hop extract also contained four hydroxycinnamates (neochlorogenic acid, chlorogenic acid, cryptochlorogenic acid, and feruloylquinic acid), and flavonols were represented by quercetin and kaempferol derivatives. This extract contained no xanthohumol (removed by additional purification steps), since our research was intentionally directed towards the utilization of the spent hops remaining after extraction of this compound from the waste.

The hop cone extract, the spent hop extract, and (for comparison) pure xanthohumol were assessed for antimicrobial activity in vitro. Initially, the antibacterial effect of these products was evaluated against a panel of reference strains. The growth of the gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa) was not inhibited by any of the investigated compounds at relevant concentrations (the MICs exceeded the highest concentrations used; data not shown). In contrast, the hop cone extract was a potent antagonist of the gram-positive Staphylococcus aureus ATCC 29213 (MIC 31.3 μg/mL) and Enterococcus faecalis ATCC 29212 (MIC 62.5 μg/mL), exhibiting half the efficacy of pure xanthohumol (Table 1). The spent hop extract (free of xanthohumol but containing significant amounts of various quercetin and kaempferol derivatives, catechin, and epicatechin), not previously studied in this regard, exhibited much lower but still significant activity against gram-positives (MIC range 1–2 mg/mL). For comparison, as presented in our previous work, the MICs of the reference flavonoids quercetin and naringin were >300 μg/mL, and those of thymol and kaempferol were 18.75 μg/mL and 62.5 μg/mL, respectively [19].

A standard set of microorganisms used to assess the antimicrobial activity of various preparations covers a broad panel of microbes, including both gram-positive (represented here by staphylococci and enterococci) and gram-negative (fermenting and nonfermenting rods) bacteria. Their behavior towards biologically active substances can be ascribed to their structure or metabolism. Our results divided the test microorganisms into two subgroups, susceptible and resistant, corresponding to the division into gram-positive and gram-negative bacteria. This points to the importance of the structure and permeability of the cell wall/membrane, rather than the metabolic activity of these microorganisms. It is also significant that, as described by Sakai et al. [20], xanthohumol (an inhibitor of diacylglycerol acyltransferase) and natural products containing this compound (hop extract) are potent inhibitors of lipid metabolism, which can significantly affect the composition and stability of the microbial cell wall/membrane.
However, our experimental results do not allow us to explain accurately the mechanisms of biological activity of the tested hop cone extract, spent hop extract, and xanthohumol; this requires further detailed studies.

Table 1
Minimum inhibitory and bactericidal concentrations (MIC/MBC) of hop-derived compounds against selected S. aureus strains, determined by a microdilution broth assay accompanied by assessment of bacterial growth on solid media. All values in mg/mL.

| Strain | Spent hops extract MIC | Spent hops extract MBC | Hop cones extract MIC | Hop cones extract MBC | Xanthohumol MIC | Xanthohumol MBC |
|---|---|---|---|---|---|---|
| S. aureus 29213 | 2 | >2 | 0.031 | 0.065 | 0.015 | >0.5 |
| S. aureus D5 | 1 | 2 | 0.125 | 0.5 | 0.125 | 0.5 |
| S. aureus A7 | 2 | 2 | 0.031 | 0.031 | 0.125/0.062 | 0.25 |
| E. faecalis 29212 | >2 | >2 | 0.062 | 1 | 0.062 | >0.5 |

The obtained results encouraged us to undertake further studies on the activities of H. lupulus-derived products against multidrug-resistant S. aureus strains. For this experiment we selected two clinical isolates, the MRSA (methicillin-resistant S. aureus) strains A7 and D5, members of an important group of “alert” human pathogens. The hop-derived components demonstrated potent activity, although their MICs against the D5 strain were higher than those reported above for S. aureus ATCC 29213 (MSSA) (Table 1). Following MIC evaluation, inhibition of bacterial growth on solid media revealed a ≥99.9% reduction of the original inoculum by the hop cone and spent hop extracts and by xanthohumol, proving that the phytocompounds tested exhibited concentration-dependent bactericidal effects (MBC) (Table 1). To test whether the compounds induced cell aggregation, which could influence CFU counts during MBC testing, samples of bacteria incubated with the phytocompounds were compared under light microscopy with controls (untreated bacteria). Most of the bacteria incubated with the hop cone and spent hop extracts (at MIC) formed numbers of pairs and small clusters similar to the control, while bacteria treated with xanthohumol were found mainly in small and large aggregates (data not shown). This effect disappeared when the concentration of xanthohumol was reduced to 1/2 MIC. Therefore, half the MIC and two lower concentrations (1/4 and 1/8 MIC) were used in the experiments on the influence of the phytochemicals on adhesion, biofilm formation, and synergy with antibiotics. As proposed by Cushnie et al. [18] and Cushnie and Lamb [21], aggregation of bacterial cells should always be taken into account when interpreting data from assays with natural flavonoids and flavonoid-rich phytochemical preparations.

It is well known that the hop extraction method used determines the composition of the products and their biological activity [22–25]. Papers describing the antibacterial actions of different hop compounds are mainly concerned with the activities of the bitter acids humulone and lupulone and of the flavonoid xanthohumol [9, 26–29]. Xanthohumol has also been reported as the main component with anti-infective effects against viruses and fungi [6].

As pathogens become more and more resistant to available antibiotics, posing a significant medical problem, the need for alternative treatments grows. Several studies have suggested that combining plant- or animal-derived natural compounds with antibiotics is a new strategy for developing therapies against infections [12, 21, 30]. Here, we report that the antimicrobial activities of commercial antibiotics against S. aureus are enhanced by the addition of hop compounds. MIC values were decreased by the coaction of hop products with oxacillin (a β-lactam) and linezolid (an oxazolidinone) but not with vancomycin (a glycopeptide) (Table 2). When the hop cone extract, the spent hop extract, or xanthohumol was incorporated into the agar medium at 1/2 MIC or 1/4 MIC, the MIC of oxacillin against the MSSA strain S. aureus ATCC 29213 was reduced; an example is shown in Figure 1. Since hop products have previously been demonstrated to affect cell wall and membrane integrity, it is possible that they facilitate antibiotic penetration. This could explain why they potentiate the action of oxacillin, which inhibits cell wall synthesis by binding to specific penicillin-binding proteins (PBPs) within the bacterial cell wall.
Unfortunately, the sensitivity of the MRSA strains was not increased by the hop derivatives. Methicillin resistance in S. aureus is primarily mediated by the mecA gene, which encodes the modified penicillin-binding protein PBP 2a. This protein is also located in the bacterial cell wall but has a lower binding affinity for β-lactams. Although all cells in a population of S. aureus can carry the mecA gene, often only a few of them express it; thus, both resistant and susceptible bacteria can exist in the same culture [31]. Our results confirm this phenomenon: although the MIC of oxacillin did not decrease when the whole MRSA population was grown in the presence of the phytochemicals, growth was weakened. As mentioned, the extracts of hop cones and spent hops and xanthohumol did not increase the sensitivity of the reference or clinical strains to vancomycin. However, there was increased sensitivity to linezolid, a bacteriostatic antibiotic that inhibits the initiation of protein synthesis. Linezolid is effective against infections caused by various gram-positive pathogens, including multidrug-resistant enterococci and MRSA, although since 2008 resistant strains have occasionally been isolated, fortunately still at low incidence [32]. The strengthening of its action that we were able to achieve is interesting and suggests that hop products probably facilitate the penetration of this antibiotic into the bacterial cell.

Table 2
Synergistic activity of subinhibitory concentrations of Humulus lupulus constituents with antibiotics belonging to various therapeutic classes, determined by the E-test strip/agar dilution method against S. aureus ATCC 29213. Antibiotic MICs in µg/mL.

| Treatment | Oxacillin | Vancomycin | Linezolid |
|---|---|---|---|
| Control | 0.125 | 1.0 | 0.5 |
| Hop cones extract, 1/2 MIC (15.6 µg/mL) | 0.094 | 1.0 | 0.38 |
| Hop cones extract, 1/4 MIC (7.8 µg/mL) | 0.094 | 1.0 | 0.5 |
| Spent hops extract, 1/2 MIC (1000 µg/mL) | 0.094 | 1.0 | 0.5 |
| Spent hops extract, 1/4 MIC (500 µg/mL) | 0.125 | 1.0 | n.t. |
| Xanthohumol, 1/2 MIC (7.8 µg/mL) | 0.064 | 1.0 | 0.38 |
| Xanthohumol, 1/4 MIC (3.9 µg/mL) | 0.094 | 1.0 | 0.5 |

n.t.: not tested.

Figure 1
Synergistic effect of oxacillin and xanthohumol against S. aureus ATCC 29213 evaluated by the E-test strip/agar dilution method. (a) The control plate (MHA); (b) the test plate (MHA with xanthohumol at a final dilution equal to 1/4 MIC).

Why did we use Staphylococcus aureus as the model organism in our study? It would obviously also be interesting to test the susceptibility of enterococci to products derived from hops; these bacteria, like staphylococci, constitute a serious epidemiological risk, since they possess a variety of natural antibiotic resistance mechanisms and are also capable of acquiring new resistance genes and/or mutations. However, our research interest focused on S. aureus for several reasons: first, these bacteria produce a large number of virulence factors important for pathogenesis; second, they are on the list of multiresistant “alert” pathogens; finally, biofilm formation by these bacteria is a major medical problem. Since bacteria in biofilms are extremely resistant to antimicrobial agents, biofilm-associated infections are very difficult to treat, especially when the causative organism is multidrug resistant [10]. Thus, another question we asked was whether the hop-derived compounds can be considered effective in antibiofilm therapy. We demonstrated that they were effective (at 1/4 or 1/2 MIC) against staphylococcal adhesion evaluated after 2 h (inhibition range 50–90%) and against biofilm formation evaluated after 24 h of coincubation (Figure 2). Moreover, these extracts, or pure xanthohumol, applied to already-formed biofilms at the relatively low concentrations of MIC or 2× MIC significantly reduced the viability of the mature biofilm. The most potent in this respect was xanthohumol, which reduced the biofilm by 86.5 ± 1.5% (at MIC), whereas the spent hop extract caused 42.8 ± 16.3% and the hop cone extract 74.9 ± 6.7% biofilm eradication at MIC. Given the known high resistance of biofilm populations, this observation should be considered promising, since even partial destruction of a biofilm by antibiotics/antiseptics is encouraging.

Figure 2
Antibiofilm activity of Humulus lupulus-derived extracts and xanthohumol against S. aureus ATCC 29213. Bacteria were cultured for 24 h in the absence or constant presence of the phytocompounds at 1/2, 1/4, and 1/8 MIC. Biofilm formation was assessed using a LIVE/DEAD BacLight kit. Results are presented as the percentage of viable biomass compared with the control and are means ± S.D. from two independent experiments performed in quadruplicate. Black bars: control; grey bars: spent hops; open bars: hop cones; striped bars: xanthohumol.

However, our experimental results to date do not allow the mechanisms of biological activity of the tested hop cone extract, spent hop extract, and xanthohumol to be explained adequately. It can be supposed that effective penetration across the cell wall and/or membrane damage is the most important property. These compounds could also influence bacterial cell surface hydrophobicity and, depending on sortase activity, the assembly of adhesins in the cell wall [33]. Thus, our plant-derived compounds could interfere with the adhesion step essential for successful biofilm development. Their observed effects suggest that they easily penetrate biological membranes, probably without the help of active transport mechanisms [21], but this possibility needs further research.
## 4. Conclusions
In summary, the present study has revealed the potent antibiofilm activity of hop-derived compounds for the first time. This is interesting particularly with regard to the action of the spent hop extract, which is a quantitatively significant waste product of the brewing industry. This observation therefore has potential for practical application, since the spent hop extract, although containing no xanthohumol, is still a good source of substances with antimicrobial and antibiofilm activities. Although the mechanisms of biological activity of the phytocompounds tested are not yet clear (we can only discuss possibilities, as above), our results suggest that the use of hop-derived constituents can be extended beyond the brewing industry to prospective medical applications.
## Abstract
New antimicrobial properties of products derived fromHumulus lupulus L. such as antiadherent and antibiofilm activities were evaluated. The growth of gram-positive but not gram-negative bacteria was inhibited to different extents by these compounds. An extract of hop cones containing 51% xanthohumol was slightly less active against S. aureus strains (MIC range 31.2–125.0 μg/mL) than pure xanthohumol (MIC range 15.6–62.5 μg/mL). The spent hop extract, free of xanthohumol, exhibited lower but still relevant activity (MIC range 1-2 mg/mL). There were positive coactions of hop cone, spent hop extracts, and xanthohumol with oxacillin against MSSA and with linezolid against MSSA and MRSA. Plant compounds in the culture medium at sub-MIC concentrations decreased the adhesion of Staphylococci to abiotic surfaces, which in turn caused inhibition of biofilm formation. The rate of mature biofilm eradication by these products was significant. The spent hop extract at MIC reduced biofilm viability by 42.8%, the hop cone extract by 74.8%, and pure xanthohumol by 86.5%. When the hop cone extract or xanthohumol concentration was increased, almost complete biofilm eradication was achieved (97–99%). This study reveals the potent antibiofilm activity of hop-derived compounds for the first time.
---
## Body
## 1. Introduction
Hops, the resinous female inflorescences ofHumulus lupulus L. (Cannabaceae) (called hop cones or strobiles), are used primarily in the brewing industry because of their bitter and aromatic properties. However, hop extracts and/or compounds such as polyphenols and acylphloroglucides are also reported to have antioxidant, estrogenic, sedative, and potential cancer-chemopreventive activities. Xanthohumol is the most abundant prenylated flavonoid in fresh hops, with properties overlapping those mentioned above [1–4]. Most interesting for our research is the antimicrobial activity of xanthohumol and other hop extract compounds, which could find new applications beyond the brewing industry as natural antimicrobial therapeutic substances. Research on the use of hop products to combat human pathogens has been conducted all over the world, and knowledge in this area from in vitro studies is already quite extensive, though further studies are still required [5–9]. Here, we wish to propose an investigation into an entirely new potential use of hop products, as antibiofilm compounds and enhancers of antibiotic action. Biofilm formation has substantial implications for a variety of industries such as oil drilling, paper production, and food processing. It is also well known that bacterial and fungal pathogens that form biofilms are responsible for serious infections, which are usually very difficult to treat. This is due to the high resistance of biofilms (100–1000 times higher than for a planktonic culture) to antibiotics, antiseptics, disinfectants, and host defense mechanisms [10]. This justifies the search for new therapeutic options, and plant-derived products are in the spotlight as promising sources or templates for new drugs [11, 12].The aim of our study was to establish the antibacterial activities of a purified extract from hop cones containing 51% xanthohumol and a spent hop extract depleted of xanthohumol and to compare them with commercially available xanthohumol. The first stage of the study comprised MIC/MBC evaluation of the above products and their synergy with antibiotics. We then examined the effects of these phytochemicals against staphylococcal biofilms, which have not been evaluated previously. Owing to the high resistance of biofilms, the ideal way of avoiding their engagement inin vivo pathogenesis or in other biofouling processes in the environment and industry would be to prevent their development. Therefore, one of our objectives was to evaluate the adhesion of S. aureus strains to glass/plastic surfaces and biofilm formation when hop constituents were continually present. We also assessed the capacity of these phytocompounds to eradicate an already-established biofilm.
## 2. Materials and Methods
### 2.1. Extraction, Isolation, and Chemical Analysis of Phytocompounds
Hop cones var. Marynka were grown at the experimental farm of the Institute of Soil Science and Plant Cultivation, State Research Institute of Pulawy, Poland. Plant material was collected during the 2010 season. Hop cones were dried at 55°C and kept in a cooler (4°C) pending extraction. Hop cones (100 g) were powdered and extracted with 2 L of 70% ethanol (EtOH) by boiling for 60 min. The extract was filtered and evaporated at 40°C to remove the organic phase, and then the crude extract was left at room temperature for 2 h until the sediment was separated from the liquid phase. This process was accelerated by centrifuging the extract (15 min, 5.000 ×g). The precipitate, which contained most of the xanthohumol, was freeze-dried, suspended in 30% EtOH, and applied to a C18 preparative column (45 × 160 mm, 40–63μm LiChroprep, Merck) previously preconditioned with 30% EtOH in 1% acetic acid (AcOH). The column was washed with linearly increasing concentrations of EtOH (from 30% to 100%) in 1% AcOH. Ten mL fractions were collected and monitored by HPLC. Fractions containing xanthohumol were combined and freeze-dried. After this stage, the quantity of xanthohumol in the extract was measured by HPLC. The final xanthohumol content amounted to 51% of the dry matter, as determined by the HPLC system (Waters, USA), which comprised a Waters 600 controller, a 616 pump with an in-line degasser AF, and a model 717 plus autosampler, as described previously [2]. A calibration curve was prepared for xanthohumol (Sigma, USA) at λ=370 nm.Spent hops, after extraction of the hop cones by supercritical CO2, were supplied by the Fertilizer Research Institute of Pulawy, Poland. A dried sample was ground to fine powder and suspended in acetone-water (70 : 30, v/v) at a solid to liquid ratio 1 : 10, mixed at room temperature for 30 min, and then centrifuged for 15 min (4.000 rpm). The pellet was reextracted three times with 70% aqueous acetone at room temperature, with stirring, and then the extracts were filtered and concentrated to remove the organic solvent. Lipophilic compounds were removed from the extract using chloroform and dichloromethane. The defatted aqueous extract was then concentrated to remove any residual solvent, concentrated under vacuum, and freeze-dried. Before analysis, the dried extract was reconstituted at 2 mg/mL in 10% aqueous dimethyl sulfoxide (DMSO). Total phenols, flavanols, and proanthocyanidins were determined using the methods described, respectively, by Bordonaba and Terry [13], Swain and Hillis [14], and Rösch et al. [15]. Polyphenols in the extracts were identified using an Acquity Ultra Performance LCTM system (UPLCTM) with a binary solvent manager (Waters Co., Milford, USA) and a Micromass Q-TOF Micromass spectrometer (Waters, Manchester, UK), equipped with an electrospray ionization (ESI) source operating in negative and positive modes. The individual components were characterized via their retention times and accurate molecular masses. The data obtained from UPLC/MS were analyzed with MassLynx 4.0 ChromaLynxTM Application Manager software.
### 2.2. Evaluation of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) of the Phytocompounds
The reference strains ofStaphylococcus aureusATCC 29213, Enterococcus faecalis ATCC 29212, Escherichia coli NCTC 8196, Pseudomonas aeruginosa NCTC 6749, and the clinical S. aureusstrains A7 and D5 were used. Bacteria were grown for 24 h at 37°C on Müeller-Hinton Agar—MHA (BTL, Poland), and microbial suspensions (5×105 CFU/mL) were prepared in Müeller-Hinton Broth—MHB (BTL, Poland). MIC values were determined by a microdilution broth assay according to Clinical and Laboratory Standards Institute (CLSI) recommendations [16, 17]. Stock solutions of hop cone, spent hop extracts, and xanthohumol (Sigma, USA) were prepared in 50% spent hop extract or 100% DMSO. The concentration ranges of the compounds used in the tests (using a twofold dilution system) were 0.0039–0.5 mg/mL for XH and 0.0078–2.0 mg/mL for the extracts. These ranges were based on general assumptions underpinning research on natural products for medical use, which set the upper limit for biostatic/biocidal concentration at about 1 mg/mL for complex preparations and 0.1 mg/mL for pure chemical compounds.Bacterial suspensions (100μL) were mixed 1 : 1 with the serial dilutions of the phytocompounds under test. Microplate wells containing no extract but inoculated with test strains were used as positive controls. Negative control wells consisted of the serial dilution of the phytocompound only. The final highest DMSO concentration was 1.25%, which did not affect bacterial growth. Plates were incubated at 37°C for 18 h, and the highest dilution showing no turbidity was recorded as the MIC. Since the color of the extracts at higher concentrations made turbidimetry difficult, bacterial growth on MHA (10 μL from each well after vigorous stirring; linear culture incubated for the subsequent 18 h at 37°C) was tested concurrently. The concentrations of the compounds bactericidal to ≥99.9% of the inoculum (MBC) were determined using the same method of solid culture (starting from four wells below the suspected MIC value). In each case, experiments were carried out in quadruplicate on two different experiments. In order to test whether the compounds induced cell aggregation, 25 μL of bacterial suspension, treated as described above, was placed on a glass microscope slide and gently smeared. After air drying, heat fixation, and gram staining, the slides were examined by light microscopy. The size and quantity of clusters were compared to the controls (nontreated bacteria) according to the score established by Cushnie et al. [18].
### 2.3. Determination of Antibiotic Synergy with Hop-Derived Products, Assessed by the E-Test Strip/Agar Dilution Method
Prepared inocula ofS. aureus ATCC 29213 (MSSA) and the clinical S. aureus MRSA strains D5 and A7 (1×108 CFU/mL) were spread with a sterile cotton swab on (a) control MHA or (b) MHA containing hop cone or spent hop extract or xanthohumol (at a final concentration of 1/2 MIC or 1/4 MIC). At the first stage a standard disk-diffusion test was performed according to CLSI recommendations [16, 17], using the following antibiotic set: oxacillin (1 μg/disc), cefoxitin (30 μg/disc), clindamycin (2 μg/disc), vancomycin (30 μg/disc), and erythromycin (15 μg/disc) (Mast Diagnostics, UK). Antibiotic gradient strips (E-test, BioMerieux, France) containing oxacillin, vancomycin, or linezolid (concentration range 0.016–256 mg/L) were then used; the MHA plates with the overlayered strips were incubated at 37°C for 24 h, and the growth inhibition zones were measured. Differences in MIC values between the control and test plates were recorded (end points were determined according to the manufacturer’s instructions).
### 2.4.S. aureus Adhesion, Biofilm Formation, and Biofilm Eradication under the Influence of Hop Constituents
A suspension ofS. aureus ATCC 29213 (OD = 0.6, which corresponded to a density about 1×107 CFU/mL) prepared from a fresh overnight culture in tryptic soy broth (TSB, Difco, USA) supplemented with 0.25% glucose (TSB/Glc) was added (100 μL) to the wells of a 96-well tissue culture polystyrene microplate (Nunc, Denmark). To estimate bacterial adhesion, standardized glass carriers (5 mm diameter; Thermo Scientific, Germany) were put into the wells followed by 100 μL of the phytochemicals under test at final concentrations of 1/2 MIC, 1/4 MIC, and 1/8 MIC (in quadruplicate for each concentration). Glass carriers placed in bacterial culture alone (without hop constituents) were used as positive controls. Negative control wells consisted of glass carriers in phytocompounds (1/2 MIC) and TSB/Glc only. After 2 h incubation at 37°C, the glass carriers were removed, vortexed (3 min), serially diluted (10-fold dilution series in 0.85% NaCl), and cultured on MHA plates (100 μL/plate; 24 h, 37°C). The percentage of bacterial adhesion in the presence of phytocompounds was compared to that in the control culture on the basis of CFU counts. To evaluate biofilm formation, a LIVE/DEAD BacLight Bacterial Viability kit (Molecular Probes, USA) was used as recommended by the manufacturer. Bacterial suspensions (OD = 0.6) were cultured on microplates (100 μL/well) at 37°C in the absence (control) or constant presence of the phytocompounds (1 : 1 ratio with bacteria) at their final 1/2, 1/4, and 1/8 MICs. After 24 h incubation, free-floating bacterial cells were gently removed from the wells, and the remaining biofilm was stained with Syto9 and propidium iodide (PI) (15 min in the dark). Finally, the dyes were replaced with water (200 μL/well) and the fluorescence of the wells (at 485ex/535em nm for green Syto9 and at 485ex/620em nm for red PI) was measured. The results are presented as percentage biofilm biomass calculated from the mean fluorescence values ±S.D. of the control (considered as 100%) and test wells. Another set of plates was designed to investigate the influence of the MIC or 2xMIC of each phytocompound on preformed S. aureus ATCC 29213 biofilms (24 h old), starting from the same bacterial suspension. After a subsequent 24 h incubation of the staphylococcal biofilm at 37°C with or without (control) the hop constituents under test, the degree of biofilm survival (%) was assessed as described above using staining with the LIVE/DEAD Bacterial Viability assay.
### 2.5. Statistical Analysis
If necessary, differences in parameters were tested for significance using the Mann-WhitneyU test and the program Statistica 5.0 [Stat Soft Inc.].
## 2.1. Extraction, Isolation, and Chemical Analysis of Phytocompounds
Hop cones var. Marynka were grown at the experimental farm of the Institute of Soil Science and Plant Cultivation, State Research Institute of Pulawy, Poland. Plant material was collected during the 2010 season. Hop cones were dried at 55°C and kept in a cooler (4°C) pending extraction. Hop cones (100 g) were powdered and extracted with 2 L of 70% ethanol (EtOH) by boiling for 60 min. The extract was filtered and evaporated at 40°C to remove the organic phase, and then the crude extract was left at room temperature for 2 h until the sediment was separated from the liquid phase. This process was accelerated by centrifuging the extract (15 min, 5.000 ×g). The precipitate, which contained most of the xanthohumol, was freeze-dried, suspended in 30% EtOH, and applied to a C18 preparative column (45 × 160 mm, 40–63μm LiChroprep, Merck) previously preconditioned with 30% EtOH in 1% acetic acid (AcOH). The column was washed with linearly increasing concentrations of EtOH (from 30% to 100%) in 1% AcOH. Ten mL fractions were collected and monitored by HPLC. Fractions containing xanthohumol were combined and freeze-dried. After this stage, the quantity of xanthohumol in the extract was measured by HPLC. The final xanthohumol content amounted to 51% of the dry matter, as determined by the HPLC system (Waters, USA), which comprised a Waters 600 controller, a 616 pump with an in-line degasser AF, and a model 717 plus autosampler, as described previously [2]. A calibration curve was prepared for xanthohumol (Sigma, USA) at λ=370 nm.Spent hops, after extraction of the hop cones by supercritical CO2, were supplied by the Fertilizer Research Institute of Pulawy, Poland. A dried sample was ground to fine powder and suspended in acetone-water (70 : 30, v/v) at a solid to liquid ratio 1 : 10, mixed at room temperature for 30 min, and then centrifuged for 15 min (4.000 rpm). The pellet was reextracted three times with 70% aqueous acetone at room temperature, with stirring, and then the extracts were filtered and concentrated to remove the organic solvent. Lipophilic compounds were removed from the extract using chloroform and dichloromethane. The defatted aqueous extract was then concentrated to remove any residual solvent, concentrated under vacuum, and freeze-dried. Before analysis, the dried extract was reconstituted at 2 mg/mL in 10% aqueous dimethyl sulfoxide (DMSO). Total phenols, flavanols, and proanthocyanidins were determined using the methods described, respectively, by Bordonaba and Terry [13], Swain and Hillis [14], and Rösch et al. [15]. Polyphenols in the extracts were identified using an Acquity Ultra Performance LCTM system (UPLCTM) with a binary solvent manager (Waters Co., Milford, USA) and a Micromass Q-TOF Micromass spectrometer (Waters, Manchester, UK), equipped with an electrospray ionization (ESI) source operating in negative and positive modes. The individual components were characterized via their retention times and accurate molecular masses. The data obtained from UPLC/MS were analyzed with MassLynx 4.0 ChromaLynxTM Application Manager software.
## 2.2. Evaluation of Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) of the Phytocompounds
The reference strains ofStaphylococcus aureusATCC 29213, Enterococcus faecalis ATCC 29212, Escherichia coli NCTC 8196, Pseudomonas aeruginosa NCTC 6749, and the clinical S. aureusstrains A7 and D5 were used. Bacteria were grown for 24 h at 37°C on Müeller-Hinton Agar—MHA (BTL, Poland), and microbial suspensions (5×105 CFU/mL) were prepared in Müeller-Hinton Broth—MHB (BTL, Poland). MIC values were determined by a microdilution broth assay according to Clinical and Laboratory Standards Institute (CLSI) recommendations [16, 17]. Stock solutions of hop cone, spent hop extracts, and xanthohumol (Sigma, USA) were prepared in 50% spent hop extract or 100% DMSO. The concentration ranges of the compounds used in the tests (using a twofold dilution system) were 0.0039–0.5 mg/mL for XH and 0.0078–2.0 mg/mL for the extracts. These ranges were based on general assumptions underpinning research on natural products for medical use, which set the upper limit for biostatic/biocidal concentration at about 1 mg/mL for complex preparations and 0.1 mg/mL for pure chemical compounds.Bacterial suspensions (100μL) were mixed 1 : 1 with the serial dilutions of the phytocompounds under test. Microplate wells containing no extract but inoculated with test strains were used as positive controls. Negative control wells consisted of the serial dilution of the phytocompound only. The final highest DMSO concentration was 1.25%, which did not affect bacterial growth. Plates were incubated at 37°C for 18 h, and the highest dilution showing no turbidity was recorded as the MIC. Since the color of the extracts at higher concentrations made turbidimetry difficult, bacterial growth on MHA (10 μL from each well after vigorous stirring; linear culture incubated for the subsequent 18 h at 37°C) was tested concurrently. The concentrations of the compounds bactericidal to ≥99.9% of the inoculum (MBC) were determined using the same method of solid culture (starting from four wells below the suspected MIC value). In each case, experiments were carried out in quadruplicate on two different experiments. In order to test whether the compounds induced cell aggregation, 25 μL of bacterial suspension, treated as described above, was placed on a glass microscope slide and gently smeared. After air drying, heat fixation, and gram staining, the slides were examined by light microscopy. The size and quantity of clusters were compared to the controls (nontreated bacteria) according to the score established by Cushnie et al. [18].
## 2.3. Determination of Antibiotic Synergy with Hop-Derived Products, Assessed by the E-Test Strip/Agar Dilution Method
Prepared inocula of S. aureus ATCC 29213 (MSSA) and the clinical S. aureus MRSA strains D5 and A7 (1 × 10⁸ CFU/mL) were spread with a sterile cotton swab on (a) control MHA or (b) MHA containing the hop cone extract, spent hop extract, or xanthohumol (at a final concentration of 1/2 MIC or 1/4 MIC). In the first stage, a standard disk-diffusion test was performed according to CLSI recommendations [16, 17], using the following antibiotic set: oxacillin (1 μg/disc), cefoxitin (30 μg/disc), clindamycin (2 μg/disc), vancomycin (30 μg/disc), and erythromycin (15 μg/disc) (Mast Diagnostics, UK). Antibiotic gradient strips (E-test, BioMerieux, France) containing oxacillin, vancomycin, or linezolid (concentration range 0.016–256 mg/L) were then used; the MHA plates with the overlaid strips were incubated at 37°C for 24 h, and the growth inhibition zones were measured. Differences in MIC values between the control and test plates were recorded (end points were determined according to the manufacturer's instructions).
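Interpreting this readout amounts to comparing the strip MIC on plain MHA with the strip MIC on supplemented MHA; a minimal sketch, using values that appear later in Table 2:

```python
# Illustrative post-processing of the E-test/agar-dilution readout: the MIC
# read from the gradient strip on plain MHA versus on MHA supplemented with a
# sub-MIC dose of a hop product. The example values mirror Table 2.

def fold_reduction(mic_control, mic_with_phyto):
    """How many-fold the antibiotic MIC drops in the presence of the
    phytocompound; values > 1 indicate a positive coaction."""
    return mic_control / mic_with_phyto

# Oxacillin vs. S. aureus ATCC 29213 with 1/2 MIC xanthohumol in the agar:
print(f"{fold_reduction(0.125, 0.064):.1f}-fold MIC reduction")  # ~2.0-fold
```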
## 2.4.S. aureus Adhesion, Biofilm Formation, and Biofilm Eradication under the Influence of Hop Constituents
A suspension of S. aureus ATCC 29213 (OD = 0.6, corresponding to a density of about 1 × 10⁷ CFU/mL), prepared from a fresh overnight culture in tryptic soy broth (TSB, Difco, USA) supplemented with 0.25% glucose (TSB/Glc), was added (100 μL) to the wells of a 96-well tissue culture polystyrene microplate (Nunc, Denmark). To estimate bacterial adhesion, standardized glass carriers (5 mm diameter; Thermo Scientific, Germany) were placed in the wells, followed by 100 μL of the phytochemicals under test at final concentrations of 1/2 MIC, 1/4 MIC, and 1/8 MIC (in quadruplicate for each concentration). Glass carriers placed in bacterial culture alone (without hop constituents) served as positive controls; negative control wells contained glass carriers in phytocompounds (1/2 MIC) and TSB/Glc only. After 2 h of incubation at 37°C, the glass carriers were removed, vortexed (3 min), serially diluted (10-fold dilution series in 0.85% NaCl), and cultured on MHA plates (100 μL/plate; 24 h, 37°C). The percentage of bacterial adhesion in the presence of the phytocompounds was calculated from CFU counts relative to the control culture. To evaluate biofilm formation, a LIVE/DEAD BacLight Bacterial Viability kit (Molecular Probes, USA) was used as recommended by the manufacturer. Bacterial suspensions (OD = 0.6) were cultured on microplates (100 μL/well) at 37°C in the absence (control) or constant presence of the phytocompounds (mixed 1:1 with the bacteria) at final concentrations of 1/2, 1/4, and 1/8 MIC. After 24 h of incubation, free-floating bacterial cells were gently removed from the wells, and the remaining biofilm was stained with Syto9 and propidium iodide (PI) for 15 min in the dark. Finally, the dyes were replaced with water (200 μL/well), and the fluorescence of the wells was measured (excitation/emission 485/535 nm for green Syto9 and 485/620 nm for red PI). The results are presented as the percentage of biofilm biomass, calculated from the mean fluorescence values ± S.D. of the control (taken as 100%) and test wells. Another set of plates was used to investigate the influence of the MIC or 2× MIC of each phytocompound on preformed (24 h old) S. aureus ATCC 29213 biofilms, starting from the same bacterial suspension. After a further 24 h incubation of the staphylococcal biofilm at 37°C with or without (control) the hop constituents under test, the degree of biofilm survival (%) was assessed as described above using the LIVE/DEAD Bacterial Viability assay.
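For clarity, here is a minimal sketch of how such fluorescence readings reduce to the viability and eradication percentages used in Section 3. Only the relative-to-control calculation comes from the text; the fluorescence values are invented, chosen so that the example lands near the 86.5% eradication reported for xanthohumol at MIC.

```python
# Sketch: converting LIVE/DEAD BacLight well fluorescence into the percentages
# reported in Results. All fluorescence readings below are hypothetical.

def percent_of_control(test_wells, control_wells):
    """Mean test fluorescence as % of mean control fluorescence (= 100%)."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * mean(test_wells) / mean(control_wells)

# Green (Syto9, 485/535 nm) signal of a mature biofilm after 24 h exposure:
control = [5200, 5050, 5400, 5150]   # untreated biofilm, a.u. (assumed)
xh_mic = [720, 640, 705, 690]        # + xanthohumol at MIC, a.u. (assumed)

viability = percent_of_control(xh_mic, control)
print(f"biofilm viability: {viability:.1f}%, eradication: {100 - viability:.1f}%")
# -> ~13.2% viable, ~86.8% eradicated (close to the reported 86.5 +/- 1.5%)
```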
## 2.5. Statistical Analysis
Where appropriate, differences in parameters were tested for significance using the Mann-Whitney U test in Statistica 5.0 (StatSoft Inc.).
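The authors used Statistica; for readers reproducing the analysis with open-source tools, an equivalent Mann-Whitney U computation in SciPy, on invented adhesion counts, would be:

```python
# Equivalent open-source Mann-Whitney U test; the CFU data are hypothetical.
from scipy.stats import mannwhitneyu

control_cfu = [9.2e6, 8.7e6, 9.8e6, 9.0e6]   # CFU/carrier, untreated (assumed)
treated_cfu = [2.1e6, 1.8e6, 2.6e6, 2.3e6]   # CFU/carrier, 1/2 MIC extract (assumed)

u_stat, p_value = mannwhitneyu(control_cfu, treated_cfu, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")     # p < 0.05 -> significant difference
```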
## 3. Results and Discussion
Separation of the precipitate from the ethanolic extract of the powdered hops on a preparative C18 column yielded a fraction containing 51% xanthohumol by dry weight. The HPLC profile showed one predominant compound; the other small peaks belonged to isoxanthohumol and alpha-acids, but their peak areas indicated that they did not exceed 2% of the sample dry mass, as described previously [2]. The total phenol content of the spent hop extract was about 24%, half of which were flavanols. A total of 10 flavan-3-ols were identified by ultraperformance liquid chromatography-mass spectrometry: two monomers ((+)-catechin and (−)-epicatechin), four dimers, and four trimers. The spent hop extract also contained four hydroxycinnamates (neochlorogenic acid, chlorogenic acid, cryptochlorogenic acid, and feruloylquinic acid), while flavonols were represented by quercetin and kaempferol derivatives. This extract contained no xanthohumol (it was removed by additional purification steps), since our research was deliberately directed towards the utilization of spent hops after this compound has been extracted from the waste.

The hop cone extract, the spent hop extract, and (for comparison) pure xanthohumol were assessed for antimicrobial activity in vitro. Initially, the antibacterial effect of these products was evaluated against a panel of reference strains. The growth of the gram-negative bacteria (Escherichia coli and Pseudomonas aeruginosa) was not inhibited by any of the investigated compounds at relevant concentrations (the MICs exceeded the highest concentrations used; data not shown). In contrast, the hop cone extract was a potent antagonist of the gram-positive Staphylococcus aureus ATCC 29213 (MIC 31.3 μg/mL) and Enterococcus faecalis ATCC 29212 (MIC 62.5 μg/mL), being half as effective as pure xanthohumol (Table 1). The spent hop extract (free of xanthohumol but containing significant amounts of various quercetin and kaempferol derivatives, catechin, and epicatechin), not previously studied in this regard, exhibited much lower but still significant activity against gram-positives (MIC range 1–2 mg/mL). For comparison, as presented in our previous work, the MICs of the reference flavonoids quercetin and naringin were >300 μg/mL, and those of thymol and kaempferol were, respectively, 18.75 μg/mL and 62.5 μg/mL [19]. A standard set of microorganisms used to assess the antimicrobial activity of various preparations covers a broad panel of microbes, including both gram-positive (represented here by Staphylococci and enterococci) and gram-negative (fermenting and nonfermenting bacilli) bacteria. The reasons for their different behavior towards biologically active substances can be ascribed to their structure or metabolism. Our results divided the test microorganisms into two subgroups, susceptible and resistant, corresponding to the division into gram-positive and gram-negative bacteria. This points to the importance of the structure and permeability of the cell wall/membrane, rather than the metabolic activity of these microorganisms. It is also significant that, as described by Sakai et al. [20], xanthohumol (an inhibitor of diacylglycerol acyltransferase) and natural products containing this compound (hop extract) are potent inhibitors of lipid metabolism, which can significantly affect the composition and stability of the microbial cell wall/membrane. However, our experimental results do not allow us to explain the mechanisms of biological activity of the tested hop extract, spent hop extract, and xanthohumol precisely, and this requires further detailed studies.

Table 1

Minimum inhibitory and bactericidal concentrations (MIC/MBC) of hop-derived compounds against selected S. aureus strains, determined by a microdilution broth assay accompanied by assessment of bacterial growth on solid media.
All values are given in mg/mL.

| Strain | Spent hops extract MIC | Spent hops extract MBC | Hop cones extract MIC | Hop cones extract MBC | Xanthohumol MIC | Xanthohumol MBC |
|---|---|---|---|---|---|---|
| S. aureus ATCC 29213 | 2 | >2 | 0.031 | 0.065 | 0.015 | >0.5 |
| S. aureus D5 | 1 | 2 | 0.125 | 0.5 | 0.125 | 0.5 |
| S. aureus A7 | 2 | 2 | 0.031 | 0.031 | 0.125/0.062 | 0.25 |
| E. faecalis ATCC 29212 | >2 | >2 | 0.062 | 1 | 0.062 | >0.5 |

The obtained results encouraged us to undertake further studies on the activities of H. lupulus-derived products against multidrug-resistant S. aureus strains. For the experiment, we selected two clinical isolates, members of an important group of “alert” human pathogens: the MRSA (methicillin-resistant S. aureus) strains A7 and D5. The hop-derived components demonstrated potent activity, although their MICs against the D5 strain were higher than those reported above for S. aureus ATCC 29213 (MSSA) (Table 1). Following MIC evaluation, inhibition of bacterial growth on solid media revealed ≥99.9% reduction of the original inoculum by the hop cone and spent hop extracts and by xanthohumol, proving that the phytocompounds tested exhibited concentration-dependent bactericidal effects (MBC) (Table 1). To test whether the compounds induced cell aggregation, which could influence CFU counts during MBC testing, samples of bacteria incubated with the phytocompounds were compared under light microscopy with controls (untreated bacteria). Most of the bacteria incubated with the hop cone and spent hop extracts (at MIC) formed similar numbers of pairs and small clusters as the control, whereas bacteria treated with xanthohumol were found mainly in small and large aggregates (data not shown). This effect disappeared when the concentration of xanthohumol was reduced to 1/2 MIC. Therefore, half MIC and two lower concentrations (1/4 and 1/8 MIC) were used in the experiments on the influence of the phytochemicals on adhesion, biofilm formation, and synergy with antibiotics. As proposed by Cushnie et al. [18] and Cushnie and Lamb [21], aggregation of bacterial cells should always be taken into account when interpreting data from assays with natural flavonoids and flavonoid-rich phytochemical preparations. It is well known that the hop extraction method used determines the composition of the products and their biological activity [22–25]. Papers describing the antibacterial actions of different compounds in hops are mainly concerned with the activities of the bitter acids humulone and lupulone and of the flavonoid xanthohumol [9, 26–29]. Xanthohumol has also been reported as the main component with anti-infective effects against viruses and fungi [6]. As pathogens become ever more resistant to available antibiotics, posing a significant medical problem, the need for alternative treatments grows. Several studies have suggested that combining plant- or animal-derived natural compounds with antibiotics is a new strategy for developing therapies against infections [12, 21, 30]. Here, we report that adding hop compounds enhances the antimicrobial activities of commercial antibiotics against S. aureus. MIC values were decreased by the coactions of hop products with oxacillin (a β-lactam) and linezolid (an oxazolidinone) but not with vancomycin (a glycopeptide) (Table 2). When the hop cone extract, spent hop extract, or xanthohumol was incorporated into the agar medium at 1/2 MIC or 1/4 MIC, the MIC of oxacillin against the MSSA strain S. aureus ATCC 29213 was reduced; an example is shown in Figure 1. Since hop products have previously been demonstrated to affect cell wall and membrane integrity, it is possible that they facilitate antibiotic penetration. This could explain why they potentiate the action of oxacillin, which inhibits cell wall synthesis by binding to specific penicillin-binding proteins (PBPs) inside the bacterial cell wall.
Unfortunately, the sensitivity of the MRSA strains was not increased by the hop derivatives. Methicillin resistance in S. aureus is primarily mediated by the mecA gene, which encodes the modified protein PBP 2a. This protein is also located in the bacterial cell wall and has a lower binding affinity for β-lactams. Although all cells in a population of S. aureus can carry the mecA gene, often only a few of them express it; thus, both resistant and susceptible bacteria can exist in the same culture [31]. Our results confirm this phenomenon: although the MIC of oxacillin did not decrease when the whole MRSA population was grown in the presence of the phytochemicals, growth was weakened. As mentioned, the extracts of hop cones and spent hops and xanthohumol did not increase the sensitivity of the reference or clinical strains to vancomycin. However, there was increased sensitivity to linezolid, a bacteriostatic antibiotic that inhibits the initiation of protein synthesis. Linezolid is effective against infections caused by various gram-positive pathogens, including multidrug-resistant enterococci and MRSA; however, since 2008, resistant strains have occasionally been isolated, fortunately still with low incidence [32]. The strengthening of its action that we were able to achieve is interesting and suggests that hop products probably facilitate the penetration of this antibiotic into the bacterial cell.

Table 2

Synergistic activity of subinhibitory concentrations of Humulus lupulus constituents with antibiotics of various therapeutic classes, determined by the E-test strip/agar dilution method against S. aureus ATCC 29213.
MIC values are given in µg/mL.

| Treatment | Oxacillin | Vancomycin | Linezolid |
|---|---|---|---|
| (Control) | 0.125 | 1.0 | 0.5 |
| Hop cones extract, 1/2 MIC (15.6 µg/mL) | 0.094 | 1.0 | 0.38 |
| Hop cones extract, 1/4 MIC (7.8 µg/mL) | 0.094 | 1.0 | 0.5 |
| Spent hops extract, 1/2 MIC (1000 µg/mL) | 0.094 | 1.0 | 0.5 |
| Spent hops extract, 1/4 MIC (500 µg/mL) | 0.125 | 1.0 | n.t. |
| Xanthohumol, 1/2 MIC (7.8 µg/mL) | 0.064 | 1.0 | 0.38 |
| Xanthohumol, 1/4 MIC (3.9 µg/mL) | 0.094 | 1.0 | 0.5 |

n.t.: not tested.

Figure 1
Synergistic effect of oxacillin and xanthohumol against S. aureus ATCC 29213 evaluated by the E-test strip/agar dilution method. (a) The control plate (MHA); (b) the test plate (MHA with xanthohumol at a final concentration of 1/4 MIC).

Why did we use Staphylococcus aureus as the model organism in our study? It would obviously also be interesting to test the susceptibility of enterococci to products derived from hops. These bacteria, like Staphylococci, constitute a serious epidemiological risk, since they are well equipped with a variety of natural antibiotic resistance mechanisms and are also capable of acquiring new resistance genes and/or mutations. However, our research focused on the behavior of S. aureus for several reasons: first, these bacteria produce a large number of virulence factors important for pathogenesis; second, they are on the list of alert multiresistant pathogens; and finally, biofilm formation by these bacteria is a major medical problem. Since bacteria in biofilms are extremely resistant to antimicrobial agents, biofilm-associated infections are very difficult to treat, especially when the causative organism is multidrug resistant [10]. Thus, another question we asked was whether the hop-derived compounds could be considered effective in antibiofilm therapy. We demonstrated that they were effective (at 1/4 or 1/2 MIC) against staphylococcal adhesion evaluated after 2 h (inhibition range 50–90%) and against biofilm formation evaluated after 24 h of coincubation (Figure 2). Moreover, these extracts or pure xanthohumol, applied to already-formed biofilms at the relatively low concentrations of MIC or 2× MIC, significantly reduced the viability of the mature biofilm. The most potent in this respect was xanthohumol, which reduced the biofilm by 86.5 ± 1.5% (at MIC), whereas at MIC the spent hop extract caused 42.8 ± 16.3% and the hop cone extract 74.9 ± 6.7% biofilm eradication. Given the known high resistance of biofilm populations, this observation should be considered promising, since even partial destruction of a biofilm by antibiotics/antiseptics is encouraging.

Figure 2
Antibiofilm activity of Humulus lupulus-derived extracts and xanthohumol against S. aureus ATCC 29213. Bacteria were cultured for 24 h in the absence or constant presence of the phytocompounds at their 1/2, 1/4, and 1/8 MICs. Biofilm formation was assessed using a LIVE/DEAD BacLight kit. Results are presented as the percentage of viable biofilm biomass compared with the control; all values are means ± S.D. from two independent experiments performed in quadruplicate. Black bars: control; grey bars: spent hops; open bars: hop cones; striped bars: xanthohumol.

However, our experimental results to date do not allow the mechanisms of biological activity of the tested hop extract, spent hop extract, and xanthohumol to be explained adequately. It can be supposed that effective penetration across the cell wall and/or membrane damage is their most important property. They could also influence bacterial cell surface hydrophobicity and, depending on sortase activity, the assembly of adhesins in the cell wall [33]. Thus, our plant-derived compounds could interfere with the adhesion step essential for successful biofilm development. The observed effects suggest that they easily penetrate biological membranes, probably without the help of active transport mechanisms [21], but this possibility needs further research.
## 4. Conclusions
In summary, the present study has revealed the potent antibiofilm activity of hop-derived compounds for the first time. This is interesting particularly with regard to the action of the spent hop extract, which is a quantitatively significant waste product of the brewing industry. This observation therefore has potential for practical application, since the spent hop extract, although it contains no xanthohumol, is still a good source of substances with antimicrobial and antibiofilm activities. Although the mechanisms of biological activity of the phytocompounds tested are not clear (we can only discuss possibilities, as above), our results suggest that the use of hop-derived constituents can be extended beyond the beer industry to prospective medical applications.
---
*Source: 101089-2013-09-23.xml*
# Determination of Picogram Levels of Roxithromycin in Pharmaceutical, Human Serum, and Urine by Flow-Injection Chemiluminescence
**Authors:** Jiangman Liu; Huan Yang; Yun Zhang; Min Wu; Haixiang Zhao; Zhenghua Song
**Journal:** ISRN Analytical Chemistry
(2012)
**Publisher:** International Scholarly Research Network
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2012/101092
---
## Abstract
A sensitive chemiluminescence (CL) method, based on the inhibitory effect of roxithromycin (ROX) on the CL reaction between luminol and dissolved oxygen in a flow-injection system, was first proposed for the determination of ROX at picogram levels. The decrement of CL intensity was linearly proportional to the logarithm of ROX concentrations ranging from 0.1 to 100 pg mL−1, giving the limit of detection (LOD) of 0.03 pg mL−1 (3σ). At a flow rate of 2.0 mL min−1, a complete analytical procedure including sampling and washing could be performed within 0.5 min, with relative standard deviations (RSDs) of less than 5.0% (n=5). The proposed procedure was applied successfully to the determination of ROX in pharmaceutical, human serum, and urine with the recoveries ranging from 90.0 to 110.0%.
---
## Body
## 1. Introduction
Roxithromycin (ROX, C₄₁H₇₆N₂O₁₅) is a semisynthetic 14-membered-ring macrolide antibiotic [1–3], as shown in Figure 1. ROX is frequently adopted as an effective treatment for several different infections, including respiratory tract infections, asthma, gum infections such as gingivitis, and bacterial infections associated with stomach and intestinal ulcers [4]. It works efficiently even at small doses with less frequent administration, which is regarded as a clinical advantage, and it has a wide range of applications in human and veterinary medicine [5].

Figure 1
Schematic diagram of the ROX structure.

A variety of methods have been used for the determination of ROX (Table 1), including high-performance liquid chromatography (HPLC) [6–8], electrochemistry (EC) [9], fluorescence (FL) [10], aqueous two-phase system extraction (ATPSE) [11], capillary electrophoresis (CE) [12], and chemiluminescence (CL) [13]. Compared with other methods, CL has attracted increasing attention in various fields owing to its high sensitivity, wide linear dynamic range, rapid measurements, and simple instrumentation [14–17].

Table 1

Comparison of different methods for the determination of ROX.

| Method | Linear range (μg mL⁻¹) | LOD (μg mL⁻¹) | Reference |
|---|---|---|---|
| HPLC | 0.5–10.0 | 0.2 | [6] |
| HPLC | 0.05–20.0 | 0.05 | [7] |
| HPLC | 5.1–100.0 | 5.1 | [8] |
| EC | 4.2–84 | 0.4 | [9] |
| FL | 25.0–350.0 | 4.6 | [10] |
| ATPSE | 1.0–20.0 | 0.03 | [11] |
| CE | 0.02–201.0 | 7.0 × 10⁻³ | [12] |
| CL | 1.0 × 10⁻⁶–1.0 × 10⁻³ | 3.0 × 10⁻⁷ | [13] |
| Proposed method | 1.0 × 10⁻⁷–1.0 × 10⁻⁴ | 3.0 × 10⁻⁸ | This work |

The simple and green CL system of luminol-dissolved oxygen has been applied by our group for the determination of vitamin B12 [18], Sudan IV [19], and chlorogenic acid [20]. However, no luminol-dissolved oxygen CL method has been utilized for the determination of ROX to date. In this work, it was found that the CL signal from the luminol-dissolved oxygen reaction could be inhibited by ROX, and the decrement of the CL intensity was linearly proportional to the logarithm of the ROX concentration over the range 0.1 to 100 pg mL⁻¹, with the linear equation ΔI_CL = 6.73 ln C_ROX + 23.62 (R² = 0.9925, n = 5) and an LOD of 0.03 pg mL⁻¹ (3σ). At a flow rate of 2.0 mL min⁻¹, a complete analytical process could be performed within 0.5 min, with relative standard deviations (RSDs) of less than 5.0% (n = 5). The proposed procedure was applied successfully to the determination of ROX in pharmaceutical, human serum, and urine samples, with recoveries ranging from 90.0 to 110.0%.
## 2. Experimental
### 2.1. Apparatus
A schematic diagram of the CL flow-injection system is shown in Figure 2. A peristaltic pump of the IFFL-DD luminescence analyzer (Xi’an Remax Electronic Science-Tech. Co. Ltd., Xi’an, China) was used to deliver all streams. PTFE tubing (1.0 mm i.d.) was used throughout the manifold for carrying the CL reagents. A six-way valve with a 100.0 μL loop was used for sampling. The CL signals produced in the flow cell were detected without wavelength discrimination, and the photomultiplier tube (PMT) output was recorded by a PC running the IFFL-DD client software.

Figure 2
Schematic diagram of the flow-injection system for determination of ROX.
### 2.2. Reagents
All reagents used in this work were of analytical grade. Water purified in a Milli-Q system (Millipore, Bedford, MA, USA) was used for the preparation of all solutions. Luminol (Fluka, Biochemika, Switzerland) was obtained from the Xi’an Medicine Purchasing and Supply Station, Xi’an, China. The luminol stock solution (2.5 × 10⁻² mol L⁻¹) was prepared by dissolving 0.44 g of luminol in 100 mL of 0.1 mol L⁻¹ NaOH solution in a brown calibrated flask. The ROX stock solution (1.0 mg mL⁻¹; Shaanxi Institute for Drug Control) was stored at 4°C. Working standard solutions were prepared daily from the stock solution as required.
### 2.3. General Procedure
As shown in Figure 2, 100 μL of luminol solution and the carrier (purified water) were injected into the flow system quantitatively via the six-way valve until a stable baseline was recorded. Quantities of ROX solutions of known concentration were then injected into the flow system and mixed with the luminol reagent. The mixed solution, in an alkaline medium, was delivered into the CL flow cell, and the CL signals produced were measured by the PMT and luminometer. The concentration of the sample was quantified from the reduction in CL intensity, ΔI_CL = I₀ − I_s, where I_s and I₀ are the CL intensities in the presence and absence of ROX, respectively.
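The quantification step is a simple inversion of the calibration line reported in Section 3.4. A minimal sketch, using the example values from Section 3.1 and assuming nothing beyond the published equation:

```python
import math

# Sketch: compute the CL decrement and invert the published calibration
# (delta_I = 6.73 ln C + 23.62, C in pg/mL) to a ROX concentration.

def delta_I(I0, Is):
    """CL decrement: intensity without ROX minus intensity with ROX."""
    return I0 - Is

def rox_conc(dI, slope=6.73, intercept=23.62):
    """Invert the log-linear calibration to a ROX concentration in pg/mL."""
    return math.exp((dI - intercept) / slope)

dI = delta_I(I0=150.0, Is=115.0)   # the 5 pg/mL example from Section 3.1
print(f"dI = {dI:.0f}, C = {rox_conc(dI):.1f} pg/mL")  # -> ~5.4 pg/mL
```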
### 2.4. Sample Preparation
#### 2.4.1. Treatment of Pharmaceutical
ROX tablets (No. 6 Pharmaceutical Factory, Harbin, China) were purchased from a local market. Five tablets (labelled content 75 mg per tablet) were weighed and ground to a fine powder. Powder equivalent to one average tablet was then weighed out accurately, dissolved in water, and filtered. The filter residue was washed several times, and the combined solution was diluted to volume in a 250 mL calibrated flask. Suitable aliquots of this solution, giving ROX concentrations within the working range of the method, were taken for the determination.
#### 2.4.2. Treatment of Human Urine and Serum Sample
The urine sample was collected from healthy volunteers, and the serum sample was supplied by the Hospital of Northwest University. To prepare the spiked samples, known quantities of ROX standard solution were spiked into 1.0 mL of urine or serum. After homogenization, the urine and serum samples were each diluted to the appropriate multiples and then analyzed directly by the proposed method.
## 3. Results and Discussion
### 3.1. CL Intensity-Time Profile
The relative CL intensity-time profile is shown in Figure 3. The luminol-dissolved oxygen CL reaction reached its maximum CL intensity (T_max) at 7.0 s and then vanished within 26 s; in the presence of ROX (5 pg mL⁻¹), the maximum CL intensity decreased markedly from 150 to 115, a reduction of 23.3%.

Figure 3
The CL intensity-time profile.
### 3.2. Effect of Luminol and Sodium Hydroxide Concentration
Since the luminol CL reaction is more favorable in an alkaline medium, sodium hydroxide was added to improve the sensitivity of the system. The effects of the luminol and sodium hydroxide concentrations were investigated over the ranges 1 × 10⁻⁸ to 1 × 10⁻⁴ mol L⁻¹ and 0.01 to 0.5 mol L⁻¹, respectively. With increasing luminol concentration, the CL signal increased steadily up to 2.5 × 10⁻⁵ mol L⁻¹ and then stabilised; a luminol concentration of 2.5 × 10⁻⁵ mol L⁻¹ was therefore chosen as the optimum. The results indicated that the CL signal reached its maximum at an NaOH concentration of 0.025 mol L⁻¹, so 0.025 mol L⁻¹ sodium hydroxide was used in all subsequent experiments.
### 3.3. Effect of Flow Rate and the Length of Mixing Tubing
The CL intensity was related to the flow rate. A lower flow rate caused peak broadening and slower sampling rates, whereas a higher flow rate could raise the signal-to-noise ratio (S/N) but led to an unstable baseline. A flow rate of 2.0 mL min⁻¹ was therefore selected as an appropriate compromise between good precision and low solution consumption. The effect of the length of the mixing tubing on CL intensity was also tested, with the aim of producing the maximum CL intensity in the flow cell. A 10.0 cm length of mixing tubing afforded the best results, with good sensitivity and reproducibility, and was accordingly adopted as the optimum.
### 3.4. Performance of Proposed Method for ROX Determination
Under the optimum conditions described, linearity was examined by measuring a series of standard solutions. The decrease in CL intensity was found to be proportional to the logarithm of the ROX concentration, with a linear response over the range 0.1 to 100 pg mL⁻¹ and an LOD of 0.03 pg mL⁻¹ (3σ). The linear equation was ΔI_CL = 6.73 ln C_ROX + 23.62 (R² = 0.9925, n = 5, RSDs < 5.0%).
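As a sketch of how such a calibration and a 3σ LOD are derived in practice, consider the following; the simulated intensities and the blank noise level are assumptions, chosen to sit on the published line.

```python
import numpy as np

# Sketch: recover the calibration (dI = 6.73 ln C + 23.62) and a 3-sigma LOD
# from standard-series data. The intensities are simulated points on the
# published line plus noise; the blank sigma of 0.5 a.u. is an assumption.

conc = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0])        # pg/mL standards
rng = np.random.default_rng(0)
dI = 6.73 * np.log(conc) + 23.62 + rng.normal(0.0, 0.5, conc.size)

slope, intercept = np.polyfit(np.log(conc), dI, 1)
r2 = np.corrcoef(np.log(conc), dI)[0, 1] ** 2
print(f"dI = {slope:.2f} ln C + {intercept:.2f}, R^2 = {r2:.4f}")

# LOD (3-sigma): concentration whose decrement equals 3x the blank noise.
sigma_blank = 0.5                                          # a.u., assumed
lod = np.exp((3 * sigma_blank - intercept) / slope)
print(f"LOD ~ {lod:.2e} pg/mL")                            # ~0.03-0.04 pg/mL
```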
### 3.5. Interference Studies
Interference by foreign substances was tested by analyzing a standard solution of ROX to which increasing amounts of interfering species had been added. Presuming interference at the 5% level, the tolerable ratios of interferents with respect to 10 pg mL⁻¹ ROX were over 1000-fold for NO₃⁻, Ac⁻, I⁻, SO₄²⁻, PO₄³⁻, BrO₃⁻, borate, oxalate, tartrate, urea, and uric acid; 50-fold for NH₄⁺, Mg²⁺, Ca²⁺, Ba²⁺, methanol, and ethanol; and 5-fold for Fe³⁺ and Fe²⁺. Compounds abundant in human urine and serum, such as salts, lipids, and proteins, caused no obvious interference with the determination of ROX.
## 4. Applications
### 4.1. Determination of ROX in Pharmaceutical
The determination of ROX in tablets was carried out by the procedure described in the experimental section, and the results are summarized in Table 2, with recoveries ranging from 97.0 to 110.0% and RSDs of less than 5.0%.

Table 2

Results of the determination of ROX in pharmaceutical tablets (labelled content 75 mg tab⁻¹). Each sample was assayed unspiked (added = 0) and after spiking; recovery is calculated as (found after spiking − found before spiking)/added × 100.

| Sample no. | Added (pg mL⁻¹) | Found (pg mL⁻¹) | RSD (%) | Recovery (%) | ROX (mg tab⁻¹) |
|---|---|---|---|---|---|
| 1 | 0 | 7.1 | 2.7 | 110.0 | 72.5 |
|   | 3.0 | 10.4 | 3.0 |  |  |
| 2 | 0 | 7.2 | 2.4 | 97.1 | 71.0 |
|   | 7.0 | 14.0 | 2.5 |  |  |
| 3 | 0 | 7.2 | 3.2 | 103.3 | 73.5 |
|   | 9.0 | 16.5 | 2.4 |  |  |
### 4.2. Determination of ROX in Spiked Human Serum and Urine
The proposed method was used for the determination of ROX in spiked human serum. To evaluate the validity of the method, recovery studies were performed on each sample by adding a known amount of ROX standard solution to 1.0 mL of serum. After homogenization, the spiked serum sample was diluted to the appropriate multiples. The results are shown in Table 3, with recoveries ranging from 94.0 to 102.0%.

Table 3

Results of the determination of ROX in spiked human serum.

| Sample no. | Added (pg mL⁻¹) | Found (pg mL⁻¹) | RSD (%) | Recovery (%) | ROX content (μg mL⁻¹), sample/spiked |
|---|---|---|---|---|---|
| 1 | 0 | 21.3 | 2.9 | 96.5 | 21.0/20.0 |
|   | 20.0 | 40.6 | 5.0 |  |  |
| 2 | 0 | 23.7 | 2.7 | 94.7 | 22.9/24.0 |
|   | 30.0 | 52.1 | 3.3 |  |  |
| 3 | 0 | 29.4 | 4.9 | 102.0 | 29.7/30.0 |
|   | 30.0 | 60.0 | 2.3 |  |  |

The urine sample was collected from volunteers. Known quantities of ROX standard solution were added to 1.0 mL of urine. After thorough homogenization, the spiked urine sample was diluted to the appropriate multiples. The determination of ROX in spiked human urine was also performed by the proposed method, and the results are shown in Table 4, with recoveries ranging from 90.0 to 102.0%.

Table 4

Results of the determination of ROX in spiked human urine.

| Sample no. | Added (pg mL⁻¹) | Found (pg mL⁻¹) | RSD (%) | Recovery (%) | ROX content (μg mL⁻¹), sample/spiked |
|---|---|---|---|---|---|
| 1 | 0 | 4.3 | 1.8 | 96.7 | 4.2/4.5 |
|   | 3.0 | 7.2 | 2.5 |  |  |
| 2 | 0 | 9.8 | 2.9 | 90.0 | 9.6/10.0 |
|   | 3.0 | 12.5 | 2.2 |  |  |
| 3 | 0 | 35.8 | 2.3 | 102.0 | 36.0/36.2 |
|   | 20.0 | 56.2 | 1.9 |  |  |
| 4 | 0 | 20.8 | 2.1 | 98.0 | 20.7/21.0 |
|   | 10.0 | 30.6 | 3.0 |  |  |
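The recovery figures in Tables 2–4 follow from a single formula; a minimal check, using the first urine entry of Table 4:

```python
# Recovery computation underlying Tables 2-4: a sample is measured before
# and after spiking with a known amount of ROX standard.

def recovery_percent(found_spiked, found_unspiked, added):
    """Percent of the added standard recovered by the method."""
    return 100.0 * (found_spiked - found_unspiked) / added

# First urine entry of Table 4: unspiked 4.3 pg/mL, +3.0 pg/mL -> 7.2 pg/mL.
print(f"{recovery_percent(7.2, 4.3, 3.0):.1f}%")   # -> 96.7%
```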
## 5. Conclusions
Based on the inhibitory effect of ROX on the luminol-dissolved oxygen reaction, a simple and rapid CL method for the determination of ROX at picogram levels was proposed for the first time. The presented method offers the advantages of a low LOD, instrumental simplicity, rapidity, a wide linear range, and high sensitivity. The satisfactory performance of the ROX assay in pharmaceutical preparations and biofluids demonstrates that the method is practical and suitable not only for quality control analysis but also for the analysis of complex biological samples, and it represents an interesting alternative for pharmacological and clinical research.
---
*Source: 101092-2011-12-12.xml* | 101092-2011-12-12_101092-2011-12-12.md | 22,168 | Determination of Picogram Levels of Roxithromycin in Pharmaceutical, Human Serum, and Urine by Flow-Injection Chemiluminescence | Jiangman Liu; Huan Yang; Yun Zhang; Min Wu; Haixiang Zhao; Zhenghua Song | ISRN Analytical Chemistry
(2012) | Chemistry and Chemical Sciences | International Scholarly Research Network | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.5402/2012/101092 | 101092-2011-12-12.xml | ---
## Abstract
A sensitive chemiluminescence (CL) method, based on the inhibitory effect of roxithromycin (ROX) on the CL reaction between luminol and dissolved oxygen in a flow-injection system, was first proposed for the determination of ROX at picogram levels. The decrement of CL intensity was linearly proportional to the logarithm of ROX concentrations ranging from 0.1 to 100 pg mL−1, giving the limit of detection (LOD) of 0.03 pg mL−1 (3σ). At a flow rate of 2.0 mL min−1, a complete analytical procedure including sampling and washing could be performed within 0.5 min, with relative standard deviations (RSDs) of less than 5.0% (n=5). The proposed procedure was applied successfully to the determination of ROX in pharmaceutical, human serum, and urine with the recoveries ranging from 90.0 to 110.0%.
---
## Body
## 1. Introduction
Roxithromycin (ROX, C41H76N2O15) is a kind of antibiotic with semisynthetic 14-membered-ring macrolide antibiotic [1–3] as shown in Figure 1. ROX has been frequently adopted as an effective treatment for several different infections, including respiratory tract infections asthma, gum infections like gingivitis, and bacterial infections associated with stomach as well as intestinal ulcers [4]. It works efficiently even at small doses with less frequent administration, which is regarded as a clinical advantage, and has a wide range of applications in human and veterinary medicines [5].Figure 1
Schematic diagram of ROX structure.A variety of methods have been used for the determination of ROX (Table1), including high-performance liquid chromatography (HPLC) [6–8], electrochemistry (EC) [9], fluorescence (FL) [10], aqueous two-phase system extraction (ATPSE) [11], capillary electrophoresis (CE) [12], and chemiluminescence (CL) [13]. Compared with other methods, CL has attracted increasing attention in various fields owing to its high sensitivity, wide linear dynamic range, rapid measurements, and simple instrumentation [14–17].Table 1
Comparisons of different methods for determination of ROX.
MethodsLinear range (μg mL−1)LOD (μg mL−1)ReferencesHPLC0.5–10.00.2[6]0.05–20.00.05[7]5.1–100.05.1[8]EC4.2–840.4[9]FL25.0–350.04.6[10]ATPSE1.0–20.00.03[11]CE0.02–201.07.0×10-3[12]CL1.0×10-6–1.0×10-33.0×10-7[13]Proposed method1.0×10-7–1.0×10-43.0×10-8This workThe simple and green CL system of luminol-dissolved oxygen has been applied for the determination of vitamin B12 [18], sudan IV [19] and chlorogenic acid [20] by our group. However, no CL method of luminol-dissolved oxygen system has been utilized for the determination of ROX to date. In this work, it was found that CL signal from luminol-dissolved oxygen reaction could be inhibited by ROX and the decrement of the CL intensity was linearly proportional to the logarithm of ROX concentrations ranging from 0.1 to 100 pg mL−1, with the linear equation of ΔICL=6.73lnCROX+23.62 (R2=0.9925,n=5) and the LOD of 0.03 pg mL−1(3σ). At a flow rate of 2.0 mL min−1, a complete analytical process could be performed within 0.5 min, with relative standard deviations (RSDs) of less than 5.0% (n=5). The proposed procedure was applied successfully in the determination of ROX in pharmaceutical, human serum, and urine samples with the recoveries ranging from 90.0 to 110.0%.
## 2. Experimental
### 2.1. Apparatus
A schematic diagram of the CL flow-injection system was shown in Figure2. A peristaltic pump of the IFFL-DD luminescence analyzer (Xi’an Remax Electronic Science-Tech. Co. Ltd., Xi’an, China) was applied to deliver all streams. PTFE tubing (1.0 mm i.d.) was used throughout the manifold for carrying the CL reagents. A six-way valve with a loop of 100.0 μL was used for sampling. The CL signals produced in flow cell were detected without wavelength discrimination, and the photomultiplier tube (PMT) output was recorded by PC with the IFFL-DD client system.Figure 2
Schematic diagram of the flow-injection system for determination of ROX.
### 2.2. Reagents
All the reagents were of analytical grade in this work. Water purified in a Milli-Q system (Millipore, Bedford, MA, USA) was used for the preparation of solutions in the whole procedure. Luminol (Fluka, Biochemika, Switzerland) was obtained from Xi’an Medicine Purchasing and Supply Station, Xi’an, China. The stock solution of luminol (2.5 ×10−2 mol L−1) was prepared by dissolving 0.44 g luminol in 100 mL of 0.1 mol L−1 NaOH solution in a brown calibrated flask. The stock solution ROX (Shaanxi Institute for Drug Control) of 1.0 mg mL−1 was stored at 4°C. Working standard solutions were prepared daily from the above stock solution as required.
### 2.3. General Procedure
As shown in Figure2, 100 μL luminol solution and the carrier (purified water) were injected into the flow system quantitatively via the six-way valve until a stable baseline was recorded. Then quantities of known concentration of ROX solutions were injected into the flow system and mixed with luminol reagent. The mixed solution in an alkaline medium was delivered into the CL flow cell, and CL signals were produced and measured by the PMT and luminometer. The concentration of the sample was quantified by the reduction of CL intensity (ΔICL=Io-Is), where Is and Io were the CL intensity in the presence and absence of ROX, respectively.
### 2.4. Sample Preparation
#### 2.4.1. Treatment of Pharmaceutical
ROX tablets (No. 6 Pharmaceutical Factory, Harbin, China) were purchased from local market. Five pieces of tablets (labelled amount 75 mg tab−1) were weighed up and ground to fine powder. Then the power of average one tablet was weighed out accurately, dissolved by water, and filtrated. The filter residue was washed several times, and then the solution was diluted to 250 mL calibrated flask. Suitable aliquots from this solution were taken for the determination of ROX, and the concentration of ROX was in the concentration range of its determination.
#### 2.4.2. Treatment of Human Urine and Serum Sample
The urine sample was collected from healthy volunteers, and the serum sample was supplied by the Hospital of Northwest University. To prepare the spiked samples, known quantities of standard solution of ROX were spiked into 1.0 mL of urine or serum. After homogenization, the urine and the serum samples were diluted to proper multiples, respectively. Then the samples were determined by the proposed method directly after dilution.
## 2.1. Apparatus
A schematic diagram of the CL flow-injection system was shown in Figure2. A peristaltic pump of the IFFL-DD luminescence analyzer (Xi’an Remax Electronic Science-Tech. Co. Ltd., Xi’an, China) was applied to deliver all streams. PTFE tubing (1.0 mm i.d.) was used throughout the manifold for carrying the CL reagents. A six-way valve with a loop of 100.0 μL was used for sampling. The CL signals produced in flow cell were detected without wavelength discrimination, and the photomultiplier tube (PMT) output was recorded by PC with the IFFL-DD client system.Figure 2
Schematic diagram of the flow-injection system for determination of ROX.
## 2.2. Reagents
All the reagents were of analytical grade in this work. Water purified in a Milli-Q system (Millipore, Bedford, MA, USA) was used for the preparation of solutions in the whole procedure. Luminol (Fluka, Biochemika, Switzerland) was obtained from Xi’an Medicine Purchasing and Supply Station, Xi’an, China. The stock solution of luminol (2.5 ×10−2 mol L−1) was prepared by dissolving 0.44 g luminol in 100 mL of 0.1 mol L−1 NaOH solution in a brown calibrated flask. The stock solution ROX (Shaanxi Institute for Drug Control) of 1.0 mg mL−1 was stored at 4°C. Working standard solutions were prepared daily from the above stock solution as required.
## 2.3. General Procedure
As shown in Figure2, 100 μL luminol solution and the carrier (purified water) were injected into the flow system quantitatively via the six-way valve until a stable baseline was recorded. Then quantities of known concentration of ROX solutions were injected into the flow system and mixed with luminol reagent. The mixed solution in an alkaline medium was delivered into the CL flow cell, and CL signals were produced and measured by the PMT and luminometer. The concentration of the sample was quantified by the reduction of CL intensity (ΔICL=Io-Is), where Is and Io were the CL intensity in the presence and absence of ROX, respectively.
## 2.4. Sample Preparation
### 2.4.1. Treatment of Pharmaceutical
ROX tablets (No. 6 Pharmaceutical Factory, Harbin, China) were purchased from local market. Five pieces of tablets (labelled amount 75 mg tab−1) were weighed up and ground to fine powder. Then the power of average one tablet was weighed out accurately, dissolved by water, and filtrated. The filter residue was washed several times, and then the solution was diluted to 250 mL calibrated flask. Suitable aliquots from this solution were taken for the determination of ROX, and the concentration of ROX was in the concentration range of its determination.
### 2.4.2. Treatment of Human Urine and Serum Sample
The urine sample was collected from healthy volunteers, and the serum sample was supplied by the Hospital of Northwest University. To prepare the spiked samples, known quantities of standard solution of ROX were spiked into 1.0 mL of urine or serum. After homogenization, the urine and the serum samples were diluted to proper multiples, respectively. Then the samples were determined by the proposed method directly after dilution.
## 2.4.1. Treatment of Pharmaceutical
ROX tablets (No. 6 Pharmaceutical Factory, Harbin, China) were purchased from local market. Five pieces of tablets (labelled amount 75 mg tab−1) were weighed up and ground to fine powder. Then the power of average one tablet was weighed out accurately, dissolved by water, and filtrated. The filter residue was washed several times, and then the solution was diluted to 250 mL calibrated flask. Suitable aliquots from this solution were taken for the determination of ROX, and the concentration of ROX was in the concentration range of its determination.
## 2.4.2. Treatment of Human Urine and Serum Sample
The urine sample was collected from healthy volunteers, and the serum sample was supplied by the Hospital of Northwest University. To prepare the spiked samples, known quantities of standard solution of ROX were spiked into 1.0 mL of urine or serum. After homogenization, the urine and the serum samples were diluted to proper multiples, respectively. Then the samples were determined by the proposed method directly after dilution.
## 3. Results and Discussion
### 3.1. CL Intensity-Time Profile
The relative CL intensity-time profile was shown in Figure3. The luminol-dissolved oxygen CL reaction reached the maximum of CL intensity (Tmax) at 7.0 s and then vanished within 26 s; while in the presence of ROX (5 pg mL−1), the maximum of CL intensity remarkably decreased from 150 to 115 by 23.3%.Figure 3
The CL intensity-time profile.
### 3.2. Effect of Luminol and Sodium Hydroxide Concentration
Owing to the nature of luminol CL reaction, which is more favorable in alkaline medium, sodium hydroxide was added to improve the sensitivity of the system. The effect of the concentration of luminol and sodium hydroxide was investigated over the ranges from 1 × 10−8 to 1 × 10−4 mol L−1and 0.01 to 0.5 mol L−1, respectively. With increasing luminol concentration, the CL signal increased steadily until 2.5 × 10−5 mol L−1 and then stabilised. Therefore, the concentration of 2.5 × 10−5 mol L−1 luminol was chosen for the optimum condition. The result indicated the CL signal could reach a maximum value in the NaOH concentration of 0.025 mol L−1. Therefore, 0.025 mol L−1 sodium hydroxide was performed in all subsequent experiments.
### 3.3. Effect of Flow Rate and the Length of Mixing Tubing
The CL intensity was related to the flow rate. A lower flow rate caused broadening of the peak and slowly sampling rates. A higher flow rate could increase the signal-to-ratio (S/N) and lead to an unstable baseline. Then a flow rate of 2.0 mL min−1 was selected as an appropriate condition considering both the good precision and lower solution consumption. The effect of the length of mixing tubing on CL intensity was also tested in pursuit of producing maximum CL intensity in the flow cell. It was observed that 10.0 cm of mixing tubing afforded the best results with good sensitivity and reproducibility. Accordingly, 10.0 cm of mixing tubing was considered as an optimum length.
### 3.4. Performance of Proposed Method for ROX Determination
Under the optimum conditions described, the linearity of the results was examined by measuring a series of standard solutions. The decreased CL intensity was found to be proportional to ROX concentration, and the response to the concentration was linear over the range from 0.1 to 100 pg mL−1, and the LOD was 0.03 pg mL−1(3σ). The linear equation was ΔICL=6.73lnCROX+23.62(R2=0.9925,n=5,RSDs<5.0%).
### 3.5. Interference Studies
Interference of foreign substances was tested by analyzing a standard solution of ROX to which increasing amounts of interfering species were added. Presuming interference at 5% level, tolerable ratios of interferents with respect to 10 pg mL−1ROX were over 1000 times for NO3-,Ac-,I-,SO42-,PO43-,BrO3-, borate, oxalate, tartrate, urea, and uric acid; 50 times for NH4+,Mg2+,Ca2+,Ba2+, methanol, and ethanol; 5 times for Fe3+ and Fe2+. Compounds abundant in human urine and serum such as salt, lipid and proteins caused no obvious interference for the determination of ROX.
## 3.1. CL Intensity-Time Profile
The relative CL intensity-time profile was shown in Figure3. The luminol-dissolved oxygen CL reaction reached the maximum of CL intensity (Tmax) at 7.0 s and then vanished within 26 s; while in the presence of ROX (5 pg mL−1), the maximum of CL intensity remarkably decreased from 150 to 115 by 23.3%.Figure 3
The CL intensity-time profile.
## 3.2. Effect of Luminol and Sodium Hydroxide Concentration
Owing to the nature of luminol CL reaction, which is more favorable in alkaline medium, sodium hydroxide was added to improve the sensitivity of the system. The effect of the concentration of luminol and sodium hydroxide was investigated over the ranges from 1 × 10−8 to 1 × 10−4 mol L−1and 0.01 to 0.5 mol L−1, respectively. With increasing luminol concentration, the CL signal increased steadily until 2.5 × 10−5 mol L−1 and then stabilised. Therefore, the concentration of 2.5 × 10−5 mol L−1 luminol was chosen for the optimum condition. The result indicated the CL signal could reach a maximum value in the NaOH concentration of 0.025 mol L−1. Therefore, 0.025 mol L−1 sodium hydroxide was performed in all subsequent experiments.
## 3.3. Effect of Flow Rate and the Length of Mixing Tubing
The CL intensity was related to the flow rate. A lower flow rate caused broadening of the peak and slowly sampling rates. A higher flow rate could increase the signal-to-ratio (S/N) and lead to an unstable baseline. Then a flow rate of 2.0 mL min−1 was selected as an appropriate condition considering both the good precision and lower solution consumption. The effect of the length of mixing tubing on CL intensity was also tested in pursuit of producing maximum CL intensity in the flow cell. It was observed that 10.0 cm of mixing tubing afforded the best results with good sensitivity and reproducibility. Accordingly, 10.0 cm of mixing tubing was considered as an optimum length.
## 3.4. Performance of Proposed Method for ROX Determination
Under the optimum conditions described, the linearity of the results was examined by measuring a series of standard solutions. The decreased CL intensity was found to be proportional to ROX concentration, and the response to the concentration was linear over the range from 0.1 to 100 pg mL−1, and the LOD was 0.03 pg mL−1(3σ). The linear equation was ΔICL=6.73lnCROX+23.62(R2=0.9925,n=5,RSDs<5.0%).
## 3.5. Interference Studies
Interference of foreign substances was tested by analyzing a standard solution of ROX to which increasing amounts of interfering species were added. Presuming interference at 5% level, tolerable ratios of interferents with respect to 10 pg mL−1ROX were over 1000 times for NO3-,Ac-,I-,SO42-,PO43-,BrO3-, borate, oxalate, tartrate, urea, and uric acid; 50 times for NH4+,Mg2+,Ca2+,Ba2+, methanol, and ethanol; 5 times for Fe3+ and Fe2+. Compounds abundant in human urine and serum such as salt, lipid and proteins caused no obvious interference for the determination of ROX.
## 4. Applications
### 4.1. Determination of ROX in Pharmaceutical
The determination of ROX in tablets was carried out by the procedure described in the experimental section, and the results are summarized in Table 2, with recoveries ranging from 97.0 to 110.0% and RSDs of less than 5.0%.

Table 2: Results of the determination of ROX in a pharmaceutical preparation (labelled amount 75 mg tab−1).

| Sample no. | Added (pg mL−1) | Found (pg mL−1) | RSD (%) | Recovery (%) | ROX (mg tab−1) |
|---|---|---|---|---|---|
| 1 | 0 | 7.1 | 2.7 | | 72.5 |
| | 3.0 | 10.4 | 3.0 | 110.0 | |
| 2 | 0 | 7.2 | 2.4 | | 71.0 |
| | 7.0 | 14.0 | 2.5 | 97.1 | |
| 3 | 0 | 7.2 | 3.2 | | 73.5 |
| | 9.0 | 16.5 | 2.4 | 103.3 | |
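For clarity, the recoveries in Table 2 follow the usual standard-addition arithmetic: recovery = (found with spike − found without spike) / amount added × 100. A small sketch (our own helper, not code from the paper) reproduces the Table 2 values:

```python
def recovery_percent(found_spiked: float, found_unspiked: float, added: float) -> float:
    """Standard-addition recovery: fraction of the spike that is recovered."""
    return (found_spiked - found_unspiked) / added * 100.0

# Rows of Table 2: (found with spike, found without spike, amount added), pg/mL
for spiked, base, added in [(10.4, 7.1, 3.0), (14.0, 7.2, 7.0), (16.5, 7.2, 9.0)]:
    print(f"{recovery_percent(spiked, base, added):.1f}%")  # 110.0%, 97.1%, 103.3%
```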
### 4.2. Determination of ROX in Spiked Human Serum and Urine
The proposed method was used for the determination of ROX in spiked human serum. To evaluate the validity of the method, recovery studies were performed on each sample by adding a known amount of ROX standard solution to 1.0 mL of serum. After homogenization, the spiked serum sample was diluted by an appropriate factor. The results are shown in Table 3, with recoveries ranging from 94.0 to 102.0%.

Table 3: Results of the determination of ROX in spiked human serum.

| Sample no. | Added (pg mL−1) | Found (pg mL−1) | RSD (%) | Recovery (%) | ROX content, sample/spiked (μg mL−1) |
|---|---|---|---|---|---|
| 1 | 0 | 21.3 | 2.9 | | 21.0/20.0 |
| | 20.0 | 40.6 | 5.0 | 96.5 | |
| 2 | 0 | 23.7 | 2.7 | | 22.9/24.0 |
| | 30.0 | 52.1 | 3.3 | 94.7 | |
| 3 | 0 | 29.4 | 4.9 | | 29.7/30.0 |
| | 30.0 | 60.0 | 2.3 | 102.0 | |

The urine samples were collected from volunteers. Known quantities of ROX standard solution were added to 1.0 mL of urine; after thorough homogenization, the spiked urine sample was diluted by an appropriate factor. The determination of ROX in spiked human urine was then performed by the proposed method, and the results are shown in Table 4, with recoveries ranging from 90.0 to 102.0%.

Table 4: Results of the determination of ROX in spiked human urine.

| Sample no. | Added (pg mL−1) | Found (pg mL−1) | RSD (%) | Recovery (%) | ROX content, sample/spiked (μg mL−1) |
|---|---|---|---|---|---|
| 1 | 0 | 4.3 | 1.8 | | 4.2/4.5 |
| | 3.0 | 7.2 | 2.5 | 96.7 | |
| 2 | 0 | 9.8 | 2.9 | | 9.6/10.0 |
| | 3.0 | 12.5 | 2.2 | 90.0 | |
| 3 | 0 | 35.8 | 2.3 | | 36.0/36.2 |
| | 20.0 | 56.2 | 1.9 | 102.0 | |
| 4 | 0 | 20.8 | 2.1 | | 20.7/21.0 |
| | 10.0 | 30.6 | 3.0 | 98.0 | |
## 5. Conclusions
Based on the inhibitory effect of ROX on the luminol-dissolved oxygen reaction, a simple and rapid CL method for the determination of ROX at picogram levels was proposed for the first time. The method offers the advantages of a low LOD, instrumental simplicity, rapidity, a wide linear range, and high sensitivity. The satisfactory performance of the ROX assay in pharmaceutical preparations and biofluids demonstrates that the method is practical and suitable not only for quality control analysis but also for the analysis of complex biological samples, and it represents an interesting alternative for pharmacological and clinical research.
---
*Source: 101092-2011-12-12.xml*
# Anatomy Ontology Matching Using Markov Logic Networks
**Authors:** Chunhua Li; Pengpeng Zhao; Jian Wu; Zhiming Cui
**Journal:** Scientifica
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1010946
---
## Abstract
The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a kind of solution for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
---
## Body
## 1. Introduction
Ontological techniques have been widely applied to medical and biological research [1]. The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. Such ontologies of anatomy and development facilitate the organization of functional data pertaining to a species. To compare such data between species, we need to establish relationships between ontologies describing different species [2]. For example, all gene expression patterns described in ZFIN (the Zebrafish Model Organism Database) are annotated using the zebrafish anatomy ontology. A list of such ontologies is kept on the Open Biomedical Ontologies (OBO) website [3].

Heterogeneity is an inherent characteristic of ontologies developed by different parties for the same (or similar) domains. Semantic heterogeneity has become one of the main obstacles to sharing and interoperation among heterogeneous ontologies. Ontology matching, which finds semantic correspondences between entities of different ontologies, is a kind of solution to the semantic heterogeneity problem [4]. Matching techniques can be classified at a first level into element-level and structure-level techniques. Element-level techniques obtain correspondences by considering the entities of the ontologies in isolation, thereby ignoring the fact that they are part of the structure of the ontology. Structure-level techniques obtain correspondences by analyzing how the entities fit into the structure of the ontology [5].

Recently, probabilistic approaches to ontology matching, which compare ontology entities in a global way, have produced competitive matching results [6–9]. OMEN [6] was the first approach to use a probabilistic representation of ontology mapping rules and probabilistic inference to improve the quality of existing ontology mappings. It uses a Bayesian net to represent the influences between potential concept mappings across ontologies. Building on OMEN, Albagli et al. [7] introduced iMatch, a novel probabilistic scheme for ontology matching that uses Markov networks rather than Bayesian networks, with several improvements; by using undirected networks, iMatch better supports the noncausal nature of the dependencies. Niepert et al. [8] presented a probabilistic-logical framework for ontology matching based on Markov logic, which has several advantages over existing matching approaches and provides a unified syntax that supports different matching strategies in the same language. Li et al. [9] improved the Markov logic model with a match propagation strategy and user feedback. References [8, 9] have shown the effectiveness of the Markov logic model on conference datasets.

In this paper, we consider the Markov logic based framework for anatomy ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies.
## 2. Materials
To evaluate the performance of our proposed approach, we conduct an experimental study using the adult mouse anatomy (2744 classes) and the NCI Thesaurus (3304 classes) describing the human anatomy, which are large and carefully designed ontologies. They also differ from other ontologies with respect to the use of specific annotations and roles, for example, the extensive use of the part_of relation. The two resources are part of the Open Biomedical Ontologies (OBO) [3]. We downloaded the OWL version of the two ontologies and the reference alignment (with 1516 correspondences) from the OAEI anatomy track [10].

The NCI Thesaurus, published by the National Cancer Institute (NCI), contains the working terminology of many data systems in use at NCI. Its scope is broad, as it covers vocabulary for clinical care as well as translational and basic research. Among its 37,386 concepts, 4,410 (11.8%) correspond to anatomical entities (the anatomic structure, system, or substance hierarchy). The adult mouse anatomy ontology has been developed as part of the mouse Gene Expression Database (GXD) project to provide standardized nomenclature for anatomical entities in the postnatal mouse. It will be used to annotate and integrate different types of data pertinent to anatomy, such as gene expression patterns and phenotype information, which will contribute to an integrated description of biological phenomena in the mouse [11].
## 3. Methods
In this section, we present our Markov logic model for anatomy ontology matching. Our model deviates from [8, 9] in several important ways. First, we model the important hierarchy structure defined by the part_of property, whereas previous works consider only the subclass-superclass hierarchy. On the other hand, our model does not model property correspondences, because there are few property definitions in anatomy ontologies. Another difference lies in computing a priori similarities. For conference datasets, [8, 9] apply a similarity measure to the names of matchable entities; however, a class name in an anatomy ontology is a meaningless signature such as "NCI_C12877." Therefore, we apply a similarity measure to the labels of classes.

We compute an alignment for anatomy ontologies in the following three steps. First, we compute an a priori similarity based on the Levenshtein distance between the labels of two classes from different ontologies and apply a threshold to generate candidate matches. Then, we convert the representation of the input ontologies to first-order logic predicates and define a set of formulas as the matching strategy. Finally, we execute MAP inference in the generated Markov network as the alignment process and output the optimal alignment. Our matching system architecture based on Markov logic networks is illustrated in Figure 1.

Figure 1: Matching system architecture.
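As a concrete illustration of the first (pruning) step, the sketch below computes a normalized Levenshtein similarity between class labels and keeps only pairs above the threshold τ. It is a minimal sketch under our own assumptions (label lists held in memory, function names and the second ontology identifier are illustrative); the paper's actual implementation uses the Jena API and the SecondString library.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(l1: str, l2: str) -> float:
    """Normalized similarity in [0, 1], treating '_' as a space, ignoring case."""
    l1, l2 = l1.replace("_", " ").lower(), l2.replace("_", " ").lower()
    return 1.0 - levenshtein(l1, l2) / max(len(l1), len(l2), 1)

def candidates(labels1, labels2, tau=0.8):
    """Generate sim(l1, l2, sigma) ground atoms for label pairs above tau."""
    return [(c1, c2, s)
            for c1, l1 in labels1 for c2, l2 in labels2
            if (s := similarity(l1, l2)) > tau]

pairs = candidates([("NCI_C33854", "Vascular_System")],
                   [("MA_0000014", "vascular system")])
print(pairs)  # [('NCI_C33854', 'MA_0000014', 1.0)]
```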
### 3.1. Markov Logic Networks
Markov logic networks [12] are a statistical relational learning language based on first-order logic and Markov networks. A set of formulas in first-order logic can be seen as a set of hard constraints on the set of possible worlds: if a world violates even one formula, it has zero probability. The basic idea in Markov logic is to soften these constraints: when a world violates one formula it is less probable, but not impossible. The fewer formulas a world violates, the more probable it is. Each formula has an associated weight that reflects how strong a constraint it is: the higher the weight, the greater the difference in log probability between a world that satisfies the formula and one that does not, other things being equal.

Definition 1. A Markov logic network $L$ is a set of pairs $(F_i, w_i)$, where $F_i$ is a formula in first-order logic and $w_i$ is a real number. Together with a finite set of constants $C = \{c_1, c_2, \ldots, c_{|C|}\}$, it defines a Markov network $M_{L,C}$ as follows:

(1) $M_{L,C}$ contains one binary node for each possible grounding of each predicate appearing in $L$. The value of the node is 1 if the ground atom is true and 0 otherwise.

(2) $M_{L,C}$ contains one feature for each possible grounding of each formula $F_i$ in $L$. The value of this feature is 1 if the ground formula is true and 0 otherwise. The weight of the feature is the $w_i$ associated with $F_i$ in $L$.

An MLN can be viewed as a template for constructing Markov networks. Given different sets of constants, it will produce different networks, but all will have certain regularities in structure and parameters, given by the MLN (e.g., all groundings of the same formula will have the same weight). We call each of these networks a ground Markov network to distinguish it from the first-order MLN. From Definition 1, the probability distribution over possible worlds $x$ specified by the ground Markov network $M_{L,C}$ is given by

$$P(X = x) = \frac{1}{Z} \exp\left(\sum_i w_i\, n_i(x)\right) = \frac{1}{Z} \prod_i \phi_i\left(x_{\{i\}}\right)^{n_i(x)}, \tag{1}$$

where $n_i(x)$ is the number of true groundings of $F_i$ in $x$, $x_{\{i\}}$ is the state (truth values) of the atoms appearing in $F_i$, and $\phi_i(x_{\{i\}}) = e^{w_i}$.

In the context of ontology matching, possible worlds correspond to possible alignments, and the goal is to determine the most probable alignment given the evidence. Markov logic provides an excellent framework for ontology matching, as it captures both hard logical axioms and soft uncertain statements about potential correspondences between ontological entities.
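To make equation (1) concrete, the toy sketch below (our own example, not from the paper) enumerates all possible worlds over two ground atoms, scores each world by $\sum_i w_i n_i(x)$, and normalizes by $Z$:

```python
import itertools
import math

# One soft ground formula: map_a => map_b, with a hypothetical weight of 1.5
W = 1.5

def n_true(world):
    """Number of true groundings of the formula in this world (0 or 1 here)."""
    map_a, map_b = world
    return int((not map_a) or map_b)  # truth value of the implication

worlds = list(itertools.product([False, True], repeat=2))
scores = [math.exp(W * n_true(w)) for w in worlds]
Z = sum(scores)
for world, s in zip(worlds, scores):
    print(world, f"P = {s / Z:.3f}")
# The one world violating the implication (map_a=True, map_b=False) gets
# probability e^0 / Z, lower than the e^1.5 / Z of the satisfying worlds.
```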
### 3.2. Ontology Representation
An ontology specifies a conceptualization of a domain in terms of classes and properties and consists of a set of axioms. Matching is the process of finding relationships or correspondences between entities from different ontologies. An alignment is a set of correspondences. A correspondence is a triple $\langle e, e', r \rangle$ asserting that the relation $r$ holds between the ontology entities $e$ and $e'$, where $e$ is an entity from ontology $O$ and $e'$ is an entity from ontology $O'$ [4]. This generic form captures a wide range of correspondences by varying what is admissible as a matchable element and as a semantic relation, for example, equivalence (=) or more general (⊒). In the following, we are only interested in equivalence correspondences between classes across anatomy ontologies.

The two input ontologies are described in OWL (Web Ontology Language). Classes are concepts organized in a subclass-superclass hierarchy with multiple inheritance. The properties is_a and part_of describe the subclass and part-whole relationships between two classes. The property disjointWith describes a relationship between two classes that is interpreted as the emptiness of the intersection of their interpretations. For example, in OWL we can say that Plant and Animal are disjoint classes: no individual can be both a plant and an animal (which would have the unfortunate consequence of making SlimeMold an empty class). SaltwaterFish might be the intersection of Fish and the class SeaDwellers. Figure 2 depicts fragments of the human and mouse anatomy ontologies.
Figure 2: Example ontology fragments from the human anatomy ontology.

We introduce a set of predicates to model the structure of the ontologies to be matched; the defined predicates are shown in Table 1. We use the predicate class_i to represent a class from ontology O_i; for example, class_1("NCI_C33854") states that "NCI_C33854" is a class from ontology O_1. We use the predicates sub_i and part_i to model the class hierarchy in ontology O_i, for example, sub_1("NCI_C33854", "NCI_C25762") and part_1("NCI_C33854", "NCI_C12686"). The predicate dis_i models the disjointness relationship between two classes, for example, dis_1("NCI_C21599", "NCI_C25444"). The ground atom label_1("NCI_C33854", "Vascular_System") states that class "NCI_C33854" has the label "Vascular_System." We also propose a predicate sim to represent the similarity between the labels of two classes from different ontologies, for example, sim("Vascular_Endothelium", "blood vessel endothelium", σ), where σ is a real number. If we apply a similarity measure based on the Levenshtein distance [13], σ("Vascular_Endothelium", "blood vessel endothelium") equals 0.54. The application of a threshold τ is a standard technique in ontology matching: we only generate ground atoms of sim for those pairs of labels whose similarity is greater than τ, and correspondences with a similarity less than τ are deemed incorrect.

Table 1: Core predicates for anatomical ontology matching.

| Type | Predicate | Description |
|---|---|---|
| Observed | class_i(c) | c is a class from ontology O_i, i ∈ {1, 2} |
| Observed | label_i(c, l) | Class c has a label l |
| Observed | sub_i(a, b) | a is a subclass of b |
| Observed | part_i(a, b) | a is a part of b |
| Observed | dis_i(a, b) | a is disjoint with b |
| Observed | sim(l1, l2, σ) | Labels l1 and l2 are similar with similarity σ |
| Hidden | map(c1, c2) | Class c1 from O_1 corresponds to class c2 from O_2 |

We differentiate between two types of predicates: hidden and observed. The ground atoms of observed predicates are seen and describe the knowledge encoded in the ontologies. The ground atoms of hidden predicates are not seen and have to be predicted using MAP inference. We use the hidden predicate map to model the sought-after class correspondences.

We use the following notation conventions in Table 1 and throughout the rest of this paper:

(1) All entities from ontology O_1 have a subscript "1"; all entities from ontology O_2 have a subscript "2."
(2) Lowercase a, b, and c, with or without a subscript, denote a class.
(3) Lowercase l, with or without a subscript, denotes a label.
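A minimal way to hold these ground atoms in memory (our own representation, not the paper's) is a small container of typed facts; a matcher then only ever consumes these observed relations plus the hidden map atoms it must predict:

```python
from dataclasses import dataclass, field

@dataclass
class OntologyFacts:
    """Observed ground atoms for one ontology O_i (cf. Table 1)."""
    classes: set = field(default_factory=set)   # class_i(c)
    labels: dict = field(default_factory=dict)  # label_i(c, l)
    sub: set = field(default_factory=set)       # sub_i(a, b)
    part: set = field(default_factory=set)      # part_i(a, b)
    dis: set = field(default_factory=set)       # dis_i(a, b)

o1 = OntologyFacts()
o1.classes.add("NCI_C33854")
o1.labels["NCI_C33854"] = "Vascular_System"
o1.sub.add(("NCI_C33854", "NCI_C25762"))    # subclass-of edge
o1.part.add(("NCI_C33854", "NCI_C12686"))   # part-of edge
```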
### 3.3. Matching Formulas
With the predicates defined, we can now incorporate our strategies for the task using weighted first-order logic formulas. Markov logic combines both hard and soft first-order formulas. This allows the inclusion of both known logical statements and uncertain formulas modeling potential correspondences and structural properties of the ontologies. Joint inference is then performed over two or more interdependent hidden predicates.

We introduce five types of constraints to model different matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. A formula without a weight is a hard constraint and holds in every computed alignment. A formula with a weight is a soft constraint, and the weight reflects how strong a constraint it is. For simplicity, we will from now on assume that the predicate class_i is implicitly added as a precondition to every formula for each class appearing in the formula.

A Priori Confidences. We compute an initial a priori similarity σ for each pair of labels of two classes across ontologies based on the Levenshtein distance [13] and use a cut-off threshold τ to produce matching candidates, above which ground atoms of the predicate sim are added to the ground Markov network. The higher the similarity between the labels of two classes, the more likely it is that the correspondence between them is correct. We introduce the following formula to model the a priori confidence of a correspondence:

$$\sigma: \quad \mathit{label}_1(c_1, l_1) \wedge \mathit{label}_2(c_2, l_2) \wedge \mathit{sim}(l_1, l_2, \sigma) \Rightarrow \mathit{map}(c_1, c_2). \tag{2}$$

Here, we use the similarity σ between the labels as the formula weight, since the confidence that a correspondence is correct depends on how similar the labels are.

Cardinality Constraints. In general, alignments can be of various cardinalities: 1:1 (one to one), 1:m (one to many), n:1 (many to one), and m:n (many to many). In this work, we assume the one-to-one constraint. We use two hard formulas stating that one concept from ontology O_1 can be equivalent to at most one concept in ontology O_2, and vice versa, which ensures the consistency of a computed alignment:

$$\mathit{map}(a_1, a_2) \wedge \mathit{map}(a_1, b_2) \Rightarrow a_2 = b_2, \qquad \mathit{map}(a_1, a_2) \wedge \mathit{map}(b_1, a_2) \Rightarrow a_1 = b_1. \tag{3}$$

Coherence Constraints. Coherence constraints reduce incoherence during the alignment process. These constraint formulas are added as hard formulas to ensure that they are satisfied in the computed result alignment. The following formulas state that two disjoint classes of ontology O_1 will not simultaneously match two classes of ontology O_2 that stand in a subclass relationship, and vice versa:

$$\mathit{sub}_1(a_1, b_1) \wedge \mathit{dis}_2(a_2, b_2) \Rightarrow \neg\left(\mathit{map}(a_1, a_2) \wedge \mathit{map}(b_1, b_2)\right), \qquad \mathit{dis}_1(a_1, b_1) \wedge \mathit{sub}_2(a_2, b_2) \Rightarrow \neg\left(\mathit{map}(a_1, a_2) \wedge \mathit{map}(b_1, b_2)\right). \tag{4}$$

Stability Constraints. The idea of stability constraints is that an alignment should not introduce new structural knowledge. The formulas for stability constraints are soft formulas associated with weights reflecting how strong the constraints are: when an alignment violates a soft formula, it is less probable, but not impossible. Formulas (5) and (6) decrease the probability of alignments that map concept a_1 to a_2 and b_1 to b_2 if a_1 is a subclass of b_1 but a_2 is not a subclass of b_2, and vice versa:

$$w_1: \quad \mathit{sub}_1(a_1, b_1) \wedge \neg\mathit{sub}_2(a_2, b_2) \Rightarrow \mathit{map}(a_1, a_2) \wedge \mathit{map}(b_1, b_2), \tag{5}$$

$$w_2: \quad \neg\mathit{sub}_1(a_1, b_1) \wedge \mathit{sub}_2(a_2, b_2) \Rightarrow \mathit{map}(a_1, a_2) \wedge \mathit{map}(b_1, b_2). \tag{6}$$

Here, $w_1$ and $w_2$ are negative real-valued weights, rendering alignments that satisfy the formulas possible but less likely.

Match Propagation. Generally speaking, if two concepts a_1 and a_2 match, and there is a relationship r between a_1 and b_1 in O_1 and a matching relationship r′ between a_2 and b_2 in O_2, then we can increase the probability of a match between b_1 and b_2. This is accomplished by adding the following formulas to the model. Formula (7) states that if two classes match, it is more likely that their parent classes match too. Formula (8) states that if the parts of two classes match, it is more likely that the whole classes match too:

$$w_3: \quad \mathit{sub}_1(a_1, b_1) \wedge \mathit{sub}_2(a_2, b_2) \wedge \mathit{map}(a_1, a_2) \Rightarrow \mathit{map}(b_1, b_2), \tag{7}$$

$$w_4: \quad \mathit{part}_1(a_1, b_1) \wedge \mathit{part}_2(a_2, b_2) \wedge \mathit{map}(a_1, a_2) \Rightarrow \mathit{map}(b_1, b_2). \tag{8}$$

Here, $w_3$ and $w_4$ are positive real-valued weights, propagating the alignment across the structure of the ontologies. These formulas capture the influence of the ontology structure and the semantics of the ontology relations and increase the probability of matches between entities that are neighbors of already-matched entities in the two ontologies. They help to identify correct correspondences and enable the derivation of correspondences that would otherwise be missed.
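As an illustration of how the hard formulas (3) and (4) prune candidate alignments, the sketch below (our own helper functions, not the paper's implementation, which grounds these formulas inside the MLN solver) rejects any alignment that violates the one-to-one cardinality or coherence constraints:

```python
def violates_cardinality(alignment):
    """Hard formulas (3): each class may appear in at most one correspondence."""
    left = [a for a, _ in alignment]
    right = [b for _, b in alignment]
    return len(set(left)) < len(left) or len(set(right)) < len(right)

def violates_coherence(alignment, sub1, dis2, dis1, sub2):
    """Hard formulas (4): disjoint classes must not map onto a subclass pair."""
    pairs = set(alignment)
    for (a1, b1) in sub1:
        for (a2, b2) in dis2:
            if (a1, a2) in pairs and (b1, b2) in pairs:
                return True
    for (a1, b1) in dis1:
        for (a2, b2) in sub2:
            if (a1, a2) in pairs and (b1, b2) in pairs:
                return True
    return False

# Example: "x1 subclass-of y1" in O1 and "x2 disjointWith y2" in O2;
# mapping x1->x2 together with y1->y2 is incoherent and must be discarded.
align = [("x1", "x2"), ("y1", "y2")]
print(violates_cardinality(align))                                             # False
print(violates_coherence(align, {("x1", "y1")}, {("x2", "y2")}, set(), set())) # True
```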
### 3.4. MAP Inference as Alignment Process
After generating all ground atoms of the observed predicates introduced in the previous section, we can select an optimal alignment from the incoming hypotheses using MAP inference in the Markov logic network generated by the matching formulas. Given two ontologies, we compute the set of ground atoms of the hidden predicates that maximizes the probability given both the ground atoms of the observed predicates and the ground formulas. Let $x$ be the set of ground atoms of observed predicates and let $y$ be the set of ground atoms of the hidden predicate map with respect to the given ontologies; we compute

$$\hat{y} = \arg\max_{y} P(y \mid x) = \arg\max_{y} \sum_i w_i\, n_i(x, y), \tag{9}$$

where $w_i$ is the weight of formula $F_i$ and $n_i(x, y)$ is the number of true groundings of $F_i$ in the world $(x, y)$.
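For intuition, MAP inference here simply searches for the alignment with the highest total weighted score; the paper applies theBeast with an ILP solver, but on a toy problem we can enumerate alignments exhaustively. The sketch below (all names and weights are our own illustrative choices) scores each candidate alignment with the a priori confidences of formula (2) and the match-propagation reward of formula (7), and keeps the best one satisfying the one-to-one constraint (3):

```python
from itertools import chain, combinations

# Candidate map atoms with hypothetical a priori weights sigma (formula (2))
CAND = {("a1", "a2"): 0.9, ("b1", "b2"): 0.55, ("b1", "c2"): 0.6}
SUB1, SUB2 = {("a1", "b1")}, {("a2", "b2")}  # subclass edges in O1 and O2
W_MP = 0.5  # hypothetical weight of match propagation, formula (7)

def score(alignment):
    s = sum(CAND[p] for p in alignment)
    # formula (7): reward alignments whose matched classes have matched parents
    s += W_MP * sum(1 for (a1, b1) in SUB1 for (a2, b2) in SUB2
                    if (a1, a2) in alignment and (b1, b2) in alignment)
    return s

def one_to_one(alignment):
    ls, rs = [a for a, _ in alignment], [b for _, b in alignment]
    return len(set(ls)) == len(ls) and len(set(rs)) == len(rs)

subsets = chain.from_iterable(combinations(CAND, k) for k in range(len(CAND) + 1))
best = max((set(s) for s in subsets if one_to_one(set(s))), key=score)
print(best)  # {('a1','a2'), ('b1','b2')}: propagation outweighs b1->c2's higher sigma
```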
## 4. Results and Discussion
### 4.1. Experimental Setup
We conducted experiments implemented in Java using the Jena API (jena.apache.org) and the SecondString library [14] to create ground atoms and to compute the similarity between labels based on the Levenshtein distance. We then applied theBeast [15] for MAP inference in Markov logic networks, using integer linear programming (ILP) as the base solver. theBeast is a software tool that provides means of inference and learning for Markov logic networks. Experiments were conducted on Fedora 7 with an Intel i5 [email protected] GHz and 4 GB memory.

We evaluated our model for anatomy ontology matching with thresholds on the similarity σ ranging from 0.65 to 0.95. The weights of the soft formulas were determined manually. Although the weights could be learned with an online learner, being able to set qualitative weights manually is crucial, as training data is often unavailable; moreover, learning weights from the reference alignment as training data would lead to results that overfit the data. We set the weights for the stability constraints dealing with the class hierarchy to −0.01 and the weight for match propagation to 0.05, on the consideration that match propagation and stability constraints are reciprocal ideas and hence of roughly equivalent importance.

We evaluated five different settings (summarized in the configuration sketch after this list):

prior: the formulation includes only a priori confidence.
ca: the formulation includes a priori confidence and cardinality constraints.
ca + co: the formulation includes a priori confidence, cardinality constraints, and coherence constraints.
ca + co + st: the formulation includes a priori confidence, cardinality constraints, coherence constraints, and stability constraints.
ca + co + st + mp: the formulation includes a priori confidence, cardinality constraints, coherence constraints, stability constraints, and match propagation.
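A compact way to express these five settings (our own illustrative encoding, with the two soft weights taken from the text above) is a table of enabled formula groups:

```python
# Formula groups: pc = a priori confidence, ca = cardinality, co = coherence,
# st = stability (soft, weight -0.01), mp = match propagation (soft, weight 0.05)
SOFT_WEIGHTS = {"st": -0.01, "mp": 0.05}

SETTINGS = {
    "prior":             {"pc"},
    "ca":                {"pc", "ca"},
    "ca + co":           {"pc", "ca", "co"},
    "ca + co + st":      {"pc", "ca", "co", "st"},
    "ca + co + st + mp": {"pc", "ca", "co", "st", "mp"},
}

for name, groups in SETTINGS.items():
    weights = {g: SOFT_WEIGHTS[g] for g in groups if g in SOFT_WEIGHTS}
    print(f"{name:18s} groups={sorted(groups)} soft_weights={weights}")
```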
### 4.2. Experimental Results
We use precision, recall, and F-measure to measure the performance of the matching results. Given the reference alignment, we compute the precision as the number of correct correspondences over the total number of correspondences in the computed alignment, and the recall as the number of correct correspondences over the number of correspondences in the reference alignment. Then, we compute the F-measure as

$$F\text{-}measure = \frac{2 \times precision \times recall}{precision + recall}. \tag{10}$$
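Expressed as code, evaluation against the reference alignment reduces to set operations (a minimal sketch; the function name is ours):

```python
def evaluate(computed: set, reference: set):
    """Precision, recall, and F-measure of a computed alignment (equation (10))."""
    correct = len(computed & reference)
    precision = correct / len(computed) if computed else 0.0
    recall = correct / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)) if correct else 0.0
    return precision, recall, f

# Toy example: 2 of 3 computed correspondences appear among 4 reference ones
print(evaluate({("a", "x"), ("b", "y"), ("c", "z")},
               {("a", "x"), ("b", "y"), ("d", "w"), ("e", "v")}))
# (0.667, 0.5, 0.571)
```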
Figure 3 compares the precision, recall, and F-measure scores of the generated alignments against the reference alignment for thresholds ranging from 0.65 to 0.95 under the different settings. From Figure 3, we can see that our method achieves the highest precision in the setting ca + co + st + mp, while achieving the highest recall in the setting prior. We obtain a significant improvement in F-measure as more matching formulas are added to the model. We also note that there is no obvious difference between ca and ca + co; this is because only the human anatomy ontology defines disjointWith relationships. However, we keep the coherence constraints in our model, since they could further improve the quality of the results if disjointWith relationships were added to the mouse anatomy ontology in the future. Overall, the precision increases as the threshold grows, while the recall decreases slightly for higher thresholds in the various settings. The margins between the different settings become smaller for higher thresholds than for lower ones, because only a small number of incorrect correspondences remain among the candidates when the threshold exceeds 0.8. We achieve the maximum F-measure score at threshold 0.8.
Figure 3: Results for thresholds ranging from 0.65 to 0.95. (a) Precision. (b) Recall. (c) F-measure.

We manually sampled several false positive and false negative correspondences for analysis. We found that false positive correspondences were mainly caused by labels with similar spellings. For example, the false correspondence ("NCI_C33592", "MA_0002058") has the similar labels "Spiral_Artery" and "sural artery"; furthermore, the superclass of "NCI_C33592" ("NCI_C12372") and the superclass of "MA_0002058" happen to be matched. False negative correspondences, in contrast, were mainly caused by dissimilar labels, such as "Tarsal_Plate" for "NCI_C33736" and "eyelid tarsus" for "MA_0000270." Moreover, "NCI_C33736" has no subclass or subpart, so we cannot find the correspondence through formula (7) or (8).

Figure 4 compares the performance of our method with the participating systems of OAEI 2014 that also produce coherent alignments in the anatomy track. From Figure 4, we can see that our method (MLN-OM) outperforms most of the systems and is comparable with the best system (LogMapLite). Notice that we use a simple similarity measure based on the Levenshtein distance in the pruning phase and focus on the Markov logic model for ontology matching, while LogMapLite uses an external lexicon (e.g., WordNet or the UMLS lexicon) when computing an initial set of equivalence anchor mappings; such a lexicon could easily be adopted by our method in the pruning phase to further improve the quality of the matching results.

Figure 4: Comparing with results of OAEI 2014.
## 5. Conclusions
In this paper, we propose a Markov logic model for anatomy ontology matching. The model combines five types of matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. Experimental results demonstrate the effectiveness of the proposed approach.
---
*Source: 1010946-2016-06-13.xml* | 1010946-2016-06-13_1010946-2016-06-13.md | 42,838 | Anatomy Ontology Matching Using Markov Logic Networks | Chunhua Li; Pengpeng Zhao; Jian Wu; Zhiming Cui | Scientifica
(2016) | Biological Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1010946 | 1010946-2016-06-13.xml | ---
## Abstract
The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a kind of solutions to find semantic correspondences between entities of different ontologies. Markov logic networks which unify probabilistic graphical model and first-order logic provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of proposed approach in terms of the quality of result alignment.
---
## Body
## 1. Introduction
Ontological techniques have been widely applied to medical and biological research [1]. The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. Such ontologies of anatomy and development facilitate the organization of functional data pertaining to a species. To compare such data between species, we need to establish relationships between ontologies describing different species [2]. For example, all gene expression patterns described in ZFIN (the Zebrafish Model Organism Database) are annotated using the zebrafish anatomy ontology. A list of such ontologies is kept on the Open Biomedical Ontologies (OBO) website [3].Heterogeneity is an inherent characteristic of ontologies developed by different parties for the same (or similar) domains. Semantic heterogeneity has become one of the main obstacles to sharing and interoperation among heterogeneous ontologies. Ontology matching, which finds semantic correspondences between entities of different ontologies, is a kind of solutions to the semantic heterogeneity problem [4]. The matching techniques can be classified in a first level as element-level techniques and structure-level techniques. Element-level techniques obtain the correspondences by considering the entities in the ontologies in isolation, therefore ignoring that they are part of the structure of the ontology. Structure-level techniques obtain the correspondences by analyzing how the entities fit in the structure of the ontology [5].Recently, probabilistic approaches to ontology matching which compare ontology entities in a global way have produced competitive matching result [6–9]. OMEN [6] was the first approach that uses a probabilistic representation of ontology mapping rules and probabilistic inference to improve the quality of existing ontology mappings. It uses a Bayesian net to represent the influences between potential concept mappings across ontologies. Based on OMEN, Albagli et al. [7] introduced a novel probabilistic scheme iMatch for ontology matching by using Markov networks rather than Bayesian networks with several improvements. The iMatch better supports the noncausal nature of the dependencies for using undirected networks. Niepert et al. [8] presented a probabilistic-logical framework for ontology matching based on Markov logic. Markov logic has several advantages over existing matching approach and provides a unified syntax that supports different matching strategies in the same language. Li et al. [9] improve the Markov logic model with match propagation strategy and user feedback. References [8, 9] have shown the effectiveness of Markov logic model on conference datasets.In this paper, we consider the Markov logic based framework for anatomy ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies.
## 2. Materials
To evaluate the performance of our proposed approach, we conduct an experimental study using the adult mouse anatomy (2744 classes) and the NCI Thesaurus (3304 classes) describing the human anatomy, which are large and carefully designed ontologies. They also differ from other ontologies with respect to the use of specific annotations and roles, for example, the extensive use of thepart_of relation. The two resources are part of the Open Biomedical Ontologies (OBO) [3]. We download the owl version of the two ontologies and the reference alignment (with 1516 correspondences) from OAEI anatomy track [10].NCI Thesaurus published by the National Cancer Institute (NCI) contains the working terminology of many data systems in use at NCI. Its scope is broad as it covers vocabulary for clinical care as well as translational and basic research. Among its 37,386 concepts, 4,410 (11.8%) correspond to anatomical entities (anatomic structure, system, or substance hierarchy). Adult mouse anatomy ontology has been developed as part of the mouse Gene Expression Database (GXD) project to provide standardized nomenclature for anatomical entities in the postnatal mouse. It will be used to annotate and integrate different types of data pertinent to anatomy, such as gene expression patterns and phenotype information, which will contribute to an integrated description of biological phenomena in the mouse [11].
## 3. Methods
In this section, we present our Markov logic model for anatomy ontology matching. Our model deviates from [8, 9] in several important ways. First, we model the important hierarchy structure defined by the property of part_of, while previous works consider only subclass-superclass hierarchy. In contrast, our model does not model property correspondences for there are few properties definitions in anatomy ontologies. Another difference is in computing a priori similarities. For conference data sets, [8, 9] apply a similarity measure on the name of matchable entities. However, the class name in anatomy ontology is meaningless signature such as “NCI_C12877.” Therefore, we apply a similarity measure on the labels of classes.We compute an alignment for anatomy ontologies through the following three steps. First, we compute a priori similarity based on Levenshtein distance between labels of two classes from different ontologies and apply a threshold to generate candidate matches. Then, we convert the representation of input ontologies to first-order logic predicate and define a set of formulas as matching strategy. Finally, we execute MAP inference in generated Markov networks as alignment process and output the optimal alignment. Our matching system architecture based on Markov logic networks is illustrated in Figure1.Figure 1
Matching system architecture.
### 3.1. Markov Logic Networks
Markov logic networks [12] is a statistical relational learning language based on first-order logic and Markov networks. A set of formulas in first-order logic can be seen as a set of hard constraints on the set of possible worlds: if a world violates even one formula, it has zero probability. The basic idea in Markov logic is to soften these constraints: when a world violates one formula it is less probable, but not impossible. The fewer formulas a world violates, the more probable it is. Each formula has an associated weight that reflects how strong a constraint it is: the higher the weight, the greater the difference in log probability between a world that satisfies the formula and one that does not, other things being equal.Definition 1.
A Markov logic networkL is a set of pairs (
F
i
,
w
i
), where F
i is a formula in first-order logic and w
i is a real number. Together with a finite set of constants C
=
{
c
1
,
c
2
,
…
,
c
|
C
|
}, it defines a Markov network M
L
,
C as follows:(1)
M
L
,
C contains one binary node for each possible grounding of each predicate appearing in L. The value of the node is 1 if the ground atom is true and 0 otherwise.
(2)
M
L
,
C contains one feature for each possible grounding of each formula F
i in L. The value of this feature is 1 if the ground formula is true and 0 otherwise. The weight of the feature is w
i associated with F
i in L.An MLN can be viewed as a template for constructing Markov networks. Given different sets of constants, it will produce different networks, but all will have certain regularities in structure and parameters, given by the MLN (e.g., all groundings of the same formula will have the same weight). We call each of these networks a ground Markov network to distinguish it from the first-order MLN. From Definition1, the probability distribution over possible worlds x specified by the ground Markov network M
L
,
C is given by(1)
P
X
=
x
=
1
Z
exp
∑
i
ω
i
n
i
x
=
1
Z
∏
i
ϕ
i
x
i
n
i
x
,where n
i
(
x
) is the number of true groundings of F
i in x, x
{
i
} is the state (true values) of the atoms appearing in F
i, and ϕ
i
(
x
{
i
}
)
=
e
ω
i.In the context of ontology matching, possible worlds correspond to possible alignment and the goal is to determine the most probable alignment given the evidence. It was shown that Markov logic provides an excellent framework for ontology matching as it captures both hard logical axioms and soft uncertain statements about potential correspondences between ontological entities.
### 3.2. Ontology Representation
An ontology specifies a conceptualization of a domain in terms of classes and properties and consists of a set of axioms. Matching is the process of finding relationships or correspondences between entities from different ontologies. An alignment is a set of correspondences. A correspondence is a triple $\langle e, e', r \rangle$ asserting that the relation $r$ holds between the ontology entities $e$ and $e'$, where $e$ is an entity from ontology $O$ and $e'$ is an entity from ontology $O'$ [4]. This generic form captures a wide range of correspondences by varying what is admissible as a matchable element and as a semantic relation, for example, equivalence (=) or more general (⊒). In the following we are only interested in equivalence correspondences between classes across anatomy ontologies.

The two input ontologies are described in OWL (Web Ontology Language). Classes are concepts organized in a subclass-superclass hierarchy with multiple inheritance. The properties is_a and part_of describe the subclass and the part-whole relationships between two classes, respectively. The property disjointWith describes a relationship between two classes that is interpreted as the emptiness of the intersection of their interpretations. For example, in OWL we can say that Plant and Animal are disjoint classes: no individual can be both a plant and an animal (which would have the unfortunate consequence of making SlimeMold an empty class). SaltwaterFish might be the intersection of Fish and the class SeaDwellers. Figure 2 depicts fragments of the human and mouse anatomy ontologies.

Figure 2: Example ontology fragments from the human anatomy ontology.

We introduce a set of predicates to model the structure of the ontologies to be matched; the defined predicates are shown in Table 1. We use the predicate class_i to represent a class from ontology O_i; for example, class_1("NCI_C33854") states that "NCI_C33854" is a class from ontology O_1. We use the predicates sub_i and part_i to model the class hierarchy in ontology O_i, for example, sub_1("NCI_C33854", "NCI_C25762") and part_1("NCI_C33854", "NCI_C12686"). The predicate dis_i models the disjointness relationship between two classes, for example, dis_1("NCI_C21599", "NCI_C25444"). The predicate label_1("NCI_C33854", "Vascular_System") represents the class "NCI_C33854" having the label "Vascular_System." We also propose a predicate sim to represent the similarity between the labels of two classes from different ontologies, for example, sim("Vascular_Endothelium", "blood vessel endothelium", σ), where σ is a real number. If we apply a similarity measure based on the Levenshtein distance [13], σ("Vascular_Endothelium", "blood vessel endothelium") equals 0.54. The application of a threshold τ is a standard technique in ontology matching: we only generate ground atoms of sim for those pairs of labels whose similarity is greater than τ; correspondences with a similarity below τ are deemed incorrect.

Table 1: Core predicates for anatomical ontology matching.

| Predicate | Description |
| --- | --- |
| *Observed* | |
| class_i(c) | c is a class from ontology O_i, i ∈ {1, 2} |
| label_i(c, l) | Class c has the label l |
| sub_i(a, b) | a is a subclass of b |
| part_i(a, b) | a is a part of b |
| dis_i(a, b) | a is disjoint with b |
| sim(l_1, l_2, σ) | Labels l_1 and l_2 are similar with similarity σ |
| *Hidden* | |
| map(c_1, c_2) | Class c_1 from O_1 corresponds to class c_2 from O_2 |

We differentiate between two types of predicates: hidden and observed. The ground atoms of observed predicates are seen and describe the knowledge encoded in the ontologies. The ground atoms of hidden predicates are not seen and have to be predicted using MAP inference. We use the hidden predicate map to model the sought-after class correspondences.

We use the following notation conventions in Table 1 and throughout the rest of this paper:
(1) All entities from ontology O_1 have the subscript "1"; all entities from ontology O_2 have the subscript "2."
(2) Lowercase a, b, and c, with or without a subscript, denote classes.
(3) Lowercase l, with or without a subscript, denotes a label.
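The observed ground atoms can be generated directly from the two OWL files. The sketch below (ours) shows only the generation of the sim atoms, using a plain dynamic-programming Levenshtein distance normalized by the longer label; the paper computes similarities with the SecondString library, so the exact normalization and label preprocessing here are assumptions.

```python
def levenshtein(s, t):
    # Classic dynamic-programming edit distance with a rolling row.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (cs != ct)))   # substitution
        prev = curr
    return prev[-1]

def similarity(l1, l2):
    # Normalized to [0, 1]; lowercasing and underscore handling are our
    # assumptions about label preprocessing, not the paper's exact recipe.
    l1, l2 = l1.lower().replace("_", " "), l2.lower().replace("_", " ")
    return 1.0 - levenshtein(l1, l2) / max(len(l1), len(l2))

TAU = 0.8  # cut-off threshold; the paper evaluates 0.65-0.95

def sim_atoms(labels1, labels2):
    # labels1/labels2: dicts mapping class id -> label, for example
    # {"NCI_C33854": "Vascular_System"}; yields the ground atoms of sim.
    for c1, l1 in labels1.items():
        for c2, l2 in labels2.items():
            sigma = similarity(l1, l2)
            if sigma > TAU:
                yield (c1, c2, round(sigma, 2))
```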
### 3.3. Matching Formulas
With the predicates defined, we can now incorporate our matching strategies into the model as weighted first-order logic formulas. Markov logic combines both hard and soft first-order formulas. This allows the inclusion of both known logical statements and uncertain formulas modeling potential correspondences and structural properties of the ontologies, and it permits joint inference over two or more interdependent hidden predicates.

We introduce five types of constraints to model different matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. A formula without a weight is a hard constraint and holds in every computed alignment. A formula with a weight is a soft constraint, and the weight reflects how strong a constraint it is. For simplicity, we will from now on assume that the predicate class_i is implicitly added as a precondition to every formula for each class appearing in the formula.

A Priori Confidences. We compute an initial a priori similarity σ for each pair of labels of two classes across the ontologies based on the Levenshtein distance [13] and use a cut-off threshold τ to produce matching candidates, above which ground atoms of the predicate sim are added to the ground Markov network. The higher the similarity between the labels of two classes, the more likely it is that the correspondence between the two classes is correct. We introduce the following formula to model the a priori confidence of a correspondence:

$$\sigma:\quad label_1(c_1, l_1) \wedge label_2(c_2, l_2) \wedge sim(l_1, l_2, \sigma) \Rightarrow map(c_1, c_2). \tag{2}$$

Here, we use the similarity σ between labels as the formula weight, since the confidence that a correspondence is correct depends on how similar the labels are.

Cardinality Constraints. In general, alignments can be of various cardinalities: 1 : 1 (one to one), 1 : m (one to many), n : 1 (many to one), and m : n (many to many). In this work, we assume the one-to-one constraint. We use two hard formulas stating that one concept from ontology O_1 can be equivalent to at most one concept from ontology O_2 and vice versa, which ensures the consistency of a computed alignment:

$$map(a_1, a_2) \wedge map(a_1, b_2) \Rightarrow a_2 = b_2, \\ map(a_1, a_2) \wedge map(b_1, a_2) \Rightarrow a_1 = b_1. \tag{3}$$

Coherence Constraints. Coherence constraints reduce incoherence during the alignment process. These formulas are added as hard formulas to ensure their satisfaction in the computed alignment. The following formulas state that two disjoint classes of ontology O_1 will not be matched to two classes of ontology O_2 that stand in a subclass relationship, and vice versa:

$$sub_1(a_1, b_1) \wedge dis_2(a_2, b_2) \Rightarrow \neg\left(map(a_1, a_2) \wedge map(b_1, b_2)\right), \\ dis_1(a_1, b_1) \wedge sub_2(a_2, b_2) \Rightarrow \neg\left(map(a_1, a_2) \wedge map(b_1, b_2)\right). \tag{4}$$

Stability Constraints. The idea of stability constraints is that an alignment should not introduce new structural knowledge. The formulas for stability constraints are soft formulas associated with weights reflecting how strong the constraints are; when an alignment violates one soft formula it is less probable, but not impossible. Formulas (5) and (6) decrease the probability of alignments that map concept a_1 to a_2 and b_1 to b_2 if a_1 is a subclass of b_1 but a_2 is not a subclass of b_2, and vice versa:

$$\omega_1:\quad sub_1(a_1, b_1) \wedge \neg sub_2(a_2, b_2) \Rightarrow map(a_1, a_2) \wedge map(b_1, b_2), \tag{5}$$

$$\omega_2:\quad \neg sub_1(a_1, b_1) \wedge sub_2(a_2, b_2) \Rightarrow map(a_1, a_2) \wedge map(b_1, b_2). \tag{6}$$

Here, ω_1 and ω_2 are negative real-valued weights, rendering alignments that satisfy the formulas possible but less likely.

Match Propagation. Generally speaking, if two concepts a_1 and a_2 match, and there is a relationship r between a_1 and b_1 in O_1 and a matching relationship r' between a_2 and b_2 in O_2, then we can increase the probability of a match between b_1 and b_2. This is accomplished by adding the following formulas to the model. Formula (7) states that if two classes match, it is more likely that their parent classes match too. Formula (8) states that if parts of two classes match, it is more likely that the classes match too:

$$\omega_3:\quad sub_1(a_1, b_1) \wedge sub_2(a_2, b_2) \wedge map(a_1, a_2) \Rightarrow map(b_1, b_2), \tag{7}$$

$$\omega_4:\quad part_1(a_1, b_1) \wedge part_2(a_2, b_2) \wedge map(a_1, a_2) \Rightarrow map(b_1, b_2). \tag{8}$$

Here, ω_3 and ω_4 are positive real-valued weights, propagating alignments across the structure of the ontologies. These formulas capture the influence of the ontology structure and the semantics of the ontology relations, increasing the probability of matches between entities that are neighbors of already matched entities in the two ontologies. They help to identify correct correspondences and make it possible to derive correspondences that a purely label-based comparison would miss.
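Written out compactly, the five constraint types above look as follows. We use an Alchemy-style pseudo-syntax (our transcription; theBeast's actual input format differs), with hard formulas terminated by a period and soft formulas carrying a leading weight; the numeric weights are the ones reported in Section 4.1.

```python
# Hard formulas: cardinality (3) and coherence (4) constraints.
hard_formulas = [
    "map(a1, a2) ^ map(a1, b2) => (a2 = b2).",
    "map(a1, a2) ^ map(b1, a2) => (a1 = b1).",
    "sub1(a1, b1) ^ dis2(a2, b2) => !(map(a1, a2) ^ map(b1, b2)).",
    "dis1(a1, b1) ^ sub2(a2, b2) => !(map(a1, a2) ^ map(b1, b2)).",
]

# Soft formulas: a priori confidence (2), stability (5)-(6),
# and match propagation (7)-(8), each paired with its weight.
soft_formulas = [
    ("sigma", "label1(c1, l1) ^ label2(c2, l2) ^ sim(l1, l2, sigma) => map(c1, c2)"),
    (-0.01,   "sub1(a1, b1) ^ !sub2(a2, b2) => map(a1, a2) ^ map(b1, b2)"),
    (-0.01,   "!sub1(a1, b1) ^ sub2(a2, b2) => map(a1, a2) ^ map(b1, b2)"),
    (0.05,    "sub1(a1, b1) ^ sub2(a2, b2) ^ map(a1, a2) => map(b1, b2)"),
    (0.05,    "part1(a1, b1) ^ part2(a2, b2) ^ map(a1, a2) => map(b1, b2)"),
]
```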
### 3.4. MAP Inference as Alignment Process
After generating all ground atoms of the observed predicates introduced in the previous section, we can select an optimal alignment from the incoming hypotheses using MAP inference in the Markov logic network generated by the matching formulas. Given two ontologies, we compute the set of ground atoms of the hidden predicates that maximizes the probability given both the ground atoms of the observed predicates and the ground formulas. Let $x$ be the set of ground atoms of observed predicates and $y$ the set of ground atoms of the hidden predicate map with respect to the given ontologies; we compute

$$\arg\max_y P(y \mid x) = \arg\max_y \sum_i \omega_i n_i(x, y), \tag{9}$$

where $\omega_i$ is the weight of formula $F_i$ and $n_i(x, y)$ is the number of true groundings of $F_i$ in the world $(x, y)$.
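The paper delegates this maximization to theBeast's ILP backend. As a self-contained illustration (ours, feasible only for a handful of candidates), the sketch below enumerates subsets of hypothetical map atoms and scores each alignment that satisfies the hard constraints with equation (9); all class identifiers, similarities, and the assumed sub-edges are made up.

```python
import itertools

# Hypothetical matching candidates (pairs that survived the sim threshold)
# and their label similarities.
candidates = [("c1", "c2"), ("c1", "d2"), ("b1", "b2")]
sigma = {("c1", "c2"): 0.9, ("c1", "d2"): 0.7, ("b1", "b2"): 0.85}
OMEGA_3 = 0.05  # match-propagation weight from Section 4.1

def one_to_one(alignment):
    # Hard cardinality constraints (3): each class is used at most once.
    left = [a for a, _ in alignment]
    right = [b for _, b in alignment]
    return len(set(left)) == len(left) and len(set(right)) == len(right)

def score(alignment):
    # Equation (9): weighted count of true ground formulas. Here only the
    # a priori confidences (2) plus one grounding of formula (7), where we
    # pretend sub1(c1, b1) and sub2(c2, b2) hold.
    s = sum(sigma[pair] for pair in alignment)
    # The implication map(c1, c2) => map(b1, b2) is false only when the
    # premise holds and the conclusion does not.
    if not (("c1", "c2") in alignment and ("b1", "b2") not in alignment):
        s += OMEGA_3
    return s

# Enumerate every subset of candidates that satisfies the hard constraints
# and keep the highest-scoring one: the MAP alignment.
feasible = (set(subset)
            for r in range(len(candidates) + 1)
            for subset in itertools.combinations(candidates, r)
            if one_to_one(subset))
print(sorted(max(feasible, key=score)))  # [('b1', 'b2'), ('c1', 'c2')]
```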
## 4. Results and Discussion
### 4.1. Experimental Setup
We conducted experiments implemented in Java using the Jena API (jena.apache.org) and the SecondString library [14] to create the ground atoms and to compute the similarity between labels based on the Levenshtein distance. We then applied theBeast [15] for MAP inference in Markov logic networks, using an integer linear programming (ILP) solver as the base solver. theBeast is a software tool that provides means of inference and learning for Markov logic networks. Experiments were conducted on Fedora 7 with an Intel i5 CPU and 4 GB of memory.

We evaluated our model for anatomy ontology matching with thresholds on the similarity σ ranging from 0.65 to 0.95. The weights of the soft formulas were determined manually. Although the weights of formulas can be learned with an online learner, being able to set qualitative weights manually is crucial, as training data is often unavailable; further, learning weights from the reference alignment as training data would lead to results that overfit the data. We set the weights of the stability constraints dealing with the class hierarchy to −0.01 and the weight of match propagation to 0.05, on the consideration that match propagation is the counterpart of the stability constraints and hence of roughly equal importance.

We evaluated five different settings (see the sketch after this list):
- prior: the formulation includes only the a priori confidences.
- ca: a priori confidences and cardinality constraints.
- ca + co: a priori confidences, cardinality constraints, and coherence constraints.
- ca + co + st: a priori confidences, cardinality, coherence, and stability constraints.
- ca + co + st + mp: a priori confidences, cardinality, coherence, and stability constraints, plus match propagation.
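Read as code, the ablation is just a family of cumulative constraint sets; the sketch below is our encoding of it, with setting names mirroring the abbreviations above.

```python
# The five evaluated settings as cumulative subsets of constraint types.
CONSTRAINT_TYPES = ["apriori", "cardinality", "coherence",
                    "stability", "propagation"]

SETTINGS = {
    "prior":        CONSTRAINT_TYPES[:1],
    "ca":           CONSTRAINT_TYPES[:2],
    "ca+co":        CONSTRAINT_TYPES[:3],
    "ca+co+st":     CONSTRAINT_TYPES[:4],
    "ca+co+st+mp":  CONSTRAINT_TYPES[:5],
}

for name, active in SETTINGS.items():
    print(f"{name:12s} -> {', '.join(active)}")
```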
### 4.2. Experimental Results
We use precision, recall, and F-measure to measure the performance of the matching results. Given the reference alignment, we compute the precision as the number of correct correspondences over the total number of correspondences in the computed alignment, and the recall as the number of correct correspondences over the number of correspondences in the reference alignment. We then compute the F-measure as

$$F\text{-}measure = \frac{2 \cdot precision \cdot recall}{precision + recall}. \tag{10}$$

Figure 3 compares the precision, recall, and F-measure scores of the generated alignments over the reference alignment for thresholds ranging from 0.65 to 0.95 under the different settings. From Figure 3, we can see that our method achieves the highest precision in the ca + co + st + mp setting, while achieving the highest recall in the prior setting. We obtain significant improvements in F-measure as more matching formulas are added to the model. We also note that there is no obvious difference between ca and ca + co; this is because only the human anatomy ontology defines disjointWith relationships. However, we keep the coherence constraints in our model, since they could further improve the quality of the results if disjointWith relationships were added to the mouse anatomy ontology in the future. Overall, the precision increases as the threshold grows, while the recall slightly decreases for higher thresholds in the various settings. The margins between the different settings become smaller for higher thresholds than for lower thresholds, because only a small number of incorrect correspondences remain among the candidates when a threshold greater than 0.8 is applied. We achieve the maximum F-measure score at threshold 0.8.
Figure 3: Results for thresholds ranging from 0.65 to 0.95. (a) Precision. (b) Recall. (c) F-measure.

We manually sampled several false positive and false negative correspondences for analysis. We found that false positive correspondences were mainly caused by labels with similar spelling. For example, the false correspondence ("NCI_C33592", "MA_0002058") has the similar labels "Spiral_Artery" and "sural artery"; furthermore, the superclass of "NCI_C33592" ("NCI_C12372") and the superclass of "MA_0002058" happen to be matched. False negative correspondences, in contrast, were mainly caused by dissimilar labels, such as "Tarsal_Plate" for "NCI_C33736" and "eyelid tarsus" for "MA_0000270." Moreover, "NCI_C33736" has no subclass or subpart, so the correspondence cannot be recovered through formula (7) or (8).

Figure 4 compares the performance of our method with those participating systems of OAEI 2014 that also produce coherent alignments in the anatomy track. From Figure 4, we can see that our method (MLN-OM) outperforms most of the systems and is comparable with the best system (LogMapLite). Note that we use a simple similarity measure based on the Levenshtein distance in the pruning phase and focus on the Markov logic model for ontology matching, while LogMapLite uses an external lexicon (e.g., WordNet or the UMLS lexicon) when computing an initial set of equivalence anchor mappings; such a lexicon could easily be adopted by our method in the pruning phase to further improve the quality of the matching results.

Figure 4: Comparison with the results of OAEI 2014.
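To make the evaluation protocol concrete, here is a minimal sketch (ours) of the metrics in equation (10); the second computed correspondence is the false positive discussed above, while the other pair is hypothetical.

```python
def evaluate(computed, reference):
    # Precision, recall, and F-measure of a computed alignment against
    # the reference alignment, as defined in equation (10).
    computed, reference = set(computed), set(reference)
    correct = len(computed & reference)
    precision = correct / len(computed) if computed else 0.0
    recall = correct / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

computed = {("NCI_C33854", "MA_0000060"),   # hypothetical correct match
            ("NCI_C33592", "MA_0002058")}   # the false positive from the text
reference = {("NCI_C33854", "MA_0000060")}
print(evaluate(computed, reference))        # (0.5, 1.0, 0.666...)
```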
## 5. Conclusions
In this paper, we propose a Markov logic model for anatomy ontology matching. The model combines five types of matching strategies, namely, a priori confidences, cardinality constraints, coherence constraints, stability constraints, and match propagation. Experimental results demonstrate the effectiveness of the proposed approach.
---
*Source: 1010946-2016-06-13.xml* | 2016 |
# In Vitro Assessment of Single-Retainer Tooth-Colored Adhesively Fixed Partial Dentures for Posterior Teeth
**Authors:** Tissiana Bortolotto; Carlo Monaco; Ioana Onisor; Ivo Krejci
**Journal:** International Journal of Dentistry
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101095
---
## Abstract
The purpose of this paper was to investigate, by means of marginal adaptation and fracture strength, three different types of single-retainer posterior fixed partial dentures (FPDs) for the replacement of a missing premolar. Two-unit cantilever FPDs were fabricated from composite resin, feldspathic porcelain, and fiber-reinforced composite resin. After luting procedures and margin polishing, all specimens were subjected to a scanning electron microscopic marginal evaluation both prior to and after thermomechanical loading with a custom-made chewing simulator comprising both thermal and mechanical loads. The results indicated that the highest score of marginal adaptation, that is, the score closest to 100% of continuous margins, at the tooth-composite resin interface was attained by the feldspathic porcelain group (88.1% median), followed by the fiber-reinforced composite resin group (78.9% median). The worst results were observed in the composite resin group (58.05% median). Fracture strength was higher for feldspathic porcelain (196 N median) than for resin composite (114.9 N median). All the fixed prostheses made of fiber-reinforced composite resin detached from the abutment teeth before fracturing, suggesting that the retainer's adhesive surface should be increased.
---
## Body
## 1. Introduction
Two-unit cantilevered fixed partial dentures (FPDs) may be defined as retainers holding one or more unsupported free-end extensions. This type of prosthodontic rehabilitation has been used as an interim solution for restoring edentulous areas prior to and during implant therapy, instead of using a removable prosthesis [1]. While most of the studies available on cantilever prostheses refer mainly to the anterior area of the mouth [2–7], developments in the field of adhesion and minimally invasive therapy in terms of abutment preparation may also render this technique attractive for the replacement of a missing posterior tooth [8, 9].

Among the tooth-colored restorative materials available, fiber-reinforced composite resin is increasingly being used in prosthodontic rehabilitation [10–13]. The main advantages of this material are better stress distribution due to the higher elasticity of the framework and relatively simplified laboratory procedures [14–18]. CAD/CAM technology allows for the construction of single-unit restorations from industrially fabricated ceramic or composite resin blocks with predictable clinical success [19]. Similarly, the fabrication of CAD/CAM multiple-unit restorations, that is, three-unit slot-inlay FPDs made of ceramic and composite resin, is also possible [20]. However, failures with this type of design are frequently due to loss of retention from the abutments or to fractures within the ceramic or composite-resin material [21]. These failures occur either because the adhesive area provided by the abutment slot preparation is insufficient to withstand mastication forces, or because in three-unit FPDs both abutments are subjected to twisting forces [22] that can cause high stresses at the connector area and/or the tooth-restoration interface, with the corresponding material fracture or debonding from one retainer.

The major advantages of two-unit cantilever-inlay FPDs for single tooth replacement are that they involve less tissue damage, they are easier to clean, they are less expensive, and there is no chance of undetected debonding, owing to the single retainer [23]. In addition, twisting forces may be reduced, preventing the detachment of the restoration.

A high rate of clinical longevity can be expected from two-unit cantilevered resin-bonded FPDs made of nickel-chrome alloy [8]. Meanwhile, there is little information available on tooth-colored (i.e., resin composite or ceramic) adhesively fixed two-unit cantilever prostheses for the replacement of missing posterior teeth. Understanding the biomechanics of 2-unit cantilevered resin-bonded FPDs made from tooth-colored materials is important not only to learn about the potential limitations of this restorative technique, but also to be able to select the most appropriate restorative material. Therefore, the aim of the present paper was to evaluate the marginal adaptation and fracture strength of mesio-occlusal inlay-retained cantilever FPDs made from feldspathic ceramic blocks, microfilled composite-resin blocks, and a fiber-reinforced composite resin. Because stress values at premolar cantilevers are lower than in molar cantilevered FPDs [24], the ideal pontic dimensions should not exceed the mesio-distal dimension of a premolar. Thus, in the present paper a mesio-occlusal box preparation was used for framework support of a missing premolar.
The null hypothesis tested was that marginal adaptation and fracture strength would fail to identify differences in the fatigue behavior of 2-unit cantilever FPDs made of resin composite, fiber-reinforced composite and feldspathic porcelain.
## 2. Materials and Methods
The materials used in the present study are listed in Table 1.

Table 1: List of materials used in the present study.

| Product (group name) | Material type | Manufacturer | Batch number |
| --- | --- | --- | --- |
| Vitamark II (FP) | Feldspathic ceramic | Vita Zahnfabrik, Bad Säckingen, Germany | 2M1/6436 |
| GN-1 (RC) | Microhybrid composite | GC Corporation, Tokyo, Japan | 0000704 A |
| SR Adoro/Vectris (FRC) | Fiber-reinforced composite: glass fibers (Vectris) and a microfilled composite (Adoro) | Ivoclar Vivadent, Schaan, Liechtenstein | |

Caries-free human molars of nearly identical size and complete root growth were procured from a private dental office with the understanding and oral consent of the patients. The teeth needed to be extracted for periodontal reasons. They were stored in a 0.1% thymol solution for a maximum of 2 months after extraction. The teeth were randomly divided into three groups (n = 6). The apex of each root was sealed with an adhesive system (Syntac Classic, Ivoclar Vivadent, Schaan, Liechtenstein) without removing the pulpal tissue. To simulate intratubular fluid flow, a cylindrical cavity was drilled 1.5 mm below the cementoenamel junction until the pulp chamber was reached. A metal tube with a diameter of 1.4 mm was luted into the cavity with the same adhesive system. Subsequently, the teeth were mounted on aluminium bases with microhybrid composite resin, and the bases were immersed in an autopolymerizing acrylic resin (Technovit 4071; Heraeus-Kulzer, Friedrichsdorf, Germany) to an apical depth of two thirds of the root length to create a strong load-resistant support. Through a connecting silicone tube, the pulp chamber was evacuated with a vacuum pump (Vacubrand GmbH & Co, Wertheim, Germany) and then filled with a bubble-free mixture of horse serum (PAA Laboratories GmbH, Linz, Austria) and phosphate-buffered saline solution (PBS; Oxoid Ltd, Basingstoke, Hampshire, England) with the aid of a 3-way valve, and finally connected to a serum infusion bottle. This bottle was placed vertically 34 cm above the specimen to simulate the normal hydrostatic pressure of 25 mm Hg within the tooth.

An inlay preparation (mesio-occlusal, butt-joint margins on enamel) was made in each molar with a rotating diamond instrument (80–25 μm grain size, FG 8113NR, 3113NR, Intensiv SA, Viganello, Switzerland) mounted on a red contra-angle handpiece (Sirius 180 XL, Micro-Mega, Besançon, France) under continuous water-cooling. The depth of the occlusal inlay was 2 mm and the occlusal step was 4 mm. The interproximal step of 2 mm was used along with an axial depth of 1.6 mm and a faciolingual width of 3.5 mm (Figure 1(a)).

Figure 1: View of one cantilever FPD made of fiber-reinforced composite. (a) A butt-joint mesio-occlusal inlay cavity was prepared on the abutment tooth. (b) The cantilever bridge consisted of a premolar crown retained to the abutment by an inlay restoration. (c) The luting of the FPD was performed with a microhybrid restorative composite.

The pontic measured 7 mm mesiodistally, which approximately corresponds to the size of a second premolar. The connectors of the inlay with the pontic (Figures 1(b) and 1(c)) were set to 3.5 × 3.5 mm, in agreement with a previous protocol [10]. After tooth preparation, the dentin surface was immediately sealed with a 3-step self-etching adhesive system (Syntac Classic) [25]. Then, the adhesive system was removed from the enamel margins using a diamond instrument without touching the adhesively sealed dentin [10].

For the construction of the fiber-reinforced composite (FRC) cantilever FPDs, polyether impressions (Impregum Duo Soft polyether, 3M ESPE, Seefeld, Germany) were made using a simultaneous mixing technique following the manufacturer's instructions. Then, provisional restorations were made using Fermit N (Ivoclar Vivadent) without any cement and placed according to the clinical recommendations proposed by the manufacturer. The FRC system consists of two materials: glass fibers with different orientations (Vectris, Ivoclar Vivadent) and a microfilled composite (Adoro, Ivoclar Vivadent) for the veneering of the framework. The design of the fiberglass framework was first premodelled with a light-polymerizing resin (Spectra Tray, Ivoclar Vivadent) to obtain the oval shape, and its thickness was checked on the molding model. This model was embedded in a transparent silicone impression paste to form a mold. The resin was removed, and the fibers were applied into the silicone mold. The pre-impregnated "pontic" fibers were condensed in a deep-drawing polymerization process. After a cycle of vacuum-forming and after polymerizing by light in a special unit (VS1; Ivoclar Vivadent) for 10 minutes according to the manufacturer's recommendations, the FRC was airborne-particle abraded (MicroEtcher CD, Danville Materials, San Ramon, CA, USA) using a small grain size of 27 μm at 2.5 bar of pressure for 10 seconds and treated with a silane coupling agent (Wetting Agent; Ivoclar Vivadent). A sheet of woven "frame" fibers was placed upon the "pontic" structure and the cycle in the light-curing unit (VS1) was repeated. The Adoro material was built up incrementally and precured. The final polymerization/tempering was performed by means of light and heat (Lumamat 100; Ivoclar Vivadent). The additional tempering step at 104°C was done to maximize the strength and the surface quality of the restorations.

For the construction of the feldspathic porcelain (FP) and composite-resin (CR) fixed prostheses, an optical impression was made with the digital camera of the Cerec system (Sirona Dental Systems, Bensheim, Germany). The construction and milling of the prostheses were carried out using the Cerec 3 system (software version 1.60 R980) according to a modified version of a protocol for the fabrication of three-unit Cerec prostheses. Feldspathic porcelain (Vitablocs Mark II; Vita Zahnfabrik, Bad Säckingen, Germany) and microhybrid composite (GC Corp, Tokyo, Japan) prefabricated blocks were the materials used for the construction of the prostheses. After the milling procedure, they were manually adjusted to the abutment tooth using coarse diamond instruments under continuous water cooling.

In the case of the FRC prostheses, the provisional restorations made of Fermit were removed and the teeth's dentin surfaces (which had previously been sealed with bonding) were airborne-particle abraded (MicroEtcher CD, Danville Materials, San Ramon, CA, USA) for 2 seconds using aluminum oxide powder (grain size of 27 μm) at a pressure of 2 bars.
The intaglio surfaces of the FRC and CR abutments were also abraded following the preceding procedure, but for 10 seconds. The intaglio surface of the ceramic group (FP) was etched with 5% hydrofluoric acid (Ceramics Etch, Vita Zahnfabrik, Germany) for 60 seconds, followed by the application of a silane coupling agent (Monobond S, Ivoclar Vivadent). The tooth surface, that is, the enamel margin and the airborne-particle-abraded adhesive-covered dentin, was treated with an adhesive system (Syntac Classic, Ivoclar Vivadent) after selective phosphoric acid conditioning of the enamel. A microhybrid light-cured composite resin (Tetric Transparent, Ivoclar Vivadent) was used as the luting agent (Figure 1(c)). An ultrasonic technique was used for the seating of the restoration. After removal of the excess resin, the luting composite resin was light activated at a constant relative power density of 800 mW/cm² (Optilux 501, Demetron/Kerr, Danbury, CT, USA) for 60 s each from the cervical, buccal, lingual, and occlusal surfaces. The margins of the restorations were then finished using 15 μm diamond instruments (Composhape, Intensiv, Lugano, Switzerland) and polished with flexible discs (Sof-Lex; 3M ESPE).

After polishing of the margins (before loading) and after loading, the specimens were cleaned with rotating nylon brushes (Hawe Neos Dental, Bioggio, Switzerland) and toothpaste before impressions were made for the replicas. One pair of replicas of both the interproximal and occlusal boxes (Table 2) was procured from each cantilever prosthesis using polyvinyl siloxane impressions (President Plus Light-body, Coltène AG, Altstätten, Switzerland).

Table 2: Scheme of the quantitative margin analysis in the scanning electron microscope. Two replicas were obtained from each cantilever FPD, one from the mesial box and the other from the occlusal box. For the quantitative margin analysis, the enamel was divided into three segments: interproximal (segments a-b and c-d), cervical (segment b-c), and occlusal (segment a-d). All segments together constituted the total margin length.

The impressions were then filled with epoxy resin (Epofix, Struers, Rodovre, Denmark) and gold sputtered (SCD 030, Provac, FL-9496 Balzers, Liechtenstein) for observation in a scanning electron microscope (XL20, Philips, NL-5600 Eindhoven, Netherlands). A quantitative evaluation of the marginal adaptation was performed at 200x magnification using a custom-made module programmed within image-processing software (Scion Image, Scion Corp, Frederick, MD, USA). Three margin segments, together constituting the total margin length, were analyzed on the SEM: approximal enamel, cervical enamel, and occlusal enamel (Table 2). The percentages of continuous margins were evaluated along the margin and separately for the tooth-luting composite (TC) and luting composite-restoration (CI) interfaces.

The specimens were mechanically loaded in a computer-controlled masticator with 1,200,000 cycles of 49 N each, at a frequency of 1.7 Hz. A total of 3,000 thermal cycles of type 5°C to 50°C to 5°C were performed simultaneously. The chamber was automatically emptied after 2 minutes for 10 s with air pressure to avoid mixing the cold and warm water. The load cycles were transferred to the buccal cusp of the pontic.

After the thermomechanical loading procedure, the fracture strength of each prosthesis was determined by loading it to failure in a universal testing machine (Instron, Milan, Italy). The force was applied to the center of the pontic using a steel ball (5 mm diameter) at a crosshead speed of 1 mm/minute. To ensure a regular force distribution and minimize the transmission of local force peaks from the steel ball to the pontic's cusps, a 0.5 mm thick layer of tin foil was placed between the two surfaces. Failure was defined as a 10% loss of the maximum loading force. Radiologic examinations (Vistascan, Dürr Dental GmbH & Co. KG, Germany) were made to document the different fracture patterns. Failure types in the fracture strength test were described as "adhesive" (at the adhesive interface) or "cohesive" (within the ceramic or composite material). Two main locations of "adhesive" failures were distinguished: between the luting composite and the inlay's restorative material (fiber-reinforced composite, feldspathic ceramic, or composite resin) and between the tooth substrate and the luting composite.
### 2.1. Statistical Analysis
The evaluation of the data was performed with Stata 9.0 for Windows. The Shapiro-Wilk W test showed that the distribution of the data was not normal. Therefore, a Kruskal-Wallis test was used to detect whether there were differences in the median values of marginal adaptation at both the TC and CI interfaces. A chi-square test was used to detect differences in fracture strength between the feldspathic porcelain (FP) and composite resin (CR) groups. The fiber-reinforced composite (FRC) group was excluded from the statistical analysis, as explained in the results section. Multiple comparisons between groups were carried out with the Bonferroni post hoc test.
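For readers who want to reproduce this kind of analysis, here is a minimal sketch (ours; the study used Stata 9.0, and the per-specimen values below are hypothetical placeholders centered on the reported medians) of a Kruskal-Wallis test on three groups using scipy.

```python
from scipy.stats import kruskal

# Hypothetical percentages of continuous margins for the three groups;
# the study's raw per-specimen data are not reported here.
fp  = [88.1, 90.2, 85.7, 87.9, 89.4, 86.5]
frc = [78.9, 80.1, 75.4, 79.8, 77.2, 81.0]
rc  = [58.0, 60.3, 55.1, 59.7, 57.8, 61.2]

stat, p = kruskal(fp, frc, rc)
print(f"H = {stat:.2f}, p = {p:.4f}")  # reject H0 of equal medians if p < 0.05
```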
## 3. Results
The results of marginal adaptation and fracture strength (expressed as the median and the 25th and 75th quartiles) are detailed in Figures 2, 3, and 4. With respect to marginal adaptation at the tooth-composite interface, no significant differences were detected among FP, FRC, and RC prior to loading; the percentages of continuous margins were 97.25, 97.65, and 95.75, respectively. After loading, FP showed significantly better results (88.1% of continuous margins) than CR (58.05% of continuous margins), as detailed in Figure 2.

Figure 2: Marginal adaptation at the tooth-composite interface. Boxplots display the percentages of continuous margins of the three groups before (a) and after (b) thermal (3,000x) and mechanical loading (1.2 million cycles). The median, 25%/75% percentiles, and the highest and lowest non-extreme values are shown.

Figure 3: Marginal adaptation at the composite-inlay interface. Boxplots display the percentages of continuous margins of the three groups before (a) and after (b) thermal (3,000x) and mechanical loading (1.2 million cycles). The median, 25%/75% percentiles, and the highest and lowest non-extreme values are shown.

Figure 4: Results of fracture strength on loaded specimens. Note that the FRC group has been excluded because the FPDs detached from the abutments before fracture occurred. Boxplots display the median, 25%/75% percentiles, and the highest and lowest non-extreme values. Fracture strength in newtons was 196 (186.1/197.7) for FP and 114.9 (86.2/144.6) for RC.

The percentages of continuous margins for FP, FRC, and RC at the composite-inlay interface were above 90%, both before loading (97.15, 95.15, and 99.75, respectively) and after loading (92.65, 91.2, and 97.55, respectively), as can be observed in Figure 3. This indicates that the luting composite-inlay interface remained rather stable under fatigue conditions.

When the fracture strength test was performed on the loaded specimens, all prostheses made of fiber-reinforced composite (FRC) detached from the abutment before fracture occurred; therefore, no fracture strength data could be procured for this group. A detailed observation of the inlays' intaglio surfaces revealed that the luting composite remained attached to the inlay abutment and that detachment occurred principally between the luting composite and the tooth substrate. Regarding the other two materials, a higher fracture strength was recorded for feldspathic porcelain (196 N) than for composite resin (114.9 N), as detailed in Figure 4.

With respect to failure patterns, half of the prostheses made of feldspathic porcelain (FP) detached from the abutments; the adhesive failures were located between the luting composite and the tooth substrate, and in some cases remnants of enamel structure were still attached to the prosthesis. The other half failed due to cohesive fractures in the connector area. All prostheses made of composite resin (CR) failed due to cohesive fractures within the restorative material, mainly located in the connector area.
## 4. Discussion
The results of the current study could reject the null hypothesis investigated. Marginal adaptation and fracture strength test could identify differences in the fatigue behavior of 2-unit cantilever FPDs made of resin composite, fiber-reinforced composite and feldspathic porcelain.Two-unit resin-bonded cantilever bridges have been used for the replacement of a single missing anterior teeth [2–5]. Compared to conventional three-unit FPDs, easier cleaning, less biological damage, easier detection of debonding and decay underneath, as well as reduced twisting forces due to bonding to only one retainer have been reasons given for considering the clinical use of such a restorative technique [22]. Reasonably, the same arguments may promote the use of cantilever FPDs in the posterior area of the mouth.In terms of the construction technique and materials’ microstructure, the use of machinable composite-resin and/or feldspathic porcelain may be appealing for the construction of adhesive FPDs. The construction of milled FPDs from prefabricated blocks is not only faster, but material quality contributes to better long-term performance. A recent study demonstrated that in three-unit slot-retained FPDs fabricated from composite-resin and glass ceramic blocks, prosthesis fractures and debonding from the abutments led to a high percentage of failures [21]. To overcome such drawbacks, in this in vitro study, one-abutment inlay retained FPDs were evaluated for their fatigue resistance and fracture strength, to see if increasing the adhesive surface (inlays instead of slots) and limiting the number of retainers to one, could improve their mechanical performance. Feldspathic porcelain and microfilled composite-resin blocks were selected in the present study in agreement with a previous study that employed both materials for the production of CAD/CAM-generated slot-inlay FPDs [20]. The third material was FRC, which has been reported as being successfully used for the fabrication of three-unit posterior FPDs and also for the construction of cantilever bridges in the anterior region of the mouth [6, 7]. Three thousand thermal cycles together with 1.2 million cycles of occlusal loads were applied in a chewing simulator in order to fatigue the adhesive interfaces (Tooth-Composite and Composite-Inlay) and to assess if the three materials had a distinct influence on the stresses transferred to the abutment margins. Such stressing conditions are supposed to simulate a service time of 5 years [26]. In addition, fracture resistance after loading was calculated for each prosthesis to determine which material would better resist the impact of chewing forces in the posterior region. Finally, six FPDs were prepared on each group, following the methodology of recently published protocols in the field of inlay-retained adhesively fixed FPDs. To mention some examples, Ozcan et al. [27] evaluated the effect of different box preparations on the strength of glass fiber-reinforced composite inlay-retained fixed partial dentures; seven FPDs were tested per group. Keulemans et al. [28] evaluated the influence of retainer design on 2-unit cantilever FPDs; eight specimens were tested per group. Xie et al. 
[29] assessed the load-bearing capacity of fiber-reinforced composite FPDs with 4 framework designs; six specimens were evaluated per group.The high results, above 90% of continuous margins after loading, of marginal adaptation obtained at thecomposite resin-inlay interface showed that a high quality of bonding was achieved between prosthesis material and composite resin used as luting agent (Figure 2(b)). The ceramic surface was conditioned with hydrofluoric acid etching and further silanating. Both procedures ensured the formation of micromechanical retention and a proper wetting of the ceramic intaglio surface [30]. With respect to prostheses made out of composite resin, treatment of the internal surface with aluminium oxide airborne particle abrasion and silane has been shown to provide an efficient bonding to the luting composite resin material [10, 31, 32].With respect to thetooth-composite resin interface, the results after loading showed that cantilevers made out of feldspathic ceramic (FP) demonstrated the highest marginal adaptation when compared with the other two groups (Figure 2(a)). Highest marginal adaptation means that the scores were close to 100% of continuous or close margins. The stiffer material delivered the highest quality of marginal adaptation [10]. As feldspathic porcelain has a higher modulus of elasticity than composite resin and rather similar to enamel (around 85 GPa), the more rigid material (FP) could have transferred less stresses to the margins in comparison to the composite resin group (CR), resulting in a more stable bond to the dental tissues when the FPDs were subjected to fatigue conditions. Regarding the marginal adaptation of the composite resin group (CR), the lowest percentages of continuous margins were obtained when compared to FP and FRC. These results were surprising, as for single unit restorations, a higher fatigue resistance has been observed when composite resin restorations were used instead of porcelain [33, 34]. This is due to the elastic behavior of resin composite during the loading cycle that can compensate for the forces that are transferred to the margins. However, in the case of cantilever FPDs, as the adhesive interface is subjected to higher stresses during the fatigue process, a more elastic material like composite resin can have an adverse effect on the marginal adaptation. This could serve as an explanation for the higher percentages of continuous margins observed in the FRC. As soon as resin composite was reinforced with fibers, no significant differences could be detected between the groups made of Felspathic Porcelain and Fiber Reinforced Composite. Resin composite reinforced with fibers helped to increase the stiffness of the FPD tested in this study, resulting in a similar stress transmission to the margins as with feldspathic porcelain.During the fracture strength test materials fractures or detachments from the abutments occurred at a force of around 20 Kg (196N), which corresponds to a “light” chewing force in the clinical situation. A recent report stated that the mean values for the maximum bite force during mastication varied from 216 to 847N and that posterior fixed partial dentures should withstand loads of at least 500N [26]. In this in vitrostudy, only physiological chewing forces (49N = around 5 Kg) were used to load the FPDs. Under these experimental conditions, the highest results of fracture strength were attained by feldspathic porcelain (FP), as observed in Figure 3. 
Analysis of the fractured feldspathic porcelain specimens revealed that half of them failed due to cohesive fractures in the connector area, while the other half failed at the adhesive interface. Examination of the detached FPDs revealed that the adhesive failure occurred between the luting composite and the tooth substrate; in some specimens, remnants of enamel could still be observed attached to the ceramic surface after fracture. Said differently, cohesive failures occurred within enamel, which means that the quality of the enamel-resin bond was not the weak link within the system. We speculate that the adhesive surface provided by the MO abutment preparation was insufficient, as enamel, which is considered the most reliable substrate for adhesion, was limited to the margins of the conservative cavity preparation, and adhesion relied mainly on the dentin substrate. An increased adhesive area involving more enamel substrate could, in theory, improve the adhesive retention of this prosthesis design. Therefore, given the presence of both cohesive and adhesive failures in the feldspathic porcelain group, further research should focus on the evaluation of all-ceramic cantilever FPDs with an increased adhesive area involving enamel and with a zirconia core, to assess whether such a design can improve fracture resistance.

Fractures of FPDs made of resin composite occurred at a force of 114.9 N (around 11 kg), which corresponds to a low mastication force. Such behaviour was expected due to the low fracture toughness of the composite resin material and its poor ability to resist the propagation of cracks [15]. Therefore, composite resins without reinforcement, at least with their current mechanical properties, should not be used for the fabrication of cantilever FPDs.

The fracture strength of FPDs made of FRC could not be determined, since they detached from the abutments prior to fracturing. Detachments occurred between the luting composite and the tooth substrate, suggesting that adhesion to the inlays’ intaglio surface was not the weak link; the limited adhesive surface provided by the MO preparation may again explain these detachments. A similar evaluation has recently been performed with three-unit inlay-retained FPDs made of the same fiber-reinforced material as the one used in this study; their fracture strength was 1373.4 N after thermomechanical stressing [11]. However, those retainers consisted of an MOD (mesio-occluso-distal) inlay on the premolar abutment and an MOLD (mesio-occluso-linguo-distal) onlay preparation on the molar. Considering that MO inlays were used as retainers in the present study, we speculate that debonding of the cantilever FPDs was influenced by the box dimensions and, therefore, by an insufficient surface area available for adhesion. Likewise, a recent study [28] evaluated the influence of different abutment preparations, that is, a proximal box, a step-box, a dual wing, and a step-box-wing, on the fracture strength of two-unit cantilever resin-bonded glass fiber-reinforced composite FPDs, and concluded that a dual-wing retainer was the optimal design for replacing a single premolar by means of a two-unit cantilever FRC-FPD. Therefore, future research should evaluate the mechanical resistance of fiber-reinforced cantilever bridges with increased adhesive surfaces, for example an MOD inlay or a dual-wing retainer on the abutment.
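The mechanical disadvantage of the single-retainer design can be sketched with a back-of-the-envelope bending-moment estimate. Taking the 7 mm mesiodistal width of the pontic reported in the Methods as the lever arm is a simplifying assumption (the true arm depends on the exact loading point), but it illustrates the torque that the single inlay bond must resist alone:

$$M = F \cdot d \approx 49\ \text{N} \times 0.007\ \text{m} \approx 0.34\ \text{N·m (fatigue load)}, \qquad 196\ \text{N} \times 0.007\ \text{m} \approx 1.4\ \text{N·m (FP fracture load)}.$$

In a conventional three-unit FPD this moment is shared between two retainers; in the cantilever design it loads a single adhesive interface, which is consistent with the debonding pattern observed in the FRC group.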
## 5. Conclusion
Within the limitations of this in vitro study, the following conclusions were drawn.

(1) The null hypothesis was rejected, as both the evaluation of marginal adaptation after thermomechanical loading and the fracture strength testing were able to identify differences in the fatigue behavior of 2-unit cantilever FPDs made of resin composite, fiber-reinforced composite, and feldspathic porcelain.

(2) The marginal adaptation of feldspathic porcelain, the stiffest material, was comparable to that of fiber-reinforced composite resin (FRC), with no significant differences between the two materials. Composite resin (CR) FPDs produced the poorest marginal adaptation.

(3) The highest fracture strength was attained by FPDs made of feldspathic porcelain and the lowest by FPDs made of composite resin. All composite resin FPDs fractured due to cohesive failures within the material, suggesting that the material was not sufficiently strong for this application. Fiber-reinforced composite FPDs detached from the abutments before they fractured, suggesting that the adhesive surface was insufficient. With respect to feldspathic porcelain, both cohesive failures and adhesive failures at the luting composite-tooth interface were observed. Further evaluations with an enlarged abutment preparation and with a core reinforcement are necessary.
---
*Source: 101095-2010-06-21.xml* | 101095-2010-06-21_101095-2010-06-21.md | 32,386 | In Vitro Assessment of Single-Retainer Tooth-Colored Adhesively Fixed Partial Dentures for Posterior Teeth | Tissiana Bortolotto; Carlo Monaco; Ioana Onisor; Ivo Krejci | International Journal of Dentistry
(2010) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2010/101095 | 101095-2010-06-21.xml | ---
*Source: 101095-2010-06-21.xml* | 2010 |
# Aspiration of Aluminum Beverage Can Tab: Case Report and Literature Review
**Authors:** Alhasan N. Elghouche; Brian C. Lobo; Jonathan Y. Ting
**Journal:** Case Reports in Otolaryngology
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1010975
---
## Abstract
We describe the case of a 16-year-old male who aspirated a beverage can tab resulting in significant functional impairment. Since the introduction of beverage can opening tabs (“pop-tops” or “pull-tabs”) nearly 50 years ago, five cases of their aspiration have been reported in the literature and this is the first case to report tracheal lodgment. We describe the clinical course for this patient including the inadequacy of radiographic evaluation and a significant delay in diagnosis. We highlight unique features of small aluminum foreign bodies that require consideration and mention a potential change in epidemiology associated with evolving product design. Our primary objective is increased awareness among otolaryngologists that radiography is unreliable for diagnosis or localization of small aluminum foreign bodies. The patient history must therefore be incorporated with other imaging modalities and/or endoscopic evaluation. Also, given the marked prevalence of aluminum beverage cans, we suspect that the inadvertent aspiration of can tabs is more common than indicated by the paucity of published reports.
---
## Body
## 1. Introduction
The first beverage cans were opened via puncture with a “church key” can opener. In the 1960s, the pull-tab mechanism was introduced. A pull-tab consists of a metal ring which, along with a wedge-shaped portion of the can top, is completely separated from the can when pulled to create an opening (Figure 1). Not long after their introduction, cases of accidental pull-tab ingestion and aspiration were described in the medical literature [1]. Their emergence as foreign bodies, in conjunction with the rampant littering of these detachable tabs, facilitated the development of the presently employed “stay-tab,” meant to remain attached following can opening (Figure 2). While the stay-tab appears to have reduced litter, there are still reported injuries related to these small foreign bodies, more commonly in the context of ingestion as opposed to aspiration [2–4].

Figure 1

Figure 2

An important consideration in the aspiration of beverage can tabs is their aluminum composition. Despite being a metal, aluminum is relatively radiolucent and may evade radiographic detection [5]. In fact, this relative radiolucency influenced the United States Treasury’s decision to replace copper pennies with zinc instead of aluminum, given the tendency of coins to become foreign bodies in the pediatric population [6].
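Aluminum’s radiolucency can be made quantitative with the Beer-Lambert attenuation law. The sketch below is illustrative only: the mass attenuation coefficients are rounded literature values for photons of roughly 60 keV (a typical diagnostic X-ray energy) and do not come from this report, and the ~1 mm thickness is an assumed, plausible tab profile.

```python
import math

def transmitted_fraction(mu_over_rho: float, density: float, thickness_cm: float) -> float:
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * t).

    mu_over_rho : mass attenuation coefficient in cm^2/g
    density     : material density in g/cm^3
    thickness_cm: path length through the object in cm
    """
    return math.exp(-mu_over_rho * density * thickness_cm)

# Rounded, approximate coefficients near 60 keV (assumed values, for illustration only).
aluminum = transmitted_fraction(mu_over_rho=0.28, density=2.70, thickness_cm=0.1)
steel = transmitted_fraction(mu_over_rho=1.2, density=7.9, thickness_cm=0.1)

print(f"~1 mm aluminum transmits about {aluminum:.0%} of the beam")  # ~93%
print(f"~1 mm steel transmits about {steel:.0%}")                    # ~39%
```

An object that removes only a few percent of the beam produces almost no contrast against soft tissue, which is consistent with the unremarkable radiographs in this case.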
## 2. Case Presentation
A previously healthy 16-year-old male presented to his primary care provider with the chief complaint of dyspnea at rest and on exertion. Associated symptoms included halitosis and a foreign body sensation within the neck. He denied any cardiac symptoms, dysphagia, and odynophagia. His symptoms progressed such that he was unable to participate in gym class at school, and he ultimately developed two-pillow orthopnea, sleep disturbance, and significant anxiety.

Physical examination was remarkable for increased pressure sensation upon palpation of the anterior neck inferior to the cricoid cartilage. The patient reported no relief with inhaled bronchodilators. Posteroanterior and lateral radiography was unremarkable. A noncontributory cardiac evaluation (including echocardiography) was followed by pulmonary function testing suggestive of fixed upper airway obstruction (Figure 3). During this time, the patient recalled an episode coincident with symptom onset in which he chewed and subsequently aspirated the opening tab from an aluminum soda can.

Figure 3

Bronchoscopic evaluation by the pulmonology team was attempted, and a foreign body was noted immediately distal to the subglottis. At this point, the Otolaryngology-Head and Neck Surgery Service was consulted for further management. That evening, the patient was taken to the operating room for rigid laryngoscopy and bronchoscopy. After induction of general mask anesthesia and an unremarkable assessment of the upper airway, a rigid bronchoscope was passed through the glottis to reveal a metallic foreign body in the proximal trachea with overlying mucoid debris (Figures 4(a) and 4(b)). Utilizing a Benjamin-Lindholm laryngoscope, optical forceps were passed through the glottis into the trachea to gently rotate and remove the lodged soda can tab. Superficial mucosal lacerations were seen at the site, with no evidence of granulation tissue or exposed cartilage (Figure 4(c)). The patient emerged from general anesthesia and was observed overnight. He reported symptom resolution and, following an uncomplicated postoperative course, was discharged the following day.

Figure 4
## 3. Discussion
Since 1975, five instances of aspiration of a beverage can tab have been described in the literature. We describe the first reported instance of lodgment of an aspirated beverage can tab in the trachea. Timely integration of the patient history with appropriate diagnostic studies and interventions is necessary to avoid life-threatening sequelae [7].

Radiographic evaluation of the patient with this type of aspiration can be misleading; in our case it led to more than four months of misdiagnosis and an extensive cardiac and pulmonary diagnostic evaluation. One patient experienced ten years of cough and recurrent pulmonary infiltrates due to aspiration of a pull-tab into his left main stem bronchus, which evaded detection by chest radiography [8]. A retrospective study found that radiographic detection of can tabs was achieved in only 20 percent of cases, and then only when the ingested tab was localized to the stomach [3]. When utilizing imaging studies in the workup of possible aspiration of a can tab, computed tomography has proven more beneficial than plain film [2]. In the above case, misrecognition of the radiographic properties of aluminum may have contributed to the delayed diagnosis and an extensive, unnecessary diagnostic workup including echocardiography and pulmonary function testing. Further impeding radiographic diagnosis was the anatomic location of the foreign body, given that 80% of laryngotracheal foreign bodies do not appear on X-ray [9].

Inadvertent ingestion of beverage can tabs is more frequently described than aspiration. This is perhaps due to a tendency to place a detached tab into the contents of the can while drinking. Interestingly, the design change from pull-tabs to stay-tabs has potentially affected the patient population at risk. The eldest of seven patients described in a 1976 case series of tab ingestions and aspirations (prior to the advent of the stay-tab) was two years old [10]. In contrast, the majority of patients in a 2010 study of inadvertent ingestion were teenagers [3]. A potential explanation is reduced access by infants and young children to tabs that remain attached to beverage cans.

Focusing specifically on aspiration: during the pull-tab period, two of the three reported aspirations occurred in infants [10]. Despite the small sample size, it is notable that, following the introduction of the stay-tab design, there have been no reported cases of aspiration in children. From a product design standpoint, this may indicate the successful mitigation of a pediatric safety hazard and a resultant change in the population experiencing aspiration of beverage can tabs. Of the described instances of beverage can tab aspiration, one was found at the glottis, one at the carina, and the remainder within the bronchi [1, 4, 8, 10].

The evolving design of beverage can tabs illustrates product changes that attempt to enhance patient safety. The capacity for inadvertent ingestion or aspiration remains; however, depending on the actual extent of ingestion or aspiration, increased consumer awareness or another evolution in product design may be warranted. Potential solutions include a design that further limits detachment from the can or the use of a more radiodense material for the opening tab component.
## 4. Conclusion
Beverage can tabs continue to act as foreign bodies, though perhaps in an older patient population subsequent to their design change. Though ingestion appears to be more common, aspiration continues to occur. Although the tabs are metal, X-ray investigation provides poor negative predictive value in the evaluation for an aluminum beverage can tab, especially in cases of suspected aspiration. Alternative imaging modalities or endoscopy should be pursued to establish a timely diagnosis and avoid secondary injury.
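To make “poor negative predictive value” concrete, consider a hypothetical worked example: take the 20% radiographic detection rate from [3] as the sensitivity, and assume for illustration only a specificity of 0.95 and a pretest probability of 0.30 (both assumed, not reported values). Then

$$\text{NPV} = \frac{\text{Sp}\,(1-p)}{\text{Sp}\,(1-p) + (1-\text{Se})\,p} = \frac{0.95 \times 0.70}{0.95 \times 0.70 + 0.80 \times 0.30} \approx 0.73.$$

Under these assumptions, a negative film would still leave roughly a one-in-four chance that a tab is present, which is why the history and endoscopy must drive the workup.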
---
*Source: 1010975-2017-05-28.xml* | 1010975-2017-05-28_1010975-2017-05-28.md | 8,850 | Aspiration of Aluminum Beverage Can Tab: Case Report and Literature Review | Alhasan N. Elghouche; Brian C. Lobo; Jonathan Y. Ting | Case Reports in Otolaryngology
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1010975 | 1010975-2017-05-28.xml | ---
*Source: 1010975-2017-05-28.xml* | 2017 |
# Remote Adipose Tissue-Derived Stromal Cells of Patients with Lung Adenocarcinoma Generate a Similar Malignant Microenvironment of the Lung Stromal Counterpart
**Authors:** Elena De Falco; Antonella Bordin; Cecilia Menna; Xhulio Dhori; Vittorio Picchio; Claudia Cozzolino; Elisabetta De Marinis; Erica Floris; Noemi Maria Giorgiano; Paolo Rosa; Erino Angelo Rendina; Mohsen Ibrahim; Antonella Calogero
**Journal:** Journal of Oncology
(2023)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2023/1011063
---
## Abstract
Cancer alters both local and distant tissue by influencing the microenvironment. In this regard, the interplay with the stromal fraction is considered critical, as the latter can either foster or hamper the progression of the disease. Accordingly, the modality by which tumors may alter distant niches of stromal cells is still unclear, especially at early stages. In this short report, we attempt to better understand the biology of this cross-talk. In our “autologous stromal experimental setting,” we found that remote adipose tissue-derived mesenchymal stem cells (mediastinal AMSC) obtained from patients with lung adenocarcinoma sustain the proliferation and clonogenic ability of A549 and human primary lung adenocarcinoma cells similarly to the autologous lung stromal counterpart (LMSC). This effect is not observed in benign lung diseases such as hamartochondroma. This finding was validated by conditioning benign AMSC with supernatants from LAC for up to 21 days. The reconditioned media of the stromal fraction so obtained were able to increase the proliferation of A549 cells at 14 and 21 days, similarly to media derived from the AMSC of patients with lung adenocarcinoma. The secretome generated by remote AMSC overlapped with that of the corresponding malignant microenvironment of the autologous local LMSC. Among the 80 soluble factors analyzed by arrays, a small pool of 5 upregulated molecules, including IL-1β, IL-3, MCP-1, TNF-α, and EGF, was shared by both the malignant-like autologous AMSC- and LMSC-derived microenvironments versus the benign ones. The bioinformatics analysis revealed that these proteins were strictly and functionally interconnected with lung fibrosis and proinflammation and that miR-126, miR-101, miR-486, and let-7g were their main targets. Accordingly, we found that in lung cancer tissues and blood samples from the same set of patients employed here, miR-126 and miR-486 displayed the highest expression levels in tissue and blood, respectively. When miR-126-3p was silenced in A549 cells treated with AMSC-derived conditioned media from patients with lung adenocarcinoma, cell proliferation decreased compared with control media.
---
## Body
## 1. Introduction
Mesenchymal stem cells (MSC) have been described as adult multipotent stem cells, showing many relevant properties, spanning from the ability to immunomodulate and migrate to specific sites of injury to transdifferentiation into multiple cell types [1, 2]. MSC have been considered ideal candidates for many clinical and cell therapy applications, to the point of suggesting their wide applicability in cancer treatment [3].

The biological interaction between MSC and tumors is complex and enormously debated. Several controversies exist about the potential of MSC to enhance or even arrest tumorigenicity, owing not only to their double-faced behavior such as tumor-tropism (hence tested as vehicles for anticancer genes targeting cancer cells or as enhancement of CAR-T immunotherapy) [4, 5] and immunomodulatory features, but also to prometastatic functions [6–8], transdifferentiation into cancer-associated fibroblasts and drug resistance [9], the parallel ability to overturn the immune system [10–13], and the activation of autophagy and neo-angiogenesis [14], thereby contributing to tumor evolution. This discrepancy also extends to MSC-derived exosomes, considered both an intriguing therapeutic tool for drug delivery and the main biological mediators of several tumor-supporting molecular processes [15]. Moreover, from a clinical standpoint, it has been recognized that the endogenous recruitment of MSC (of different origins, including adipose) from systemic niches may occur through tumor secretion of inflammatory soluble factors [16] and that a correlation exists between circulating mesenchymal tumor cells and the stage of tumor development [17, 18].

This scenario is further complicated by recent indications about the heterogeneity of MSC and the phenotypic and functional changes potentially caused by tumors. For instance, adipose tissue and bone marrow-derived MSC have shown differences with respect to stem cell content and epigenetic states [19, 20]. Besides, MSC obtained from diverse sources such as heart, dermis, bone marrow, and adipose tissue have been reported as genotypically different, expressing different levels of embryonic stem cell markers such as OCT-4, NANOG, and SOX-2 [21], and differing in biological properties including angiogenesis and secretome [20]. When MSC are derived from cancer tissues, they show altered molecular and functional properties [22–24], suggesting that tumor characteristics such as benignity or malignancy could influence the environment where MSC are located.

From a biological standpoint, the evolution from a local to a systemic cancer microenvironment can be driven either by phenotypically altered cancer-associated cells (fibroblasts and endothelial cells), which organize clusters of systemically spreading cells, or by niche-to-niche recruiting phenomena from the bone marrow to the tumor site [25, 26]. However, the thorny question is still centered on the modality by which cancer can control the systemic environment, influencing remote “normal” and non-bone marrow stem cell-derived niches, including distant MSC niches, particularly at the early stages of the tumor, which are of paramount biological and clinical relevance for understanding cancer progression.

Assuming that the pathophysiology of cancer can be interpreted as systemic, in this short report we attempt to investigate whether MSC-derived microenvironments at a remote site from a tumor can already be altered at the early stages of lung adenocarcinoma [27].
## 2. Methods
### 2.1. Surgical Specimen Collection and Clinical Database
At the end of the surgical procedure, a small sample of mediastinal adipose [1, 2] and lung tissue was collected by electrocoagulation from patients undergoing surgical procedures for hamartochondroma and non-small cell lung carcinoma (NSCLC). Surgical procedures were conducted at S. Andrea Hospital, Rome. Written informed consent was obtained from patients before starting all the surgical and laboratory procedures. Patients with NSCLC and staging T1N0M0 G1 were selected, whereas subjects with metastasis were excluded from the study.
### 2.2. Isolation and Characterization of AMSC and LMSC
AMSCs were isolated and characterized as previously described [1, 2, 28]. Patients’ characteristics are described in supplementary Tables 1a and 1b. Lung specimens were chopped with a scalpel and scissors in a 100 mm Petri dish, then gently transferred into a clean 100 mm Petri dish to allow tissue adherence. A complete growth medium composed of DMEM high glucose (Invitrogen) supplemented with 10% FBS, antibiotics, and L-glutamine (all Gibco) was added to the fragments. Plates were incubated at 37°C in a fully humidified atmosphere of 5% CO2 and were not shaken for at least the first 72 hours. Half of the medium was replaced with fresh complete medium every three days.
### 2.3. Isolation of Lung Adenocarcinoma Cells and In Vitro Conditioning with MSC-Derived Supernatants
Human primary lung adenocarcinoma cells (LAC) were isolated as we previously described [29]. The cells obtained were cultured in a complete medium (DMEM-F12, penicillin-streptomycin, L-glutamine, nonessential amino acids, sodium pyruvate, all Gibco, Monza, Italy, and 5% FBS, Lonza, Milan, Italy). The lung adenocarcinoma cell line A549 was purchased from ATCC and cultured in DMEM-F12 supplemented with 10% FBS (all Gibco). AMSC and LMSC supernatants derived from patients with hamartochondroma or NSCLC were collected between passages 3–6, then stored at −80°C until use.

A549 or human primary LAC were conditioned by removing their own medium and replacing it with A- or LMSC-derived conditioned media diluted 1 : 1 with basal media of A549 or LAC. Every 3 days, 1/5 of the whole medium was discarded and replaced with fresh medium. Cells were cultured according to the time course indicated in the study.
### 2.4. Proliferation, Clonogenic Assay, and FACS Analysis
Both LAC and A549 were seeded onto 96-well plates (150 cells/well) and incubated for 24 hours with DMEM low glucose 10% FBS [29]. Cells were then exposed to the different conditioned media collected from AMSC and LMSC for up to 7 days. Cells treated with the basal medium were used as a control. The effect of conditioned media on cell viability was evaluated by the MTS assay. Briefly, at 3, 5, and 7 days after treatment, 20 μl of MTS reagent were added to each microculture well, and plates were incubated for 2 hours at 37°C, after which absorbance at 492 nm (optical density) was measured using a microplate reader.

For the secondary colony-forming efficiency (CFU) assay, LAC or A549 were seeded at passage 3 at low density (10 cells/cm2) [29] in AMSC- and LMSC-derived conditioned media for 14 days and incubated at 37°C. The colonies produced were fixed with 4% paraformaldehyde, stained with Giemsa (Sigma, Milan, Italy) for 1 h, and counted under an optical microscope. A cluster of >50 cells was considered a colony [20].

FACS analysis was performed to investigate the percentage of apoptotic cells after stimulation with AMSC-derived conditioned media as previously reported [30, 31]. Briefly, semiconfluent cultures were harvested with Accutase (Sigma-Aldrich) and stained for 30 minutes with 5 μl AnnexinV-FITC antibody (Invitrogen, Cat. number 88-8005-74) and counterstained with propidium iodide (10 ng/mL, Invitrogen, Cat. number 88-8005-74) according to the manufacturer’s protocol for adherent cells. Data acquisition was performed on a FACS-Aria II platform equipped with FACSDiva software (BD Biosciences). All flow cytometry data were analyzed with FlowJo software (FlowJo LCC, Ashland, USA).
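For illustration, the arithmetic behind these two readouts is compact enough to sketch in code. The following is a minimal R fragment (R is used here and in the sketches below because the analysis tools cited in these Methods, pheatmap and multiMiR, are R packages); the absorbance values and colony counts are hypothetical, not the study's data:

```r
# Minimal R sketch of the two readouts described above; all values are
# hypothetical. Viability is expressed relative to the basal-medium control,
# and colony-forming efficiency relative to the number of cells seeded.
od_conditioned <- c(0.82, 0.91, 0.87)  # A492 of wells with conditioned media
od_basal       <- c(0.55, 0.60, 0.58)  # A492 of basal-medium control wells

viability_pct <- mean(od_conditioned) / mean(od_basal) * 100

colonies     <- 18   # clusters of >50 cells counted under the microscope
cells_seeded <- 150  # hypothetical number of cells plated
cfe_pct      <- colonies / cells_seeded * 100

c(viability_pct = viability_pct, cfe_pct = cfe_pct)
```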
### 2.5. Analysis of the Autologous A- and LMSC-Derived Secretome
The evaluation of the different microenvironments was performed on supernatants collected from both A- and LMSC of patients with lung adenocarcinoma and hamartochondroma. The C-Series Human Cytokine Antibody Array C5 (RayBiotech, Inc.) was used for simultaneous semiquantitative detection of 80 cytokines/growth factors as previously described [31]. Briefly, an equal volume of collected undiluted supernatant was incubated with gentle shaking overnight at 4°C on the membrane of the C-Series Human Cytokine Antibody Array C5 kit. Chemiluminescence was employed to quantify the spots (the same exposure time was used for all membranes) and each spot signal was analyzed by ImageJ. The samples were normalized on positive control means (six spots in each array) and values were then expressed as percentages. To visualize the overall changes in the cytokine array average data, results were graphed as log2 values in a heatmap analysis using the pheatmap R package with the RdYlBu color scale (from 0 to 7 expression levels). Cytokines with zero values in all replicates were ruled out.
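A minimal R sketch of the normalization and heatmap step described above follows; the signal matrix, positive-control values, and sample names are hypothetical placeholders:

```r
# Minimal R sketch of the array normalization and heatmap described above.
# The signal matrix `cyto` (80 cytokines x 4 samples) and the per-array
# positive-control means are hypothetical placeholders.
library(pheatmap)
library(RColorBrewer)

cyto <- matrix(runif(80 * 4, 0, 5000), nrow = 80,
               dimnames = list(paste0("cytokine_", 1:80),
                               c("AMSC_malignant", "LMSC_malignant",
                                 "AMSC_benign", "LMSC_benign")))

# Normalize each array to the mean of its six positive-control spots
# (summarized here as one value per membrane) and express as percentage
pos_ctrl_mean <- c(4200, 4100, 4300, 4000)
cyto_pct <- sweep(cyto, 2, pos_ctrl_mean, "/") * 100

# Drop cytokines with zero signal in all replicates, then plot log2 values
cyto_pct <- cyto_pct[rowSums(cyto_pct) > 0, ]
pheatmap(log2(cyto_pct + 1),  # +1 avoids log2(0) for zero-signal spots
         color = colorRampPalette(rev(brewer.pal(11, "RdYlBu")))(100))
```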
### 2.6. Interaction and Functional Evaluation of Cytokines Network within the Microenvironments and miRNA Target Interaction Analysis
Correlation analysis between the cytokines expressed by A- or LMSC-derived benign and malignant microenvironments was obtained by calculating the fold changes between these two conditions. A fold change threshold of >1.2 was considered as upregulated [32]. The analysis of known protein interactions was performed on the cytokines commonly upregulated between A- and LMSC by using the STRING software (https://string-db.org/, version 11.5) [33], building the whole network according to the high confidence setting (0.7) and default options. The pathway and process enrichment analyses were performed using Metascape as described elsewhere [34].

The miRNA-target interaction analysis was performed using the “multiMiR” R package combined with a review of the literature using lung cancer as a keyword. The list of miRNAs was obtained in the R package using the get_multimir function set to validated data only, and the databases queried were miRecords, miRTarBase, and TarBase.
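The validated-target query described above could look roughly as follows in R; the gene symbols are the HGNC names of the five cytokines reported in the Results (MCP-1 corresponds to CCL2), and the column names of the result object may vary slightly between multiMiR versions:

```r
# Minimal R sketch of the validated miRNA-target query described above.
# The target list uses HGNC symbols for the five cytokines of interest.
library(multiMiR)

targets <- c("EGF", "IL1B", "IL3", "TNF", "CCL2")

# table = "validated" restricts the query to experimentally validated
# interactions, i.e. the miRecords, miRTarBase, and TarBase databases
res <- get_multimir(org = "hsa", target = targets,
                    table = "validated", summary = TRUE)

# Inspect which miRNAs hit the cytokine pool
head(res@data[, c("mature_mirna_id", "target_symbol", "database")])
```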
### 2.7. A549 Transfection with AntagomiR-126-3p and Digital Droplet PCR
A549 were cultured in complete media. For silencing endogenous miR-126-3p, we performed the same protocol we recently described, with some modifications [34]. Briefly, A549 were plated at a density of 1.5 × 10^4 cells/well in 24-well plates with DMEM-F12 10% FBS. A mix composed of 25 picomoles LNA_126 (miRCURY LNA miRNA inhibitor (5)—3′Fam, Cat. N. 339121, Qiagen) or 25 picomoles control (miRCURY LNA miRNA inhibitor control (5)-No Modification Fam, Cat. N. 339126, Qiagen) in Opti-MEM reduced serum media and lipofectamine (1 μl/100 μl Opti-MEM, RNAiMAX, Invitrogen, Cat. N. 56531) was added to the A549 and incubated for 5 hours. After that, the medium was removed, and fresh DMEM-F12 10% FBS (control) or AMSC supernatants were added to the cells for up to 24 hours of total transfection. To verify transfection, cells were subjected to digital droplet PCR to quantify the decrease in copy number of miR-126-3p.

Total RNA was extracted from the A549 cell pellet by miRNeasy kit (Qiagen, GmbH, Hilden, Germany) according to the manufacturer’s recommendations. Purified RNA was quantified on a NanoDrop spectrophotometer and used for the reverse transcription reaction with the TaqMan miRNA Reverse Transcription Kit and miR-126-3p-specific stem-loop primers (Applied Biosystems, Carlsbad, CA, USA). 10 ng of total extracted RNA, 1 × stem-loop RT primer specific for miRNAs, 3.33 U/μL MuLV reverse transcriptase, 0.25 U/μL RNase inhibitor, 0.25 mM dNTPs, and 1 × reaction buffer were run in a total reaction volume of 15 μL and incubated at 16°C for 30 min, 42°C for 30 min, and 85°C for 5 minutes in a thermal cycler. Afterward, digital droplet PCR was performed with the QX200 ddPCR system (Bio-Rad, Hercules, CA, USA), using a TaqMan MicroRNA assay specific for hsa-miR-126-3p (Applied Biosystems) as we reported [34, 35]. The reaction mix was assembled with 1.3 μl of miR-126-3p-specific cDNA, 1 × TaqMan MicroRNA miR-126-3p-specific assay, and 1 × ddPCR supermix for probes (no dUTP) (Bio-Rad) in 20 μl of total volume. The mix was loaded into droplet generator cartridges with 70 μl droplet generation oil for probes (Bio-Rad). Each reaction mixture was partitioned by the QX200 droplet generator (Bio-Rad) into approximately 20,000 droplets. Then, 40 μl of droplets were placed into a PCR 96-well plate that was sealed using a pierceable foil heat seal and a PX1 PCR plate sealer (Bio-Rad). The PCR was performed on the T-100 thermal cycler (Bio-Rad) under the following conditions: 10 minutes at 95°C; 30 seconds at 94°C and 1 minute at 60°C for 40 cycles with a ramp speed of 2°C/s; 98°C for 10 min; and hold at 4°C for at least 40 minutes. Droplets were assessed with a QX200 droplet reader and QuantaSoft software (Bio-Rad). The threshold between the positive and negative droplet clusters was manually set for all samples. ddPCR data are presented as absolute copies of transcripts/μl of reaction sample ± Poisson 95% confidence intervals.
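The Poisson statistics underlying the reported copies/μl can be sketched in a few lines of R; the droplet counts below are hypothetical, and the ~0.85 nl droplet volume is the nominal QX200 value assumed here:

```r
# Minimal R sketch of the Poisson quantification behind ddPCR readouts;
# droplet counts are hypothetical and the ~0.85 nl droplet volume is the
# nominal QX200 value assumed here.
n_total <- 18500  # accepted droplets
n_pos   <- 950    # droplets above the manually set fluorescence threshold

p      <- n_pos / n_total  # fraction of positive droplets
lambda <- -log(1 - p)      # mean copies per droplet (Poisson assumption)

v_droplet_ul  <- 0.00085   # 0.85 nl expressed in microliters
copies_per_ul <- lambda / v_droplet_ul

# Approximate 95% CI: binomial error on p propagated through -log(1 - p)
se_p <- sqrt(p * (1 - p) / n_total)
ci   <- -log(1 - (p + c(-1.96, 1.96) * se_p)) / v_droplet_ul
c(copies_per_ul = copies_per_ul, lower = ci[1], upper = ci[2])
```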
### 2.8. MiRNA Extraction and Quantification
MicroRNA extraction was performed from paraffin tumor tissue sections (RNeasy DSP FFPE Kit, Qiagen) and the RNA amount was determined using a NanoDrop spectrophotometer. In contrast, miRNAs from patients’ serum (200 μl) were isolated by the Qiagen miRNeasy kit with modifications for biofluid applications. Syn-cel-miR-39 spike-in synthetic RNA (Qiagen) was added to monitor extraction efficiency. Afterward, on both tissue sections and sera samples, reverse transcription was performed using the MiRCURY LNA Reverse Transcription Kit (Qiagen) in a ThermoMixer 5436 (Eppendorf, Italy) according to the following protocol: 42°C for 60 minutes, 95°C for 5 minutes, hold at 4°C [33].

Selected miRNA levels, namely hsa-miR-101, hsa-miR-126-3p, hsa-miR-486, and hsa-let-7g, were quantified by relative quantification using the Qiagen LNA-based SYBR green detection method (miRCURY LNA miRNA PCR assay, Qiagen). Briefly, 3 μl of cDNA was used on the Applied Biosystems 7900HT machine, adding the relevant ROX concentration to the qPCR mix [33]. Relative miRNA expression was calculated with the 2^-ΔCt method, using hsa-miR-16 as endogenous control for cancer tissues and miR-16/cel-miR-39 for sera [33].
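A minimal R sketch of the 2^-ΔCt computation described above, with hypothetical Ct values and miR-16 as the endogenous control for tissue:

```r
# Minimal R sketch of the 2^-delta-Ct relative quantification described
# above; Ct values are hypothetical. For sera, the reference would be the
# miR-16/cel-miR-39 pair instead of miR-16 alone, as stated in the text.
ct <- data.frame(
  sample = c("tissue_1", "tissue_2", "tissue_3"),
  miR126 = c(27.1, 26.4, 27.8),  # Ct of the miRNA of interest
  miR16  = c(22.0, 21.7, 22.3)   # Ct of the endogenous control
)

delta_ct <- ct$miR126 - ct$miR16
rel_expr <- 2^(-delta_ct)        # relative expression, 2^-delta-Ct method

data.frame(sample = ct$sample, rel_expr)
```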
### 2.9. Statistical Analysis
Results are expressed as the arithmetic mean ± standard deviation (SD) of at least 3 individual experiments for each sample group. Statistical differences between values were determined by Student’s t-test; p<0.05 was considered statistically significant.
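As a sketch, the comparison described above reduces to the following R call on hypothetical replicate values:

```r
# Minimal R sketch of the statistical comparison described above: an
# unpaired Student's t-test on n = 3 hypothetical replicate measurements.
treated <- c(0.82, 0.91, 0.87)  # e.g. MTS absorbance, conditioned media
control <- c(0.55, 0.60, 0.58)  # e.g. MTS absorbance, basal medium

res <- t.test(treated, control, var.equal = TRUE)  # classical Student's t
res$p.value < 0.05  # significance threshold used in the study
```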
## 3. Results
First, we investigated whether early-stage lung adenocarcinoma could influence the biological behavior of MSC according to (1) their tissue source (autologous lung or mediastinal adipose tissue-derived MSC, isolated from a tumor-free nearby or remote area, respectively) and (2) the biological feature of the lung lesion (malignant or benign, defined by histological analysis) from which the MSC were derived. To this aim, we conditioned the A549 cell line with supernatants of autologous lung or adipose-derived MSC (LMSC or AMSC) derived in parallel from patients with benign disease (pulmonary hamartochondroma) or malignant tumor (early-stage lung adenocarcinoma, see supplementary Tables 1a and 1b). We tested both cell proliferation (at 0, 3, 5, and 7 days) and the clonogenic capacity of A549. The experimental plan is depicted in Figure 1(a). Results showed a significant increase of A549 cell proliferation at 5 and 7 days compared to control (Figure 1(b), p<0.001 both vs control) after conditioning the lung tumor cell line with supernatants derived from LMSC or AMSC.

Figure 1
(a) Experimental design of the study on the A549 cell line; 1 and 2 represent the steps of the experiments. (b) A549 proliferation by MTS assay employing supernatants of autologous A- and LMSC derived from patients with lung adenocarcinoma or (c) hamartochondroma. (d) Clonogenic assay on A549 with the corresponding autologous supernatants of (b) and (c). N = 4 different conditioned media for each source of A- and LMSC. Samples were normalized on time 0. ∗p<0.05; #p<0.001.
When A549 were cultured with supernatants derived from autologous LMSC or AMSC, both obtained from patients with pulmonary hamartochondroma, A549 proliferation decreased compared to control (Figure 1(c), day 7 p<0.05 both vs control). Stimulation with conditioned media of benign or malignant origin did not increase apoptosis of A549 compared to control over time, as demonstrated by the percentage of Annexin V/propidium iodide double-positive cells detected by FACS analysis (Supplementary Figure 1). This suggests that neither tumor microenvironment exerted an apoptotic effect beyond the physiological cell turnover seen in controls.

Interestingly, a similar scenario was reproduced for the clonogenic capacity of A549, where only mediastinal AMSC derived from patients with early-stage lung adenocarcinoma were able to enhance clonogenicity with respect to control (Figure 1(d), p<0.05 vs control). Conditioning A549 with supernatants derived from L- or AMSC of patients with pulmonary hamartochondroma did not alter the clonogenic capacity of A549 (Figure 1(d)).

Afterward, we verified whether the same effects were reproducible on early-stage human primary lung adenocarcinoma cells (4 lines of LAC, staging T1/N0/M0 G1), employing the same supernatants as for the A549 cell line. The experimental sequence is described in Figure 2(a). Notably, both supernatants from autologous LMSC and AMSC (derived from the same subject) of patients with lung adenocarcinoma were able to sustain cell proliferation of LAC cells similarly to controls at all time points (Figure 2(b), p>0.05). In contrast, conditioned media of autologous L- or AMSC obtained from patients with pulmonary hamartochondroma decreased the cell proliferation of LAC compared to controls at days 5 and 7 (Figure 2(c), day 5 LMSC p<0.001, AMSC p<0.05 vs controls; day 7 both LMSC and AMSC p<0.001 vs controls). Consistently, we also found a significant enhancement of the clonogenic ability of LAC after culturing with supernatants of autologous LMSC or AMSC derived from patients with early-stage lung adenocarcinoma (Figure 2(d), p=0.04 LMSC and p=0.02 AMSC vs controls). Conditioned media derived from autologous L- or AMSC of patients with hamartochondroma decreased or did not alter the clonogenic capacity of LAC cells (Figure 2(d), p=0.011 LMSC vs controls).

Figure 2
(a) Experimental design of the study on human primary lung adenocarcinoma cells (LAC); steps 1 to 4 represent the experimental sequence. (b) LAC proliferation by MTS assay with supernatants of autologous A- and LMSC derived from patients with lung adenocarcinoma or (c) hamartochondroma. (d) Clonogenic assay of heterologous LAC with the corresponding autologous supernatants of (b) and (c). Below the graph, representative images of Giemsa staining of the clones generated by heterologous LAC cells after 3 weeks of culture in the presence of autologous A- or LMSC-derived conditioned media. N = 4 different conditioned media for each source of A- and LMSC. ∗p<0.05; #p<0.001.
We then evaluated whether these effects also entailed a different paracrine signature of the autologous MSC-derived microenvironments. Thus, we screened a panel of 80 pro- and anti-inflammatory cytokines and growth factors secreted by both sources of MSC into the conditioned media. Results analyzed through the heatmap (soluble factors with no expression in both L- and AMSC were ruled out) revealed that the secretomes of autologous L- and AMSC of patients with early-stage lung adenocarcinoma were similar (Figure 3(a)), suggesting a comparable in vitro microenvironment. Interestingly, supernatants derived from AMSC of patients with hamartochondroma exhibited a more heterogeneous profile of soluble factors with respect to that derived from the corresponding autologous MSC counterpart in the lung (Figure 3(b)).

We then examined these secretomes further. Keeping the source of the stromal fraction constant during the analysis, we calculated the ratio of the soluble factors between malignant and benign microenvironments, setting an upregulation ratio of >1.2 as cut-off. We then sought the cytokines or growth factors commonly upregulated in the two microenvironments and generated a functional protein association network with the STRING software. Results showed a shared upregulation of 6 soluble factors: EGF, IL-1β, IL-3, TNF-α, CCL2, and SPP1 (osteopontin). Notably, the STRING analysis, which identifies protein-protein interactions, also showed that all cytokines but SPP1 were strictly interconnected in the same cluster with a high confidence score of 0.7. Afterward, we used Metascape for the pathway enrichment analysis of this network, and we found that the most significant terms within the cluster consisting of EGF, IL-1β, IL-3, TNF-α, and CCL2 were associated with lung fibrosis and proinflammatory/fibrotic mediators (Figure 3(d)).

Figure 3
(a) and (b) Hierarchical clustering heatmaps based on the cytokine arrays of autologous A- and LMSC-derived conditioned media from patients with lung adenocarcinoma and hamartochondroma (malignant and benign microenvironments, respectively). The red-yellow and blue color ranges indicate cytokines with high and low average levels, respectively. (c) Protein-protein interaction network generated by the STRING database on the 6 upregulated cytokines commonly shared by autologous A- and LMSC when their malignant and benign microenvironments are compared; the connections among nodes reflect the strength of correlation among the proteins analyzed. (d) Pathway and process enrichment evaluation derived from the Metascape analysis of the cytokines displayed in (c), showing that IL-1β, IL-3, MCP-1, TNF-α, and EGF are all related to lung fibrosis and inflammation.
These results indicated a potential tumor-supporting behavior of AMSC localized in remote areas even at the early stages of the disease. To validate this aspect, we attempted to similarly “educate” in vitro the benign AMSC obtained from patients with hamartochondroma towards a “malignant-like” biological behavior, by reversing the experiment and culturing benign AMSC for 7, 14, and 21 days with only the supernatants produced by LAC cells. Afterward, the conditioned media produced by these adenocarcinoma-educated AMSC were collected and transferred onto A549 cells, which were tested for cell proliferation. We found a significant increase in proliferation of A549 at 14 and 21 days of tumor preconditioning compared to day 7 (Figure 4, p<0.001 both).

Figure 4
Proliferation assay of A549 after culture with supernatants of benign AMSC previously preconditioned with LAC-derived conditioned media for up to 21 days and then retested on A549. The graph shows an increase in A549 cell proliferation at 14 and 21 days compared to 7 days. Samples were normalized on basal media of AMSC. #p<0.001.

Considering the analysis of the malignant and benign secretomes of A- and LMSC (Figures 3(a)–3(d)), and given that miRNAs have been shown to be eligible candidates for mediating long-distance paracrine effects and for instructing MSC and endothelial cells in lung cancer [36], we performed a computational analysis of miRNAs targeting the pool of the 5 cytokines, using the miRNA-target prediction tools miRecords, miRTarBase, and TarBase combined with a review of the lung cancer literature. We found four miRNAs: miR-126, 101, 486, and let-7g. To validate the four miRNAs, we explored their expression levels in matched lung cancer tissues and blood samples from the same set of patients employed to isolate LAC cells. Results showed that among all analyzed miRNAs, miR-126 displayed the highest expression in the lung cancer tissue (Figure 5(a), p<0.01 and p<0.001), whereas in the circulation it was miR-486 that showed the highest levels (Figure 5(b), p<0.05 and p<0.01).

Figure 5
Real-time PCR for miR-101, 126, 486, and let-7g in (a) cancer tissues and the corresponding (b) sera of 3 patients with early-stage lung adenocarcinoma. Samples were normalized on miR-16 and miR-16/cel-miR-39 for tissue and sera, respectively. ∗∗p<0.01; #p<0.001.
miR-126 is described as one of the most important differentially expressed miRNAs in lung tumors [37] and a key miRNA regulating the pathogenesis [38] and angiogenesis of NSCLC [39]. We also recently showed the angiogenic property exerted by miR-126 in endothelial cells [34]. Thus, to investigate whether miR-126 mediates the proliferative effect on cancer cells, we first evaluated in A549 by digital droplet PCR whether the AMSC-derived conditioned media obtained from patients with lung adenocarcinoma were able to increase the levels of miR-126-3p. Results showed that the treatment did not significantly increase the copy number of miR-126-3p up to 7 days compared to the corresponding physiological levels (Figure 6(a)). Afterward, we knocked down miR-126-3p in A549 (the recipient cells) by small interfering-based experiments at 24 hours (the best-performing time point; data not shown), halving the copy number in cells (Figure 6(b)). After silencing, cells were reconditioned with the AMSC-derived supernatants from patients with lung adenocarcinoma and the MTS assay was performed. Notably, cell proliferation was significantly decreased only when miR-126-3p was silenced, compared to the scrambled control (Figure 6(c), p<0.05). This effect was not reproducible in A549 silenced for miR-126-3p but conditioned with the basal media of the cells (Figure 6(d)).

Figure 6
MiR-126-3p silencing in A549 cell line. (a) Absolute copy number of miR-126-3p quantified by digital droplet PCR in A549 treated with AMSC-derived supernatants from patients with lung adenocarcinoma or control basal media. (b) miR-126-3p silencing-based experiments at 24 hours in A549 conditioned with AMSC-derived supernatants from patients with lung adenocarcinoma, showing the decrease of the copy number of the endogenous miR-126-3p. Samples were normalized to the control condition. (c) MTS assay of the A549 after silencing of the endogenous miR-126-3p and treated with AMSC-derived supernatants from patients with lung adenocarcinoma, showing the significant decrease of cell proliferation after 24 hours. The effect was not reproducible in the control (d).∗p<0.05.
## 4. Discussion
This short report highlights how the tumor microenvironment is already defined at early stages, such that the stromal fraction can be influenced even at remote sites and in the absence of metastasis. Intriguingly, remote AMSC derived from subjects with lung adenocarcinoma are permissive to cell proliferation and clonogenic properties when tested on both A549 and LAC cells, similar to the stromal lung counterpart. Conversely, this effect is lost in “benign conditions.” The first important point of novelty of our brief study is that we were able to isolate both A- and LMSC from the same patient (autologous stromal cells), thereby ruling out variability in biological performance between individuals. A further point of originality is that our study focused on the early stages of lung adenocarcinoma, which currently receive increasing clinical attention. Accordingly, we have shown that at the early stages of the tumor, malignant-like microenvironments are already generated by AMSC at remote sites, and they overlap with those of the lung stromal counterpart, which is tumor adjacent. This is in line with the concept that cancer cannot be interpreted only as a local disease but rather as a systemic disorder [40, 41], and with the concept of the tumor-permissive microenvironment as the result of the interaction between stroma and cancer [42]. Our data extend this idea to the early stages of the tumor, not only to when metastasis occurs [43]. In fact, several pieces of evidence already indicate that cancer cell spreading is a very early event [44].

The biological alterations we have highlighted here are centered on the microenvironment, which is considered a critical hallmark for elucidating mechanisms of cancer plasticity [45] and where the stromal fraction exerts a critical regulatory role within the tissue [46]. Specifically, our data show that the difference between the secretomes of benign and malignant-like microenvironments of AMSC at early stages is limited to the increase of a small pool of soluble factors. Notably, alongside IL1-β, IL-3, MCP-1, and TNF-α, EGF emerges among them, the most acknowledged target for lung cancer therapy [47]. We also found that this set of soluble mediators is functionally interconnected and related to lung inflammation and fibrosis. The grade of fibrosis in lung cancer is a key issue and represents the modification of a permissive microenvironment induced by the continuous crosstalk between tumor and cancer-associated fibroblasts (as part of the stromal fraction), which leads to the manipulation of extracellular matrix components and to the transition towards epithelial-mesenchymal traits [48]. Lung fibrosis may be a prerequisite for the development of lung adenocarcinoma [49], as is the perpetuating inflammatory condition which fosters the most suitable biological background for tumor progression [50]. Profibrotic markers (alpha-smooth muscle actin, fibrillar collagens, SMAD3) expressed in histological samples of patients with lung cancer correlate with low survival [51]. Other important observations derive from cases of idiopathic pulmonary fibrosis and interstitial fibrosis, which are considered independent risk factors for lung adenocarcinoma [52–54]. Lung fibrosis also positively correlates with a glycolytic metabolism of the tumor in subjects with stage IIIA NSCLC [55].

Notably, we have provided a first biological indication of the possibility to “educate” benign AMSC toward a malignant-like behavior.
This is in line with several observations regarding the process of educating MSC [56], which has been described for bone marrow-derived MSC differentiating towards a malignant phenotype once recruited by the cancer microenvironment [57]. Novel clinical applications employing chemotherapeutic agents or enhancing CAR-T/NK cells in cancer immunotherapy exploit the ability to educate MSC in order to guide the tropism of the stromal fraction [4, 58]. Our study is also coherent with additional reports showing that tumor cells educate MSC depending on the tumor microenvironment [59], strengthening the significance of the microenvironmental control exerted by cancer.

Although we have not identified the exact molecular mechanism by which a malignant microenvironment can also be generated in the remote stromal area, we have provided a first biological correlation between the secretome of the stromal fraction at remote sites (the pool of 5 soluble factors increased between malignant and benign microenvironments) and changes in matched circulating and tissue miRNAs. Our data highlight how serum levels of miR-101, 126, and let-7g already reflect the expression profile of the corresponding cancer tissue in patients with early-stage lung adenocarcinoma. miR-486 represents an exception in our findings, as its levels are downregulated and upregulated in tissue and serum, respectively. This is not surprising, considering that the decrease of miR-486 found in NSCLC tissues is inversely correlated with both lung metastasis [60] and cancer stage [61]. Plasma levels of miR-486 are reported to increase after NSCLC resection [62]. Based on these findings, miR-486 is currently considered one of the most significant prognostic markers for early diagnosis in NSCLC [63]. Besides, miR-126-3p, known as an angiomiRNA and mainly produced by platelets and endothelial cells [33], is already upregulated in both cancer tissue and serum, likely suggesting an early involvement of dysregulated angiogenesis under the influence of early-stage lung adenocarcinoma. miR-126 possesses prognostic value, and its involvement in lung cancer angiogenesis has been described [64, 65]. Because silencing miR-126-3p decreased proliferation, our results suggest a specific effect of the “malignant-like” AMSC-derived media. Our explanation is that the AMSC-derived supernatants did not influence per se the level of miR-126-3p in A549 (as it was similar to that with control media up to 7 days), but rather the endogenous target(s) of miR-126-3p associated with proliferation once cells were treated. We found that a defined pool of 5 soluble factors (EGF, IL-1β, IL-3, TNF-α, CCL2) was predicted to be under the control of miR-126-3p. Moreover, a bidirectional cytokine-miRNA relationship has been reported in inflammatory systems, where soluble molecules can influence the activity of miRNAs and vice versa [66, 67]. Thus, it is plausible that the proliferative effects were mediated by a feedback loop between the cytokines within the AMSC-derived media and miR-126. A main player in the regulation of miR-126-associated proliferation could be the epidermal growth factor-like domain-containing gene 7 (EGFL7), a master regulator of angiogenesis and cancer pathogenesis [38, 68].
miR-126 is encoded within an intron of EGFL7 and may silence genes such as mTOR and PIK3R2 [69], which are all linked to cell proliferation. Notably, the enhancement of miR-126 in A549 cultures impairs tumor cell proliferation [71]. However, these studies have mainly been performed in the absence of specific treatments such as AMSC-derived conditioned media, strengthening the role of miRNAs in targeting different molecular partners according to the biological stimulus applied.
## 5. Conclusions
Our study has several limitations. We have not demonstrated a direct association between the soluble factor-miRNA axis and the systemic effect on stromal cells at remote sites under the influence of early-stage lung adenocarcinoma. Moreover, the education of the stromal fraction cannot be restricted to the secretome of cancer cells alone; additional molecular and biological mechanisms, including genomic alterations and mutations, certainly need to occur in order to favor the progression of lung cancer. Besides, we must consider that AMSC retain an intrinsic ability to favor cell proliferation and proangiogenic properties depending on the adipose depot [20], suggesting further prudence in the use of MSC in cancer-related clinical applications.

Despite this, our study sheds further light on the complex relationship between cancer and the stromal compartment, mediated by the microenvironment, in communicating with niches at remote sites.
---
*Source: 1011063-2023-01-24.xml* | 1011063-2023-01-24_1011063-2023-01-24.md | 49,210 | Remote Adipose Tissue-Derived Stromal Cells of Patients with Lung Adenocarcinoma Generate a Similar Malignant Microenvironment of the Lung Stromal Counterpart | Elena De Falco; Antonella Bordin; Cecilia Menna; Xhulio Dhori; Vittorio Picchio; Claudia Cozzolino; Elisabetta De Marinis; Erica Floris; Noemi Maria Giorgiano; Paolo Rosa; Erino Angelo Rendina; Mohsen Ibrahim; Antonella Calogero | Journal of Oncology
(2023) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2023/1011063 | 1011063-2023-01-24.xml | ---
## Abstract
Cancer alters both local and distant tissue by influencing the microenvironment. In this regard, the interplay with the stromal fraction is considered critical as this latter can either foster or hamper the progression of the disease. Accordingly, the modality by which tumors may alter distant niches of stromal cells is still unclear, especially at early stages. In this short report, we attempt to better understand the biology of this cross-talk. In our “autologous stromal experimental setting,” we found that remote adipose tissue-derived mesenchymal stem cells (mediastinal AMSC) obtained from patients with lung adenocarcinoma sustain proliferation and clonogenic ability of A549 and human primary lung adenocarcinoma cells similarly to the autologous stromal lung counterpart (LMSC). This effect is not observed in lung benign diseases such as the hamartochondroma. This finding was validated by conditioning benign AMSC with supernatants from LAC for up to 21 days. The new reconditioned media of the stromal fraction so obtained, was able to increase cell proliferation of A549 cells at 14 and 21 days similar to that derived from AMSC of patients with lung adenocarcinoma. The secretome generated by remote AMSC revealed overlapping to the corresponding malignant microenvironment of the autologous local LMSC. Among the plethora of 80 soluble factors analyzed by arrays, a small pool of 5 upregulated molecules including IL1-β, IL-3, MCP-1, TNF-α, and EGF, was commonly shared by both malignant-like autologous A- and L-MSC derived microenvironments vs those benign. The bioinformatics analysis revealed that these proteins were strictly and functionally interconnected to lung fibrosis and proinflammation and that miR-126, 101, 486, and let-7-g were their main targets. Accordingly, we found that in lung cancer tissues and blood samples from the same set of patients here employed, miR-126 and miR-486 displayed the highest expression levels in tissue and blood, respectively. When the miR-126-3p was silenced in A549 treated with AMSC-derived conditioned media from patients with lung adenocarcinoma, cell proliferation decreased compared to control media.
---
## Body
## 1. Introduction
Mesenchymal stem cells (MSC) have been described as adult multipotent stem cells, showing many relevant properties, spanning from the ability to immunomodulate and migrate to specific sites of injury to the transdifferentiation into multiple cell types [1, 2]. MSC have been considered ideal candidates for many clinical and cell therapy applications, almost concluding that their wide applicability was also possible in cancer treatment [3].The biological interaction between MSC and tumors is complex and enormously debated. Several controversies exist about the potential of MSC to enhance or to even arrest tumorigenicity, not only of their double-faced behavior such as tumor-tropism (hence tested as vehicles for anticancer genes targeting cancer cells or as enhancement of the CAR-T immunotherapy) [4, 5] and immunomodulatory features but also prometastatic functions [6–8], transdifferentiation into cancer-associated fibroblasts and drug resistance [9], the parallel ability to overturn the immune system [10–13], and activation of autophagy and neo-angiogenesis [14], therefore contributing to tumor evolution. This discrepancy also includes exosome-derived MSC, considered both an intriguing therapeutic tool for drug delivery and the main biological mediators of several supporting tumor molecular processes [15]. Moreover, from a clinical standpoint, it has been recognized that the endogenous recruitment of MSC (of different origins including adipose) from systemic niches may occur by tumor secretion of inflammatory soluble factors [16] and that a correlation exists between circulating mesenchymal tumor cells and stage of tumor development [17, 18].This scenario is also complicated by recent indications about the heterogeneity of MSC and the phenotypic and functional changes potentially caused by tumors. For instance, adipose tissue and bone marrow-derived MSC have shown differences with respect to stem cell content and epigenetic states [19, 20]. Besides, MSC obtained from diverse sources such as heart, dermis, bone marrow, and adipose tissue have been reported as genotypically different, expressing different levels of embryonic stem cell markers such as OCT-4, NANOG, and SOX-2 [21], and biological properties including angiogenesis and secretome [20]. When MSC are derived from cancer tissues, they show altered molecular and functional properties [22–24], suggesting that the tumor characteristics such as benignity or malignancy could influence the environment where MSC is located.From a biological standpoint, the evolution from a local to a systemic cancer microenvironment can be driven either by phenotypically altered cancer-associated cells (fibroblasts and endothelial cells), which organize clusters of systemic spreading cells or by niche-to-niche recruiting phenomena from the bone marrow to the tumor site [25, 26]. However, the thorny question is still centered on the modality by which cancer can control the systemic environment, influencing remote “normal” and nonbone marrow stem cell-derived niches including distant MSC niches, particularly at the early stages of the tumor, which are of paramount biological and clinical relevance to understand cancer progression.Assuming that the pathophysiology of cancer can be interpreted as a systemic disease, in this short report we attempt to investigate whether MSC-derived microenvironments at a remote site from a tumor can be already altered at the early stages of lung adenocarcinoma [27].
## 2. Methods
### 2.1. Surgical Specimen Collection and Clinical Database
At the end of the surgical procedure, a small sample of mediastinal adipose [1, 2] and lung tissue was collected by electrocoagulation from patients undergoing surgical procedures for hamartochondroma and non-small cell lung carcinoma (NSCLC). Surgical procedures were conducted at S. Andrea Hospital, Rome. Written informed consent was obtained from patients, before starting all the surgical and laboratory procedures. Patients with NSCLC and staging T1N0M0 G1 were selected, whereas subjects with metastasis have been excluded from the study.
### 2.2. Isolation and Characterization of AMSC and LMSC
AMSCs were isolated and characterized as previously described [1, 2, 28]. Patient’s characteristics are described in supplementary Tables 1a and 1b. Lung specimens were chopped with a scalpel and scissors in a 100 mm Petri dish, then gently transferred into a clean 100 mm Petri dish to allow tissue adherence. A complete growth medium composed of DMEM high glucose (Invitrogen) supplied by 10% FBS, antibiotics, and L-glutamine (all Gibco) was added to the fragments. Plates were incubated at 37°C in a fully humidified atmosphere of 5% CO2, avoiding shaking the plates at least for 72 hours. Half of the medium was replaced with a fresh complete medium every three days.
### 2.3. Isolation of Lung Adenocarcinoma Cells and In Vitro Conditioning with MSC-Derived Supernatants
Human primary lung adenocarcinoma cells (LAC) were isolated as we already previously described [29]. The cells obtained were cultured in a complete medium (DMEM-F12, penicillin-streptomycin, L-glutamine, nonessential amino acids, sodium pyruvate, all Gibco, Monza, Italy, and 5% FBS, Lonza, Milan, Italy). Lung adenocarcinoma A549 was purchased by ATCC and cultured in DMEM-F12 supplemented with 10% FBS (All Gibco). AMSC and LMSC supernatants derived from patients with hamartochondroma or NSCLC were collected between passages 3–6, then stored at −80°C until use.A549 or human primary LAC was conditioned by removing their own medium and replacing it with A- or LMSC-derived conditioned media diluted 1 : 1 with basal media of A549 or LAC. Every 3 days, 1/5 of the whole medium was discarded and fresh media was replaced. Cells were cultured according to the time course indicated in the study.
### 2.4. Proliferation, Clonogenic Assay, and FACS Analysis
Both LAC and A549 were seeded onto 96-well plates (150 cells/well) and incubated for 24 hours with DMEM low glucose 10% FBS [29]. Then cells were exposed to the different conditioned media collected from AMSC and LMSC for up to 7 days. Cells treated with the basal medium were used as a control. The effect of conditioned media on cell viability was evaluated by the MTS assay. Briefly at 3, 5, and 7 days after treatment, 20 μl of MTS reagent were added to each microculture well, and plates were incubated for 2 hours at 37°C, after which absorbance at 492 nm (optical density) was measured using a microplate reader.To test the secondary colony forming efficiency (CFU) assay, LAC or A549 were seeded at passage 3 at low density (10 cells/cm2) [29] in AMSC- and LMSC-derived conditioned media for 14 days and incubated at 37°C. Colonies produced were fixed with 4% paraformaldehyde and then stained with Giemsa (Sigma, Milan, Italy) for 1 h and counted by optical microscope. A cluster with >50 cells was considered a colony [20].FACS analysis was performed to investigate the percentage of apoptotic cells after stimulation with AMSC-derived conditioned media as previously reported [30, 31]. Briefly, semiconfluent cultures were harvested with Accutase (Sigma-Aldrich) and stained for 30 minutes with 5 μl AnnexinV-FITC antibody (Invitrogen Cat. number 88-8005-74) and counterstained with propidium iodide (10 ng/mL, Invitrogen, Cat. number 88-8005-74) according to the manufacturer’s protocol for adherent cells. Data acquisition was performed on a FACS-Aria II platform equipped with FACSDiva software (BD Biosciences). All flow cytometry data were analyzed with FlowJo software (FlowJo LCC, Ashland, USA).
### 2.5. Analysis of the Autologous A- and LMSC-Derived Secretome
The evaluation of the different microenvironments was performed on collected supernatant obtained from both A- and LMSC in patients with lung adenocarcinoma and hemartochondroma. C-Series Human Cytokine Antibody Array C5 (RayBiotech, Inc) was used for simultaneous semiquantitative detection of 80 multiple cytokines/growth factors as previously described [31]. Briefly, an equal volume of collected undiluted supernatants was incubated by gentle shaking overnight at 4°C on the membrane of the kit C-Series Human Cytokine Antibody Array C5. Chemiluminescence was employed to quantify the spots (at the same time exposure was used for all membranes) and each spot signal was analyzed by ImageJ. The samples were normalized on positive control means (six spots in each array) and then values were expressed as a percentage. To visualize the overall changes in cytokines array average data, results were graphed as log (2) in heatmap analysis by using the pheatmap R package in the RdYlBl color scale (from 0 to 7 expression levels). Cytokines with zero values in all replicates were ruled out.
### 2.6. Interaction and Functional Evaluation of Cytokines Network within the Microenvironments and miRNA Target Interaction Analysis
Correlation analysis between the cytokines expressed by the A- or LMSC-derived benign and malignant microenvironments was obtained by calculating the fold changes between these two conditions. A fold change of >1.2 was considered upregulated [32]. The analysis of known protein interactions was performed on the cytokines commonly upregulated between A- and LMSC using the STRING software (https://string-db.org/, version 11.5) [33], building the whole network according to the high confidence setting (0.7) and default options. The pathway and process enrichment analyses were performed using Metascape, as described elsewhere [34]. The miRNA-target interaction analysis was performed using the "multiMiR" R package and by reviewing the literature with lung cancer as a keyword. The list of miRNAs in the R package was obtained using the get_multimir function restricted to validated data; the databases queried were miRecords, miRTarBase, and TarBase.
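A minimal sketch of the fold-change screen described above, with hypothetical cytokine values; the intersection of the AMSC and LMSC hits approximates the commonly upregulated set that was passed to STRING.

```python
# Minimal sketch of the fold-change screen (hypothetical values). For each MSC
# source, the malignant/benign ratio is computed per cytokine; a factor is
# called upregulated when the ratio exceeds 1.2, and the intersection of the
# AMSC and LMSC hits gives the commonly upregulated set.
THRESHOLD = 1.2

def upregulated(malignant: dict, benign: dict, threshold: float = THRESHOLD) -> set:
    return {c for c in malignant
            if c in benign and benign[c] > 0 and malignant[c] / benign[c] > threshold}

amsc_hits = upregulated({"EGF": 3.0, "CCL2": 2.5, "IL-8": 1.0},
                        {"EGF": 1.0, "CCL2": 2.0, "IL-8": 1.0})
lmsc_hits = upregulated({"EGF": 2.4, "CCL2": 3.6, "IL-8": 0.9},
                        {"EGF": 1.5, "CCL2": 2.0, "IL-8": 1.0})
print(amsc_hits & lmsc_hits)  # cytokines shared by both sources -> {'EGF', 'CCL2'}
```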
### 2.7. A549 Transfection with AntagomiR-126-3p and Digital Droplet PCR
A549 were cultured in complete media. For silencing endogenous miR-126-3p, we performed the same protocol we recently described, with some modifications [34]. Briefly, A549 were plated for transfection at a density of 1.5 × 10⁴ cells per well in 24-well plates with DMEM-F12 10% FBS. A mix composed of 25 picomoles LNA_126 (miRCURY LNA miRNA inhibitor (5′-3′ Fam), Cat. N. 339121, Qiagen) or 25 picomoles control (miRCURY LNA miRNA inhibitor control (5′ Fam, no modification), Cat. N. 339126, Qiagen) in Opti-MEM reduced serum medium and lipofectamine (1 μl/100 μl Opti-MEM, RNAiMAX, Invitrogen, Cat. N. 56531) was added to the A549 and incubated for 5 hours. After that, the medium was removed, and fresh DMEM-F12 10% FBS (control) or AMSC supernatants were added to the cells for up to 24 hours of total transfection. To verify transfection, cells were subjected to digital droplet PCR to quantify the decrease in copy number of miR-126-3p. Total RNA was extracted from the A549 cell pellet with the miRNeasy kit (Qiagen, GmbH, Hilden, Germany) according to the manufacturer's recommendations. Purified RNA was quantified on a NanoDrop spectrophotometer and used for the reverse transcription reaction with the TaqMan miRNA Reverse Transcription Kit and miR-126-3p-specific stem-loop primers (Applied Biosystems, Carlsbad, CA, USA). 10 ng of total extracted RNA, 1× stem-loop RT primer specific for miRNAs, 3.33 U/μL MuLV reverse transcriptase, 0.25 U/μL RNase inhibitor, 0.25 mM dNTPs, and 1× reaction buffer were run in a total reaction volume of 15 μL and incubated at 16°C for 30 min, 42°C for 30 min, and 85°C for 5 minutes in a thermal cycler. Afterward, digital droplet PCR was performed with the QX200 ddPCR system (Bio-Rad, Hercules, CA, USA), using a TaqMan MicroRNA assay specific for hsa-miR-126-3p (Applied Biosystems), as we reported [34, 35]. The reaction mix was assembled with 1.3 μl of miR-126-3p-specific cDNA, 1× TaqMan MicroRNA miR-126-3p-specific assay, and 1× ddPCR supermix for probes (no dUTP) (Bio-Rad) in 20 μl total volume. The mix was loaded into droplet generator cartridges with 70 μl droplet generation oil for probes (Bio-Rad). Each reaction mixture was partitioned by the QX200 droplet generator (Bio-Rad) into approximately 20,000 droplets. Then, 40 μl of droplets were transferred into a 96-well PCR plate, which was sealed with a pierceable foil heat seal using a PX1 PCR plate sealer (Bio-Rad). The PCR was performed on a T100 thermal cycler (Bio-Rad) under the following conditions: 10 minutes at 95°C; 40 cycles of 30 seconds at 94°C and 1 minute at 60°C with a ramp speed of 2°C/s; 98°C for 10 min; and a hold at 4°C for at least 40 minutes. Droplets were read with a QX200 droplet reader and QuantaSoft software (Bio-Rad). The threshold between the positive and negative droplet clusters was set manually for all samples. ddPCR data are presented as absolute copies of transcripts/μl of reaction sample ± Poisson 95% confidence intervals.
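For orientation, the sketch below shows the standard Poisson model behind ddPCR absolute quantification; this is the textbook calculation rather than the authors' QuantaSoft output, and the ~0.85 nl droplet volume assumed for the QX200 is an approximation.

```python
# Minimal sketch of ddPCR absolute quantification via the Poisson model (not
# the authors' QuantaSoft output). With k positive droplets out of N, the mean
# copies per droplet is lambda = -ln(1 - k/N); dividing by the droplet volume
# (assumed ~0.85 nl for the QX200) gives copies per microliter of reaction.
import math

DROPLET_VOLUME_UL = 0.00085  # assumed QX200 droplet volume, in microliters

def ddpcr_copies_per_ul(positives: int, total: int) -> float:
    lam = -math.log(1.0 - positives / total)  # Poisson-corrected copies per droplet
    return lam / DROPLET_VOLUME_UL

# Example: 4,000 positive droplets out of ~20,000 generated
print(f"{ddpcr_copies_per_ul(4000, 20000):.0f} copies/ul")  # ~263 copies/ul
```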
### 2.8. MiRNA Extraction and Quantification
MicroRNA extraction was performed from paraffin tumor tissue sections (RNeasy DSP FFPE Kit, Qiagen), and the RNA amount was determined using a NanoDrop spectrophotometer. Differently, miRNAs from patient sera (200 μl) were isolated with the Qiagen miRNeasy kit with further modifications for biofluid applications. Syn-cel-miR-39 spike-in synthetic RNA (Qiagen) was added to monitor extraction efficiency. Afterward, for both tissue sections and serum samples, reverse transcription was performed using the MiRCURY LNA Reverse Transcription Kit (Qiagen) in a ThermoMixer 5436 (Eppendorf, Italy) according to the following protocol: 42°C for 60 minutes, 95°C for 5 minutes, hold at 4°C [33]. Selected miRNAs, namely hsa-miR-101, hsa-miR-126-3p, hsa-miR-486, and hsa-let-7g, were quantified by relative quantification using the Qiagen LNA-based SYBR Green detection method (miRCURY LNA miRNA PCR assay, Qiagen). Briefly, 3 μl of cDNA was used on the Applied Biosystems 7900HT machine, adding the relevant ROX concentration to the qPCR mix [33]. Relative miRNA expression was calculated with the 2^−ΔCt method using hsa-miR-16 as the endogenous control for cancer tissues and miR-16/miR-39 for sera [33].
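A minimal sketch of the 2^−ΔCt calculation described above; how the miR-16 and cel-miR-39 normalizers are combined for sera (here, by averaging their Ct values) is an assumption made for illustration only.

```python
# Minimal sketch of relative quantification by 2^-dCt. For tissue, expression
# is 2^-(Ct_target - Ct_miR16); for sera, the miR-16 and cel-miR-39 spike-in
# normalizers are averaged here (an assumption, not the authors' stated rule).
def rel_expression_tissue(ct_target: float, ct_mir16: float) -> float:
    return 2.0 ** -(ct_target - ct_mir16)

def rel_expression_serum(ct_target: float, ct_mir16: float, ct_mir39: float) -> float:
    return 2.0 ** -(ct_target - (ct_mir16 + ct_mir39) / 2.0)

print(rel_expression_tissue(24.0, 21.0))       # 2^-3 = 0.125
print(rel_expression_serum(26.0, 21.0, 19.0))  # 2^-6 = 0.015625
```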
### 2.9. Statistical Analysis
The results were expressed as the arithmetic mean ± standard deviation (SD) of at least 3 independent experiments for each sample group. Statistical differences between values were determined by Student's t-test, and p<0.05 was considered statistically significant.
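A minimal sketch of this comparison, assuming hypothetical OD492 readings from three independent experiments.

```python
# Minimal sketch of the statistics described above: mean ± SD over at least 3
# independent experiments and a two-sample Student's t-test at alpha = 0.05.
# OD492 values are hypothetical.
import numpy as np
from scipy import stats

treated = np.array([0.62, 0.58, 0.66])
control = np.array([0.41, 0.44, 0.39])

t_stat, p_value = stats.ttest_ind(treated, control)  # Student's t-test, equal variances
print(f"treated {treated.mean():.2f}±{treated.std(ddof=1):.2f} vs "
      f"control {control.mean():.2f}±{control.std(ddof=1):.2f}, p={p_value:.4f}")
```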
## 3. Results
First, we investigated whether early-stage lung adenocarcinoma could influence the biological behavior of MSC according to (1) their tissue source (autologous lung or mediastinal adipose tissue-derived MSC, isolated from a tumor-free nearby or remote area, respectively) and (2) the biological nature of the lung tumor (malignant or benign, defined by histological analysis) from which the MSC were derived. To this aim, we conditioned the A549 cell line with supernatants of autologous lung- or adipose-derived MSC (LMSC or AMSC) obtained in parallel from patients with benign disease (pulmonary hamartochondroma) or malignant tumor (early-stage lung adenocarcinoma; see Supplementary Tables 1a and 1b). We tested both the cell proliferation (at 0, 3, 5, and 7 days) and the clonogenic capacity of A549. The experimental plan is depicted in Figure 1(a). Results showed a significant increase of A549 cell proliferation at 5 and 7 days compared to control (Figure 1(b), p<0.001 both vs control) after conditioning the lung tumor cell line with supernatants derived from LMSC or AMSC.
Figure 1
(a) Experimental design of the study on the A549 cell line; 1 and 2 represent the steps of the experiments. (b) A549 proliferation by MTS assay employing supernatants of autologous A- and LMSC derived from patients with lung adenocarcinoma or (c) hamartochondroma. (d) Clonogenic assay on A549 with the corresponding autologous supernatants of (b) and (c). N = 4 different conditioned media for each source of A- and LMSC. Samples were normalized to time 0. ∗p<0.05; #p<0.001.
When A549 were cultured with supernatants derived from autologous LMSC or AMSC, both obtained from patients with pulmonary hamartochondroma, A549 proliferation decreased compared to control (Figure 1(c), day 7 p<0.05 both vs control). The stimulus with conditioned media of benign or malignant origin did not increase apoptosis of A549 compared to control over time, as demonstrated by the percentage of double-positive Annexin V/propidium iodide cells detected by FACS analysis (Supplementary Figure 1). This result suggested that no apoptotic effect was related to either tumor microenvironment, apart from physiological cell turnover similar to controls. Interestingly, a similar scenario was reproduced for the clonogenic capacity of A549, where only mediastinal AMSC derived from patients with early-stage lung adenocarcinoma were able to enhance clonogenicity with respect to control (Figure 1(d), p<0.05 vs control). The conditioning of A549 with supernatants derived from L- or AMSC of patients with pulmonary hamartochondroma did not alter the clonogenic capacity of A549 (Figure 1(d)). Afterward, we verified whether the same effects were also reproducible on early-stage human primary lung adenocarcinoma cells (4 lines of LAC, staging T1/N0/M0 G1), employing the same supernatants as for the A549 cell line. The experimental sequence is described in Figure 2(a). Notably, both supernatants from autologous LMSC and AMSC (derived from the same subject) of patients with lung adenocarcinoma were able to sustain cell proliferation of LAC cells similarly to controls at all time points (Figure 2(b), p>0.05). Differently, conditioned media of autologous L- or AMSC obtained from patients with pulmonary hamartochondroma were able to decrease cell proliferation of LAC compared to controls at days 5 and 7 (Figure 2(c), day 5 LMSC p<0.001, AMSC p<0.05 vs controls; day 7 both LMSC and AMSC p<0.001 vs controls). Coherently, we also found a significant enhancement of the clonogenic ability of LAC after culturing with supernatants of autologous LMSC or AMSC derived from patients with early-stage lung adenocarcinoma (Figure 2(d), p=0.04 LMSC and p=0.02 AMSC vs controls). Conditioned media derived from autologous L- or AMSC of patients with hamartochondroma decreased or did not alter the clonogenic capacity of LAC cells (Figure 2(d), p=0.011 LMSC vs controls).
Figure 2
(a) Experimental design of the study on human primary lung adenocarcinoma cells (LAC); 1 to 4 represent the steps of the experiments. (b) LAC proliferation by MTS assay with supernatants of autologous A- and LMSC derived from patients with lung adenocarcinoma or (c) hamartochondroma. (d) Clonogenic assay of heterologous LAC with the corresponding autologous supernatants of (b) and (c). Below the graph, representative images of Giemsa staining of the clones generated by heterologous LAC cells after 3 weeks of culture in the presence of autologous A- or LMSC-derived conditioned media. N = 4 different conditioned media for each source of A- and LMSC. ∗p<0.05; #p<0.001.
We then evaluated whether these effects also entailed a different paracrine signature of the autologous MSC-derived microenvironments. Thus, we screened a panel of 80 pro- and anti-inflammatory cytokines and growth factors secreted by both sources of MSC in the conditioned media. Results analyzed through the heatmap (soluble factors with no expression in both L- and AMSC were ruled out) revealed that the secretomes of both autologous L- and AMSC of patients with early-stage lung adenocarcinoma were similar (Figure 3(a)), therefore suggesting a comparable in vitro microenvironment. Interestingly, supernatants derived from AMSC of patients with hamartochondroma exhibited a more heterogeneous profile of soluble factors with respect to that derived from the corresponding autologous MSC counterpart in the lung (Figure 3(b)). Afterward, we further examined these secretomes. Keeping the source of the stromal fraction constant during the analysis, we calculated the increase ratio of the soluble factors between malignant and benign microenvironments; an upregulation ratio of >1.2 was set as the cut-off. Then, we sought the cytokines or growth factors commonly upregulated in the two microenvironments and generated a functional protein association network with the STRING software. Results showed a shared upregulation of 6 soluble factors: EGF, IL-1β, IL-3, TNF-α, CCL2, and SPP1 (osteopontin). Notably, the STRING analysis, which identifies protein-protein interactions, also showed that all cytokines but SPP1 were strictly interconnected in the same cluster with a high confidence score of 0.7 (Figure 3(c)). Afterward, we used Metascape for the pathway enrichment analysis of this network and found that the most significant terms within the cluster consisting of EGF, IL-1β, IL-3, TNF-α, and CCL2 were associated with lung fibrosis and proinflammatory/fibrotic mediators (Figure 3(d)).
Figure 3
(a, b) Hierarchical clustering heatmaps based on the cytokine arrays of autologous A- and LMSC-derived conditioned media from patients with lung adenocarcinoma and hamartochondroma (malignant and benign microenvironments, respectively). The red-yellow and blue color ranges indicate cytokines with high and low average levels, respectively. (c) Protein-protein interaction network generated by the STRING database on the 6 upregulated cytokines commonly shared by autologous A- and LMSC when their malignant and benign microenvironments are compared. The number of nodes is proportional to the strength of correlation among the proteins analyzed. (d) Pathway and process enrichment evaluation derived from the Metascape analysis of the cytokines displayed in (c), showing that IL-1β, IL-3, MCP-1, TNF-α, and EGF are all related to lung fibrosis and inflammation.
These results indicated a potential tumor-supporting biological behavior of AMSC localized in remote areas even at the early stages of the disease. To validate this aspect, we attempted to similarly "educate" in vitro the benign AMSC obtained from a patient with hamartochondroma towards a "malignant-like" biological behavior, by reversing the experiment and culturing benign AMSC for 7, 14, and 21 days with only the supernatants produced by LAC cells. Afterward, the conditioned media produced by these adenocarcinoma-educated AMSC were transferred onto A549 cells, which were tested for cell proliferation. We found a significant increase in cell proliferation of A549 at 14 and 21 days of tumor preconditioning compared to day 7 (Figure 4, p<0.001 both).
Figure 4
Proliferation assay of A549 after culture with supernatants of benign AMSC preconditioned with LAC-derived conditioned media for up to 21 days and then retested on A549. The graph shows an increase in cell proliferation of A549 at 14 and 21 days compared to 7 days. Samples were normalized to the basal media of AMSC. #p<0.001.
Considering the analysis of the malignant and benign secretomes of A- and LMSC (Figures 3(a)–3(d)), and given that miRNAs have been shown to mediate long-distance paracrine effects instructing MSC and endothelial cells in lung cancer [36], we performed a computational analysis using the miRNA-target prediction tools miRecords, miRTarBase, and TarBase, combined with a review of the lung cancer literature, to identify miRNAs targeting the pool of the 5 cytokines. We found four miRNAs: miR-126, miR-101, miR-486, and let-7g. To validate the four miRNAs, we explored their expression levels in matched lung cancer tissues and blood samples from the same set of patients employed to isolate LAC cells. Results showed that among the analyzed miRNAs, miR-126 displayed the highest expression in the lung cancer tissue (Figure 5(a), p<0.01 and p<0.001), whereas in the circulation miR-486 showed the highest levels (Figure 5(b), p<0.05 and p<0.01).
Figure 5
Real-time PCR for miR-101, miR-126, miR-486, and let-7g in (a) cancer tissues and the corresponding (b) sera of 3 patients with early-stage lung adenocarcinoma. Samples were normalized to miR-16 and miR-16/cel-miR-39 for tissue and sera, respectively. ∗∗p<0.01; #p<0.001.
miR-126 is described as one of the most important differentially expressed miRNAs in lung tumors [37] and as a key miRNA regulating the pathogenesis [38] and angiogenesis of NSCLC [39]. We also recently showed the angiogenic properties exerted by miR-126 in endothelial cells [34]. Thus, to investigate whether miR-126 mediates the proliferative effect on cancer cells, we first evaluated by digital droplet PCR whether the AMSC-derived conditioned media obtained from patients with lung adenocarcinoma were able to increase the levels of miR-126-3p in A549. The treatment did not significantly increase the copy number of miR-126-3p up to 7 days compared to the corresponding physiological levels (Figure 6(a)). Afterward, we knocked down miR-126-3p in A549 (the recipient cells) by small interfering-based experiments at 24 hours (the best-performing time point, data not shown), halving the copy number in the cells (Figure 6(b)). After silencing, cells were reconditioned with the AMSC-derived supernatants from patients with lung adenocarcinoma, and the MTS assay was performed. Notably, cell proliferation was significantly decreased only when miR-126-3p was silenced, compared to the scramble control (Figure 6(c), p<0.05). This effect was not reproducible in A549 silenced for miR-126-3p but conditioned with the basal media of the cells (Figure 6(d)).
Figure 6
MiR-126-3p silencing in the A549 cell line. (a) Absolute copy number of miR-126-3p quantified by digital droplet PCR in A549 treated with AMSC-derived supernatants from patients with lung adenocarcinoma or control basal media. (b) miR-126-3p silencing-based experiments at 24 hours in A549 conditioned with AMSC-derived supernatants from patients with lung adenocarcinoma, showing the decrease in the copy number of the endogenous miR-126-3p. Samples were normalized to the control condition. (c) MTS assay of A549 after silencing of the endogenous miR-126-3p and treatment with AMSC-derived supernatants from patients with lung adenocarcinoma, showing the significant decrease in cell proliferation after 24 hours. The effect was not reproducible in the control (d). ∗p<0.05.
## 4. Discussion
This short report highlights how the tumor microenvironment is already defined at early stages, such that the stromal fraction can be influenced even at remote sites and in the absence of metastasis. Intriguingly, remote AMSC derived from subjects with lung adenocarcinoma are permissive to cell proliferation and clonogenic properties when tested on both A549 and LAC cells, similar to the stromal lung counterpart. Conversely, this effect is lost in "benign conditions." The first important point of novelty of our brief study is that we were able to isolate both A- and LMSC from the same patient (autologous stromal cells), thereby ruling out variability in biological performance across individuals. A further point of originality is that our study focused on the early stages of lung adenocarcinoma, which currently receive increasing clinical attention. Accordingly, we have shown that at the early stages of the tumor, malignant-like microenvironments are already generated by AMSC at remote sites, and they overlap with the lung stromal counterpart, which is tumor adjacent. This is in line with the concept that cancer cannot be interpreted only as a local disease but rather as a systemic disorder [40, 41], and with the concept of a tumor-permissive microenvironment as the result of the interaction between stroma and cancer [42]. Our data extend this idea to the early stages of the tumor and not only to when metastasis occurs [43]. In fact, several pieces of evidence already exist regarding cancer cell spreading, considered a very early event [44]. The biological alterations we have highlighted here are centered on the microenvironment, which is considered a critical hallmark for elucidating mechanisms of cancer plasticity [45] and where the stromal fraction exerts a critical regulatory role within the tissue [46]. Specifically, our data show that the difference between the secretomes of the benign and malignant-like microenvironments of AMSC at early stages is limited to the increase of a small pool of soluble factors. Notably, among them (IL-1β, IL-3, MCP-1, TNF-α), EGF, the most acknowledged target for lung cancer therapy [47], emerges. We also found that this set of soluble mediators is functionally interconnected and related to lung inflammation and fibrosis. The grade of fibrosis in lung cancer is a key issue and represents the modification of a permissive microenvironment induced by the continuous crosstalk between the tumor and cancer-associated fibroblasts (as part of the stromal fraction), which leads to the manipulation of extracellular matrix components and to the transition towards epithelial-mesenchymal traits [48]. Lung fibrosis may be a prerequisite for the development of lung adenocarcinoma [49], as is the perpetuating inflammatory condition that fosters the most suitable biological background for tumor progression [50]. Profibrotic markers (alpha-smooth muscle actin, fibrillar collagens, SMAD3) expressed in histological samples of patients with lung cancer correlate with low survival [51]. Idiopathic pulmonary fibrosis and interstitial fibrosis are likewise considered independent risk factors for lung adenocarcinoma [52–54]. Lung fibrosis also positively correlates with a glycolytic metabolism of the tumor in subjects with stage IIIA NSCLC [55]. Notably, we have provided a first biological indication of the possibility to "educate" benign AMSC toward a malignant-like behavior.
This is in line with several observations regarding the process of educating MSC [56], which has been described for bone marrow-derived MSC differentiating towards a malignant phenotype once recruited by the cancer microenvironment [57]. Novel clinical applications employing chemotherapeutic agents or enhancing CAR-T/CAR-NK cells in cancer immunotherapy exploit the ability to educate MSC to guide the tropism of the stromal fraction [4, 58]. Our study is also coherent with additional reports showing that tumor cells educate MSC depending on the tumor microenvironment [59], strengthening the significance of the microenvironmental control exerted by cancer. Although we have not identified the exact molecular mechanism by which a malignant microenvironment can also be generated in the remote stromal area, we have provided a first biological correlation between the secretome of the stromal fraction at remote sites (the pool of the 5 soluble factors increased between malignant and benign microenvironments) and changes in matched circulating and tissue miRNAs. Our data highlight how serum levels of miR-101, miR-126, and let-7g already reflect the expression profile of the corresponding cancer tissue in patients with early-stage lung adenocarcinoma. miR-486 represents an exception in our findings, as its levels are downregulated in tissue and upregulated in serum. This is not surprising, considering that the decrease of miR-486 found in NSCLC tissues is inversely correlated with both lung metastasis [60] and cancer stage [61]. Plasma levels of miR-486 are reported to increase after NSCLC resection [62]. Based on these findings, miR-486 is currently considered one of the most significant prognostic markers for early diagnosis in NSCLC [63]. Moreover, miR-126-3p, known as an angiomiRNA and mainly produced by platelets and endothelial cells [33], is already upregulated in both cancer tissue and serum, suggesting a potential early involvement of dysregulated angiogenesis under the influence of early-stage lung adenocarcinoma. miR-126 possesses prognostic value, and its involvement in lung cancer angiogenesis has been described [64, 65]. Since silencing miR-126-3p decreased proliferation, our results suggest a specific effect of the "malignant-like" AMSC-derived media. Our explanation is that the AMSC-derived supernatants did not influence per se the level of miR-126-3p in A549 (which remained similar to the control media up to 7 days), but rather the endogenous target(s) of miR-126-3p associated with proliferation once cells were treated. We found that the defined pool of 5 soluble factors (EGF, IL-1β, IL-3, TNF-α, CCL2) was predicted to be under the control of miR-126-3p. Moreover, a bidirectional cytokine-miRNA relationship has been reported in inflammatory systems, where soluble molecules are able to influence the activity of miRNAs and vice versa [66, 67]. Thus, it is plausible that the proliferative effects were mediated by those cytokines within the AMSC-derived media and a miR-126 feedback loop. A main player in the regulation of miR-126-associated proliferation could be the epidermal growth factor-like domain-containing gene 7 (EGFL7), a master regulator of angiogenesis and cancer pathogenesis [38, 68].
miR-126 is encoded within an intron of EGFL7 and may silence genes such as mTOR and PIK3R2 [69], which are all linked to cell proliferation. Notably, the enhancement of miR-126 in A549 cultures impairs tumor cell proliferation [71]. However, these studies have mainly been performed in the absence of specific treatments such as AMSC-derived conditioned media, strengthening the role of miRNAs in targeting different molecular partners according to the biological stimulus applied.
## 5. Conclusions
Our study has several limitations. We have not demonstrated a direct association between the soluble factor-miRNA axis and the systemic effect on stromal cells at remote sites under the influence of early-stage lung adenocarcinoma. The education of the stromal fraction also cannot be restricted to the secretome of cancer cells alone; additional molecular and biological mechanisms, including genomic alterations and mutations, certainly need to occur in order to favor the progression of lung cancer. Besides, we must consider that AMSC retain the intrinsic ability to favor cell proliferation and proangiogenic properties depending on the adipose depot [20], suggesting further prudence in the use of MSC in cancer-related clinical applications. Despite this, our study sheds further light on the complex relationship between cancer and the stromal compartment, mediated by the microenvironment, to communicate with niches at remote sites.
---
*Source: 1011063-2023-01-24.xml* | 2023 |
# Research on Demand Forecasting of Engineering Positions Based on Fusion of Multisource and Heterogeneous Data
**Authors:** Ning Li; Tianqi Wang; Qianhui Zhang
**Journal:** Scientific Programming
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011070
---
## Abstract
Aiming at the engineering-position demand forecasting problem based on multisource data fusion, a multisource heterogeneous data fusion model is established, a unified quantitative representation of heterogeneous data based on triangular fuzzy numbers is studied, and the ordered weighted average (OWA) operator is used to integrate the preferences of decision-makers, yielding a multisource heterogeneous data fusion algorithm that supports multiuser decision-making. Based on an analysis of the internal and external human resource environment of a company's engineering positions, this paper qualitatively analyzes and selects the factors affecting the demand for talent in the company's key positions and identifies the quantifiable, influential factors among them. Using historical data, statistical methods are applied to eight related factors, confirming which factors have the greatest impact on the demand for talent in key positions of this company and of companies of the same type; this identification provides a basic argument for the company. According to the statistical results and the characteristics of the available data, two variables, factory output and time, are selected for a regression analysis forecasting model and a gray system forecasting model to predict the company's demand for key-position talent; a combined forecasting method then determines the final predicted value. Based on the forecasting results and the current status of the company's human resource management, this article proposes talent management planning measures for key positions, in order to provide a reference for the management of the company's key-position talent and to ensure the company's future talent reserve for key positions.
---
## Body
## 1. Introduction
In the process of enterprise informatization, owing to the phased and technical nature of business system construction and other economic and human factors in implementing data management systems, enterprises have accumulated a large amount of business data in different storage formats during their development. Data fusion is the multilevel, multifaceted processing and combination of multiple sets of sensor data obtained from the same target to generate new, meaningful information. "Sensor" here is meant in a broad sense, generally referring to the databases of various data acquisition systems. Data fusion is a processing method for multisource information; simply put, it is a comprehensive algorithm over multiple data sources. The purpose of processing is to reason about and identify the obtained information and to make estimates and judgments accordingly. By fusing multisensor data, confidence can be increased, ambiguity reduced, and system reliability improved [1–7]. Data fusion originated in the 1970s, initially out of military needs, and rapidly expanded into research areas such as automatic control, medicine, intelligent buildings, and commerce. The objects of analysis have also extended from physical targets to information targets and even cognitive targets. The theoretical basis of data fusion includes information theory, detection and estimation theory, statistical signal theory, fuzzy mathematics, cognitive engineering, and systems engineering. In 1986, the Joint Directors of Laboratories (JDL) established a function-oriented basic model and basic terminology, which were further improved in 1998. Although the JDL model arose from military proposals, it is also suitable for other application fields; however, it does not address system structure. Bowman et al. extended it with the concept of a hierarchical data fusion tree, dividing the fusion problem into nodes [8–16]. Each node conceptually includes functions such as data association, correlation, and evaluation. On this basis, Bossé and colleagues further proposed a set of modeling and simulation methods to realize the design of data fusion systems, as can be seen in Figure 1, which illustrates the overall data fusion system and its content.
Figure 1
Data fusion system.
Data fusion is essentially an integrated process of using computers to process, control, and make decisions on various information sources. The functions of a data fusion system mainly include detection, correlation, identification, and estimation. Data fusion can be divided into five levels: detection-level fusion, location-level fusion, attribute- (target recognition-) level fusion, situation assessment, and threat estimation. Most relevant to this article is attribute-level fusion, also known as target-recognition-level fusion. It refers to the combination of target recognition data from multiple sensors to obtain a joint estimate of target identity. Attribute-level fusion uses multiple sensors to collect data on the observed target, performs feature extraction and data combination, groups the data belonging to the same target, then synthesizes the grouped data of each target, and finally obtains a joint attribute judgment of the target, that is, its type and category. According to where the fusion takes place, attribute-level fusion is divided into three methods: decision-level fusion, feature-level fusion, and data-level fusion [17–20]. So far, researchers have proposed more than 30 data fusion models; the most cited is the JDL model of the US Department of Defense. Many mature applications of these models have appeared in fields such as target tracking and image fusion, but there are relatively few applications in data mining and natural language processing. Data fusion technology is an emerging interdisciplinary theory and method. After decades of development, breakthrough progress has been made, but many problems remain: for example, there is no unified definition, and a systematic and complete basic theory is lacking. In summary, facing this emerging scientific theory and method, it is necessary to conduct in-depth, systematic research on existing data fusion technology and to find its fit with the field of natural language processing at both the theoretical and practical levels. Some studies have investigated multisensor data fusion technology based on statistical and artificial intelligence methods; others have studied the organization and management of multisource heterogeneous data in mobile geographic information systems and established multisource heterogeneous data fusion models. Combining sensor networks with data fusion technology, a Kalman filter batch estimation fusion algorithm has been proposed, and further work has studied the fusion of massive multisource heterogeneous data in the Internet of Things environment, with successful application to target positioning and tracking. Other literature has studied an intelligent maintenance decision-making architecture for high-speed rail signal systems based on heterogeneous data fusion, improving the accuracy and effectiveness of decision-making, and multisource heterogeneous data fusion technology in the construction of digital mines, ensuring that the basic information platform for digital mine construction is safe, stable, and efficient [21–25]. However, the types and structures of the fused data remain limited: most work incorporates only one additional type of auxiliary information, which has domain limitations and sometimes relies on rules, expert knowledge, and high labor costs.
The more sufficient the auxiliary data, the more comprehensive the hidden representations of users and items that can be obtained; in recommendation prediction, the rich feature relationships between the two can be integrated to obtain more accurate results. The structural dimensions of different data sources also differ, and the distributions of the data vary greatly, which increases the difficulty of integrating more data in breadth. The adopted data management systems are likewise very different, from simple file databases to complex network databases, and together they constitute the heterogeneous data sources of the enterprise. The fusion methods are still relatively preliminary, and the barriers between heterogeneous data have not been broken. In related research, linear transformations are performed in the hidden semantic space of multisource data and integrated into the recommendation model by addition and multiplication, which cannot fit the relationships among complex multisource heterogeneous data. To make matters worse, auxiliary data may contain information that has nothing to do with the recommendation task; such mechanized fusion introduces unnecessary noise, which reduces the accuracy of the recommendation model. In addition, multisource data only supplements the features of the recommendation model itself, and the feature information across the multisource heterogeneous data sources lacks in-depth interaction, so synergistic effects cannot be achieved [26–29]. Beyond numerical data, there are other forms of description such as language or symbols. These various descriptions lead to ambiguity, difference, and heterogeneity in the structure and semantics of the data. On the other hand, the decision-making process needs to comprehensively consider various heterogeneous data and information and reach final decisions through their fusion. Therefore, starting from the characteristics of heterogeneous data, this paper studies a multisource heterogeneous data fusion method that supports multiuser decision-making. Human resource forecasting refers to estimating the human resource situation of an enterprise in a certain future period based on evaluation and prediction, as shown in Figure 2. It mainly includes forecasting the quantity and type of human resource demand for the future development of the enterprise, the future human resource status of the enterprise, the future industry competition situation, and the supply-demand relationship of social human resources. The experience forecasting method, current situation planning method, model method, expert discussion method, quota method, and top-down method are commonly used for human resource forecasting. At present, research on the specific field of enterprise human resource forecasting needs to be deepened. In practical applications, it is necessary to further improve the accuracy and feasibility of forecasts and, on the basis of headcount forecasts, to add forecasts of ability and quality. In addition, there is little applied research on human resource forecasting methods in specific enterprises.
Figure 2
Human resource.
## 2. Multisource Heterogeneous Data Fusion Model
Human resource forecasting can be divided into demand forecasting and supply forecasting, and it includes the dual meanings of foreseeing and measuring the future. When studying the status of a large-scale system, the status of each part is usually judged first and then integrated to assess the overall status; a fusion method is therefore needed to fuse the data of each part. Multisource data fusion technology emerged for this purpose: it associates and combines data from multiple sensors and integrates them for a unified evaluation. According to the characteristics of the fusion algorithm, fusion can be categorized as data-level, feature-level, or decision-level.
Data-level fusion directly integrates the original log information obtained by the detectors, and the fused data is then processed further. Many details of the original data are retained, little information is lost, and the granularity of fusion is high. However, this method is easily affected by the original data: when the data is incomplete or unstable, the fusion result suffers directly, and the fused data must be homogeneous. Moreover, because many details of the original data are retained, the computation is heavy and the processing cost high, making it unsuitable for real-time fusion. This method has many applications in the field of image processing.
Unlike data-level fusion, feature-level fusion first performs data preprocessing and feature extraction on the data obtained by each detector, removes attributes that are weakly relevant or irrelevant to the problem under study, and then fuses the extracted feature data. Compared with data-level fusion, the amount of data is smaller, the processing cost lower, the anti-interference ability stronger, the real-time performance better, and heterogeneous data can be fused.
Decision-level fusion first extracts features from the original log information of each detector, analyzes and models them, and then uses the single-source decisions output by the models as fusion factors; the fusion result is a decision based on comprehensive multisource information. Compared with the other two methods, this one handles the least data, so it has the best real-time performance and the lowest computational cost; it is least affected by unstable original data, and heterogeneous data can be fused.
Enterprise human resource forecasting is a series of studies on the development trend, prospects, possibilities, and consequences of enterprise human resources.
### 2.1. Multisource Heterogeneous Data Fusion Method
Data fusion is essentially the collaborative processing of data from multiple parties to reduce redundancy, achieve comprehensive complementarity, and capture collaborative information. According to the level of operation, data fusion is divided into data-level, feature-level, and decision-level fusion. This paper studies the fusion of multiple data sources at the decision-making level, for which the main methods are the weighted average method, D-S evidence theory, and voting.
#### 2.1.1. Weighted Average Method
The support value of each data source for each decision is first calculated; the overall support for the $j$-th decision is then the weighted average $S_j = \sum_i w_i t_{ij}$, where $w_i$ is the weight of data source $i$ and $t_{ij}$ is the support of data source $i$ for the $j$-th decision. This method judges the pros and cons of decision-making schemes according to the degree of support; it is easy to operate and takes into account the importance of each data source, but the determination of the weights involves subjective factors.
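A minimal sketch of this weighted average fusion, with hypothetical weights and supports:

```python
# Minimal sketch of weighted average fusion: the overall support for decision j
# is S_j = sum_i w_i * t_ij, and the decision with the highest S_j wins.
# Weights and supports below are hypothetical.
weights = [0.5, 0.3, 0.2]                 # w_i, one weight per data source
support = [[0.8, 0.4],                    # t_ij: rows = sources, cols = decisions
           [0.6, 0.7],
           [0.3, 0.9]]

scores = [sum(w * row[j] for w, row in zip(weights, support))
          for j in range(len(support[0]))]
print(scores, "-> best decision:", scores.index(max(scores)))  # [0.64, 0.59] -> 0
```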
#### 2.1.2. D-S Evidence Theory
The space formed by all possible results of the object to be recognized is defined as the recognition frame $D$, and its power set is denoted $2^D$. A basic probability assignment function (BPAF) on $2^D$ is defined as

(1) $m: 2^D \rightarrow [0,1]$,

where

(2) $m(\varnothing) = 0, \qquad \sum_{A \in 2^D} m(A) = 1$,

with $\varnothing$ the empty set; $m$ assigns a degree of trust to each subset of $D$ based on the evidence. In practice, different $m_i$ are often obtained for the same problem from different pieces of evidence. After considering all the evidence, $m$ can be obtained by the combination rule

(3) $m(A) = K^{-1} \sum_{\cap A_i = A} \; \prod_{1 \le i \le n} m_i(A_i)$,

where

(4) $K = \sum_{\cap A_i \ne \varnothing} \; \prod_{1 \le i \le n} m_i(A_i)$.

D-S evidence theory is based on the BPAF and can deal with the uncertainty caused by "not knowing." Its disadvantages are that the elements of $D$ must be mutually exclusive and that the calculation becomes complicated when there are too many BPAFs.
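A minimal sketch of the combination rule in equations (3) and (4) for two bodies of evidence over a small frame; the masses are hypothetical:

```python
# Minimal sketch of Dempster's rule of combination (equations (3) and (4)) for
# two bodies of evidence over a frame D = {a, b}. Focal elements are frozensets
# so intersections can be computed directly; masses are hypothetical.
from itertools import product

def dempster(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for (A, p), (B, q) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q             # mass falling on the empty set
    K = 1.0 - conflict                    # equals the sum over non-empty intersections, eq. (4)
    return {A: v / K for A, v in combined.items()}

m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"a"}): 0.3, frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.2}
print(dempster(m1, m2))  # {a}: 0.60, {b}: ~0.286, {a,b}: ~0.114
```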
#### 2.1.3. Voting Method
Each data source is regarded as a voter, and the pros and cons of the decisions are determined by comparing the number of votes each receives:

(5) $\mathrm{Sup}(a_i) = F\big(\mathrm{Sup}_1(a_i), \ldots, \mathrm{Sup}_m(a_i)\big)$,

where $a_i$ is the $i$-th decision, $\mathrm{Sup}(a_i)$ is its "number of votes," and $\mathrm{Sup}_j(a_i)$ is the support of the $j$-th data source for $a_i$ (1 if it supports it, 0 otherwise); the function $F$ can be defined as the cumulative sum. It is difficult to determine the BPAF for multisource heterogeneous data, and the voting method cannot distinguish decisions with the same number of votes. Taking the preferences of decision makers into consideration, the OWA method is therefore used to fuse the data in this article. The error comparison is shown in Figure 3.
Figure 3
Error comparison.
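A minimal sketch of the voting rule in equation (5), with hypothetical votes; the example deliberately produces a tie to illustrate the limitation noted above:

```python
# Minimal sketch of voting fusion (equation (5)): each data source casts a 0/1
# vote per decision and F is taken as the sum. Votes are hypothetical.
votes = [[1, 0, 1],        # Sup_j(a_i): rows = data sources, cols = decisions a_i
         [1, 1, 0],
         [0, 1, 1]]

sup = [sum(col) for col in zip(*votes)]  # Sup(a_i) = sum of votes over sources
print(sup)  # [2, 2, 2] -- a tie that voting alone cannot break
```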
### 2.2. Multisource Heterogeneous Data Fusion Structure
The fusion structure of multiple data sources is shown in Figure 4. The data fusion process takes into account the characteristic factors expressing user needs and the reliability of the information, uses context knowledge and domain knowledge, and resolves data conflicts and related issues by voting.
Figure 4
Multisource data source fusion structure.
For the model described above, this paper designs a multisource heterogeneous data fusion structure that supports multiuser decision-making. The data fusion engine in the model includes four modules: data warehouse, decision support calculation, OWA operator weight vector calculation, and data conversion and sorting. Specifically: (1) the data warehouse implements data selection, feature extraction, and statistics operations, integrating data, eliminating heterogeneity and differences, and providing data sources for subsequent processing. (2) The decision support calculation module obtains data of the relevant dimensions from the data warehouse according to the decision attributes and calculates the impact of each data source on each decision, that is, the support value $s_{ij}$ (the support of data source $i$ for the $j$-th decision). (3) The OWA operator weight vector calculation module calculates the OWA weights $w_i$ according to the fuzzy semantic principle provided by the decision maker; the choice of fuzzy semantic parameters reflects the decision maker's preference attitude toward the data sources. (4) The data conversion and sorting module converts $s_{ij}$ according to the credibility or importance of the data sources provided by the decision maker, sorts the converted results in order of size, and combines them with the OWA weight vector $w_i$; the final decision value is calculated by summation.
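A minimal sketch of the OWA aggregation step performed by modules (3) and (4), with hypothetical weights and supports; note that OWA weights attach to sorted positions rather than to specific sources:

```python
# Minimal sketch of OWA aggregation: the supports s_ij for one decision are
# sorted in descending order and combined with the weight vector w, so that
# OWA(s) = sum_k w_k * s_(k). Weights and supports are hypothetical.
def owa(supports: list, weights: list) -> float:
    ordered = sorted(supports, reverse=True)  # weights apply to positions, not sources
    return sum(w * s for w, s in zip(weights, ordered))

w = [0.4, 0.35, 0.25]            # OWA weights derived from the fuzzy semantic setting
print(owa([0.3, 0.9, 0.6], w))   # 0.4*0.9 + 0.35*0.6 + 0.25*0.3 = 0.645
```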
## 3. Multisource Heterogeneous Data Fusion Algorithm
### 3.1. Data Types and Their Characteristics
Multisource fusion technology has become a research hotspot in the fields of data processing, target recognition, situation assessment, and intelligent decision-making. Data can be described in terms of quantity and quality: quantity is represented by numerical values, and quality is described by linguistic variables. According to these different modes of description, this paper divides data into qualitative and quantitative types, focusing on four kinds of descriptions: random variables, binary values, degree levels, and vocabulary terms. The predicted values are compared in Figure 5; the third exhibits the best performance of all, which is also consistent with the preceding analysis.

Figure 5: Value comparison.

In the case of large samples, random variables follow a normal distribution. Binary data describes the affirmation or negation of facts; its value space is usually {1, 0} or {True, False}. Data indicating degree is generally expressed by adverbs of degree ("very good," "very poor," and so on), mostly on a 7- or 9-level scale. Data based on vocabulary terms gives a qualitative description of things using the words or terms of a specified vocabulary space, whose size depends on the specific situation.
### 3.2. Support Calculation Based on Triangular Fuzzy Numbers
Taking into account the existence of ambiguity in the description of multisource data, triangular fuzzy numbers can be used to calculate the support value of the data for decision-making.
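For use in the conversion sketches below, a triangular fuzzy number can be represented as an ordered triple (lower bound, modal value, upper bound). The following small Python helper is an illustrative assumption, not code from the paper; its subtraction implements standard fuzzy arithmetic, which eq. (8) below relies on.

```python
from typing import NamedTuple

class TFN(NamedTuple):
    """Triangular fuzzy number: (lower bound, modal value, upper bound)."""
    a: float
    b: float
    c: float

    def __sub__(self, other: "TFN") -> "TFN":
        # Fuzzy subtraction reverses the bounds: the lower bound of the
        # difference pairs self's lower bound with other's upper bound.
        return TFN(self.a - other.c, self.b - other.b, self.c - other.a)
```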
#### 3.2.1. Conversion of Random Data
Suppose

$$x_0 = \mu - 3\sigma, \qquad x' = \frac{x - x_0}{6\sigma}. \tag{6}$$

If a larger value of the random variable means greater support for the decision, and the interval $[\mu - 3\sigma, \mu + 3\sigma]$ is divided into $n$ equal parts, the conversion from random data to support can be defined as

$$s(x) = \begin{cases} (0,\, 0,\, 0), & x \le \mu - 3\sigma, \\[4pt] \left(\dfrac{i}{n},\; x',\; \dfrac{i+1}{n}\right), & \dfrac{6\sigma i}{n} + x_0 < x \le \dfrac{6\sigma (i+1)}{n} + x_0, \\[4pt] (1,\, 1,\, 1), & x > \mu + 3\sigma. \end{cases} \tag{7}$$

If a smaller value of the random variable means greater support for the decision plan, the support is instead defined as

$$s'(x) = (1, 1, 1) - s(x). \tag{8}$$
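A sketch of the random-data conversion in eqs. (6)-(8), using the TFN helper above; the clamping of the subinterval index and the example numbers are assumptions for illustration.

```python
def random_support(x: float, mu: float, sigma: float, n: int,
                   smaller_is_better: bool = False) -> TFN:
    """Convert a normally distributed observation into triangular fuzzy
    support per eqs. (6)-(8)."""
    x0 = mu - 3 * sigma                       # eq. (6)
    if x <= x0:
        s = TFN(0.0, 0.0, 0.0)                # below the 3-sigma band
    elif x > mu + 3 * sigma:
        s = TFN(1.0, 1.0, 1.0)                # above the 3-sigma band
    else:
        x_prime = (x - x0) / (6 * sigma)      # normalized position, eq. (6)
        i = min(int(x_prime * n), n - 1)      # index of the subinterval
        s = TFN(i / n, x_prime, (i + 1) / n)  # eq. (7)
    # eq. (8): reverse the sense when smaller values support the decision
    return TFN(1.0, 1.0, 1.0) - s if smaller_is_better else s

print(random_support(x=72.0, mu=60.0, sigma=10.0, n=10))  # TFN(0.7, 0.7, 0.8)
```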
#### 3.2.2. Conversion of Binary Data
Binary data is described by 1 or 0. If the numbers of 1s and 0s in the data source are $n$ and $m$, respectively, and support is counted for the value 1, then the support of the data source for the decision is defined as

$$s(x) = \left(\frac{n}{n+m},\; \frac{n}{n+m},\; \frac{n}{n+m}\right). \tag{9}$$
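The binary conversion of eq. (9) is then a one-liner, again using the illustrative TFN type:

```python
def binary_support(ones: int, zeros: int) -> TFN:
    """Eq. (9): the fraction of affirmative observations, degenerated
    to a crisp triangular fuzzy number."""
    ratio = ones / (ones + zeros)
    return TFN(ratio, ratio, ratio)

print(binary_support(ones=8, zeros=2))  # TFN(a=0.8, b=0.8, c=0.8)
```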
#### 3.2.3. Conversion of Degree Data
Generally speaking, 7- or 9-level standards can be used to describe the quality of objects; this article adopts a 7-level standard. Degree adverbs can be divided into the proportional type (the higher the efficiency, the better) and the inverse type (the higher the cost, the worse), and the support each level lends to a decision can be quantified accordingly. The power in different situations is shown in Figure 6, which shows the agreement between the prediction and the preceding analysis.

Figure 6: Power.
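Since the paper does not tabulate the 7-level quantification, the following sketch assumes evenly spaced triangular fuzzy supports and shows how the proportional and inverse scales might be handled:

```python
def degree_support(level: int, inverse: bool = False, levels: int = 7) -> TFN:
    """Map a degree level (1 = lowest .. 7 = highest) to triangular fuzzy
    support; 'inverse' handles cost-like scales where higher is worse."""
    if not 1 <= level <= levels:
        raise ValueError(f"level must be in 1..{levels}")
    if inverse:
        level = levels + 1 - level        # flip cost-like scales
    step = 1.0 / levels
    lo = (level - 1) * step
    return TFN(lo, lo + step / 2, lo + step)

print(degree_support(6))                 # a high level on a benefit scale
print(degree_support(6, inverse=True))   # the same level on a cost scale
```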
### 3.3. Data Fusion Algorithm
Suppose there are $n$ decisions $A = (A_1, A_2, \ldots, A_n)$ and $m$ data sources $S = (S_1, S_2, \ldots, S_m)$, and let the credibility (or importance) of data source $i$ be $p_i$. The data fusion algorithm is described as follows; a runnable sketch is given at the end of this section.

Step 1. Calculate the support of each data source for each decision: extract data from the data warehouse and, according to its type, convert it into support for the decision by the methods above:

$$S_{ij} = (a_{ij}, b_{ij}, c_{ij}), \tag{10}$$

where $S_{ij}$ is the support of the $i$-th data source for the $j$-th decision target and $(a_{ij}, b_{ij}, c_{ij})$ is its triangular fuzzy number representation.

Step 2. Determine the weight vector of the OWA operator: select the appropriate fuzzy semantic quantization criterion according to the preference of the decision maker and determine the corresponding parameter values. The fuzzy semantic principle is generally "majority," "at least half," or "as much as possible," with parameter values $(0.3, 0.8)$, $(0, 0.5)$, and $(0.5, 1)$, respectively; these parameters determine the fuzzy semantic quantization operator $f(x)$. From $f(x)$, obtain the OWA weight vector $w = (w_1, w_2, \ldots, w_m)$, where $m$ is the number of data sources.

Step 3. Convert $s_{ij}$ according to the credibility (or importance) $p_i$ and support value $s_{ij}$ of each data source: in order to apply the OWA weight vector, each decision value needs to be converted according to $p_i$ and $s_{ij}$ and sorted in order of magnitude. The conversion adopts the fuzzy judgment method. Assume

$$s_{ij}^{\min} = p_i s_{ij}, \qquad s_{ij}^{\max} = p_i + s_{ij} - p_i s_{ij}, \qquad s_{ij}^{\mathrm{ave}} = \frac{m\, p_i s_{ij}}{\sum_{i=1}^{m} p_i}. \tag{11}$$

Step 4. Fuse the data according to the OWA operator weight vector and the converted support, and calculate the final decision value of each decision:

$$s_j = \sum_{i=1}^{m} w_i b_{ij}, \qquad j = 1, 2, \ldots, n. \tag{12}$$

Step 5. Make the decision on the actual problem according to the decision values. The corresponding prediction is shown in Figure 7.

Figure 7: Prediction for different x and y.
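The following sketch ties Steps 2-4 together for a single run. It assumes a linear RIM quantizer for $f(x)$, the $s_{ij}^{\min}$ conversion of eq. (11), and fusion of the modal values $b_{ij}$ as in eq. (12); the supports and credibilities are made-up numbers for illustration.

```python
def quantifier(x: float, a: float, b: float) -> float:
    """RIM fuzzy semantic quantizer f(x) with parameters (a, b),
    e.g. (0.3, 0.8) for the "majority" principle."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def owa_weights(m: int, a: float, b: float) -> list[float]:
    """Step 2: w_i = f(i/m) - f((i-1)/m) for m data sources."""
    return [quantifier(i / m, a, b) - quantifier((i - 1) / m, a, b)
            for i in range(1, m + 1)]

def fuse_decision(support_modal, credibility, a=0.3, b=0.8):
    """Steps 3-4 for one decision: convert the modal supports b_ij by
    source credibility p_i (the s_min form of eq. (11)), sort descending
    as OWA requires, and take the weighted sum of eq. (12)."""
    m = len(support_modal)
    converted = sorted((p * s for p, s in zip(credibility, support_modal)),
                       reverse=True)
    w = owa_weights(m, a, b)
    return sum(wi * si for wi, si in zip(w, converted))

# Three sources, their credibilities, and modal supports for two decisions
p = [0.9, 0.8, 0.6]
decisions = {"A1": [0.7, 0.5, 0.9], "A2": [0.4, 0.8, 0.6]}
scores = {name: fuse_decision(s, p) for name, s in decisions.items()}
print(max(scores, key=scores.get), scores)  # Step 5: pick the best decision
```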
## 4. Forecast of Engineering Job Demand
Enterprise human resource demand forecasting deals with a complex system, so it must be based on a scientific forecasting model. This article has already surveyed the existing human resource demand forecasting models; among both the qualitative and quantitative models, each category contains many variants with different forecasting emphases. To predict an enterprise's human resource needs, only practical methods are scientific: after analyzing the forecasting object and the internal and external human resource environment of the enterprise, the appropriate forecasting method can be determined according to the characteristics of the enterprise.

This article makes a mid- and long-term forecast of the talent needs of a company's key positions. When choosing a forecasting method, it should be borne in mind that internal and external factors influence the forecast of key-position talent demand to different degrees, so the main influencing factors need to be selected before forecasting. When predicting the total demand for talents, different variables are selected and several forecasting schemes are applied, so that the information contained in the various methods can be integrated to obtain more accurate forecast values. Generally speaking, the selection of the forecasting method for key-position talent demand in this article rests on the following considerations.
(1) There are many factors that affect the demand for talents in key positions, but different factors sometimes have inherent correlations. Therefore, all the factors should be screened to find the main ones; in this way, a few variables can be used to describe the nature of many.
(2) According to the data processing results, one or more factors that have a greater impact on the demand for talents in key positions can be used with statistical mathematical models, such as regression analysis, to predict the demand for talents; this makes the results more scientific and yields a better prediction effect.
(3) The development of enterprise human resources is a function with time as the basic variable: as time passes, the quantity and status of enterprise human resources change. The analysis of the company's internal and external human resource environment shows that the company is in a period of stable development and that its demand for key-position talents is continuous in time. Therefore, the time factor is an indispensable variable in forecasting the demand for talents in key positions.
(4) Both theory and practical experience show that the combined forecasting method concentrates more relevant information and forecasting skill, so it can obtain better results than single forecasting models, significantly improving the forecast and reducing its systematic error. Therefore, this article uses combined forecasting to obtain the forecast value of total talent demand, in order to reduce forecast errors and improve forecast accuracy.

Based on these considerations, this article chooses quantitative forecasting methods to predict the company's demand for key-position talents and engineering professional and technical personnel (scarce talents) from 2006 to 2010. The applicable methods identified by the comprehensive analysis are the regression prediction model, the gray system GM(1,1) prediction model, and combined prediction. The first two methods are used to forecast the demand for talents in key positions, and the combined forecasting method then processes the two sets of results comprehensively to obtain the forecast value of the company's demand for key positions. The variation of x and y is shown in Figure 8.

Figure 8: x and y variation.
Based on the analysis of the internal and external environment of the company's human resources, the preceding discussion qualitatively described the factors affecting the talent needs of key positions. That qualitative analysis only identifies influencing factors preliminarily and does not clarify how strongly each factor is correlated with the number of talents in key positions. Therefore, statistical analysis of the various factors is needed to find the main ones.

Based on the qualitative analysis of the internal and external influencing factors, this paper selects some representative, quantifiable indicators to study the degree and inner law of each factor's contribution to the company's demand for key-position talents and engineering and technical personnel.

The correlation test shows that the Adjusted R Square for the total number of key-position talents is closest to 1 for three factors that are also important in the qualitative analysis: the annual output of raw coal, the resource recovery rate, and the annual output of clean coal. Among them, the correlation coefficient between the annual output of raw coal and the demand for key-position talents is the largest, at 0.907. The resource recovery rate is negatively correlated with the demand, at -0.75. The correlation with the annual output of clean coal is less pronounced, with a coefficient of 0.772, after correlation analysis of each factor indicator in SPSS. An asterisk next to the correlation coefficient of the annual output of raw coal indicates that, at the specified significance level of 0.05, the associated probability of the statistical test is at most 0.05 (shown as 0.013 in the table); that is, the annual output of raw coal is significantly and positively correlated with key-position talent demand.

Therefore, the data processing results show that, among the selected factors, the one with the greatest impact on the company's demand for key-position talents is the company's annual output value. This result is the basis for the subsequent personnel demand forecast and for selecting the regression analysis method to predict key-position talents in this article.

Analysis of the correlation test results between engineering professional and technical personnel and the indicators of the various influencing factors shows that none of the selected indicators has an important influence on this group in terms of the qualitative analysis; that is, no selected indicator is suitable for regression prediction of engineering and technical talents, so this article uses the gray prediction model for such talents.

Although the coal industry currently has good development momentum, the state's regulations on coal production do not allow excessive exploitation of resources (a mine may not produce beyond its approved capacity), which is why the planned annual production value tends to stabilize. This planned value is conservative relative to actual production, so the predicted value obtained from the regression model will be slightly smaller than the actual demand. At the same time, in the company's actual situation, some people engaged in extractive work will meet the 25-year requirement for transfer out of extractive positions within the next two years; considering this personnel gap, additional demand for extractive-position talents will arise in 2008 and 2009. For these reasons, the company's actual demand for talents in key positions will be higher than the combined forecast value.

In addition, the previous analysis shows that another aspect of the company's demand for key-position talents is the demand for competence and quality. Combined with the development goals of the mine's future plan for the personnel's educational structure, the proportion of professional and technical personnel in key positions will increase, and the proportion of high-level scientific and technological personnel will also rise to a certain extent.

The purpose of forecasting is to meet the demand for personnel in key positions and improve labor productivity. Only by strengthening the management of key-position talents, fully mobilizing their enthusiasm, and improving their overall competence and performance can the goal of improving the overall performance of the enterprise be achieved. Therefore, human resource management theory should be combined with the company's actual situation to formulate corresponding planning measures for managing key-position talents, so as to manage and motivate these talents and promote the steady development of the company.
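Section 4 names a regression model, the gray GM(1,1) model, and their combination. The sketch below implements both under their standard formulations and combines them with equal weights; the head-count series and the weighting are illustrative assumptions, not the company's data.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int) -> np.ndarray:
    """Gray system GM(1,1): fit the whitened equation dx1/dt + a*x1 = b
    on the accumulated series and forecast `steps` values ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                           # 1-AGO accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(n - 1)])   # least-squares design matrix
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(x1_hat)                     # inverse accumulation
    return x0_hat[n - 1:]                        # the `steps` forecast values

def combined_forecast(y: np.ndarray, steps: int, weight: float = 0.5) -> np.ndarray:
    """Equal-weight (by default) combination of a linear time regression
    and a GM(1,1) forecast, in the spirit of the combined method above."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)       # regression on time
    t_future = np.arange(len(y), len(y) + steps)
    regression = slope * t_future + intercept
    gray = gm11_forecast(y, steps)
    return weight * regression + (1 - weight) * gray

# Illustrative head-count history for a key position (assumed numbers)
history = np.array([118.0, 124.0, 131.0, 137.0, 145.0])
print(combined_forecast(history, steps=3).round(1))
```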
## 5. Conclusion
This paper qualitatively analyzes and selects the factors affecting the talent demand of key positions in a certain unit and, according to the characteristics of these influencing factors, identifies the quantifiable and influential ones among the representative factors of key-position talent demand. Using historical data, statistical methods are applied to the unit's eight related factors, confirming which factors have the greater impact on its demand for key-position talents and on that of enterprises of the same type; identifying these factors provides a basic argument for the unit. Hence, the following conclusions can be drawn.

(1) Based on the results of the statistical analysis and the characteristics of the existing data, the two variables of factory output and time are selected for the regression analysis forecasting model and the gray system forecasting model to predict the unit's demand for key-position talents; the unit finally adopts a combined forecast to determine the predicted value of that demand.

(2) In addition, according to the results of demand forecasting and the current status of human resource management in the unit, this article proposes talent management planning measures for its key positions, hoping to provide a reference for the management of key talents and to ensure the unit's future reserve of key-position talent.
---

*Source: 1011070-2022-02-02.xml*

**Title:** Research on Demand Forecasting of Engineering Positions Based on Fusion of Multisource and Heterogeneous Data
**Authors:** Ning Li; Tianqi Wang; Qianhui Zhang
**Journal:** Scientific Programming (2022)
**Publisher:** Hindawi
**Category:** Engineering & Technology
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2022/1011070
### 2.1. Multisource Heterogeneous Data Fusion Method
Data fusion is essentially the collaborative processing of data from multiple parties to achieve the purpose of reducing redundancy, comprehensive complementation, and capturing collaborative information. Data fusion is divided into data-level fusion, feature-level fusion, and decision-level fusion according to operation level. This paper studies the fusion of multiple data sources at the decision-making level, and its methods mainly include weighted average method, D-S evidence theory, and voting.
#### 2.1.1. Weighted Average Method
Calculate the support value of each data source for decision-making, wi is the weight of data sourcei, and tij is the support of data source i to the j-th decision. This method judges the pros and cons of decision-making schemes according to the degree of support, which is easy to operate, and considers the importance of the data source and other characteristics, but the determination of the weight contains subjective factors.
#### 2.1.2. D-S Evidence Theory
The space formed by all possible results of the object to be recognized is defined as the recognition frameD, and its subset is marked as 2D, and the definition is as follows:(1)m:2D⟶0,1,where(2)mΦ=0,∑A⊆2DmA=1.φ is the quasi-empty set, and then, m is the basic probability allocation function (BPAF) on 2D, which actually assigns the trust degree of the subset of D based on the evidence.In practice, different mi is often obtained for the same problem due to different evidences. After considering all the evidence,m can be obtained by the following formula:(3)mA=K−1∑∩Ai=A∏miAi1≤i≤n,where(4)K=∑∩Ai≠ϕ∏miAi.The D-S evidence theory is based on BPAF and can deal with the uncertainty caused by “not knowing.” The disadvantage is that the elements inD must meet the mutually exclusive condition, and the calculation is complicated when there are too many BPAFs.
#### 2.1.3. Voting Method
Consider each data source as a voter, and determine the pros and cons by comparing the number of votes obtained by each decision. The calculation method is(5)Supai=FSupjai.Among them,ai is the i-th decision, and Sup(ai) is the “number of votes”; Supj(ai) is the support of the j-th data source for ai. If it supports it, it is 1; otherwise, it is 0, and the function F can be defined as continuous add and sum. It is difficult to determine the BPAF for multisource heterogeneous data. The voting method cannot distinguish decisions with the same number of votes. Taking the preferences of decision makers into consideration, the OWA method is used to fuse the data in this article. The error is compared in Figure 3.Figure 3
Error comparison.
### 2.2. Multisource Heterogeneous Data Fusion Structure
The fusion structure of multiple data sources is shown in Figure4. The data fusion process takes into account the characteristic factors expressing user needs and the reliability of information, uses context knowledge and domain knowledge, and uses voting to resolve data conflicts and other issues.Figure 4
Multisource data source fusion structure.Aiming at the previously mentioned model, this paper designs a multisource heterogeneous data fusion structure model that supports multiuser decision-making. The data fusion engine in the model includes four modules: data warehouse, decision support calculation, OWA operator weight vector calculation, and data conversion and sorting. The specific descriptions are as follows. (1) The data warehouse implements data selection, feature extraction, and statistics operations: data integration, elimination of data heterogeneity and differences, and providing data sources for subsequent data processing. (2) The decision support calculation module obtains data of relevant dimensions from the data warehouse according to the decision attributes and calculates the impact of each data source on the decision: the support valuesij (the support degree of the data source i for the j-th decision). (3) The OWA operator weight vector calculation module calculates the OWA weight wi according to the fuzzy semantic principle provided by the decision maker. The choice of fuzzy semantic parameters reflects the decision maker’s: the preference attitude of the data source. (4) Data are converted and sorted according to the credibility or importance of the data source provided by the decision maker, combined with the OWA weight vector wi to convert sij and sort the converted results in order of size, and sort the result, which is calculated by summing the final decision value.
## 2.1. Multisource Heterogeneous Data Fusion Method
Data fusion is essentially the collaborative processing of data from multiple parties to achieve the purpose of reducing redundancy, comprehensive complementation, and capturing collaborative information. Data fusion is divided into data-level fusion, feature-level fusion, and decision-level fusion according to operation level. This paper studies the fusion of multiple data sources at the decision-making level, and its methods mainly include weighted average method, D-S evidence theory, and voting.
### 2.1.1. Weighted Average Method
Calculate the support value of each data source for decision-making, wi is the weight of data sourcei, and tij is the support of data source i to the j-th decision. This method judges the pros and cons of decision-making schemes according to the degree of support, which is easy to operate, and considers the importance of the data source and other characteristics, but the determination of the weight contains subjective factors.
### 2.1.2. D-S Evidence Theory
The space formed by all possible results of the object to be recognized is defined as the recognition frameD, and its subset is marked as 2D, and the definition is as follows:(1)m:2D⟶0,1,where(2)mΦ=0,∑A⊆2DmA=1.φ is the quasi-empty set, and then, m is the basic probability allocation function (BPAF) on 2D, which actually assigns the trust degree of the subset of D based on the evidence.In practice, different mi is often obtained for the same problem due to different evidences. After considering all the evidence,m can be obtained by the following formula:(3)mA=K−1∑∩Ai=A∏miAi1≤i≤n,where(4)K=∑∩Ai≠ϕ∏miAi.The D-S evidence theory is based on BPAF and can deal with the uncertainty caused by “not knowing.” The disadvantage is that the elements inD must meet the mutually exclusive condition, and the calculation is complicated when there are too many BPAFs.
### 2.1.3. Voting Method
Consider each data source as a voter, and determine the pros and cons by comparing the number of votes obtained by each decision. The calculation method is(5)Supai=FSupjai.Among them,ai is the i-th decision, and Sup(ai) is the “number of votes”; Supj(ai) is the support of the j-th data source for ai. If it supports it, it is 1; otherwise, it is 0, and the function F can be defined as continuous add and sum. It is difficult to determine the BPAF for multisource heterogeneous data. The voting method cannot distinguish decisions with the same number of votes. Taking the preferences of decision makers into consideration, the OWA method is used to fuse the data in this article. The error is compared in Figure 3.Figure 3
Error comparison.
## 2.1.1. Weighted Average Method
Calculate the support value of each data source for decision-making, wi is the weight of data sourcei, and tij is the support of data source i to the j-th decision. This method judges the pros and cons of decision-making schemes according to the degree of support, which is easy to operate, and considers the importance of the data source and other characteristics, but the determination of the weight contains subjective factors.
## 2.1.2. D-S Evidence Theory
The space formed by all possible results of the object to be recognized is defined as the recognition frameD, and its subset is marked as 2D, and the definition is as follows:(1)m:2D⟶0,1,where(2)mΦ=0,∑A⊆2DmA=1.φ is the quasi-empty set, and then, m is the basic probability allocation function (BPAF) on 2D, which actually assigns the trust degree of the subset of D based on the evidence.In practice, different mi is often obtained for the same problem due to different evidences. After considering all the evidence,m can be obtained by the following formula:(3)mA=K−1∑∩Ai=A∏miAi1≤i≤n,where(4)K=∑∩Ai≠ϕ∏miAi.The D-S evidence theory is based on BPAF and can deal with the uncertainty caused by “not knowing.” The disadvantage is that the elements inD must meet the mutually exclusive condition, and the calculation is complicated when there are too many BPAFs.
## 2.1.3. Voting Method
Consider each data source as a voter, and determine the pros and cons by comparing the number of votes obtained by each decision. The calculation method is(5)Supai=FSupjai.Among them,ai is the i-th decision, and Sup(ai) is the “number of votes”; Supj(ai) is the support of the j-th data source for ai. If it supports it, it is 1; otherwise, it is 0, and the function F can be defined as continuous add and sum. It is difficult to determine the BPAF for multisource heterogeneous data. The voting method cannot distinguish decisions with the same number of votes. Taking the preferences of decision makers into consideration, the OWA method is used to fuse the data in this article. The error is compared in Figure 3.Figure 3
Error comparison.
## 2.2. Multisource Heterogeneous Data Fusion Structure
The fusion structure of multiple data sources is shown in Figure4. The data fusion process takes into account the characteristic factors expressing user needs and the reliability of information, uses context knowledge and domain knowledge, and uses voting to resolve data conflicts and other issues.Figure 4
Multisource data source fusion structure.Aiming at the previously mentioned model, this paper designs a multisource heterogeneous data fusion structure model that supports multiuser decision-making. The data fusion engine in the model includes four modules: data warehouse, decision support calculation, OWA operator weight vector calculation, and data conversion and sorting. The specific descriptions are as follows. (1) The data warehouse implements data selection, feature extraction, and statistics operations: data integration, elimination of data heterogeneity and differences, and providing data sources for subsequent data processing. (2) The decision support calculation module obtains data of relevant dimensions from the data warehouse according to the decision attributes and calculates the impact of each data source on the decision: the support valuesij (the support degree of the data source i for the j-th decision). (3) The OWA operator weight vector calculation module calculates the OWA weight wi according to the fuzzy semantic principle provided by the decision maker. The choice of fuzzy semantic parameters reflects the decision maker’s: the preference attitude of the data source. (4) Data are converted and sorted according to the credibility or importance of the data source provided by the decision maker, combined with the OWA weight vector wi to convert sij and sort the converted results in order of size, and sort the result, which is calculated by summing the final decision value.
## 3. Multisource Heterogeneous Data Fusion Algorithm
### 3.1. Data Types and Their Characteristics
This technology has become a research hotspot in the fields of data processing, target recognition, situation assessment, and intelligent decision-making. Data can be described in terms of quantity and quality. The quantity is represented by numerical values, and the quality is described by linguistic variables. According to the different ways of data description, this paper divides the data into qualitative and quantitative types, focusing on the four types of descriptions of random variables, binary type, language level, and vocabulary terminology. The predicted values are compared in Figure5. As can be found for these figures, the third one exhibits the best performance of all, which is also consist with the previously mentioned analysis.Figure 5
Value comparison.In the case of large samples, random variables follow a normal distribution. Binary data is used to describe the affirmation or negation of facts, and the value space is mostly {1, 0} or {True, False}. The data indicating the degree is generally expressed by Chinese adverbs of degree: very good, very poor, and so on. The degree level mostly uses 7 or 9 standards. The data based on the vocabulary term uses the vocabulary or term specified in the vocabulary space to give a qualitative description of things, and the number of vocabulary depends on the specific situation.
### 3.2. Support Calculation Based on Triangular Fuzzy Numbers
Taking into account the existence of ambiguity in the description of multisource data, triangular fuzzy numbers can be used to calculate the support value of the data for decision-making.
#### 3.2.1. Conversion of Random Data
Suppose(6)x0=μ−3σ,x′=x−x06σ.If the value of the random variable is larger, its support for decision-making is also greater. If the interval [μ − 3σ, μ + 3σ] is divided into n equal parts, the conversion from random data to support can be defined as(7)sx=0,0,0,x≤μ−3σ,in,x′,i+1n,3σin+x0<x≤6σi+1n+x0,1,1,1,x>μ−3σ.If the value of the random variable is smaller, its support for the decision plan is greater, and then, the support is defined as(8)s′x=π1,1,1−sx.
#### 3.2.2. Conversion of Binary Data
Binary data is described by 1 or 0. If the numbers of 1 and 0 in the data source aren and m, respectively, and the support is based on the value 1, and then, the support of the data source for decision-making is defined as(9)sx=nn+m,nn+m,nn+m.
#### 3.2.3. Conversion of Degree Data
Generally speaking, 7 or 9 standards can be used to describe the quality of objects; this article adopts the 7-level standard. The expression of degree adverbs can be divided into the proportional type (the higher the efficiency, the better) and the inverse type (the higher the cost, the worse), and the degree of support for decision-making at each level can be quantified, as sketched below. The power in different situations is shown in Figure 6, which shows the agreement between the prediction and the previously mentioned analysis in detail.

Figure 6: Power.
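The paper quantifies each of the 7 degree levels as a support value but does not list the numbers, so the level names and the evenly spaced mapping on [0, 1] below are purely illustrative; only the proportional/inverse distinction is taken from the text.

```python
# Hypothetical 7-level scale; names and evenly spaced quantification assumed.
LEVELS = ["very poor", "poor", "slightly poor", "fair",
          "slightly good", "good", "very good"]

def support_degree(level, inverse=False):
    """Triangular fuzzy support of a 7-level degree judgement."""
    k = LEVELS.index(level)                          # 0..6
    s = (max(k - 1, 0) / 6, k / 6, min(k + 1, 6) / 6)
    if inverse:                                      # e.g. cost: higher is worse
        a, b, c = s
        s = (1 - c, 1 - b, 1 - a)
    return s

print(support_degree("good"))                        # proportional type
print(support_degree("good", inverse=True))          # inverse type
```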
### 3.3. Data Fusion Algorithm
Suppose there are $n$ decisions $A=(A_1, A_2, \ldots, A_n)$ and $m$ data sources $S=(S_1, S_2, \ldots, S_m)$, and the credibility (or importance) of each data source is $p_i$. The data fusion algorithm is described as follows.

Step 1. Calculate the support of each data source for decision-making: extract data from the data warehouse and, according to the data type, convert it into support for decision-making using the previously mentioned methods:

$$S_{ij}=(a_{ij},\ b_{ij},\ c_{ij}). \tag{10}$$

Here $S_{ij}$ is the support degree of the $i$-th data source for the $j$-th decision target, and $(a_{ij}, b_{ij}, c_{ij})$ is the triangular fuzzy number representation of the support degree.

Step 2. Determine the weight vector of the OWA operator: select the appropriate fuzzy semantic quantization criterion according to the preference of the decision maker and determine the corresponding parameter values. The fuzzy semantic principle is generally "majority," "at least half," or "as much as possible," with parameter values (0.3, 0.8), (0, 0.5), and (0.5, 1), respectively; these parameters determine the fuzzy semantic quantization operator $f(x)$. From $f(x)$, obtain the OWA weight vector $w=(w_1, w_2, \ldots, w_n)$, where $n$ is the number of data sources.

Step 3. Convert $s_{ij}$ according to the credibility (or importance) $p_i$ and support value $s_{ij}$ of each data source: in order to use the OWA weight vector, each decision value needs to be converted according to $p_i$ and $s_{ij}$ and sorted in order of magnitude. The conversion adopts the fuzzy judgment method. Assume

$$s_{ij\_\min}=p_i s_{ij},\qquad s_{ij\_\max}=p_i+s_{ij}-p_i s_{ij},\qquad s_{ij\_\mathrm{ave}}=\frac{n\,p_i s_{ij}}{\sum_{i=1}^{n} p_i}. \tag{11}$$

Step 4. Fuse the data according to the OWA operator weight vector and the converted support, and calculate the final decision value of each decision:

$$s_j=\sum_{i=1}^{m} w_i b_{ij},\qquad j=1,2,\ldots,n. \tag{12}$$

Step 5. Make a decision on the actual problem according to the decision values. The corresponding prediction is shown in Figure 7.

Figure 7: Prediction in different x and y.
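The following Python sketch ties Steps 2–5 together. The piecewise-linear form assumed for the quantization operator $f(x)$ is the standard RIM-quantifier construction, and fusing only the modal component $b_{ij}$ after the $s_{ij\_\min}$ conversion of equation (11) is a simplifying choice of ours, not the paper's full procedure.

```python
import numpy as np

def owa_weights(m, a, b):
    """OWA weights from the fuzzy semantic quantifier with parameters (a, b):
    'majority' = (0.3, 0.8), 'at least half' = (0, 0.5),
    'as much as possible' = (0.5, 1).  The standard RIM-quantifier form
    of f(x) is assumed: w_i = f(i/m) - f((i-1)/m)."""
    f = lambda r: 0.0 if r <= a else 1.0 if r >= b else (r - a) / (b - a)
    return np.array([f(i / m) - f((i - 1) / m) for i in range(1, m + 1)])

def fuse(support_b, p, a=0.3, b=0.8):
    """Steps 3-5 of the fusion algorithm (simplified sketch).
    support_b[i][j] is the modal value b_ij of S_ij; p[i] is the
    credibility of source i.  Each column is converted by credibility
    (the s_ij_min form of equation (11)), sorted in descending order,
    and fused with the OWA weights as in equation (12)."""
    support_b = np.asarray(support_b, float)       # shape (m, n)
    p = np.asarray(p, float)
    w = owa_weights(support_b.shape[0], a, b)
    converted = p[:, None] * support_b             # credibility-weighted support
    ordered = -np.sort(-converted, axis=0)         # column-wise, largest first
    return (w @ ordered).tolist()                  # decision values s_j

# example: 3 data sources, 2 candidate decisions, 'majority' semantics
print(fuse([[0.7, 0.2], [0.5, 0.9], [0.8, 0.4]], p=[1.0, 0.8, 0.6]))
```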
## 4. Forecast of Engineering Job Demand
Enterprise human resource demand forecasting is the prediction of human resources, which is a complex system, so it must be based on a scientific forecasting model. This article has already surveyed the existing human resource demand forecasting models; among the qualitative and quantitative demand forecasting models, each category contains many models with different forecasting focuses. To predict the needs of enterprise human resources, only practical methods are scientifically sound. After the preceding analysis of the forecasting object and the internal and external human resource environment of the enterprise, the appropriate forecasting method can be determined according to the characteristics of the enterprise.

This article makes a mid- and long-term forecast of the talent needs of a company's key positions. When choosing a forecasting method, it should be taken into account that, owing to the differing influence of internal and external factors, the forecasts of talent demand for key positions will differ, so the main influencing factors need to be selected. When predicting the total demand for talents, different variables are selected and a variety of forecasting schemes are used, so that the information contained in the various methods can be integrated to obtain more accurate forecast values. Generally speaking, the selection of the method for forecasting talent demand for key positions in this article is based mainly on the following considerations.

(1) There are many factors that affect the demand for talents in key positions, but different factors sometimes have inherent correlations. Therefore, all factors should be screened to find the main ones; in this way, a few variables can describe the nature of many variables.

(2) According to the data processing results, one or more factors that have a greater impact on the demand for talents in key positions should be combined with mathematical models from statistical methods, such as regression analysis, to predict the demand for talents; this makes the results more scientific and yields a better prediction effect.

(3) The development of enterprise human resources is a function with time as the basic variable: as time changes, the quantity and status of enterprise human resources change. The analysis of the internal and external human resource environment of the company shows that the company is in a period of stable development and that the demand for talents in key positions has temporal continuity. Therefore, the time factor is an indispensable variable in the forecast of demand for talents in key positions.

(4) Both theory and practical experience show that the combined forecasting method concentrates more relevant information and forecasting skill, so it can obtain better forecasting effects than single forecasting models, significantly improving the forecast and reducing its systematic error. Therefore, this article uses combined forecasting to obtain the forecast value of the total demand for talents, in order to reduce forecast errors and improve forecast accuracy.

Based on the previously mentioned considerations, this article chooses quantitative forecasting methods to predict a company's demand for key-position talents and engineering professional and technical personnel (scarce talents) from 2006 to 2010. The applicable methods identified by this comprehensive analysis are the regression prediction model, the gray system GM(1,1) prediction model, and combination prediction; a sketch of the GM(1,1) model follows below. This article first uses the first two methods to forecast the demand for talents in key positions and finally uses the combined forecasting method to comprehensively process the two forecasts and obtain the forecast value of demand for key positions in the company. The x–y variation is shown in Figure 8.

Figure 8: x and y variation.
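As context for the choice above, a compact Python sketch of the standard GM(1,1) procedure is given below; the historical series is an illustrative placeholder, not the company's data.

```python
import numpy as np

def gm11_forecast(x, horizon):
    """Standard GM(1,1) grey prediction: 1-AGO accumulation, least-squares
    estimation of the development coefficient a and grey input b from
    x(k) + a*z(k) = b, then restored (IAGO) forecasts."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                                  # 1-AGO series
    z = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(1, len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a   # accumulated response
    x_hat = np.diff(np.concatenate([[x[0]], x1_hat]))  # restore by IAGO
    return x_hat[-horizon:]

# illustrative only -- not the company's actual headcount data
history = [112, 118, 121, 127, 131]
print(gm11_forecast(history, horizon=3))
```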
Based on the analysis of the internal and external environment of the company's human resources, the preceding text qualitatively describes the factors that affect the talent needs of key positions. This qualitative analysis is only a preliminary identification of influencing factors and does not clarify their correlation with, or degree of influence on, the number of talents in key positions. Therefore, it is necessary to carry out a statistical analysis of the various factors to find the main factors affecting the demand for talents in key positions.

Based on the qualitative analysis of the internal and external influencing factors, this paper selects some representative, quantifiable indicators among the factors to study the degree of contribution of each factor to the company's demand for talents in key positions and engineering and technical personnel, and its inner law.

Through the correlation test, we found that the Adjusted R Square value for the total talents in key positions is closest to 1 for three factors that are important in the qualitative analysis: the annual output of raw coal, the resource recovery rate, and the annual output of clean coal. Among them, the correlation coefficient between the annual output of raw coal and the demand for talents in key positions is the largest, with a value of 0.907. The resource recovery rate is negatively correlated with the demand for talents in key positions, with a value of -0.75. The correlation for the annual output of clean coal is less significant, with a correlation coefficient of 0.772 after the correlation analysis of each factor index in SPSS software. There is an asterisk next to the correlation coefficient of the annual output of raw coal, which indicates that, at the specified significance level of 0.05, the associated probability of the statistical test is less than or equal to 0.05 (shown as 0.013 in the table); that is, the annual output of raw coal and the demand for talents in key positions are significantly and positively correlated.

Therefore, the results of data processing show that, among the selected factors, the factor with the greatest impact on the demand for talents in the company's key positions is the company's annual output value. This result is the basis for the subsequent personnel demand forecast. Based on this conclusion, we use regression analysis to predict talents in key positions, which is the basis for selecting the regression prediction method in this article.

Analysis of the correlation test results between engineering professional and technical personnel and the indicators of the various influencing factors shows that, among the selected indicators, none has an important influence on engineering professional and technical personnel in terms of the qualitative analysis. That is, no selected indicator is suitable for the regression prediction of engineering and technical talents, so this article uses the gray prediction model for such talents.

Although the coal industry currently has good momentum, the country's regulations on coal production do not allow excessive exploitation of resources (mine production may not exceed its approved capacity), which is why the planned annual production value tends to stabilize. This planned value is conservative relative to actual production, so the value predicted by the regression model will be slightly smaller than the actual demand. At the same time, based on the actual situation of the company, some personnel engaged in extractive work will, within the next two years, meet the 25-year service requirement for transfer out of extractive positions; considering the gap left by these personnel, additional talents for extractive positions will be needed in 2008 and 2009. For these reasons, the company's actual demand for talents in key positions will exceed the combined forecast value.

In addition, the previous analysis shows that another aspect of the company's demand for talents in key positions is the demand for competence and quality. Combining this with the development goals of the mine's future plan for the personnel's educational structure, the proportion of professional and technical personnel in key positions will increase, and the proportion of high-level scientific and technological personnel will also increase to a certain extent.

The purpose of forecasting is to meet the demand for personnel in key positions and improve labor productivity. Only by strengthening the management of talents in key positions, fully mobilizing their enthusiasm, and improving their overall competence and performance can the goal of improving the overall performance of the enterprise be achieved. Therefore, it is necessary to use human resource management theory, combined with the actual situation of the company, to formulate corresponding planning measures for the management of talents in key positions, so as to manage and motivate these talents and promote the steady development of the company.
## 5. Conclusion
This paper qualitatively analyzes and selects the factors affecting the talent demand of key positions in a certain unit and, according to the characteristics of these influencing factors, identifies the quantifiable and representative factors of talent demand for key positions. Using historical data, statistical methods were applied to the eight related factors of the unit, confirming the factors that have a greater impact on the demand for talents in its key positions; this identification of factors also bears on the talent demand of similar enterprises and provides a basic argument for the unit. Hence, the following conclusions can be drawn.

(1) Based on the results of the statistical analysis and the characteristics of the existing data, the two variables of factory output and time are selected for use in the regression analysis forecasting model and the gray system forecasting model to predict the unit's demand for talents in key positions; the combined forecast finally determines the predicted value of talent demand for the unit's key positions.

(2) In addition, according to the results of the demand forecast and the current status of human resource management in the unit, this article proposes talent management planning measures for key positions, hoping to provide a reference for the management of key talents and to ensure the unit's reserve of talent for key positions in the future.
---
*Source: 1011070-2022-02-02.xml* | 2022 |
# Supporting Risk-Aware Use of Online Translation Tools in Delivering Mental Healthcare Services among Spanish-Speaking Populations
**Authors:** Wenxiu Xie; Meng Ji; Mengdan Zhao; Xiaobo Qian; Chi-Yin Chow; Kam-Yiu Lam; Tianyong Hao
**Journal:** Computational Intelligence and Neuroscience
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1011197
---
## Abstract
Neural machine translation technologies are having increasing applications in clinical and healthcare settings. In multicultural countries, automatic translation tools provide critical support to medical and health professionals in their interaction and exchange of health messages with migrant patients with limited or non-English proficiency. While research has mainly explored the usability and limitations of state-of-the-art machine translation tools in the detection and diagnosis of physical diseases and conditions, there is a persistent lack of evidence-based studies on the applicability of machine translation tools in the delivery of mental healthcare services for vulnerable populations. Our study developed Bayesian machine learning algorithms using relevance vector machine to support frontline health workers and medical professionals to make better informed decisions between risks and convenience of using online translation tools when delivering mental healthcare services to Spanish-speaking minority populations living in English-speaking countries. Major strengths of the machine learning classifier that we developed include scalability, interpretability, and adaptability of the classifier for diverse mental healthcare settings. In this paper, we report on the process of the Bayesian machine learning classifier development through automatic feature optimisation and the interpretation of the classifier-enabled assessment of the suitability of original English mental health information for automatic online translation. We elaborate on the interpretation of the assessment results in clinical settings using statistical tools such as positive likelihood ratios and negative likelihood ratios.
---
## Body
## 1. Introduction
Despite the increasing public awareness of the prevalence of mental health issues among populations from low- and middle-income countries, accurate, scientific, non-discriminative communication of mental disorders remains a real challenge [1–3]. Within different societal and cultural systems, conventionalised linguistic constructs have developed over years and decades to describe and convey the underlying social attitudes and understanding of different mental disorders like varieties of anxiety or depressive disorders. In English-speaking multicultural countries, the communication and interpretation of mental disorders and their treatment for non-English-speaking migrant populations pose important challenges to frontline health workers and clinicians [4–6]. The rapid development of machine translation technologies has offered the necessary technical means to interact and engage with multicultural vulnerable communities and people who have limited access to mental healthcare services, despite the prevalence of mental health issues among such populations, who are at higher risk of developing clinical mental disorders or other comorbidities such as chronic non-communicable diseases or physical health conditions that are likely to exacerbate their mental health issues. Currently, there is very limited research which examines the reliability, safety, or levels of risk in using state-of-the-art online translation tools such as Google Translate in clinical settings for communicating and talking with patients about mental disorders.

Much of current research shows that the use of automatic translation tools in primary healthcare settings is driven by a persistent lack of qualified bilingual health professionals [7–10]. The risk of an unchecked use, in specialised health and medical settings, of translation technologies which have been developed largely for general cross-lingual communication purposes is real and well documented [11–14]. However, the practical need for cost-effective translation tools in disease diagnosis and medical treatment is increasing. Although the provision of proper training to a certain number of bilingual health professionals can help reduce healthcare inequality, real-life scenarios can be much more complex, uncertain, and dynamic for any health system at various levels. For example, a recurring issue in clinical settings is the lack of adequately qualified health translators working with under-resourced languages. Even for resourced language pairs such as English-Spanish and English-Chinese, it can be challenging to find bilingual health translators with extensive in-depth knowledge of different medical specialities. That is, the quality of human translation can also be compromised by the complexity and speciality of medical communications. In fact, with simple, direct sentences, online translation tools can fulfil their function to support a meaningful exchange of information between doctors and patients. We argue that health communication technologies like neural translation tools are evolving rapidly and that they have important potential for scaled uptake in health systems, especially those serving vulnerable or disadvantaged communities in under-resourced local health districts. Health technologies can be leveraged to help close the gaps in current medical and healthcare structures and improve the quality and accessibility of healthcare resources for populations and people at risk.
Research is needed to develop instruments and aids to improve the safety and reliability of available translation technologies to be adopted in health and clinical settings.
## 2. Materials and Methods
### 2.1. Data Collection
We collected authoritative health information on anxiety disorders from the websites of federal and state health agencies and not-for-profit health promotion organisations in the U.S., Australia, Canada, and the United Kingdom. The total database contains 557 full-length articles, including 270 (48.47%) original English materials whose automatic translations to Spanish contained misleading errors. We labelled these materials as positive or "risk" samples. Around 51.53% of the total samples were original English texts whose automatic translation into Spanish using Google Translate did not contain any misleading information. The Spanish translations by Google were evaluated by comparing the original English texts with their backtranslations from Spanish; this method, known as forward and backward translation, is endorsed by international health organisations such as the World Health Organisation [15]. We subsequently labelled such English texts as negative or "safe" cases for automatic translation. We divided the entire database into 67.65% training data (389 texts) and 32.35% testing data (168 texts). Within the training dataset, there were 187 positive "risk" English texts which were prone to automatic translation errors and 202 negative "safe" English texts which had proven suitable and reliable for automatic translation to Spanish. Similarly, within the testing dataset, there were 83 positive samples and 85 negative samples for testing the classification performance of the classifiers to be developed. We applied 5-fold cross-validation on the training dataset to help remove biases in the development of the algorithms.
### 2.2. Annotation of Feature Sets
Traditionally, health translation mistakes are believed to be associated with or triggered by the linguistic difficulty or lack of readability of the original English materials, including complex, sophisticated structural, syntactic, and lexical features. However, with the rapid development of machine translation technologies, more research shows that semantic polysemy, that is, the multiple meanings of a certain word across domains, and other such issues could be more challenging for the latest neural machine translation tools. As a result, we included four large sets of features to investigate possible reasons which have triggered significant mistakes in machine-translated health materials from English to Spanish.

We annotated both the training and testing datasets with 4 large sets of linguistic features: structural features (24 in total, using Readability Studio software), lexical dispersion rates based on the British National Corpus (20 features in total), English lexical semantic features (115 features in total), which we annotated using the USAS system developed at Lancaster University, UK [16–18], and lexical sentiment features (92 features in total), which we annotated using the Linguistic Inquiry and Word Count software. Appendix A shows the details of these 4 feature sets.
### 2.3. Bayesian Machine Learning Classifiers (MLCs)
Bayesian MLCs are sparse classifiers which can effectively counter model overfitting with relatively small datasets like ours. Bayesian classifiers differ from other supervised machine learning techniques in that they produce the posterior odds of a class dependent on the prior odds of an event and the asymmetrical classification errors of the model, whereas frequentist ML classifiers only return a hard binary decision. In solving practical questions like ours, posterior odds are much more informative than a predicted binary outcome, as the Bayesian-style prediction using posterior odds helps practitioners and decision makers appreciate the levels of risk of negative and positive cases over a continuous probability scale and assists in developing more effective intervention strategies to achieve optimal outcomes. This advantage of Bayesian MLCs suits the purpose of our study, as we aimed to identify original English mental health materials which are more likely to cause significant errors if translated using automatic tools without further human evaluation. This can help health agencies developing bilingual health materials to better invest their resources and minimise the risks of using machine translation in healthcare settings.
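As a sketch of how such odds-based reasoning can be read in a clinical workflow, the snippet below applies the likelihood-ratio form of Bayes' rule, using the sensitivity of 0.651 and specificity of 0.741 of the best-performing classifier reported in Section 4; the prior of 0.5 is an assumption for illustration only.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a binary screening test."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def posterior_prob(prior_prob, lr):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# best-performing classifier in this study (ALL33, min-max normalised)
lr_pos, lr_neg = likelihood_ratios(0.651, 0.741)
# assumed prior: roughly half of incoming texts are translation 'risk' cases
print(posterior_prob(0.5, lr_pos))   # probability of risk after a positive flag
print(posterior_prob(0.5, lr_neg))   # probability of risk after a negative flag
```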
## 3. Methods
To identify the best subset of features within each annotation category, as well as the best subset of features across annotation categories, we applied separate and combined feature optimisation. The automatic feature selection technique we used was recursive feature elimination (RFE) with cross-validation in Python scikit-learn, to increase the generalisability and accuracy of the Bayesian machine learning classifiers we developed; a sketch of this setup is given after Figure 1 below. To identify and rank highly predictive features, recursive feature elimination used a linear-kernel support vector machine (SVM) as the base estimator. An optimal set of features was identified when the cross-validated classification error reached its minimal value. Figure 1 shows the results of the automatic optimisation of the different feature sets. In Figure 1(a), the optimised feature set of lexical dispersion rates had 4 features, as the cross-validated classification error dropped from 0.425 with the full feature set (20 in total) to 0.393 when the features were reduced to 4. In Figure 1(b), the optimised feature set of English semantic features had 10 features, as the cross-validated classification error decreased from 0.40 with the full feature set (115) to 0.375 when the features were reduced to 10; further elimination of features, however, led to a spike in the classification error. In Figure 1(c), the optimised feature set of English sentiment features (annotated using the LIWC software) had 10 features, as the minimal classification error of 0.416 was reached when the total number of sentiment features was scaled back from 92 to 10. In Figure 1(d), 5 optimal structural features (24 in total) were identified when the minimal classification error of 0.409 was reached. Lastly, in Figure 1(e), we conducted the combined feature selection by integrating the 4 feature sets (251 features in total): lexical dispersion rates and semantic, sentiment, and structural features. The optimal number of features that emerged from the combined optimisation was 33, associated with the minimal classification error of 0.383.
Figure 1: Automatic feature selection: recursive feature elimination with SVM as the base estimator (panels (a)–(e)).
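A minimal scikit-learn sketch of this setup follows. The synthetic matrix is a stand-in, since the study's annotated 389-text training matrix is not reproduced here, and scoring by plain accuracy is our assumption (the paper reports cross-validated classification error, i.e., 1 − accuracy).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV

# X: (n_texts, n_features) annotation matrix, y: 1 = 'risk', 0 = 'safe'.
# Synthetic stand-in data; the study used its 389 annotated training texts.
rng = np.random.default_rng(0)
X = rng.normal(size=(389, 115))
y = rng.integers(0, 2, size=389)

# Linear-kernel SVM as the base estimator, features dropped one at a time,
# 5-fold cross-validation; the optimum is where the CV error is minimal.
selector = RFECV(estimator=SVC(kernel="linear"), step=1, cv=5,
                 scoring="accuracy", min_features_to_select=1)
selector.fit(X, y)
print("optimal number of features:", selector.n_features_)
print("selected feature indices:", np.flatnonzero(selector.support_))
```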
## 4. Results and Discussion
### 4.1. Results
Following automatic feature optimisation to enhance the classification accuracy of the classifiers, we evaluated the performance of Bayesian models (relevance vector machine (RVM)) with different feature sets on both the training and testing datasets. As discussed earlier, we applied 5-fold cross-validation on the training dataset to minimise biases in the classifiers being developed. First, we compared the original feature sets with their respective optimised feature sets in Tables 1–5. Next, we compared the performance of RVM classifiers with different pairs of optimised feature sets; Table 6 shows the comparison of RVM classifiers with double, triple, and quadruple optimised feature sets. Like feature optimisation, feature normalisation is another useful automatic technique to enhance the performance of machine learning classifiers. We applied three popular feature normalisation techniques, min-max, L2-norm (L2), and Z-score normalisation, with each RVM classifier to see whether this could help balance asymmetrical classification errors within each model; a sketch is given below.
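For concreteness, a short scikit-learn sketch of the three normalisation techniques follows; the stand-in matrices and their shapes are assumptions for illustration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler

X_train = np.random.default_rng(1).normal(size=(389, 33))  # stand-in matrices
X_test = np.random.default_rng(2).normal(size=(168, 33))

for name, scaler in {
    "min-max": MinMaxScaler(),        # rescales each feature to [0, 1]
    "L2": Normalizer(norm="l2"),      # scales each row (text) to unit length
    "Z-score": StandardScaler(),      # zero mean, unit variance per feature
}.items():
    # fit on the training data only, then reuse the transform on the test set
    X_tr, X_te = scaler.fit_transform(X_train), scaler.transform(X_test)
    print(name, X_tr.shape, X_te.shape)
```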
Table 1: Performance of RVM classifiers with lexical dispersion features.
| RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| Dispersion rates (full 20 features) | 0.625 | 0.06 | 0.649 | 0.667 | 0.578 | 0.753 |
| Disp all 20 with min-max normalisation | 0.547 | 0.04 | 0.654 | 0.643 | 0.566 | 0.718 |
| Disp all 20 with L2 normalisation | 0.573 | 0.07 | 0.594 | 0.560 | 0.518 | 0.600 |
| Disp all 20 with Z-score normalisation | 0.561 | 0.06 | 0.645 | 0.637 | 0.530 | 0.741 |
| D4 (automatically optimised) | 0.617 | 0.06 | 0.648 | 0.661 | 0.566 | 0.753 |
| D4 with min-max | 0.611 | 0.06 | 0.686 | 0.649 | 0.542 | 0.753 |
| D4 with L2 | 0.571 | 0.07 | 0.595 | 0.560 | 0.518 | 0.600 |
| D4 with Z-score | 0.610 | 0.06 | 0.689 | 0.649 | 0.566 | 0.729 |

Table 2: Performance of RVM classifiers with lexical semantic features.
| RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| USAS all 115 | 0.652 | 0.052 | 0.692 | 0.661 | 0.590 | 0.729 |
| USAS all 115 with min-max | 0.593 | 0.082 | 0.677 | 0.643 | 0.578 | 0.706 |
| USAS all 115 with L2 | 0.584 | 0.087 | 0.681 | 0.625 | 0.590 | 0.659 |
| USAS all 115 with Z-score | 0.589 | 0.111 | 0.694 | 0.655 | 0.639 | 0.671 |
| U10 (automatically optimised) | 0.659 | 0.045 | 0.714 | 0.679 | 0.578 | 0.777 |
| U10 with min-max | 0.663 | 0.044 | 0.723 | 0.679 | 0.578 | 0.777 |
| U10 with L2 | 0.614 | 0.089 | 0.707 | 0.625 | 0.518 | 0.729 |
| U10 with Z-score | 0.646 | 0.042 | 0.713 | 0.649 | 0.506 | 0.788 |

Table 3: Performance of RVM classifiers with lexical sentiment features.
| RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| LIWC all 92 | 0.614 | 0.057 | 0.580 | 0.548 | 0.518 | 0.577 |
| LIWC all 92 with min-max | 0.577 | 0.054 | 0.646 | 0.619 | 0.566 | 0.671 |
| LIWC all 92 with L2 | 0.610 | 0.064 | 0.573 | 0.548 | 0.518 | 0.577 |
| LIWC all 92 with Z-score | 0.619 | 0.046 | 0.670 | 0.619 | 0.506 | 0.729 |
| L10 (automatically optimised) | 0.602 | 0.040 | 0.605 | 0.571 | 0.651 | 0.494 |
| L10 with min-max | 0.629 | 0.055 | 0.607 | 0.566 | 0.566 | 0.565 |
| L10 with L2 | 0.604 | 0.034 | 0.609 | 0.571 | 0.518 | 0.624 |
| L10 with Z-score | 0.614 | 0.068 | 0.616 | 0.583 | 0.578 | 0.588 |

Table 4: Performance of RVM classifiers with structural features.
| RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| Structural all 24 | 0.643 | 0.048 | 0.636 | 0.625 | 0.518 | 0.729 |
| Structural all 24 with min-max | 0.603 | 0.047 | 0.595 | 0.554 | 0.434 | 0.671 |
| Structural all 24 with L2 | 0.647 | 0.046 | 0.616 | 0.595 | 0.590 | 0.600 |
| Structural all 24 with Z-score | 0.613 | 0.048 | 0.621 | 0.583 | 0.482 | 0.682 |
| S4 (automatically optimised) | 0.633 | 0.050 | 0.621 | 0.619 | 0.446 | 0.788 |
| S4 with min-max | 0.616 | 0.047 | 0.615 | 0.595 | 0.554 | 0.635 |
| S4 with Z-score | 0.624 | 0.044 | 0.603 | 0.601 | 0.542 | 0.659 |

Table 5: Performance of RVM classifiers with all (dispersion + USAS + LIWC + structural) features.
| RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- |
| ALL 251 | 0.642 | 0.038 | 0.647 | 0.613 | 0.518 | 0.706 |
| ALL 251 with min-max | 0.626 | 0.041 | 0.697 | 0.643 | 0.590 | 0.694 |
| ALL 251 with L2 | 0.670 | 0.045 | 0.653 | 0.619 | 0.639 | 0.600 |
| ALL 251 with Z-score | 0.633 | 0.085 | 0.680 | 0.625 | 0.590 | 0.659 |
| ALL33 (automatically optimised) | 0.658 | 0.014 | 0.670 | 0.649 | 0.554 | 0.741 |
| ALL33 with min-max | 0.678 | 0.034 | 0.718 | 0.696 | 0.651 | 0.741 |
| ALL33 with L2 | 0.710 | 0.015 | 0.670 | 0.637 | 0.651 | 0.624 |
| ALL33 with Z-score | 0.672 | 0.036 | 0.682 | 0.643 | 0.627 | 0.659 |

Table 6: Performance of RVM classifiers with paired feature sets.
| Feature | RVM | Training AUC mean | Training SD | Testing AUC | Accuracy | Sensitivity | Specificity |
| --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | D4 + U10 | 0.664 | 0.041 | 0.715 | 0.661 | 0.590 | 0.729 |
| F2 | D4 + U10 with min-max | 0.639 | 0.041 | 0.736 | 0.661 | 0.566 | 0.753 |
| F3 | D4 + U10 with L2 | 0.673 | 0.034 | 0.681 | 0.643 | 0.578 | 0.706 |
| F4 | D4 + U10 with Z-score | 0.627 | 0.065 | 0.722 | 0.649 | 0.554 | 0.741 |
| F5 | D4 + L10 | 0.650 | 0.041 | 0.681 | 0.595 | 0.615 | 0.577 |
| F6 | D4 + L10 with min-max | 0.650 | 0.032 | 0.644 | 0.583 | 0.566 | 0.600 |
| F7 | D4 + L10 with L2 | 0.668 | 0.031 | 0.669 | 0.589 | 0.590 | 0.588 |
| F8 | D4 + L10 with Z-score | 0.640 | 0.019 | 0.652 | 0.601 | 0.554 | 0.647 |
| F9 | D4 + S5 | 0.638 | 0.050 | 0.636 | 0.643 | 0.518 | 0.765 |
| F10 | D4 + S5 with min-max | 0.648 | 0.041 | 0.650 | 0.625 | 0.566 | 0.682 |
| F11 | D4 + S5 with L2 | 0.655 | 0.054 | 0.608 | 0.607 | 0.602 | 0.612 |
| F12 | D4 + S5 with Z-score | 0.580 | 0.035 | 0.584 | 0.566 | 0.482 | 0.647 |
| F13 | U10 + L10 | 0.670 | 0.058 | 0.705 | 0.673 | 0.639 | 0.706 |
| F14 | U10 + L10 with min-max | 0.691 | 0.019 | 0.717 | 0.667 | 0.590 | 0.741 |
| F15 | U10 + L10 with L2 | 0.685 | 0.035 | 0.701 | 0.655 | 0.615 | 0.694 |
| F16 | U10 + L10 with Z-score | 0.641 | 0.018 | 0.712 | 0.643 | 0.590 | 0.694 |
| F17 | U10 + S5 | 0.633 | 0.045 | 0.699 | 0.643 | 0.518 | 0.765 |
| F18 | U10 + S5 with min-max | 0.627 | 0.031 | 0.713 | 0.661 | 0.542 | 0.777 |
| F19 | U10 + S5 with L2 | 0.588 | 0.078 | 0.617 | 0.566 | 0.458 | 0.671 |
| F20 | U10 + S5 with Z-score | 0.633 | 0.030 | 0.704 | 0.643 | 0.566 | 0.718 |
| F21 | L10 + S5 | 0.671 | 0.055 | 0.668 | 0.631 | 0.554 | 0.706 |
| F22 | L10 + S5 with min-max | 0.688 | 0.037 | 0.647 | 0.637 | 0.639 | 0.635 |
| F23 | L10 + S5 with L2 | 0.682 | 0.037 | 0.653 | 0.583 | 0.615 | 0.553 |
| F24 | L10 + S5 with Z-score | 0.648 | 0.009 | 0.652 | 0.625 | 0.651 | 0.600 |
| F25 | D4 + U10 + L10 | 0.652 | 0.049 | 0.688 | 0.637 | 0.578 | 0.694 |
| F26 | D4 + U10 + L10 with min-max | 0.674 | 0.041 | 0.658 | 0.601 | 0.518 | 0.682 |
| F27 | D4 + U10 + L10 with L2 | 0.689 | 0.047 | 0.694 | 0.667 | 0.615 | 0.718 |
| F28 | D4 + U10 + L10 with Z-score | 0.669 | 0.033 | 0.653 | 0.613 | 0.554 | 0.671 |
| F29 | D4 + U10 + S5 | 0.649 | 0.041 | 0.697 | 0.649 | 0.566 | 0.729 |
| F30 | D4 + U10 + S5 with min-max | 0.626 | 0.055 | 0.677 | 0.619 | 0.458 | 0.777 |
| F31 | D4 + U10 + S5 with L2 | 0.665 | 0.057 | 0.674 | 0.655 | 0.639 | 0.671 |
| F32 | D4 + U10 + S5 with Z-score | 0.629 | 0.021 | 0.680 | 0.673 | 0.530 | 0.812 |
| F33 | U10 + L10 + S5 | 0.643 | 0.045 | 0.689 | 0.649 | 0.566 | 0.729 |
| F34 | U10 + L10 + S5 with min-max | 0.685 | 0.027 | 0.697 | 0.637 | 0.590 | 0.682 |
| F35 | U10 + L10 + S5 with L2 | 0.663 | 0.054 | 0.690 | 0.637 | 0.663 | 0.612 |
| F36 | U10 + L10 + S5 with Z-score | 0.667 | 0.018 | 0.696 | 0.637 | 0.627 | 0.647 |
| F37 | D4 + U10 + L10 + S5 | 0.622 | 0.033 | 0.657 | 0.643 | 0.578 | 0.706 |
| F38 | D4 + U10 + L10 + S5 with min-max | 0.668 | 0.031 | 0.698 | 0.673 | 0.602 | 0.741 |
| F39 | D4 + U10 + L10 + S5 with L2 | 0.687 | 0.042 | 0.683 | 0.661 | 0.627 | 0.694 |
| F40 | D4 + U10 + L10 + S5 with Z-score | 0.666 | 0.015 | 0.667 | 0.637 | 0.530 | 0.741 |

Table 1 shows the performance of RVM classifiers with lexical dispersion rates as features. After automatic feature selection, the RVM with the reduced, optimised feature set (D4) reached a performance largely comparable to that of the classifier run on the full feature set: on the training dataset, the mean area under the curve (AUC) of RVM (D4) was 0.617 (SD = 0.06), compared to 0.625 (SD = 0.06) for RVM (full 20 features), suggesting that feature reduction could also help counter overfitting when training machine learning classifiers. On the testing dataset, the AUC of RVM (D4) (0.648) was similar to that of RVM (All 20) (0.649). Sensitivity dropped slightly from 0.578 (RVM-All 20) to 0.566 (RVM-D4), and specificity remained unchanged at 0.753. Normalisation did not improve RVMs with either the entire or the optimised feature set of lexical dispersion rates.

Table 2 compares the performance of RVM classifiers run on English semantic features. After automatic feature selection, the performance of the RVMs improved on both the training and the testing datasets: on the training data, the mean AUC of the RVM with the full semantic feature set (USAS115) saw a marginal improvement from 0.652 to 0.659, with a slightly reduced standard deviation (SD) from 0.052 to 0.045.
On the testing dataset, the AUC of RVM (USAS115) saw an improvement from 0.692 to 0.714. Specificity improved from 0.729 (RVM (USAS115)) to 0.777 (RVM (U10)); sensitivity decreased from 0.590 (RVM (USAS115)) to 0.578 (RVM (U10)). Normalisation did not improve model performance.

Table 3 compares the performance of RVMs with English sentiment features annotated with the Linguistic Inquiry and Word Count (LIWC) software. After automatic feature optimisation, the performance of the RVM classifier (LIWC all 92) improved on the testing dataset: the AUC of RVM (L10) increased from 0.580 to 0.605. Model sensitivity increased from 0.518 to 0.651, but specificity decreased from 0.577 to 0.494. The impact of feature normalisation on RVMs with the full and optimised feature sets was similar: specificity improved and sensitivity decreased, but the overall model accuracy on the testing dataset did not improve significantly.

Table 4 compares the performance of RVMs with the structural features which we annotated with the Readability Studio software. After automatic feature optimisation, the AUC of the classifier RVM (structural all 24) decreased from 0.636 to 0.621, owing to a decrease in model sensitivity from 0.518 to 0.446, although model specificity increased from 0.729 to 0.788. Feature normalisation helped to balance the asymmetric classification errors of the RVM with both the entire and the optimised feature set: model specificity decreased and sensitivity increased. However, the overall model accuracy (AUC) did not improve with the different feature normalisation techniques.

Table 5 shows the performance of the RVM with the combined feature sets of lexical dispersion rates and semantic, sentiment, and structural features, 251 features in total. Automatic feature optimisation reduced the original set of 251 features to a parsimonious model containing only 33 features. With fewer noisy and weakly predictive features in the model, the performance of the classifier improved significantly on both the training and the testing datasets: on the training data, the model AUC was 0.642 (SD = 0.038), which increased to 0.658 (SD = 0.034) with the optimised RVM classifier. On the testing data, the AUC improved from 0.647 to 0.718. With automatic feature optimisation, both sensitivity and specificity improved: sensitivity increased from 0.518 to 0.554 and specificity from 0.706 to 0.741. Importantly, feature normalisation played a critical role in balancing asymmetrical classification errors of the RVMs with combined feature sets. Specifically, min-max normalisation increased the sensitivity of the optimised classifier to 0.651, the highest so far, while retaining the high specificity of the classifier at 0.741. This sensitivity and specificity pair was the best combination among the classifiers developed so far.

Table 6 compares the performance of classifiers with double and multiple optimised feature sets. We compared in total 10 different pairs of optimised feature sets and conducted feature normalisation with each combination, so each RVM in Table 6 has four different versions: the unnormalised version, followed by normalised versions with min-max, L2, and Z-score normalisation.
The 10 combinations of optimised features were as follows: optimised lexical dispersion rates (D4) and optimised semantic features (U10) (F1–F4), optimised lexical dispersion rates (D4) and optimised sentiment features (L10) (F5–F8), optimised lexical dispersion rates (D4) and optimised structural features (S5) (F9–F12), optimised semantic features (U10) and optimised sentiment features (L10) (F13–F16), optimised semantic features (U10) and optimised structural features (S5) (F17–F20), optimised sentiment features (L10) and optimised structural features (S5) (F21–F24), and so on. We identified 5 high-performing models based on the overall model AUC, accuracy, sensitivity, and specificity. F13 was the unnormalised combination of optimised semantic (U10) and optimised sentiment features (L10); it had an overall AUC on the testing data of 0.705, with a relatively high sensitivity of 0.639 and specificity of 0.706. F27 was the normalised version (through L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), and sentiment features (L10); it had an overall AUC of 0.694 on the testing dataset, with sensitivity of 0.615 and specificity of 0.718. F31 was the normalised version (through L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), and optimised structural features (S5); it had an overall AUC of 0.674 on the testing dataset, with sensitivity of 0.639 and specificity of 0.671. F35 was the normalised version (through L2 normalisation) of optimised semantic features (U10), sentiment features (L10), and optimised structural features (S5); it had an overall AUC of 0.690 on the testing dataset, with sensitivity of 0.663 and specificity of 0.612. Finally, F39 was the normalised version (L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), sentiment features (L10), and optimised structural features (S5); it had an overall AUC of 0.683 on the testing dataset, with sensitivity of 0.627 and specificity of 0.694. Figure 2 shows the visualised comparison of the AUCs of these 5 high-performing classifiers, the RVM with the entire combined feature set (251 features) with L2 normalisation, and the best-performing classifier we developed (RVM (All33)) with min-max normalisation.

Figure 2: AUCs of RVMs on testing data using different feature sets.

Tables 7 and 8 show the paired sample t-tests assessing the significance levels of the differences in sensitivity and specificity between the various competing high-performance classifiers and the best-performing RVM classifier we developed through the combined automatic optimisation of the four feature sets followed by automatic feature normalisation. To reduce false discovery rates in multiple comparisons, we applied the Benjamini–Hochberg correction procedure [19–21]. The results show that the sensitivity of our best-performing RVM classifier was significantly higher than that of most other high-performing models, except for F35 (p = 0.0017); the specificity of our best-performing RVM classifier was statistically higher than that of all other competing models, with p values equal to or smaller than 0.004.

Table 7: Paired sample t-test of the difference in sensitivity between the best-performing model and other models.
| No. | Pairs of RVMs | Mean difference | SD | 95% CI lower | 95% CI upper | p value | Rank | (i/m)Q | Sig. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | All33 (min-max) vs. F27 | 0.0361 | 0.0021 | 0.0319 | 0.0403 | 0.0012 | 1 | 0.0083 | ∗∗ |
| 2 | All33 (min-max) vs. F39 | 0.0241 | 0.0015 | 0.0212 | 0.0270 | 0.0013 | 2 | 0.0167 | ∗∗ |
| 3 | All33 (min-max) vs. All251 (L2) | 0.0120 | 0.0008 | 0.0105 | 0.0136 | 0.0014 | 3 | 0.0250 | ∗∗ |
| 4 | All33 (min-max) vs. F13 | 0.0120 | 0.0008 | 0.0105 | 0.0136 | 0.0014 | 4 | 0.0333 | ∗∗ |
| 5 | All33 (min-max) vs. F31 | 0.0120 | 0.0008 | 0.0105 | 0.0136 | 0.0014 | 5 | 0.0417 | ∗∗ |
| 6 | All33 (min-max) vs. F35 | −0.0121 | 0.0009 | −0.0137 | −0.0104 | 0.0017 | 6 | 0.0500 | ∗∗ |

∗∗ Statistical significance at the 0.05 level using the Benjamini–Hochberg correction procedure.

Table 8: Paired sample t-test of the difference in specificity between the best-performing model and other models.
| No. | Pairs of RVMs | Mean difference | SD | 95% CI lower | 95% CI upper | p value | Rank | (i/m)Q | Sig. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | All33 (min-max) vs. All 251 (L2) | 0.1412 | 0.0110 | 0.1195 | 0.1628 | 0.0020 | 1 | 0.0083 | ∗∗ |
| 2 | All33 (min-max) vs. F35 | 0.1294 | 0.0105 | 0.1088 | 0.1500 | 0.0022 | 2 | 0.0167 | ∗∗ |
| 3 | All33 (min-max) vs. F31 | 0.0706 | 0.0068 | 0.0573 | 0.0839 | 0.0031 | 3 | 0.0250 | ∗∗ |
| 4 | All33 (min-max) vs. F39 | 0.0471 | 0.0048 | 0.0376 | 0.0566 | 0.0035 | 4 | 0.0333 | ∗∗ |
| 5 | All33 (min-max) vs. F13 | 0.0353 | 0.0038 | 0.0279 | 0.0427 | 0.0038 | 5 | 0.0417 | ∗∗ |
| 6 | All33 (min-max) vs. F27 | 0.0235 | 0.0026 | 0.0185 | 0.0286 | 0.0040 | 6 | 0.0500 | ∗∗ |

∗∗ Statistical significance at the 0.05 level using the Benjamini–Hochberg correction procedure.

Table 9 shows the paired sample t-tests assessing the significance levels of the differences in AUCs between the various competing high-performance classifiers and the best-performing RVM classifier on testing data using different training dataset sizes (i.e., 100, 150, 200, 250, 300, and all 389 training samples). We applied the Benjamini–Hochberg correction to reduce false discovery rates in multiple comparisons; a sketch of this procedure is given after Figure 3 below. The results show that the AUC of our best-performing RVM classifier under the different training dataset sizes was significantly higher than that of most other high-performing models, except for F13 (p = 0.0752) and F27 (p = 0.1698). Figure 3 shows the visualised comparison of the mean AUCs of these 6 competing classifiers and the best-performing classifier we developed. As shown in Figure 3, our best-performing RVM classifier achieved the highest mean AUC of all competing models.

Table 9: Paired sample t-test of the difference in AUCs between the best-performing model and other models.
| No. | Pairs of RVMs | Mean difference | SD | 95% CI lower | 95% CI upper | p value | Rank | (i/m)Q | Sig. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | All33 (min-max) vs. F31 | 0.0482 | 0.0091 | 0.0304 | 0.0660 | 0.0000 | 1 | 0.0083 | ∗∗ |
| 2 | All33 (min-max) vs. All 251 (L2) | 0.0616 | 0.0279 | 0.0070 | 0.1163 | 0.0029 | 2 | 0.0167 | ∗∗ |
| 3 | All33 (min-max) vs. F39 | 0.0295 | 0.0165 | −0.0028 | 0.0618 | 0.0071 | 3 | 0.0250 | ∗∗ |
| 4 | All33 (min-max) vs. F35 | 0.0304 | 0.0217 | −0.0121 | 0.0728 | 0.0185 | 4 | 0.0333 | ∗∗ |
| 5 | All33 (min-max) vs. F13 | 0.0196 | 0.0214 | −0.0224 | 0.0616 | 0.0752 | 5 | 0.0417 |  |
| 6 | All33 (min-max) vs. F27 | 0.0138 | 0.0211 | −0.0276 | 0.0552 | 0.1698 | 6 | 0.0500 |  |

∗∗ Statistical significance at the 0.05 level using the Benjamini–Hochberg correction procedure.

Figure 3: Mean AUCs of RVMs with different feature sets on testing data using different training dataset sizes.
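A minimal sketch of the Benjamini–Hochberg procedure applied to the p values of Table 9, using statsmodels, is given below; it reproduces the significance pattern of rows 1–4.

```python
from statsmodels.stats.multitest import multipletests

# p values of the six paired AUC comparisons in Table 9
pvals = [0.0000, 0.0029, 0.0071, 0.0185, 0.0752, 0.1698]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, r in zip(pvals, reject):
    print(f"p = {p:.4f} -> {'significant' if r else 'not significant'}")
```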
## 5. Discussion
Several important findings emerged from our extensive computational analyses, in particular the search for the best subset of features for developing Bayesian machine learning classifiers to address our core research question: predicting and assessing the risk levels of original English mental healthcare materials in terms of their suitability for automatic neural machine translation targeting Spanish-speaking patients. Our study shows that separate feature optimisation on the four distinct feature sets did not achieve acceptable pairs of model sensitivity and specificity. Let us take a closer look at the features retained in each optimised feature set.

Table 10 summarises the separately and jointly optimised features. First, the optimised feature set of lexical dispersion rates (D4) contained DiSp8: 0.7–0.8, DiSp9: 0.8–0.9, DiSp10: 0.9–1.0, and DiWr10: 0.9–1.0. Lexical dispersion rate measures how familiar language is to the public. We used the existing lexical dispersion rates of the British National Corpus, which has 10 intervals between 0 and 1 for spoken and for written materials, respectively. High dispersion rates like those retained in D4 indicate that automatic machine translation mistakes were strongly associated with lexical items of high familiarity in both spoken and written materials. We used the non-parametric Mann–Whitney U test for independent samples to compare texts labelled as "risky" and "safe" original English mental health materials for automatic machine translation. The results show that all 4 optimised lexical dispersion rates had statistically higher means in "risky" than in "safe" English mental health materials: DiSp8: 0.7–0.8 (safe text class: mean (M) = 20.689, standard deviation (SD) = 11.963, standard error (SE) = 0.943; risky text class: M = 27.854, SD = 14.628, SE = 1.283; p < 0.0001), DiSp9: 0.8–0.9 (safe texts: M = 44.553, SD = 15.679, SE = 1.236; risky texts: M = 54.400, SD = 17.275, SE = 1.515; p < 0.0001), DiSp10: 0.9–1.0 (safe texts: M = 78.217, SD = 21.115, SE = 1.664; risky texts: M = 88.423, SD = 23.223, SE = 2.037; p < 0.0001), and DiWr10: 0.9–1.0 (safe texts: M = 147.453, SD = 47.024, SE = 3.706; risky texts: M = 176.346, SD = 53.828, SE = 4.721; p < 0.0001).
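The per-feature comparisons above (and those that follow) are two-sample Mann–Whitney U tests contrasting the "safe" and "risky" text classes. A minimal sketch with scipy is shown below; the arrays are synthetic placeholders drawn to match the reported means and standard deviations of DiSp8: 0.7–0.8 for the 202 safe and 187 risky training texts, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Placeholder samples mimicking the reported class statistics for DiSp8:
# safe: M = 20.689, SD = 11.963 (n = 202); risky: M = 27.854, SD = 14.628 (n = 187).
safe_disp8 = rng.normal(20.689, 11.963, size=202)
risky_disp8 = rng.normal(27.854, 14.628, size=187)

u_stat, p_value = mannwhitneyu(safe_disp8, risky_disp8, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4g}")
```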
Table 10: Results of automatic feature optimisation.

| Optimised feature set (number) | Label | Optimised features |
|---|---|---|
| Lexical dispersion rates (4) | D4 | DiSp8: 0.7–0.8; DiSp9: 0.8–0.9; DiSp10: 0.9–1.0; DiWr10: 0.9–1.0 |
| Semantic features (10) | U10 | A2 (words depicting change); A3 (words depicting being/existing); A4 (classification); A6 (comparing); A7 (probability); A13 (degree adverbs); E5 (trepidation, courage, surprise); O4 (physical attributes); Z5 (functional words); Z6 (negative particles) |
| Sentiment features (10) | L10 | clout expressions; emotional tones; words per sentence; they (third-person pronouns); affect words (incl. positive and negative emotions, anxiety, anger, sadness); negative emotions; anxiety; tentativeness; differentiation; core drives and needs (reward focus) |
| Structural features (5) | S5 | number of difficult sentences (more than 22 words); number of monosyllabic words; number of long (6+ characters) words; number of sentences which use the same word multiple times (overused words); passive voice |
| All features (33) | All33 | Dispersion rates (1): DiSp8: 0.7–0.8. Semantic features (21): A3 (being/existing); A13 (degree adverbs); A15 (abstract terms of safety, danger); B2 (physical conditions); B3 (medical treatment); E5 (trepidation, courage, surprise); E6 (apprehension, confidence); K5 (leisure, activities); M1 (movement); M3 (transport on land); M5 (transport by air); M6 (points of reference); N3 (measurement); O4 (physical attributes); S1 (social action, state, process); S5 (affiliation); T1 (time); X2 (reasoning, belief, scepticism); Z3 (organisation names); Z5 (functional words); Z8 (pronouns). Sentiment features (4): clout expressions; affect words; negative emotions; anxiety. Structural features (7): number of sentences using the same words multiple times (overused words); average number of sentences per paragraph; number of questions; number of proper nouns; number of unique multiple (3+) syllable words; number of unique long (more than 6 letters) words; out-of-dictionary words |
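The feature subsets in Table 10 were obtained with the recursive feature elimination procedure described earlier: RFE with cross-validation in scikit-learn, using a linear-kernel SVM as the base estimator and stopping where the cross-validated classification error is minimal. A minimal sketch follows; the synthetic `X_train`/`y_train` are placeholders standing in for the real 389 × 251 training matrix and labels.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Synthetic placeholder for the combined 251-feature training matrix.
X_train, y_train = make_classification(
    n_samples=389, n_features=251, n_informative=33, random_state=0
)

selector = RFECV(
    estimator=SVC(kernel="linear"),   # linear-kernel SVM as base estimator
    step=1,                           # eliminate one feature per iteration
    cv=StratifiedKFold(n_splits=5),   # 5-fold cross-validation
    scoring="accuracy",               # equivalently, 1 - classification error
)
selector.fit(X_train, y_train)
print("Optimal number of features:", selector.n_features_)  # 33 in our study
```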
With the optimised semantic feature set (U10), 10 semantic features were identified as the most relevant predictive features for the classifier. Similar to the optimised feature set of lexical dispersion rates (D4), the 10 optimised semantic features had statistically higher means in potentially "risky" than in "safe" English mental health materials with regard to their suitability for automatic machine translation: A2 (changes, modifications) (safe texts: M = 15.429, SD = 15.093, SE = 1.190; risky texts: M = 24.185, SD = 20.646, SE = 1.811; p < 0.0001); A3 (existing status of objects, things, people) (safe texts: M = 37.267, SD = 28.570, SE = 2.252; risky texts: M = 57.062, SD = 41.987, SE = 3.682; p < 0.0001); A4 (classification) (safe texts: M = 4.311, SD = 5.299, SE = 0.418; risky texts: M = 8.062, SD = 10.504, SE = 0.921; p < 0.0001); A6 (comparison) (safe texts: M = 12.689, SD = 11.913, SE = 0.939; risky texts: M = 20.846, SD = 22.697, SE = 1.991; p < 0.0001); A7 (probability) (safe texts: M = 24.373, SD = 19.357, SE = 1.526; risky texts: M = 37.262, SD = 25.795, SE = 2.262; p < 0.0001); A13 (degree adverbs) (safe texts: M = 9.857, SD = 9.073, SE = 0.715; risky texts: M = 15.654, SD = 12.487, SE = 1.095; p < 0.0001); E5 (trepidation, courage, surprise) (safe texts: M = 3.901, SD = 6.927, SE = 0.546; risky texts: M = 11.515, SD = 18.045, SE = 1.583; p < 0.0001); O4 (physical attributes) (safe texts: M = 5.509, SD = 5.671, SE = 0.447; risky texts: M = 8.115, SD = 6.668, SE = 0.585; p < 0.0001); Z5 (functional words) (safe texts: M = 217.242, SD = 151.680, SE = 11.954; risky texts: M = 326.238, SD = 222.992, SE = 19.558; p < 0.0001); and Z6 (negative particles) (safe texts: M = 7.944, SD = 7.246, SE = 0.571; risky texts: M = 12.062, SD = 9.090, SE = 0.797; p < 0.0001).

Next, we examined the optimised set of English sentiment features (L10). It comprises clout expressions (safe texts: M = 86.714, SD = 13.275, SE = 1.046; risky texts: M = 81.142, SD = 16.857, SE = 1.478; p = 0.004); emotional tones (safe texts: M = 29.106, SD = 32.179, SE = 2.536; risky texts: M = 18.367, SD = 27.907, SE = 2.448; p < 0.0001); words per sentence (safe texts: M = 18.714, SD = 5.075, SE = 0.400; risky texts: M = 19.728, SD = 4.813, SE = 0.422; p = 0.009); they (third-person pronouns) (safe texts: M = 0.746, SD = 0.660, SE = 0.052; risky texts: M = 1.017, SD = 0.953, SE = 0.084; p = 0.028); affect words (safe texts: M = 8.581, SD = 2.517, SE = 0.198; risky texts: M = 9.382, SD = 2.604, SE = 0.228; p = 0.002); negative emotions (safe texts: M = 4.830, SD = 2.609, SE = 0.206; risky texts: M = 6.127, SD = 3.165, SE = 0.278; p < 0.0001); anxiety words (safe texts: M = 3.049, SD = 2.128, SE = 0.168; risky texts: M = 4.088, SD = 2.659, SE = 0.233; p < 0.0001); tentativeness words (safe texts: M = 5.043, SD = 1.643, SE = 0.129; risky texts: M = 5.599, SD = 1.590, SE = 0.139; p = 0.005); differentiation (safe texts: M = 4.176, SD = 1.515, SE = 0.119; risky texts: M = 4.717, SD = 1.333, SE = 0.117; p = 0.002); and core drives and needs (reward focus) (safe texts: M = 1.731, SD = 1.083, SE = 0.085; risky texts: M = 1.339, SD = 0.858, SE = 0.075; p = 0.001). Note that, unlike the D4 and U10 features, not all L10 features had higher means in "risky" texts: clout expressions, emotional tones, and reward focus were higher in "safe" texts.

Within the optimised feature set of structural linguistic features (S5), there were 5 optimised features.
Like the dispersion and semantic feature sets (D4 and U10), the features retained in S5 had statistically higher means in "risky" than in "safe" English health texts: number of difficult sentences (more than 22 words) (safe texts: M = 10.491, SD = 8.821, SE = 0.695; risky texts: M = 16.200, SD = 14.048, SE = 1.230; p < 0.0001); number of monosyllabic words (safe texts: M = 560.186, SD = 358.796, SE = 28.277; risky texts: M = 811.446, SD = 515.400, SE = 45.204; p < 0.0001); number of long (6+ characters) words (safe texts: M = 280.255, SD = 215.525, SE = 16.986; risky texts: M = 439.215, SD = 347.782, SE = 30.502; p < 0.0001); number of sentences which use the same words multiple times (safe texts: M = 10.814, SD = 12.263, SE = 0.966; risky texts: M = 19.177, SD = 22.999, SE = 2.017; p < 0.0001); and passive voice (safe texts: M = 2.944, SD = 5.639, SE = 0.444; risky texts: M = 4.931, SD = 6.191, SE = 0.543; p < 0.0001).

We found that the accuracy, sensitivity, and specificity of Bayesian classifiers based on these separately optimised features were suboptimal, even though the individual features retained in each optimised feature set were statistically significant. Recent studies suggest that statistical significance and predictivity of features are often mistakenly treated as interchangeable concepts [22–26]. Adding statistically significant features identified between case and control samples, however, does not necessarily improve the predictive performance of machine learning classifiers. This was verified in our study through the joint optimisation of the combined lexical, semantic, sentiment, and structural feature sets (251 features in total). The joint optimisation led to an optimised mixed feature set of 33 features, including 5 which did not have statistically different distributions in "safe" versus "risky" English mental health texts: K5 (leisure, activities) (safe texts: M = 2.596, SD = 5.473, SE = 0.431; risky texts: M = 2.292, SD = 2.900, SE = 0.254; p = 0.346); M3 (means of transport on land) (safe texts: M = 0.839, SD = 1.680, SE = 0.132; risky texts: M = 2.038, SD = 7.276, SE = 0.638; p = 0.076); Z3 (organisation names) (safe texts: M = 2.466, SD = 6.791, SE = 0.535; risky texts: M = 1.854, SD = 2.910, SE = 0.255; p = 0.893); average number of sentences per paragraph (safe texts: M = 2.160, SD = 3.394, SE = 0.267; risky texts: M = 2.434, SD = 6.268, SE = 0.550; p = 0.384); and number of questions (safe texts: M = 3.068, SD = 3.607, SE = 0.284; risky texts: M = 3.815, SD = 5.059, SE = 0.444; p = 0.312). All other features in the jointly optimised feature set had statistically higher means in "risky" than in "safe" English mental health materials.
Specifically, these included A3 (being/existing) (p < 0.001), A13 (degree adverbs) (p < 0.001), A15 (abstract terms of safety, danger) (p < 0.001), B2 (physical conditions) (p < 0.001), B3 (medical treatment) (p < 0.001), E5 (trepidation, courage, surprise) (p < 0.001), E6 (apprehension, confidence) (p = 0.001), M5 (transport by air) (p = 0.002), M6 (points of reference) (p < 0.001), N3 (measurement) (p = 0.004), O4 (physical attributes) (p < 0.001), S1 (social action, state, process) (p < 0.001), S5 (affiliation) (p = 0.001), T1 (time) (p < 0.001), X2 (reasoning, belief, scepticism) (p < 0.001), Z5 (functional words) (p < 0.001), Z8 (pronouns) (p < 0.001), clout expressions (p = 0.004), affect words (p = 0.002), negative emotions (p < 0.001), anxiety (p < 0.001), number of sentences using the same words multiple times (overused words) (p < 0.001), number of proper nouns (p = 0.008), number of unique multiple (3+) syllable words (p < 0.001), number of unique long (more than 6 letters) words (p < 0.001), and out-of-dictionary words (p = 0.004). For the Bayesian machine learning classifier to reach higher prediction accuracy, both the statistically significant features and the statistically non-significant yet highly predictive features were identified as "risk factors" contributing to an increased probability of conceptual mistakes in machine-translated mental health information in Spanish.

The major advantage of the relevance vector machine (RVM) classifier based on the optimised, mixed feature set was its balanced sensitivity (0.651) and specificity (0.741), which makes the instrument more applicable and useful in practical settings such as the development and evaluation of mental health education and promotion resources for Spanish-speaking patients. The list of optimised linguistic features included in the best-performing classifier also gives health professionals important opportunities to make well-targeted, cost-effective revisions to English health materials to improve their suitability for automatic translation. For example, health professionals could adjust the distribution of the relevant linguistic features, especially those associated with higher risks of causing automatic translation mistakes, and rerun the automatic assessment of the English input materials using our machine learning classifier, iteratively, until the predicted risk reaches an acceptable level, as sketched below. Importantly, this process does not require English-speaking medical professionals to have any knowledge of the patients' language (in this case, Spanish).
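A sketch of that iterative revise-and-reassess workflow is given below. The fitted `rvm` classifier, the `scaler` (min-max normalisation), and the `extract_features`/`revise_text` callables are hypothetical stand-ins supplied by the caller; the paper's actual pipeline may differ.

```python
RISK_THRESHOLD = 0.5  # posterior probability above which a text is flagged

def assess_until_safe(text, rvm, scaler, extract_features, revise_text,
                      max_rounds=5):
    """Re-score an English text until its predicted translation risk is low."""
    for _ in range(max_rounds):
        x = scaler.transform([extract_features(text)])  # min-max normalised
        p_risk = rvm.predict_proba(x)[0, 1]             # probability of "risky"
        if p_risk < RISK_THRESHOLD:
            return text, p_risk  # deemed safe to machine-translate
        # A health professional revises high-risk features (e.g., shortens
        # difficult sentences, removes overused words), then we re-score.
        text = revise_text(text)
    return text, p_risk
```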
## 6. Conclusions
Our paper developed probabilistic machine learning algorithms to assess and predict the level of risk of using the Google Translate application to translate and deliver mental health information to Spanish-speaking populations. Our model can inform clinical decision making around the usability of the online translation tool when translating different original English texts on anxiety disorders into Spanish. This is achieved through the probabilistic prediction of Bayesian machine learning classifiers: if an input English text is assigned a high probability (over 50%) of causing erroneous and misleading automatic translation output, health professionals should be alert to the risk of using Google Translate; by contrast, if an input English text is assigned a low risk probability (below 50%), health professionals can feel reassured that the English information can be translated safely for its intended users with the online automatic translation tool. The smaller the risk probability of an English text, the safer it is for the text to be translated automatically online. For original English materials labelled as unsuitable for automatic translation, our machine learning classifier offers the opportunity to adjust, modify, and fine-tune the text to improve its suitability for automatic translation; this is achieved through the feature optimisation technique developed in our study. An important and useful property of our model is that it does not require English-speaking medical professionals to have any knowledge of the patients' language. The classifier can be applied as a practical decision aid to help increase the efficiency and cost-effectiveness of multicultural health communication, translation, and education.
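As a worked illustration of this probabilistic decision rule (a sketch based on the reported operating point, not the authors' code): the best classifier's sensitivity (0.651) and specificity (0.741) can be converted into positive and negative likelihood ratios, which update the prior proportion of "risky" texts in our database (48.47%) into posterior probabilities after a prediction.

```python
sensitivity, specificity = 0.651, 0.741  # best-performing RVM (All33, min-max)
prior = 0.4847                           # share of "risky" texts in the database

lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio, ~2.51
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio, ~0.47

prior_odds = prior / (1 - prior)
p_risky_if_flagged = (prior_odds * lr_pos) / (1 + prior_odds * lr_pos)
p_risky_if_cleared = (prior_odds * lr_neg) / (1 + prior_odds * lr_neg)
print(f"P(risky | flagged) = {p_risky_if_flagged:.2f}")   # ~0.70
print(f"P(risky | cleared) = {p_risky_if_cleared:.2f}")   # ~0.31
```

Under this prior, a flagged text carries roughly a 70% probability of causing misleading automatic translation, while a cleared text still carries about a 31% residual risk, which is why the classifier is positioned as a decision aid rather than a replacement for human review.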
---
*Source: 1011197-2021-10-28.xml*
Mean AUCs of RVMs with different feature sets on testing data using different training dataset sizes.
## 4.1. Results
Following automatic feature optimisation to enhance the classification accuracy of classifiers, we evaluated the performance of Bayesian models (relevance vector machine (RVM)) with different feature sets on both the training and testing datasets. As discussed earlier, we applied 5-fold cross-validation on the training dataset to minimise biases in the classifiers being developed. First, we compared the original feature sets with their respective optimised feature sets in Table1–5. Next, we compared the performance of RVM classifiers with different pairs of optimised feature sets. Table 6 shows the comparison of RVM classifiers with double, triple, and quadruple optimised feature sets, respectively. Like feature optimisation, feature normalisation is another useful automatic technique to enhance the performance of machine learning classifiers. We applied three popular feature normalisation techniques: min-max, L2-norm (L2), and Z-score normalisation with each RVM classifier to see whether this could help balance asymmetrical classification errors within each model.Table 1
Performance of RVM classifiers with lexical dispersion features.
RVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityDispersion rates (full 20 features)0.6250.060.6490.6670.5780.753Disp all 20 with min-max normalisation0.5470.040.6540.6430.5660.718Disp all 20 with L2 normalisation0.5730.070.5940.5600.5180.600Disp all 20 with Z-score normalisation0.5610.060.6450.6370.5300.741D4 (automatically optimised)0.6170.060.6480.6610.5660.753D4 with min-max0.6110.060.6860.6490.5420.753D4 with L20.5710.070.5950.5600.5180.600D4 with Z-score0.6100.060.6890.6490.5660.729Table 2
Performance of RVM classifiers with lexical semantic features.
RVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityUSAS all 1150.6520.0520.6920.6610.5900.729USAS all 115 with min-max0.5930.0820.6770.6430.5780.706USAS all 115 with L20.5840.0870.6810.6250.5900.659USAS all 115 with Z-score0.5890.1110.6940.6550.6390.671U10 (automatically optimised)0.6590.0450.7140.6790.5780.777U10 with min-max0.6630.0440.7230.6790.5780.777U10 with L20.6140.0890.7070.6250.5180.729U10 with Z-score0.6460.0420.7130.6490.5060.788Table 3
Performance of RVM classifiers with lexical sentiment features.
RVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityLIWC all 920.6140.0570.5800.5480.5180.577LIWC all 92 with min-max0.5770.0540.6460.6190.5660.671LIWC all 92 with L20.6100.0640.5730.5480.5180.577LIWC all 92 with Z-score0.6190.0460.6700.6190.5060.729L10 (automatically optimised)0.6020.0400.6050.5710.6510.494L10 with min-max0.6290.0550.6070.5660.5660.565L10 with L20.6040.0340.6090.5710.5180.624L10 with Z-score0.6140.0680.6160.5830.5780.588Table 4
Performance of RVM classifiers with structural features.
RVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityStructural all 240.6430.0480.6360.6250.5180.729Structural all 24 with min-max0.6030.0470.5950.5540.4340.671Structural all 24 with L20.6470.0460.6160.5950.5900.600Structural all 24 with Z-score0.6130.0480.6210.5830.4820.682S4 (automatically optimised)0.6330.0500.6210.6190.4460.788S4 with min-max0.6160.0470.6150.5950.5540.635S4 with Z-score0.6240.0440.6030.6010.5420.659Table 5
Performance of RVM classifiers with all (dispersion + USAS + LIWC + structural) features.
RVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityALL 2510.6420.0380.6470.6130.5180.706ALL 251 with min-max0.6260.0410.6970.6430.5900.694ALL 251 with L20.6700.0450.6530.6190.6390.600ALL 251 with Z-score0.6330.0850.6800.6250.5900.659ALL33 (automatically optimised)0.6580.0140.6700.6490.5540.741ALL33 with min-max0.6780.0340.7180.6960.6510.741ALL33 with L20.7100.0150.6700.6370.6510.624ALL33 with Z-score0.6720.0360.6820.6430.6270.659Table 6
Performance of RVM classifiers with paired feature sets.
FeatureRVMTraining dataTesting dataAUC meanSDAUCAccuracySensitivitySpecificityF1D4 + U100.6640.0410.7150.6610.5900.729F2D4 + U10 with min-max0.6390.0410.7360.6610.5660.753F3D4 + U10 with L20.6730.0340.6810.6430.5780.706F4D4 + U10 with Z-score0.6270.0650.7220.6490.5540.741F5D4 + L100.6500.0410.6810.5950.6150.577F6D4 + L10 with min-max0.6500.0320.6440.5830.5660.600F7D4 + L10 with L20.6680.0310.6690.5890.5900.588F8D4 + L10 with Z-score0.6400.0190.6520.6010.5540.647F9D4 + S50.6380.0500.6360.6430.5180.765F10D4 + S5 with min-max0.6480.0410.6500.6250.5660.682F11D4 + S5 with L20.6550.0540.6080.6070.6020.612F12D4 + S5 with Z-score0.5800.0350.5840.5660.4820.647F13U10 + L100.6700.0580.7050.6730.6390.706F14U10 + L10 with min-max0.6910.0190.7170.6670.5900.741F15U10 + L10 with L20.6850.0350.7010.6550.6150.694F16U10 + L10 with Z-score0.6410.0180.7120.6430.5900.694F17U10 + S50.6330.0450.6990.6430.5180.765F18U10 + S5 with min-max0.6270.0310.7130.6610.5420.777F19U10 + S5 with L20.5880.0780.6170.5660.4580.671F20U10 + S5 with Z-score0.6330.0300.7040.6430.5660.718F21L10 + S50.6710.0550.6680.6310.5540.706F22L10 + S5 with min-max0.6880.0370.6470.6370.6390.635F23L10 + S5 with L20.6820.0370.6530.5830.6150.553F24L10 + S5 with Z-score0.6480.0090.6520.6250.6510.600F25D4 + U10 + L100.6520.0490.6880.6370.5780.694F26D4 + U10 + L10 with min-max0.6740.0410.6580.6010.5180.682F27D4 + U10 + L10 with L20.6890.0470.6940.6670.6150.718F28D4 + U10 + L10 with Z-score0.6690.0330.6530.6130.5540.671F29D4 + U10 + S50.6490.0410.6970.6490.5660.729F30D4 + U10 + S5 with min-max0.6260.0550.6770.6190.4580.777F31D4 + U10 + S5 with L20.6650.0570.6740.6550.6390.671F32D4 + U10 + S5 with Z-score0.6290.0210.6800.6730.5300.812F33U10 + L10 + S50.6430.0450.6890.6490.5660.729F34U10 + L10 + S5 with min-max0.6850.0270.6970.6370.5900.682F35U10 + L10 + S5 with L20.6630.0540.6900.6370.6630.612F36U10 + L10 + S5 with Z-score0.6670.0180.6960.6370.6270.647F37D4 + U10 + L10 + S50.6220.0330.6570.6430.5780.706F38D4 + U10 + L10 + S5 with min-max0.6680.0310.6980.6730.6020.741F39D4 + U10 + L10 + S5 with L20.6870.0420.6830.6610.6270.694F40D4 + U10 + L10 + S5 with Z-score0.6660.0150.6670.6370.5300.741Table1 shows the performance of RVM classifiers with lexical dispersion rates as features. It shows that after automatic feature selection, RVM with the reduced and optimised feature set (D4) reached a largely comparable performance to that of the classifier run on the full feature set: on the training dataset, the mean of area under the curve (AUC) of RVM (D4) was 0.617 (SD = 0.06), compared to 0.625 (SD = 0.06) of RVM (full 20 features), suggesting that feature reduction could also help encounter the issue of overfitting in training machine learning classifiers. On the testing dataset, the AUC of the RVM (D4) (0.648) was similar to that of RVM (All 20) (0.649). Sensitivity dropped slightly from 0.578 (RVM-All 20) to 0.566 (RVM-D4), and specificity remained unchanged at 0.753. Normalisation did not improve RVMs with the entire or optimised feature sets of lexical dispersion rates.Table2 compares the performance of RVM classifiers run on English semantic features. It shows that after automatic feature selection, the performance of the RVMs improved on both the training and the testing datasets: on the training data, the mean of AUC of RVM with the full semantic feature set (USAS115) observed a marginal improvement from 0.652 to 0.659 with a slightly reduced standard deviation (SD) from 0.052 to 0.045. 
On the testing dataset, the AUC of RVM (USAS115) saw an improvement from 0.692 to 0.714. Specificity improved from 0.729 of RVM (USAS115) to 0.777 of RVM (U10); sensitivity decreased from 0.590 of RVM (USAS115) to 0.578 of RVM (U10). Normalisation did not improve model performance.Table3 compares the performance of RVMs with English sentiment features annotated with the Linguistic Inquiry and Word Count (LIWC) software. It shows that after automatic feature optimisation, the performance of the RVM classifier (LWIC all 92) improved on the testing datasets. The AUC of RVM (L10) increased from 0.580 to 0.605. Model sensitivity increased from 0.518 to 0.651, but specificity decreased from 0.577 to 0.494. The impact of feature normalisation on RVMs with all and optimised feature sets was similar, while the classifier specificity improved, sensitivity decreased, and the overall model accuracy on the testing dataset however did not improve significantly.Table4 compares the performance of RVMs with various structural features which we annotated with the Readability Studio software. After automatic feature optimisation, the AUC of the classifier RVM (structural all 24) decreased from 0.636 to 0.621, which was due to decreased model sensitivity from 0.518 to 0.446, but the model specificity increased from 0.729 to 0.788. Feature normalisation helped to balance the asymmetric classification errors on the classifier RVM with both the entire feature set and the optimised feature set: the model specificity decreased and sensitivity increased. However, the overall model accuracy or the AUC did not improve with different feature normalisation techniques.Table5 shows the performance of the RVM with the combined feature sets of lexical dispersion rates and semantic, sentiment, and structural features, which represented 251 features in total. Automatic feature optimisation reduced the original feature set of 251 features to a parsimonious model containing 33 features only. With less predictive and noisy features involved in the model, the performance of the classifier also improved significantly on both the training and the testing datasets: on the training data, the model AUC was 0.642 (SD = 0.038). This increased to 0.658 (SD = 0.034) with the optimised RVM classifier. On the testing data, the AUC improved from 0.647 to 0.718. With automatic feature optimisation, both sensitivity and specificity improved: sensitivity increased from 0.518 to 0.554 and specificity increased from 0.706 to 0.741. Importantly, feature normalisation played a critical role in balancing asymmetrical classification errors on RVMs with combined feature sets. Specifically, min-max normalisation increased sensitivity of the optimised classifier to 0.651, the highest so far, and retained the high specificity of the classifier at 0.741. This sensitivity and specificity pair was the best combination among the classifiers developed so far.Table6 compares the performance of classifiers with double and multiple optimised feature sets. We compared in total 10 different pairs of optimised feature sets and conducted feature normalisation with each combination of optimised features, and as a result, each RVM in Table 6 has four different versions: the unnormalised version, followed by normalised versions with min-max, L2, and Z-score normalisation. 
The 10 combinations of optimised features were as follows: optimised lexical dispersion rates (D4) and optimised semantic feature (U10) (F1–F4), optimised lexical dispersion rates (D4) and optimised sentiment features (L10) (F5–F8), optimised lexical dispersion rates (D4) and optimised structural features (S5) (F9–F12), optimised semantic features (U10) and optimised sentiment feature (L10) (F13–F16), optimised semantic features (U10) and optimised structural features (S5) (F17–F20), optimised sentiment features (L10) and optimised structural features (S5) (F21–F24), and so on. We identified 5 high-performing models based on considerations of the overall model AUC, accuracy, sensitivity, and specificity: F13 was the unnormalised combination of optimised semantic (U10) and optimised sentiment features (L10). It had an overall AUC on the testing data of 0.705, with a relatively high sensitivity of 0.639 and specificity of 0.706. F27 was the normalised version (through L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), and sentiment features (L10). It had an overall AUC of 0.694 on the testing dataset, with sensitivity of 0.615 and specificity of 0.718. F31 was the normalised version (through L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), and optimised structural features (S5). It had an overall AUC of 0.674 on the testing dataset, with sensitivity of 0.639 and specificity of 0.671. F35 was the normalised version (through L2 normalisation) of optimised semantic features (U10), sentiment features (L10), and optimised structural features (S5). It had an overall AUC of 0.690 on the testing dataset, with sensitivity of 0.663 and specificity of 0.612. Finally, F39 was the normalised version (L2 normalisation) of optimised lexical dispersion rates (D4), semantic features (U10), sentiment features (L10), and optimised structural features (S5). It had an overall AUC of 0.683 on the testing dataset, with sensitivity of 0.627 and specificity of 0.694. Figure 2 shows the visualised comparison of the AUCs of these 5 high-performing classifiers, the RVM with the entire combined feature sets (251 features) with L2 normalisation, and the best-performing classifier we developed (RVM (All33)) with min-max normalisation.Figure 2
AUCs of RVMs on testing data using different feature sets.Tables7 and 8 show the paired sample t-tests assessing the significance levels of differences in sensitivity and specificity between the various competing high-performance classifiers and the best-performing RVM classifier we developed through the combined automatic optimisation of four different feature sets followed by automatic feature normalisation. To reduce false discovery rates in multiple comparison, we applied the Benjamini–Hochberg correction procedure [19–21]. The results show that sensitivity of our best-performing RVM classifier was significantly higher than that of most other high-performing models, except for F35 (p = 0.0017); specificity of our best-performing RVM classifier was statistically higher than that of all other competing models with p values equal to or smaller than 0.004.Table 7
Paired sample t-test of the difference in sensitivity between the best-performing model and other models.
No.Pairs of RVMsMean differenceSD95% CIp valueRank(i/m) QSig.LowerUpper1All33 (min-max) vs. F270.03610.00210.03190.04030.001210.0083∗∗2All33 (min-max) vs. F390.02410.00150.02120.02700.001320.0167∗∗3All33 (min-max) vs. All251 (L2)0.01200.00080.01050.01360.001430.0250∗∗4All33 (min-max) vs. F130.01200.00080.01050.01360.001440.0333∗∗5All33 (min-max) vs. F310.01200.00080.01050.01360.001450.0417∗∗6All33 (min-max) vs. F35-0.01210.0009−0.0137−0.01040.001760.0500∗∗∗∗Statistical significance at 0.05 level using Benjamini–Hochberg correction procedure.Table 8
Paired samplet-test of the difference in specificity between the best-performing model and other models.
No.Pairs of RVMsMean differenceSD95% CIp valueRank(i/m) QSig.LowerUpper1All33 (min-max) vs. All 251 (L2)0.14120.01100.11950.16280.002010.0083∗∗2All33 (min-max) vs. F350.12940.01050.10880.15000.002220.0167∗∗3All33 (min-max) vs. F310.07060.00680.05730.08390.003130.0250∗∗4All33 (min-max) vs. F390.04710.00480.03760.05660.003540.0333∗∗5All33 (min-max) vs. F130.03530.00380.02790.04270.003850.0417∗∗6All33 (min-max) vs. F270.02350.00260.01850.02860.004060.0500∗∗∗∗Statistical significance at 0.05 level using Benjamini–Hochberg correction procedure.Table9 shows the paired sample t-tests assessing the significance levels of differences in AUCs between various competing high-performance classifiers and the best-performing RVM classifier on testing data using different training dataset sizes (i. e., 100, 150, 200, 250, 300, and all 389 training samples). We applied Benjamini–Hochberg correction to reduce false discovery rates in multiple comparison. The results show that AUC under different training dataset sizes of our best-performing RVM classifier was significantly higher than that of most other high-performing models, except for F13 (p = 0.0752) and F27 (p = 0.1698). Figure 3 shows the visualised comparison of the mean AUCs of these 6 competitive classifiers and the developed best-performing classifier. As shown in Figure 3, our best-performing RVM classifier gained the highest mean AUC than all other competing models.Table 9
Paired samplet-test of the difference in AUCs between the best-performing model and other models.
No.Pairs of RVMsMean differenceSD95% CIp valueRank(i/m) QSig.LowerUpper1All33 (min-max) vs F310.04820.00910.03040.06600.000010.0083∗∗2All33 (min-max) vs. All 251 (L2)0.06160.02790.00700.11630.002920.0167∗∗3All33 (min-max) vs F390.02950.0165−0.00280.06180.007130.0250∗∗4All33 (min-max) vs F350.03040.0217−0.01210.07280.018540.0333∗∗5All33 (min-max) vs F130.01960.0214−0.02240.06160.075250.04176All33 (min-max) vs F270.01380.0211−0.02760.05520.169860.0500∗∗Statistical significance at 0.05 level using Benjamini–Hochberg correction procedure.Figure 3
Mean AUCs of RVMs with different feature sets on testing data using different training dataset sizes.
## 5. Discussion
A few important findings emerged in our extensive computational analyses, especially the search for the best subset of features for developing Bayesian machine learning classifiers to address our core research question, which was to predict and assess the risk levels of original English mental healthcare materials in terms of their suitability for automatic neural machine translation targeting Spanish-speaking patients. Our study shows that separate feature optimisation on the four distinct feature sets did not achieve acceptable pairs of model sensitivity and specificity. Let us take a close look at features retained in each optimised feature set.Table10 summarises the list of separately and jointly optimised features. First, the optimised feature set of lexical dispersion (D4) contained DiSp8: 0.7–0.8, DiSp9: 0.8–0.9, DiSp10:0.9–1.0, and DiWr10:0.9–1.0. Lexical dispersion rate is a measurement of familiarity of language to the public. We used existing lexical dispersion rates of the British National Corpus which had 10 intervals between 0 and 1 for spoken and written materials, respectively. In both spoken and written materials, higher lexical dispersion rates like those in the optimised feature set (D4) indicate that automatic machine translation mistakes were strongly associated with lexical items of higher familiarity in spoken and written materials. We used non-parametric independent sample test and Mann–Whitney U test to compare samples labelled as “risky” and “safe” original English mental health materials for automatic machine translation. The result shows that all 4 optimised lexical dispersion rates had statistically higher means in “risky” than in “safe” English mental health materials: DiSp8:0.7–0.8 (safe text class: mean (M) = 20.689, standard deviation (SD) = 11.963, standard error (SE) = 0.943; risky text class: M = 27.854, SD = 14.628, SE = 1.283, p<0.0001), DiSp9:0.8–0.9 (safe texts: M = 44.553, SD = 15.679, SE = 1.236; risky texts: M = 54.400, SD = 17.275, SE = 1.515, p<0.0001), DiSp10 : 0.9–1.0 (safe texts: M = 78.217, SD = 21.115, SE = 1.664; risky texts: M = 88.423, SD = 23.223, SE = 2.037, p < 0.0001), and DiWr10 : 0.9–1.0 (safe texts: M = 147.453, SD = 47.024, SE = 3.706; risky texts: M = 176.346, SD = 53.828, SE = 4.721, p<0.0001).Table 10
Results of automatic feature optimisation.
Optimised features (number)LabelOptimised featureLexical dispersion rates (4)D4DiSp8:0.7–0.8, DiSp9:0.8–0.9, DiSp10 : 0.9–1.0, DiWr10 : 0.9–1.0Semantic features (10)U10A2 (words depicting change), A3 (words depicting being/existing), A4 (classification), A6 (comparing), A7 (probability), A13 (degree adverbs), E5 (trepidation, courage, surprise), O4 (physical attributes), Z5 (functional words), Z6 (negative particles).Sentiment features (10)L10Clout expressions, emotional tones, words per sentences, they (third person pronouns), affect words (incl. positive and negative emotions, anxiety, anger, sad), negative emotions, anxiety, tentativeness, differentiation, core drives and needs (reward focus)Structural features (5)S5Number of difficult sentences (more than 22 words), number of monosyllabic words, number of long (6+ characters) words, number of sentences which use the same word multiple times (overused words), passive voiceAll features (33)All33Dispersion rates (1): DiSp8:0.7–0.8Semantic features (21): A3 (being/existing), A13 (degree adverbs), A15 (abstract terms of safety, danger), B2 (physical conditions), B3 (medical treatment), E5 (trepidation, courage, surprise), E6 (apprehension, confidence), K5 (leisure, activities), M1 (movement), M3 (transport on land), M5 (transport by air), M6 (points of reference), N3 (measurement), O4 (physical attributes), S1 (social action, state, process), S5 (affiliation), T1 (time), X2 (reasoning, belief, scepticism), Z3 (organisations names), Z5 (functional words), Z8 (pronouns)Sentiment features (L4): clout expressions, affect words, negative emotions, anxietyStructural features (S7): number of sentences using the same words multiple times (overused words), average number of sentences per paragraph, number of questions, number of proper nouns, number of unique multiple (3+) syllable words, number of unique long (more than 6 letters) words, out-of-dictionary words.With the optimised semantic feature set (U10), there were 10 semantic features identified as most relevant predictive features for the classifier. 
Similar to the optimised feature set of lexical dispersion rates (D4), the 10 optimised semantic features also had statistically higher means in potentially “risky” than in “safe” English mental health materials with regard to their suitability for automatic machine translation: A2 (changes, modifications) (safe texts:M = 15.429, SD = 15.093, SE = 1.190; risky texts: M = 24.185, SD = 20.646, SE = 1.811, p<0.0001); A3 (existing status of objects, things, people) (safe texts: M = 37.267, SD = 28.570, SE = 2.252; risky texts: M = 57.062, SD = 41.987, SE = 3.682, p<0.0001); A4 (classification) (safe texts: M = 4.311, SD = 5.299, SE = 0.418; risky texts: M = 8.062, SD = 10.504, SE = 0.921, p<0.0001); A6 (comparison) (safe texts: M = 12.689, SD = 11.913, SE = 0.939; risky texts: M = 20.846, SD = 22.697, SE = 1.991, p<0.0001); A7 (probability) (safe texts: M = 24.373, SD = 19.357, SE = 1.526; risky texts: M = 37.262, SD = 25.795, SE = 2.262, p<0.0001); A13 (degree adverbs) (safe texts: M = 9.857, SD = 9.073, SE = 0.715; risky texts: M = 15.654, SD = 12.487, SE = 1.095, p<0.0001); E5 (trepidation, courage, surprise) (safe texts: M = 3.901, SD = 6.927, SE = 0.546; risky texts: M = 11.515, SD = 18.045, SE = 1.583, p<0.0001); O4 (physical attributes) (safe texts: M = 5.509, SD = 5.671, SE = 0.447; risky texts: M = 8.115, SD = 6.668, SE = 0.585, p<0.0001); Z5 (functional words) (safe texts: M = 217.242, SD = 151.680, SE = 11.954; risky texts: M = 326.238, SD = 222.992, SE = 19.558, p<0.0001); and Z6 (negative particles) (safe texts: M = 7.944, SD = 7.246, SE = 0.571; risky texts: M = 12.062, SD = 9.090, SE = 0.797, p<0.0001).Next, we examined the optimised feature set of English sentiment features (L10). This includes clout expressions (negative particles) (safe texts:M = 86.714, SD = 13.275, SE = 1.046; risky texts: M = 81.142, SD = 16.857, SE = 1.478, p = 0.004); emotional tones (safe texts: M = 29.106, SD = 32.179, SE = 2.536; risky texts: M = 18.367, SD = 27.907, SE = 2.448, p < 0.0001); words per sentences (safe texts: M = 18.714, SD = 5.075, SE = 0.400; risky texts: M = 19.728, SD = 4.813, SE = 0.422, p = 0.009); they (third person pronouns) (safe texts: M = 0.746, SD = 0.660, SE = 0.052; risky texts: M = 1.017, SD = 0.953, SE = 0.084, p = 0.028); affect words (safe texts: M = 8.581, SD = 2.517, SE = 0.198; risky texts: M = 9.382, SD = 2.604, SE = 0.228, p = 0.002); negative emotions (safe texts: M = 4.830, SD = 2.609, SE = 0.206; risky texts: M = 6.127, SD = 3.165, SE = 0.278, p < 0.0001); anxiety words (safe texts: M = 3.049, SD = 2.128, SE = 0.168; risky texts: M = 4.088, SD = 2.659, SE = 0.233, p < 0.0001); tentativeness words (safe texts: M = 5.043, SD = 1.643 SE = 0.129; risky texts: M = 5.599, SD = 1.590, SE = 0.139, p = 0.005); differentiation (safe texts: M = 4.176, SD = 1.515, SE = 0.119; risky texts: M = 4.717, SD = 1.333, SE = 0.117, p = 0.002); and core drives and needs (reward focus) (safe texts: M = 1.731, SD = 1.083, SE = 0.085; risky texts: M = 1.339, SD = 0.858, SE = 0.075, p = 0.001).Within the optimised feature set of structural linguistic features (S5), there were 5 optimised features. 
Like the other three sets of optimised features, features retained in S5 had statistically higher means in “risky” texts than in “safe” English health texts: number of difficult sentences (more than 22 words) (safe texts:M = 10.491, SD = 8.821, SE = 0.695; risky texts: M = 16.200, SD = 14.048, SE = 1.23, p < 0.0001); number of monosyllabic words (safe texts: M = 560.186, SD = 358.796, SE = 28.277; risky texts: M = 811.446, SD = 515.400, SE = 45.204, p < 0.0001); number of long (6+ characters) words (safe texts: M = 280.255, SD = 215.525, SE = 16.986; risky texts: M = 439.215, SD = 347.782, SE = 30.502, p < 0.0001); number of sentences which use same words multiple times (safe texts: M = 10.814, SD = 12.263, SE = 0.966; risky texts: M = 19.177, SD = 22.999, SE = 2.017, p < 0.0001); and passive voice (safe texts: M = 2.944, SD = 5.639, SE = 0.444; risky texts: M = 4.931, SD = 6.191, SE = 0.543, p < 0.0001).We found that the accuracy, sensitivity, and specificity of Bayesian classifiers based on these separately optimised features were suboptimal, in spite of the individual features retained in each optimised feature set being statistically significant features. Recent studies suggest that statistical significance and predictivity of features are often taken as exchangeable concepts, mistakenly [22–26]. Adding statistically significant features identified between case and control samples however do not necessarily improve the predictive performance of machine learning classifiers. This was verified in our study through joint optimisation of different feature sets combining lexical, semantic, sentiment, and structural features (251 features in total). The joint optimisation led to an optimised mixed feature set of 33 features, including 5 which did not have statistically different distribution in “safe” versus “risky” English mental health texts: K5 (leisure, activities) (safe texts: M = 2.596, SD = 5.473, SE = 0.431; risky texts: M = 2.292, SD = 2.900, SE = 0.254, p = 0.346); M3 (means of transport on land) (safe texts: M = 0.839, SD = 1.680, SE = 0.132; risky texts: M = 2.038, SD = 7.276, SE = 0.638, p = 0.076); Z3 (organisations names) (means of transport on land) (safe texts: M = 2.466, SD = 6.791, SE = 0.535; risky texts: M = 1.854, SD = 2.910, SE = 0.255, p = 0.893); average number of sentences per paragraph (safe texts: M = 2.160, SD = 3.394, SE = 0.267; risky texts: M = 2.434, SD = 6.268, SE = 0.550, p = 0.384); and number of questions (safe texts: M = 3.068, SD = 3.607, SE = 0.284; risky texts: M = 3.815, SD = 5.059, SE = 0.444, p = 0.312).All other features in the jointly optimised feature set had statistically higher means in “risky” than in “safe” English mental health materials. 
Specifically, this included A3 (being/existing) (p < 0.001), A13 (degree adverbs) (p < 0.001), A15 (abstract terms of safety, danger) (p < 0.001), B2 (physical conditions) (p < 0.001), B3 (medical treatment) (p < 0.001), E5 (trepidation, courage, surprise) (p < 0.001), E6 (apprehension, confidence) (p = 0.001), M5 (transport by air) (p = 0.002), M6 (points of reference) (p < 0.001), N3 (measurement) (p = 0.004), O4 (physical attributes) (p < 0.001), S1 (social action, state, process) (p < 0.001), S5 (affiliation) (p = 0.001), T1 (time) (p < 0.001), X2 (reasoning, belief, scepticism) (p < 0.001), Z5 (functional words) (p < 0.001), Z8 (pronouns) (p < 0.001), clout expressions (p = 0.004), affect words (p = 0.002), negative emotions (p < 0.001), anxiety (p < 0.001), number of sentences using the same words multiple times (overused words) (p < 0.001), number of proper nouns (p = 0.008), number of unique multiple (3+) syllable words (p < 0.001), number of unique long (more than 6 letters) words (p < 0.001), and out-of-dictionary words (p = 0.004). For Bayesian machine learning classifiers to reach higher prediction accuracy, both these statistically significant features and statistically non-significant yet highly predictive features were identified “risk factors” contributing to the increased probability of conceptual mistakes in machine translated mental health information in Spanish.The major advantage of the relevance vector machine classifier (RVM) based on the optimised and mixed feature set was the balanced sensitivity (0.651) and specificity (0.741), which made the instrument more applicable and useful in practical settings such as development and evaluation of mental health education and promotion resources for Spanish-speaking patients. The list of optimised linguistic features included in the best-performing classifier also provides important opportunities for health professionals to make well-targeted, cost-effective interventions to English health materials to improve their suitability for automatic translation purposes. For example, health professionals could adjust the distribution patterns of relevant linguistic features, especially those associated with higher risks of causing automatic translation mistakes, and rerun the automatic assessment of the English input materials using our machine learning classifier, iteratively, until the predicted risk level reaches an acceptable level. Importantly, this process does not require any linguistic knowledge on the part of English-speaking medical professionals of patients’ language (in this case, Spanish).
## 6. Conclusions
Our paper developed probabilistic machine learning algorithms to assess and predict the levels of risks of using the Google Translate application in translating and delivering mental health information to Spanish-speaking populations. Our model can inform clinical decision making around the usability of the online translation tool when translating different original English texts on anxiety disorders into Spanish. This was achieved through the probabilistic prediction of Bayesian machine learning classifiers: if an input English text was assigned a high probability (over 50%) of causing erroneous and misleading automatic translation output, health professionals should become alert of the risk of using Google Translate; by contrast, if an input English text was assigned a low risk probability (below 50%), health professionals can feel reassured that the whole piece of English information can be translated safely to its intended user, using the online automatic translation tool. The smaller the risk probability of an English text is, the safer it is for the text to be translated automatically online. For original English materials which were labelled as non-suitable for automatic translation, our machine learning offers the opportunity to adjust, modify, and fine-tune the text to improve its suitability for automatic translation. This was achieved through the feature optimisation technique developed in our study. An important and useful feature of our model is that it does not require any linguistic knowledge on the part of English-speaking medical professionals of the patients’ language. The classifier can be applied as a practical decision aid to help increase the efficiency and cost-effectiveness of multicultural health communication, translation, and education.
---
*Source: 1011197-2021-10-28.xml* | 2021 |
# Multiple Positive Solutions and Estimates of Extremal Values for a Nonlocal Problem with Critical Sobolev Exponent and Concave-Convex Nonlinearities
**Authors:** Zhigao Shi; Xiaotao Qian
**Journal:** Journal of Function Spaces
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011342
---
## Abstract
We are concerned with the following nonlocal problem involving critical Sobolev exponent−a−b∫Ω∇u2dxΔu=λuq−2u+δu2u,x∈Ω,u=0,x∈∂Ω, where Ω is a smooth bounded domain in ℝ4, a,b>0, 1<q<2, δ, and λ are positive parameters. We prove the existence of two positive solutions and obtain uniform estimates of extremal values for the problem. Moreover, the blow-up and the asymptotic behavior of these solutions are also discussed when b↘0 and δ↘0. In the proofs, we apply variational methods.
---
## Body
## 1. Introduction and Main Results
In this paper, we study a new class of Kirchhoff type problem with critical exponent and concave-convex nonlinearities(1)−a−b∫Ω∇u2dxΔu=λuq−2u+δu2u,x∈Ω,u=0,x∈∂Ω,Pb,δ,where Ω is a smooth bounded domain in ℝ4 (2∗=4 is the critical exponent in dimension four), a,b>0, 1<q<2, δ, and λ are positive parameters.We callPb,δ a Kirchhoff type problem since the presence of the term ∫Ω∇u2dx, which means that Pb,δ is no longer a pointwise identity. Such nonlocal problem arises in various models concerning physical and biological systems, see, e.g., [1–3]. Among others, Kirchhoff [2] built a model defined by the equation
(2)ρ∂2u∂t2−P0h+E2L∫0L∂u∂x2dx∂2u∂x2=0,where u=ux,t represents the lateral displacement, ρ denotes the mass density, P0 is the initial tension, h denotes the area of the cross-section, E denotes the Young modulus of the material, and L is the length of the string. This equation is an extension of the classical D’Alembert wave equation for free vibrations of elastic strings.Different from the traditional Kirchhoff type problem, the sign of nonlocal term included inPb,δ is negative, which causes some interesting difficulties. In the past few years, much attention has been paid to the existence, multiplicity, and the behaviour of solutions for this kind of nonlocal problem but without critical growth. In particular, Yin and Liu [4] were concerned with the following problem
(3)−a−b∫Ω∇u2dxΔu=up−2u,x∈Ω,u=0,x∈∂Ω,where 1<p<2∗ and Ω is a bounded domain in ℝN with N≥1 and succeeded to find the problem (3) admits at least two nontrivial solutions. In [5, 6], sign-changing solutions to (3) were further obtained. When N=3 and the nonlinear term has an indefinite potential, Lei et al. [7] and Qian and Chao [8] established the existence of positive solution of (3) for 1<p<2 and 3<p<6, respectively. For the singular nonlinearity, two positive solutions to (3) with N=3 were proved in [9]. In our previous work [10], we obtained two positive solutions of Pb,δ with δ=0, as well as their blow-up and asymptotic behavior when b↘0. For more related results, we refer the interested readers to [11–15] and the references therein.In 1994, Ambrosetti et al. [16] first studied the following critical local problem involving concave-convex nonlinearities
(4)−Δu=λuq−2u+u2∗−1u,x∈Ω,u=0,x∈∂Ω,where 1<q<2 and Ω⊂ℝN is a smooth bounded domain. The authors proved that there exists λ0>0 such that the problem (4) has two positive solutions for λ∈0,λ0 and no positive solutions for λ>λ0. Since then, many scholars have considered problems with critical exponent and concave-convex nonlinearities, see, e.g., [7, 16–21]. Also, the problem (4) of traditional Kirchhoff type is studied in [22–26] and the reference therein. An interesting question now is whether the same existence results as in [16] occur to the nonlocal problem Pb,δ with critical exponent. For λ=0 and δ=1, Wang et al. [27] proved the existence of two positive solutions of Pb,δ with an additional inhomogeneous perturbation on the whole space ℝ4. When 2<q<2∗ and δ is replaced by a nonnegative function Qx, [28] showed how the shape of the graph of Qx affects the number of positive solutions to Pb,δ. However, there are no known existence results for Pb,δ provided λ>0 and 1<q<2.Motivated by the works described above, in the present paper, we try to prove the existence and multiplicity of positive solutions of problemPb,δ when λ∈0,T− for some T−>0 (see Theorem 1), provide uniform estimates of extremal values λ∗ for problem Pb,δ (see Theorem 2), and obtain the blow-up and asymptotic behavior of these positive solutions when b↘0 and δ↘0 (see Theorem 3).Denote byH01Ω the standard Sobolev space endowed with the standard norm ·. Let ·p be the norm of the space LsΩ. Denote by ⟶ (⇀) the strong (weak) convergence. C and Ci denote various positive constants whose exact values are not important. Let μ1 be the positive principal eigenvalue of the operator −Δ on Ω with corresponding positive principal eigenfunction e1. Denote by S the best constant in the Sobolev embedding H01Ω°L2∗Ω, namely,
(5)S=infu∈H01Ω\0u2u42>0.It is well known that the weak solutions of problemPb,δ correspond to the critical points of the following energy functional
(6)Ib,δu=a2u2−b4u4−λquqq−δ4u44.Moreover, we easily see thatIb,δ∈C1H01Ω,ℝ.Define the manifold(7)Mb,δ=u∈H01Ω:I′b,δu,u=0=u∈H01Ω:au2=bu4+λuqq+δu44,and decompose Mb,δ into three subsets as follows:
(8)Mb,δ0=u∈Mb,δ:a2−qu2−b4−qu4−δ4−qu44=0,Mb,δ+=u∈Mb,δ:a2−qu2−b4−qu4−δ4−qu44>0,Mb,δ−=u∈Mb,δ:a2−qu2−b4−qu4−δ4−qu44<0.Set(9)T1=2aSq/24−qΩ4−q/4a2−qb+δS−24−q2−q/2,T2=2qaS4−q/22−q4−qbS2+δΩ4−q/4δ2−qqq/2,T−=minT1,T2.Our main results are as follows.Theorem 1.
Assume thatλ∈0,T−, then problem Pb,δ has at least two positive solutions u∗∈Mb,δ+ and U∗∈Mb,δ− with u∗<U∗.Theorem 2.
Let(10)λ∗=supλ>0:Pb,δhasatleasttwopositivesolutions.
Then, we have(11)0<T−≤λ∗≤T+<∞,where T− is defined as above and
(12)T+=2aμ14−q2−qaμ14−qδ1/2+1.Theorem 3.
Assume thatbn and δn are two sequences satisfying bn↘0 and δn↘0 as n⟶∞. Let un and Un be the two positive solutions of Pb,δ corresponding to bn and δn obtained in Theorem 1 with un∈Mbn,δn+ and Un∈Mbn,δn−, then passing to a subsequence if necessary,
(i)
Un⟶∞ as n⟶∞(ii)
un⟶u¯ in H01Ω as n⟶∞, where u¯ is a positive ground state solution of the problem(13)−aΔu=λuq−2u,x∈Ω,P0,0,u=0,x∈∂Ω.Remark 4.
The multiplicity result ofPb,δ with δ=0 has been proved by [10]. So, our result presented in Theorem 1 can be viewed as an extension of [10] considering the subcritical case where δ=0. In particular, we provide uniform estimates of extremal values λ∗ for the problem, which are observed for the first time in the studies of such nonlocal problem like Pb,δ.Remark 5.
Comparing with [16], which considered problem Pb,δ with b=0, we in this paper investigate the nonlocal case of b≠0. Moreover, unlike [22–24, 26], where the nonlocal term is positive, here we study the case of negative sign of nonlocal term and additionally obtain a bound from above for the parameter.The plan of this paper is as follows. In Section2, we give some preliminaries. Section 3 is devoted to the Proof of Theorem 1. In Section 4, we prove Theorems 2 and 3. In the proof of our main results, we use variational methods, and they are inspired by [10, 16]. However, in the present paper, we encounter some new difficulties due to the critical growth and nonlocal term. Firstly, compared with [10], the calculations here are more delicate and difficult since we now face the critical problem Pb,δ. Secondly, to provide the bound from above for λ∗ of Pb,δ involving nonlocal term, we need to develop some techniques applied in [16] where dealt with local case. Thirdly, in order to obtain the asymptotic behavior of the solutions of Pb,δ as in the work of [10], we add the condition of δ↘0 and conduct some new analysis.
## 2. Preliminaries
Lemma 6.
Letλ∈0,T1. Then, Mb,δ±≠∅ and Mb,δ0=0.Proof.
A simple calculation shows that(14)∂Ib,δϕ∂ttu=tq−1at2−qu2−bt4−qu4−λuqq−δt4−qu44.
For anyu∈H01Ω\0, t>0, set
(15)ψt=at2−qu2−bt4−qu4−δt4−qu44,t>0,ψ1t=at2−qu2−t4−qb+δS−2u4,t>0.
Since1<q<2, it is clear that limt⟶0+ψ1t=0 and limt⟶+∞ψ1t=−∞. Moreover, ψ1t is concave and achieves its maximum at the point tmax=a2−qu2/b+δS−24−qu41/2 with
(16)ψ1tmax=24−q2−q4−q2−q/2au24−qb+δS−2u42−q1/2.
By Hölder and Sobolev inequalities, forλ∈0,T1, we obtain
(17)λuqq≤λΩ4−q4S−q/2uq<ψ1tmax≤ψtmax.
From which we infer that there exist two constantst+=t+u and t−=t−u satisfying t+>tmax>t−>0 and
(18)ψt+=λuqq=ψt−,ψ′t+<0<ψ′t−.
This gives thatt+u∈Mb,δ− and t−u∈Mb,δ+.
In what follows, we prove thatMb,δ0=0. Suppose to the contrary that there is w∈Mb,δ0 with w≠0. By w∈Mb,δ0, we have
(19)a2−qw2=b4−qw4+δ4−qw44.
As a consequence, by Sobolev inequality,(20)a2−qw2≤b4−qw4+δ4−qS−2w4=b+δS−24−qw4.
Moreover, we can also infer fromw∈Mb,δ0 that −2aw2+λ4−qwqq=0 and so
(21)λwqq=2a4−qw2.
Combining (20) and (21), for λ∈0,T1, we conclude that
(22)0<24−q2−q4−q2−q/2aw24−qb+δS−2w42−q1/2−λwqq≤24−q2−q4−q2−q/2aw24−qa2−q/4−qw22−q1/2−λwqq=2a4−qw2−λwqq=0,which is absurd. The proof of Lemma 6 is completed.Lemma 7.
Assume thatλ∈0,T1, then there is a gap structure in Mb,δ:
(23)u≤Aλ<A0≤U,∀u∈Mb,δ+,U∈Mb,δ−,where
(24)A0=a2−q4−qb+δS−21/2,Aλ=λ4−qΩ4−q/42aSq/21/2−q.Proof.
In the case ofU∈Mb,δ−, using Sobolev inequality, we have
(25)a2−qU2<b4−qU4+δ4−qU44≤b+δS−24−qU4,which yields U≥A0.
In the case ofu∈Mb,δ+, it holds
(26)2au2<λ4−quqq≤λ4−qΩ4−q/4S−q/2uq,which gives that u≤Aλ. Moreover, we easily check that if λ∈0,T1, then Aλ<A0.Lemma 8.
For anyu∈Mb,δ±, there exist ρu>0 and a differential functional gρu:Bρu0⟶ℝ+ such that
(27)gρu0=1,gρuwu−w∈Mb,δ±,gρu′0,ϕ=2a−4bu2∫Ω∇u∇ϕdx−qλ∫Ωuq−2uϕdx−4δ∫Ωu2uϕdxa2−qu2−b4−qu4−δ4−qu44.Proof.
Fixu∈Mb,δ− and define F:ℝ+×H⟶ℝ by
(28)Ft,w=at2−qu−w2−bt4−qu−w4−λu−wqq−δt4−qu−w44.
Since foru∈Mb,δ−⊂Mb,δ, one has F1,0=0 and
(29)Ft1,0=a2−qu2−b4−qu4−δ4−qu44<0,then we can employ the implicit function theorem for F at the point 1,0 and derive ρ¯>0 and a differential functional g=gw>0 defined for w∈H01Ω, w<ρ¯ such that
(30)g0=1,gwu−w∈Mb,δ,∀w∈H,w<ρ¯.
In view of the continuity ofg, we may choose ρ>0 possibly smaller (ρ<ρ¯) such that for any w∈H01Ω, w<ρ, it holds
(31)gwu−w∈Mb,δ−.
In a similar way, we can prove the case ofu∈Mb,δ+, and thus, Lemma 8 follows.Lemma 9.
Ifλ∈0,T1, then we have
(i)
The functionalIb,δ is coercive and bounded from below on Mb,δ(ii)
infMb,δ+∪Mb,δ0Ib,δ=infMb,δ+Ib,δ∈−∞,0Proof.
(i)
Foru∈Mb,δ, using Hölder’s inequality, we obtain(32)Ib,δu=Ib,δu−14I′b,δu,u=a4u2−λ1q−14uqq≥a4u2−λ1q−14Ω4−q/4S−q/2uq.
This proves the conclusion (i).(ii)
Foru∈Mb,δ+, it holds(33)Ib,δu=Ib,δu−1qI′b,δu,u=a12−1qu2+b1q−14u4+δ1q−14u44<−a2−qu2+b4−qu4+δ4−qu444q<0.
Combining this and Lemma6, we have that infMb,δ+∪Mb,δ0Ib,δ=infMb,δ+Ib,δ<0. Furthermore, we deduce from (i) that infMb,δ+∪Mb,δ0Ib,δ≠−∞. Thus, infMb,δ+∪Mb,δ0Ib,δ∈−∞,0.Lemma 10.
Ifλ∈0,T1, then Mb,δ+∪Mb,δ0 and Mb,δ− are closed.Proof.
LetUn be a sequence in Mb,δ− such that Un⟶U0 in H01Ω. Since Un⊂Mb,δ−⊂Mb,δ, we have
(34)aU02−bU04=limn⟶∞aUn2−bUn4=limn⟶∞λUnqq+δUn4=λU0qq+δU04,a2−qU02−b4−qU04−δ4−qU04=limn⟶∞a2−qUn2−b4−qUn4−δ4−qUn4≤0,namely, U0∈Mb,δ−∪Mb,δ0. For λ∈0,T1, it then follows from Lemma 7 that U0∉Mb,δ0. In turn, we obtain U0∈Mb,δ−, and so, Mb,δ− is closed for λ∈0,T1. The same argument can prove that Mb,δ0∪Mb,δ+ is closed. This completes the proof of Lemma 10.
## 3. Proof of Theorem 1
Lemma 11.
Suppose thatλ∈0,T1, then problem Pb,δ admits a positive solution u∗ with u∗∈Mb+.Proof.
By Lemmas9 and 10, we can apply Ekeland variational principle to get a minimizing sequence un⊂Mb,δ+∪Mb,δ0 such that
(35)limn⟶∞Ib,δun=infMb,δ+∪Mb,δ0Ib,δ<0,(36)Ib,δz≥Ib,δun−1nz−un,∀z∈Mb,δ+∪Mb,δ0.
SinceIb,δu=Ib,δu, we can assume that un≥0 in Ω. By Lemma 9, un is bounded in H01Ω, and so, we may assume that
(37)un⇀u∗,inH01Ω,un⟶u∗,inLsΩ,1≤s<4,un⟶u∗,a.e.inΩ.
In the following, we prove thatu∗ is a positive solution to Pb,δ. To this purpose, we divide the proof into five steps.
Step 1.u∗≠0.
If, to the contrary, we haveu∗=0. Since un∈Mb,δ+∪Mb,δ0, it follows that for n large,
(38)aun2≥4−q2−qbun4+4−q2−qδun44,and hence,
(39)Ib,δun=12aun2−14bun4−14δun44+o1>4−q22−q−14bun4+4−q22−q−14δun44+o1>0,which contradicts with (35). Therefore, u∗≠0.
Step 2. There is a positive constantC1 satisfying
(40)2aun2−λ4−qunqq<−C1.
To prove that, it suffices to check that(41)2alimsupn⟶∞un2<λ4−qu∗qq.
In view ofun∈Mb,δ+∪Mb,δ0, one has
(42)2alimsupn⟶∞un2≤λ4−qu∗qq.
Assume to the contrary that(43)2alimsupn⟶∞un2=λ4−qu∗qq.
Then, we can supposeun2⟶A>0 as n⟶∞, where A satisfies
(44)λu∗qq=2aA4−q.
From this, we have that forλ∈0,T1,
(45)0≤24−q2−q4−q2−q/2a4−q/2b+δS−22−q/2−λΩ4−q/4S−q/2unq≤24−q2−q4−q2−q/2aun24−qb+δS−2un42−q1/2−λunqq≤24−q2−q4−q2−q/2aun24−qaun2−λunqq2−q1/2−λunqq⟶24−q2−q4−q2−q/2aA4−q2−q/4−qaA2−q1/2−2aA4−q=0,which implies that un⟶0 in H01Ω, contradicting u∗≠0. In turn, we deduce that (40) holds.
Step 3.I′b,δun⟶0 as n⟶∞.
Let0<ρ<ρn≡ρun, gn≡gun, where ρun and gun are defined as Lemma 8 with u=un. Let wρ=ρv with v=u/u. Fix n and set zρ=gnwρun−wρ. Since zρ∈Mb,δ+, it follows from (36) that
(46)Ib,δzρ−Ib,δun≥−1nzρ−un.
By the definition of Fréchet derivative, we obtain(47)Ib,δ′un,zρ−un+ozρ−un≥−1nzρ−un.
Then,(48)Ib,δ′un,−wρ+gnwρ−1un−wρ≥−1nzρ−un+ozρ−un,and hence,
(49)−ρIb,δ′un,v+gnwρ−1Ib,δ′un,un−wρ≥−1nzρ−un+ozρ−un,which yields that
(50)Ib,δ′un,v≤1nzρ−unρ+ozρ−unρ+gnwρ−1ρI′b,δun,un−wρ.
From Step 2, Lemma8, and the boundedness of un, we also have
(51)zρ−un=gnwρ−1un−wρ−wρ≤gnwρ−1C2+ρ,limρ⟶0gnwρ−1ρ=gn′0,v≤g′n0≤C3,I′b,δun,un−wρ=I′b,δun,−wρ=−ρI′b,δun,v.
As a consequence, for fixedn, we can derive letting ρ⟶0 in (50) that
(52)I′b,δun,v≤Cn,which implies that I′b,δun⟶0 as n⟶∞.
Step 4.un⟶u∗ in H01Ω.
Setvn=un−u∗. If vn⟶0, we are done, thus assume vn⟶L>0. By I′b,δun,u∗=o1 and (37),
(53)0=au∗2−bL2+u∗2u∗2−λu∗qq−δu∗44.
Moreover, fromI′b,δun⟶0, the boundedness of un, and Brézis-Lieb lemma, we have that
(54)o1=Ib,δ′un,un=avn2+u∗2−bvn4+2vn2u∗2+u∗4−λu∗qq−δvn44−δu∗44+o1.
Combining this and (53), we get
(55)o1=avn2−bvn4−bvn2u∗2−δvn44.
It then follows from Sobolev inequality that(56)avn2−bvn4−bvn2u∗2=δvn44+o1≤δS−2vn4+o1.
Passing the limit asn⟶∞, we obtain that
(57)L2≥S2a−bu∗2bS2+δ>0.
By (53), (57), and Hölder inequality,
(58)Ib,δu∗=a2u∗2−b4u∗4−λq∫u∗qdx−δ4u∗44=a4u∗2+b4L2u∗2−λ1q−14u∗qq≥abS2u∗24bS2+δ+b4L2u∗2+aδ4bS2+δu∗2−λ1q−14Ω4−q/4S−q/2u∗q.
Forξ≔aδ/4bS2+δ and η≔λ1/q−1/4Ω4−q/4S−q/2, define
(59)ft=ξt2−ηtq.
By easy calculation, we have thatft achieves its minimum value at tmin=qη/2ξ1/2−q and
(60)ftmin=−2−q2η2/2−qq2ξq/2−q.
Therefore, we obtain(61)Ib,δu∗≥abS2u∗24bS2+δ+b4L2u∗2+ftmin=abS2u∗24bS2+δ+b4L2u∗2−2−q2λ4−q4qΩ4−q/4S−q/22/2−q2qbS2+δaδq/2−q.
Using (37), (53), and (61), we deduce that for λ∈0,T1,
(62)Ib,δun=Ib,δu∗+a4vn2−b4vn2u∗2+o1≥abS2u∗24bS2+δ+a4L2−2−q24−q4qΩ4−q/4S−q/22/2−q2qbS2+δaδq/2−qλ2/2−q+o1≥a2S24bS2+δ−2−q24−q4qΩ4−q/4S−q/22/2−q2qbS2+δaδq/2−qλ2/2−q+o1>0,which is a contradiction since limn⟶∞Ib,δun<0. This implies that vn⟶L>0 is impossible. Hence, vn⟶0; that is, un⟶u∗ in H01Ω.
Step 5.u∗ is a positive solution of problem Pb,δ and u∗∈Mb,δ+.
From (35) and Steps 3 and 4, we have that, up to a subsequence, un⟶u∗ in H01Ω with Ib,δu∗<0 and I′b,δu∗=0. Namely, u∗≥0 is a weak nontrivial solution of problem Pb,δ. Moreover, by Lemmas 6 and 10, we know u∗∈Mb,δ+. Standard elliptic regularity argument and strong maximum principle provide that u∗ is positive. Therefore, the proof of Lemma 11 is completed.Lemma 12.
Letλ∈0,T1, then problem Pb,δ has a positive solution U∗ with U∗∈Mb−.Proof.
As in the proof of Lemma11, we can prove that there exists a bounded and nonnegative sequence Un⊂Mb,δ− with the properties
(i)
limn⟶∞Ib,δUn=infMb,δ−Ib,δ(ii)
Ib,δz≥Ib,δUn−1nz−un,∀z∈Mb,δ−(iii)
Un⇀U∗inH01Ω(iv)
Un⟶U∗inLsΩ,2≤s<4(v)
Un⟶U∗a.e.inΩ
Without loss of generality, we may assume that0∈Ω. Let φx∈C0∞Ω be a cut-off function such that 0≤φ≤1 in Ω and φx≡1 near zero. Set
(63)vεx=φx81/2εε2+x2.
By [29, 30], one has for ε>0 small,
(64)vε2=S2+Oε2,vε44=S2+Oε4,vε33=Oε.
In the first place, we prove the following upper bound forinfMb,δ−Ib,δ,
(65)infMb,δ−Ib,δ≤supt>0Ib,δu∗+tvε<Ib,δu∗+a2S24bS2+δ,where u∗ is the positive solution obtained in Lemma 11. Since u∗∈Mb,δ+, it is easy to verify that a−bu∗2>0. By the fact that Ib,δ′u∗,tvε=0, we also have
(66)0=ta−bu∗2∫Ω∇u∗∇vεdx−tλ∫Ωu∗q−1vεdx−tδ∫Ωu∗3vεdx,and hence,
(67)∫Ω∇u∗∇vεdx=λ∫Ωu∗q−1vεdx+δ∫Ωu∗3vεdxa−bu∗2>0.
Letwε=u∗+Rvε with R>1. It follows from (67) that
(68)wε2=u∗2+2R∫Ω∇u∗∇vεdx+R2vε2≥u∗2+R2S2+Oε2.
Letψt be given by Lemma 6. As can be seen from the proof of Lemma 6, there exist ψtε=λwε/wεqq and ψ′tε<0, where tε=t+wε/wε. From the structure of ψ and the fact of wε/wεqq>0, we easily see that tε is uniformly bounded by a suitable constant C1>0, ∀R≥1, and ∀ε>0.
Moreover, we have from (68) that there is ε1>0 satisfying
(69)wε2≥u∗2+12R2S2,∀ε∈0,ε1.
Therefore, we may findR1≥1 such that wε>C1, ∀R≥R1, and ∀ε∈0,ε1.
Define(70)E1=u:u=0oru<t+uu,E2=u:u>t+uu.
Notice thatH01Ω−Mb,δ−=E1∪E2 and Mb,δ+⊂E1. Because u∗∈Mb,δ+ and the continuity of t+u, we have that u∗+tR1vε for t∈0,1 must intersect Mb,δ−. As a consequence,
(71)infMb,δ−Ib,δ≤supt>0Ib,δu∗+tvε.
Thus, to complete the proof of (65), it suffices to show that
(72)supt>0Ib,δu∗+tvε<Ib,δu∗+a2S24bS2+δ.
By mean value theorem, there existsδx∈0,1 such that
(73)u∗x+tvεxq−u∗qx=qu∗x+δxtvεxq−1tvεx≥qtu∗q−1xvεx,for any x∈Ω. Using (66), (67), and (73), we obtain
(74)Ib,δu∗+tvε=a2u∗2+at∫Ω∇u∗∇vεdx+a2t2vε2−b4u∗4−bt2∫Ω∇u∗∇vεdx2−b4t4vε4−btu∗2∫Ω∇u∗∇vεdx−b2t2u∗2vε2−bt3vε2∫Ω∇u∗∇vεdx−λq∫Ωu∗+tvεqdx−δ4∫Ωu∗+tvε4dx≤Ib,δu∗+a2t2vε2−b4t4vε4−b2t2u∗2vε2−λq∫Ωu∗+tvεq−u∗q−qtu∗q−1vεdx−δ4∫Ωu∗+tvε4−u∗4−4tu∗3vεdx≤Ib,δu∗+a2t2vε2−b4t4vε4−b2t2u∗2vε2−δ4∫Ωu∗+tvε4−u∗4−4tu∗3vεdx.
To proceed, we set(75)Jv=a2v2−b4v4−b2u∗2v2−δ4∫Ωu∗+v4−u∗4−4u∗3vdx.
Recall that, forr,s≥1, it holds
(76)r+s4−r4−4r3s≥s4+C1rs3,for some C1>0. By (73) and (76), we have that
(77)Jtvε=a2t2vε2−b4t4vε4−b2t2u∗2vε2−δ4∫Ωu∗+tvε4−u∗4−4tu∗3vεdx≤a2t2vε2−b4t4vε4−b2t2u∗2vε2−δ4∫Ωtvε4+C1u∗tvε3dx=a2t2vε2−b2t2u∗2vε2−b4t4vε4−δ4t4vε44−δ4C1t3∫Ωu∗vε3dx,which implies that there exists a constant t1>0 small enough such that
(78)sup0<t<t1Ib,δu∗+tvε<a24b.
Thus, we only need to consider the case oft≥t1. By the same argument of Lemma 11 of [21], we have
(79)∫Ωu∗vε3dx=83/2εu∗0∫ℝ411+x23dx+oε.
Combining this and (64), we have for ε>0 sufficiently small,
(80)supt≥t1Jtvε≤supt>0a2t2vε2−b2t2u∗2vε2−b4t4vε4vε4−δ4t4vε44−δ4C1t13∫Ωu∗vε3dx≤avε2−bu∗2vε224bvε4+vε44−C2ε+oε=aS2−bS2u∗224bS4+δS2+Oε2−C2ε+oε<S2a−bu∗224bS2+δ<a2S24bS2+δ,where C2>0 is a positive constant independent of ε. This together with (74) implies that (65) holds.
In the second place, we claim thatU∗≠0. If, to the contrary, we have U∗≡0. Since Un∈Mb,δ−⊂Mb,δ, it follows that
(81)aUn2−bUn4−λUnqq−δUn44=0,and so, by Sobolev inequality
(82)aUn2=bUn4+δUn44+o1≤b+δS−2Un4.
Assume thatUn2⟶ι2. By Un⊂Mb,δ− and Lemma 7, we obtain that ι2>0. Taking n⟶∞ in (82), we have ι2≥aS2/bS2+δ, and thus
(83)infMb,δ−Ib,δ=limn⟶∞Ib,δUn=limn⟶∞Ib,δUn−14Ib,δ′Un,Un=limn⟶∞a4Un2−λ1q−14Unqq=a4ι2≥a2S24bS2+δ,which is a contradiction with (65). Therefore, the claim follows. At this point, we may proceed as in the proof of Lemma 11 and conclude that U∗ is a positive solution of problem Pb,δ with U∗∈Mb,δ−. This completes the proof of Lemma 12.Proof of Theorem 1.
Theorem1 is an immediate consequence of Lemmas 7, 11, and 12.
## 4. Proofs of Theorems 2 and 3
Proof of Theorem 2.
By the definition ofλ∗ and Theorem 1, we easily see that λ∗≥T−. Hence, Proof of Theorem 2 is completed if we show that λ∗≤T+. To this goal, let us define the functions
(84)hλt=tq−1δt4−q−aμ1t2−q+λ,t>0,h~λt=δt4−q−aμ1t2−q+λ,t>0.
Obviously, we have thath~t is convex and attains its minimum at the point tmin=2−qaμ1/4−qδ1/2 with
(85)h~λtmin=−2aμ14−q2−qaμ14−qδ1/2+λ.
As a consequence, we can take(86)T+=2aμ14−q2−qaμ14−qδ1/2+1,such that
(87)h~T+t≥h~T+tmin=1>0,∀t>0.
This gives that(88)hT+t≥tq−1h~T+t>0,∀t>0,namely,
(89)T+tq−1+δt3>aμ1t,∀t>0.
Assume that anyλ>0 is such that Pb,δ admits a positive solution u. On the one hand, using (89) with t=u, multiplying by e1, and integrating over Ω, we get
(90)T+∫Ωuq−1e1dx+δ∫Ωu3e1dx>aμ1∫Ωue1dx.
On the other hand, multiplyingPb,δ by e1 and integrating over Ω, there holds
(91)a−bu2∫Ω∇u∇e1dx=λ∫Ωuq−1e1dx+δ∫Ωu3e1dx>0.
Since(92)aμ1∫Ωue1dx=a∫Ω∇u∇e1dx>a−bu2∫Ω∇u∇e1dx,we infer from (90) and (91) that λ<T+. By the arbitrariness of λ and the definition of λ∗, we conclude that λ∗≤T+<∞. Proof of Theorem 2 is thus completed.Proof of Theorem 3.
Letbn and δn be two sequences satisfying bn↘0 and δn↘0 as n⟶∞, and let un and Un be the two positive solutions of Pbn,δn obtained in Theorem 1 with un∈Mbn,δn+ and Un∈Mbn,δn−.
Using Lemma7 and Un∈Mbn−, we have that
(93)limn⟶∞Un2≥limn⟶∞a2−q4−qbn+δnS−21/2=+∞,and thus, the conclusion (i) holds.
In what follows, we prove the conclusion (ii) of Theorem3. Noting that
(94)Ibn,δnun=infMbn,δn+∪Mbn,δn0Ibn,δn<0,for all n∈ℕ, we obtain from Hölder inequality that
(95)0≥Ibn,δnun−14I′bn,δnun,un≥12−14un2−λ1q−14Ω4−q/4S−q/2unq.
As a consequence of1<q<2, we have that un is bounded in H01Ω. Thus, there is a subsequence of un (still denoted by un) such that un⇀u¯ in H01Ω as n⟶∞. Furthermore, for all ϕ∈H01Ω, it holds
(96)0=limn⟶∞Ibn,δn′un,ϕ=limn⟶∞a−bnun2∫Ω∇un∇ϕdx−λ∫Ωunq−1ϕdx−δn∫Ωun3ϕdx=a∫Ω∇u0∇ϕdx−λ∫Ωu0q−1ϕdx,which provides that u¯ is a nonnegative weak solution of problem P0,0. Let I0,0u be the corresponding functional of P0,0 defined by
(97)I0,0u=a2u2−λquqq.
Since(98)aun−u¯2=Ibn,δn′un−I0,0′u¯,un−u¯+bn∫Ω∇un2dx∫Ω∇un∇un−u¯dx+λ∫Ωunq−1−u¯q−1un−u¯dx+δn∫Ωun3un−u¯dx⟶0,as n⟶∞, it follows that un⟶u¯ in H01Ω.
Define $c_0 = \inf\{I_{0,0}(u): u\in H_0^1(\Omega)\}$. It is easy to check that there exists $v_0\in H_0^1(\Omega)\setminus\{0\}$ such that $c_0 = I_{0,0}(v_0)$ and $c_0<0$. As $I_{0,0}(u)\ge I_{b_n,\delta_n}(u)$ for any $u\in H_0^1(\Omega)$, we easily see that $\inf_{M_{b_n,\delta_n}^+\cup M_{b_n,\delta_n}^0} I_{b_n,\delta_n}\le c_0$. Set $c_{b_n,\delta_n} = I_{b_n,\delta_n}(u_n)$ and suppose that $\lim_{n\to\infty}c_{b_n,\delta_n}=k$. We claim that $k=c_0$. Otherwise, we have $k<c_0$, and hence, since $b_n\longrightarrow 0$ and $\delta_n\longrightarrow 0$ as $n\longrightarrow\infty$ and $\{u_n\}$ is bounded in $H_0^1(\Omega)$, one has for large $n$,
(99) $c_0 \le I_{0,0}(u_n) = I_{b_n,\delta_n}(u_n) + \frac{b_n}{4}\|u_n\|^4 + \frac{\delta_n}{4}\|u_n\|_4^4 = c_{b_n,\delta_n} + \frac{b_n}{4}\|u_n\|^4 + \frac{\delta_n}{4}\|u_n\|_4^4 \le k + \frac{c_0-k}{2} = \frac{c_0+k}{2} < c_0,$
a contradiction. Thus, the claim follows. Then,
(100) $c_0 = \lim_{n\to\infty} I_{b_n,\delta_n}(u_n) = \frac{a}{2}\|\bar{u}\|^2 - \frac{\lambda}{q}\|\bar{u}\|_q^q = I_{0,0}(\bar{u}),$
which implies that $\bar{u}$ is a global minimizer of $I_{0,0}$. This, together with the strong maximum principle, proves that $\bar{u}$ is a positive ground state solution of $(P_{0,0})$. Theorem 3 is thus proved.
---
*Source: 1011342-2022-06-02.xml*
# Severe Hypophosphatemia Occurring After Repeated Exposure to a Parenteral Iron Formulation
**Authors:** Keerthana Haridas; Alice Yau
**Journal:** Case Reports in Endocrinology
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011401
---
## Abstract
Hypophosphatemia is a lesser-known complication of parenteral iron use, particularly with certain iron formulations. We report the case of a young male with inflammatory bowel disease and iron deficiency anemia, who developed severe symptomatic hypophosphatemia after his third exposure to ferric carboxymaltose, with no evidence of the same occurring upon prior exposures to the compound. Investigations revealed serum phosphorus levels of 0.7 mg/dl, corrected serum calcium of 8–9.5 mg/dl, alkaline phosphatase of 50 U/L (38–126), a 25 hydroxy vitamin D level of 40.2 ng/ml, and intact PTH elevated to 207 pg/ml. Urine studies indicated renal phosphate wasting. The presentation was not in keeping with refeeding syndrome. The intact fibroblast growth factor 23 level, measured after the initiation of treatment, was within the normal range at 179 RU/mL (44–215). The 1,25 dihydroxy vitamin D level, also measured after the initiation of treatment, was normal at 26.3 pg/ml (19.9–79.3). The patient was treated with calcitriol and aggressive oral and intravenous phosphorus repletion. Symptoms then resolved, and the patient was discharged on an oral regimen. This phenomenon is postulated to occur due to an increase in the level and activity of FGF23 and decreased cleavage of the same, caused by anemia as well as the use of specific iron formulations. To our knowledge from the literature review, this is the first reported instance of this complication occurring not after the initial exposure to an implicated iron formulation but on subsequent exposure.
---
## Body
## 1. Introduction
Hypophosphatemia is a lesser-known side effect of intravenous iron repletion, particularly with certain formulations of iron, including saccharated ferric oxide and ferric carboxymaltose [1]. First described in the late 1980s with the former [2], it has since been reported more frequently with the latter [1], although the timelines for onset, nadir, and resolution remain poorly defined. While it is transient and self-limiting in most cases, it can be symptomatic and long lasting, requiring aggressive treatment and supplementation in other instances [3–5].
## 2. Case Report
A 35 year old male with a past medical history of Crohn's disease with associated malnutrition (diagnosed 20 years ago, refractory to therapy with multiple medical agents, with no prior history of surgery, on methotrexate, ustekinumab, and prednisone) as well as iron deficiency anemia (mean Hgb in the range of 6–9 g/dl, serum iron 9–29 mcg/dl, total iron binding capacity 286–303 mcg/dl, transferrin saturation 3–10%, and ferritin 3–73 ng/ml) was admitted after the detection of severe hypophosphatemia (serum phosphorus 0.7 mg/dl), two weeks after receiving 685 mg of parenteral ferric carboxymaltose. The patient's treatment for Crohn's disease had been started 4 years earlier. The patient reported no symptoms at presentation but endorsed minimal tingling and cramping in the fingers of both upper extremities two days after presentation. No other sensorimotor, cardiovascular, or respiratory complaints were endorsed. The patient reported no change in appetite, diet, or recent food intake, endorsed good oral intake, and denied recent weight loss. General examination was notable for a body mass index (BMI) of 16.14 kg/m2. Examination was otherwise unremarkable, with no focal sensorimotor deficits. Chvostek's and Trousseau's signs were negative.
Investigations revealed serum phosphorus levels of 0.7 mg/dl; corrected serum calcium of 8–8.5 mg/dl, which rose to 9–9.5 mg/dl after vitamin D supplementation was started; and alkaline phosphatase of 50 U/L (38–126), with a bone-specific alkaline phosphatase level, measured using ELISA, of 8.5 mcg/L (4–27). The 25 hydroxy vitamin D level measured using ELISA was 40.2 ng/ml, and intact PTH was elevated to 207 pg/ml. The 24 hour urine phosphorus level was 1.2 g, and TmP/GFR (tubular maximum reabsorption of phosphate per glomerular filtration rate) was 1.39 mg/dl (normal range 2.6–3.8), with a fractional tubular excretion of phosphate of 31.82%, suggestive of renal phosphate wasting.
The 1,25 dihydroxy vitamin D level, measured by RIA, was 26.3 pg/ml (19.9–79.3), although this was measured after calcitriol supplementation was initiated. The intact fibroblast growth factor 23 level, measured using ELISA, was 179 RU/mL (44–215), although the level was obtained 8 days after initiation of treatment.
Interestingly, the patient had received infusions of the same formulation in doses of 675 mg and 700 mg in 2018 and 750 mg in 2017 with no documented evidence of consequent hypophosphatemia (serum P 3.3–3.7 mg/dl). The patient had also received iron dextran and ferrous gluconate in the 2 months prior to the described instance with no resulting hypophosphatemia.
The patient was started on calcitriol 0.25 mcg daily, which was then increased to 0.5 mcg daily to achieve phosphate repletion while ensuring normalized serum calcium levels. Corrected serum calcium levels rose to 9–9.5 mg/dl after this supplementation was started. The PTH level, which was likely elevated secondary to mild hypocalcemia, was slated to be monitored upon outpatient follow up.
Aggressive phosphate supplementation was initiated through oral and parenteral routes due to the severity of hypophosphatemia and likely coexistent malabsorption in the setting of Crohn's disease. Symptoms subsided after repletion was started.
The patient's phosphorus level rose to a maximum of 2.2 mg/dl during the hospitalization (day 24 post infusion), and the patient was discharged on sodium phosphate 500 mg four times daily and calcitriol 0.5 mcg daily with close follow-up scheduled.
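For reference, the fractional tubular excretion of phosphate quoted above is conventionally computed from paired spot serum and urine chemistries using the standard formula below (a general formula, not data from this report):

```latex
% Fractional excretion of phosphate from spot urine (U) and serum (S)
% phosphate (P) and creatinine (Cr):
\[
  \mathrm{FE}_{\mathrm{PO_4}}
  = \frac{U_{\mathrm{P}} \times S_{\mathrm{Cr}}}{S_{\mathrm{P}} \times U_{\mathrm{Cr}}}
    \times 100\%,
  \qquad
  \mathrm{TRP} = 1 - \mathrm{FE}_{\mathrm{PO_4}}.
\]
% During hypophosphatemia, a commonly used threshold for inappropriate renal
% phosphate wasting is FE > 5% (equivalently TRP < 95%), well exceeded by the
% 31.82% reported in this patient.
```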
## 3. Discussion
Hypophosphatemia has been increasingly reported following therapy with parenteral iron formulations in recent years [6]. Certain formulations like ferric carboxymaltose and saccharated ferric oxide are implicated at a higher frequency than others [1, 4, 6]. This patient developed hypophosphatemia only after repeated exposure to ferric carboxymaltose, having tolerated his earlier exposures without it; in our literature review, this has not been reported previously. It is unclear why this occurred, as the patient had no flares of inflammatory bowel disease, no altered oral intake, and no exposure to higher doses of the iron formulation.
The occurrence of de novo hypophosphatemia is reported with varying incidences across studies, but most studies reported an incidence between 18.5% and 50%, with about a third to half of patients developing severe hypophosphatemia (P < 1 mg/dl) [6–8]. Severe acute hypophosphatemia can result in muscle cramps and spasms, rhabdomyolysis, and cardiac arrhythmias and conduction disturbances, while chronic hypophosphatemia may present as nonspecific fatigue, bone pain, osteomalacia, and pathological fractures [6].
The presence of coexistent inflammatory bowel disease or other causes of malabsorption, low body weight, and African American race, as seen in our patient, has been shown to increase the risk of developing hypophosphatemia [6–8].
Refeeding syndrome was ruled out, as the patient had no history of decreased followed by increased dietary intake, no other electrolyte abnormalities, and no episodes of hypoglycemia, and the hypophosphatemia resolved with no alteration in feeding rate while in the hospital. In addition, ustekinumab has not been associated with hypophosphatemia.
Hypophosphatemia is caused by an increase in the amount of intact fibroblast growth factor 23 (FGF23) in circulation [1, 6]. FGF23 is largely produced by bone and acts on its receptors in the renal tubules, where it inhibits the sodium-phosphorus cotransporters (NaPi-2a and NaPi-2c) on the apical membrane of proximal tubule cells [9]. It also decreases the synthesis of 1,25 dihydroxy vitamin D, thus lowering calcium and phosphate absorption and reabsorption and consequently decreasing their levels in circulation [9]. It also increases calcium reabsorption through the TRPV5 (transient receptor potential cation channel subfamily V member 5) channel, thus decreasing the occurrence of hypocalcemia due to coexistent vitamin D deficiency [9].
Iron deficiency and the resulting anemia increase the levels of iFGF23 through an increase in the level of hypoxia-inducible factor 1 alpha (HIF-1α) [9]. However, iron deficiency also increases cleavage of iFGF23 in osteocytes, resulting in an increase in the level of C-terminal FGF23 (cFGF23), which inhibits the action of iFGF23 [9, 10]. Ferric carboxymaltose, however, decreases the cleavage of iFGF23, increasing levels of the bioactive precursor [9, 10]. In addition, an increase in intracellular phosphate uptake during erythropoiesis contributes to a drop in the measured serum value [9]. The level of iFGF23 in our patient was normal, most likely because it was measured after the initiation of treatment and the improvement in hypophosphatemia. The level of cFGF23, unfortunately, could not be measured in our patient.
The timeline for development of hypophosphatemia indicates that the lowest levels of serum phosphorus were noted at week 2 and increased in the following weeks [6].
The time to resolution varied, with a mean of 8 to 12 weeks, but instances of persistent hypophosphatemia lasting as long as 6 months to 2 years have been reported [6, 10, 11]. Parathyroid hormone levels are usually elevated given the low 1,25 dihydroxy vitamin D level, as in our patient. Patients are treated with calcitriol and aggressive phosphate repletion as needed. Studies, however, report minimal utility in treating moderately low phosphorus levels unless severe hypophosphatemia (<1 mg/dl) is present or the patient is symptomatic [12]. One report also described success with the use of burosumab, a monoclonal antibody directed against FGF23, in a patient with severe iron-induced osteomalacia [13].
## 4. Conclusion
Hypophosphatemia is a little known but common adverse effect of certain preparations of intravenous iron. Especially in patients with antecedent risk factors, it must be anticipated and treated in order to avoid osseous and systemic complications. In addition, it may develop upon subsequent exposures to a particular intravenous formulation even if not encountered after the first exposure, as in our patient, and must therefore not be ruled out in these situations.
---
*Source: 1011401-2022-10-07.xml*
# Smart Communication and Security by Key Distribution in Multicast Environment
**Authors:** Manisha Yadav; Karan Singh; Ajay Shekhar Pandey; Adesh Kumar; Rajeev Kumar
**Journal:** Wireless Communications and Mobile Computing
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011407
---
## Abstract
Service providers are aiming to provide multicast applications, primarily in the areas of content delivery and secure wireless networking, due to the increased adoption of networked systems and the demand for secure wireless communication. Cryptography enables users to send information across insecure networks through data encryption and decryption with key management. This paper proposes a unique way of safeguarding network systems using cryptographic keys, as well as a fuzzy-based technique for improving security by reducing symmetric and asymmetric key overhead. To enable efficient communication, fuzzy-based rules with security triads and cryptographic key management methods are used. When key distribution is decentralized, security implementation becomes more difficult, and multiple types of attacks are possible. Fuzzy logic-based key management methods are used in addition to offering a novel technique for secure cryptography systems. The novelty of the work is that simulations are carried out to verify the approach over ad hoc on-demand distance vector (AODV) multicast wireless routing with 100 nodes, measuring network performance parameters such as delay, control overhead, throughput, and packet delivery ratio. The system supports up to a 128-bit key embedded with 128-bit plain data in cryptographic encryption and decryption.
---
## Body
## 1. Introduction
The safety and security of communication have become the most important considerations in wireless multicast because of the requirements of privacy and authentication [1]. Group communication is based on broadcast or multicast technologies, such as internet protocol multicast, which offer efficient transmission of group messages using encryption, signatures, authentication, and integrity protection, comparable to secure two-party communication (STPC) [2]. Cryptographic technologies are used to secure group communication. Multicast is a packet transmission method that delivers a data packet to a large group of receivers, each member receiving its own copy [3]. Over the last few years, a variety of technologies have emerged that take advantage of new possibilities in the form of new basic structures for key distribution and key creation in cryptography [4]. In group communication, multiple messages must be transmitted at the same time among multicast groups of senders and receivers at reduced bandwidth. Key distribution must ensure that channels are closed to unauthorized users and that the medium can be accessed only by fully authorized users [5]. In group communication, members may join or leave the group at any time [6]; unlike unicast communication, the session does not end with the departure of a single group member.
The distribution cycle involved in secure group communication is depicted in Figure 1. To ensure the security of group communication in multicast communication, access to group information should be restricted whenever a member leaves or joins the group; the members then require new keys. Multicasting is a unique communication technique that facilitates group communications and applications in which data is sent to a group of users at the same time while maintaining high security and using fewer network resources. As a result, most group-oriented applications, such as software delivery, multiuser video conferencing, and remote learning, are expected to become more practical in the near future [7].
Figure 1: Distribution cycle.
Secure communication and efficient key management motivate the requirements of a cryptographic key management system. Security requires a highly protected network system together with its information, data, and nodes, and it is the most important consideration in the development of a network system. Network security depends on the key distribution and the policy of the cryptosystem. Network security vulnerabilities emerge from poor development practices, new methods of attack, and unsecured node-to-node connections. Confidentiality is one of the most vital properties of the data and information transferred to a node, and it is an important property of a secure network built on the key distribution concept. The assessment of a secured network plays an important role in transferring, sharing, and accessing information at various nodes. Key distribution time is the most appropriate stage at which to estimate the security of the network, because this stage is the first step towards secure communication; this has a positive impact on overall security, cost, and effort. Cryptographic systems also need to account for how various components of a network interact with each other to secure and enhance the reliability of key distribution while information is passed [8].
An adaptive key management and privacy-preserving aggregation scheme with revocation of user data has been proposed for smart communication to prevent the appearance of untrusted nodes. In particular, a lightweight aggregation scheme enables aggregate authentication first, which protects personal user data from disclosure to an untrusted aggregator. Furthermore, an adaptive key management scheme with efficient revocation was proposed, in which users can update their encryption keys automatically when any user joins or leaves the system, with key expiration handled as part of the adaptive key management. The security analysis shows that forward and backward secrecy are maintained, and these are taken under consideration in the performance evaluation [9].
This research work proposes a fuzzy rule-based, secure, lightweight, and scalable multicast network system. Owing to the wireless and dynamic nature of multicast networks such as mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs), secure communication is highly important, and the security function should be capable of effectively managing any such network. The main factors are confidentiality, integrity, and availability [10]. Authentication, in terms of security, is the assurance that a peer entity in an association is the one it claims to be, or that data can be traced to its origin. Cryptographic protection of network services is either symmetric or asymmetric. The first is based on a shared secret key between two nodes that allows for safe communication, and the second is based on two separate sorts of keys, one private and the other public. The public key is used for encryption, and it is made public; decryption is done with the private key. Asymmetric cryptography requires more resources than symmetric cryptography [11]. From any aspect, security is built on three pillars, as indicated in Figure 2.
Figure 2: Factors for network security.
The essential underpinnings of information security are confidentiality, integrity, and availability (CIA). Each security control and vulnerability can be analyzed in terms of one or more of these basic ideas, and any security measure must appropriately address the entire CIA triad to be called comprehensive and complete.
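To make the join/leave rekeying requirement described in the introduction concrete, the following minimal sketch (our own illustration, with hypothetical class and method names; it is not the scheme evaluated later in this paper) regenerates a 128-bit traffic encryption key on every membership change:

```python
import secrets

class GroupKeyManager:
    """Minimal group-key manager: a fresh 128-bit traffic encryption key
    is issued on every join/leave, so departed members cannot read future
    traffic (forward secrecy) and newcomers cannot read past traffic
    (backward secrecy)."""

    def __init__(self):
        self.members = set()
        self.group_key = secrets.token_bytes(16)  # 128-bit key, as in this paper

    def _rekey(self):
        # In a real system the new key would be distributed over secure
        # per-member channels; here we only model its generation.
        self.group_key = secrets.token_bytes(16)

    def join(self, member_id: str) -> bytes:
        self.members.add(member_id)
        self._rekey()  # backward secrecy against the newcomer
        return self.group_key

    def leave(self, member_id: str) -> bytes:
        self.members.discard(member_id)
        self._rekey()  # forward secrecy against the departed member
        return self.group_key

# Usage: every membership change yields a different key.
mgr = GroupKeyManager()
k1 = mgr.join("node-1")
k2 = mgr.join("node-2")
k3 = mgr.leave("node-1")
assert len({k1, k2, k3}) == 3
```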
## 2. Related Work
A large variety of multicast applications is available in wireless networks, but security is the main concern [12], and the lack of security safeguards in multicast settings is a hindrance. Information can be shared by enabling access-management cryptography in multicast applications. To encrypt group information, a shared key, also known as a traffic encryption key or a group key, is employed [13]. These keys are accessible only to authorized users; thus, only they can take part in the group. As a result, key management is an important part of secured wireless multicast. When ordinary text is encrypted, the key changes it to ciphertext, and vice versa when decrypted; algorithms employ keys in a variety of ways. In practice, public-key cryptographic algorithms are widely used in traditional cryptosystems because of the difficulties of key distribution. Secret keys must be distributed over a medium in such a manner that no information, whether private or public, is affected, which is very important in both respects, as depicted in Figure 3.
Figure 3: Cryptology segment.
A cryptographer attempts to create ever more sophisticated means of transmitting sensitive information, while hackers and code breakers work furiously to crack the system. System security is possible with the help of cryptography, and this contest between protecting information and breaking it through decryption [14] is an endless, extended process.
### 2.1. Network and Keys Distributions
In wireless multicast communication, the three most significant aspects of key management are key generation [15], key sharing, and key storage. Cryptography often employs a mixed perspective to establish effective controls, particularly in hierarchical architectures; nevertheless, symmetric-key and public-key methods span a wide spectrum of cryptography, and symmetric key-based improvements remain attractive. Network topologies can be split into two types based on the available key management mechanisms: hierarchical and nonhierarchical architectures of nodes [16]. A hierarchical network usually results from a symmetric key-based management protocol, which is a reasonable choice for the nodes in the network. Randomized approaches to key management have been recommended for hierarchical networks, with no guarantee of successful key installation and with the risk of node compromise; such protocols also require large storage space for keys at each node [17]. Key distribution (KD) effectively supports a hierarchical network: KD makes it easier to generate and manage less flexible groups, since unlimited keys can be used in schemes that are based on the encryption methods underlying their key installation system. One such plan is based on the Merkle identity [18]. Through this scheme, information can be communicated directly, and any subgroup reduces the required communication capacity, making the multicast communication scheme more efficient and better structured [19].
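As one concrete illustration of hierarchical key distribution (a generic logical-key-hierarchy sketch under our own simplifying assumptions, not the specific scheme of [16–19]), a balanced binary tree of key-encryption keys lets a leave event be handled by replacing only the keys on one root-to-leaf path, i.e., O(log n) keys:

```python
import math
import secrets

def build_key_tree(n_members: int) -> dict:
    """Assign a random 128-bit key to every node of a complete binary
    tree with at least n_members leaves; node 1 is the root (group key)."""
    depth = math.ceil(math.log2(max(n_members, 2)))
    n_nodes = 2 ** (depth + 1) - 1
    return {node: secrets.token_bytes(16) for node in range(1, n_nodes + 1)}

def path_to_root(leaf: int) -> list:
    """Indices of the keys a single member holds: its leaf key plus
    every ancestor key up to the root."""
    path = [leaf]
    while leaf > 1:
        leaf //= 2
        path.append(leaf)
    return path

def rekey_on_leave(tree: dict, leaf: int) -> list:
    """When the member at `leaf` departs, only the keys it knew (its
    root-to-leaf path) must be replaced: O(log n) of them."""
    stale = path_to_root(leaf)
    for node in stale:
        tree[node] = secrets.token_bytes(16)
    return stale

# Usage: with 8 members the tree holds 15 keys, but a leave touches only 4.
tree = build_key_tree(8)
print(rekey_on_leave(tree, 8))  # leaves occupy indices 8..15 -> [8, 4, 2, 1]
```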
### 2.2. Public Key Cryptography
In the 1970s, two main types of public key schemes were discovered: the Diffie-Hellman key agreement in 1976 and the digital signature scheme of Rivest, Shamir, and Adleman (RSA) in 1977. The discrete logarithm problem underpins the D-H agreement scheme, while the RSA encryption algorithm is based on the integer factorization problem: a number "n" is the product of two primes, "p" and "q", which must be discovered. The hardness of the factorization problem is crucial for security. After ElGamal matched RSA in 1984 with public key encryption and signature schemes based on the discrete logarithm, elliptic curve cryptography (ECC) came into existence [20].
#### 2.2.1. Public Key Encryption
A public key encryption (PKE) scheme is specified by the following polynomial-time algorithms.
#### 2.2.2. Key-Gen (1λ)
Key-Gen is a randomized key-generation algorithm that takes a security parameter 1λ as input and outputs the secret key (SK) and public key (PK).
#### 2.2.3. Encryption (m, PK)
Encryption is a randomized algorithm that takes the message m and the public key PK as input and outputs a ciphertext "C."
#### 2.2.4. Decryption (C, SK)
Decryption is a deterministic polynomial-time algorithm that takes a ciphertext C and the secret key SK of the receiver as input and outputs the message m.
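The three algorithms above map directly onto a standard library API. The sketch below (ours, for illustration; it assumes the third-party `cryptography` package rather than anything used in this paper) instantiates (Key-Gen, Encryption, Decryption) with RSA-OAEP:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def key_gen():
    """Key-Gen(1^lambda): randomized; returns (SK, PK)."""
    sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    return sk, sk.public_key()

def encrypt(m: bytes, pk):
    """Encryption(m, PK): randomized; returns ciphertext C."""
    return pk.encrypt(m, OAEP)

def decrypt(c: bytes, sk) -> bytes:
    """Decryption(C, SK): deterministic; recovers the message m."""
    return sk.decrypt(c, OAEP)

# Usage: correctness means Decryption(Encryption(m, PK), SK) == m.
sk, pk = key_gen()
c = encrypt(b"multicast session key", pk)
assert decrypt(c, sk) == b"multicast session key"
```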
### 2.3. Identity-Based Cryptography
An identity-based encryption (IBE) system allows a public identifier of a user, such as an email address, to serve as the user's public key [21].
#### 2.3.1. Encryption Scheme Model
The first three algorithms may be randomized, but the final one is deterministic. An identity-based encryption (IBE) scheme includes four algorithms (a structural sketch of this interface appears at the end of this subsection):
(1) Setup (1^λ) is run by a private key generator (PKG); it is a randomized polynomial-time algorithm whose input is the security parameter 1^λ and whose outputs are the master secret key MSK and the public parameters Params.
(2) Key-Gen (ID, MSK, Params) is run by the PKG; it is a randomized polynomial-time algorithm that takes an identity ID, the master secret key MSK, and the public parameters Params as input and outputs the secret key SK_ID for that identity.
(3) Encryption (m, ID, Params) is a randomized polynomial-time algorithm that takes the message m, the receiver's identity (which, together with the PKG's public parameters, acts as the public key), and Params as input and outputs a ciphertext C.
(4) Decryption (C, SK_ID, Params) is a deterministic polynomial-time algorithm that takes a ciphertext C, the identity's secret key SK_ID, and the public parameters as input and outputs the message m [22].

The route efficiency of conveying data to many nodes in a network has been studied using multicast communication based on an artificial neural network (ANN). Multicasting in MANETs raises concerns of security and quality of service (QoS), making it a promising field for ANN implementation. The relationship between the past, current, and future route discoveries of the distinct nodes in the mobility range can be discovered using an ANN. The authors proposed an innovative and practical use of ANNs for secure multicast communication with supporting nodes: the network takes variable inputs, and the optimum number of hidden-layer neurons is determined by the multicasting and supporting-node routing function. The proposed model was based on a feedforward neural network (FFN) trained with backpropagation [23].

Fuzzy-based policies are also used to enhance the performance of the on-demand multicast routing protocol. The main objective is to establish a small, high-quality, and efficient forwarding group; as a result, the packet delivery rate increases by up to 40% and the average end-to-end delay falls by about 35% [24]. Several mechanisms exist for detecting distributed denial of service (DDoS) attacks over a multicast network. Such an attack disrupts the ongoing communication in the multicast network, causes the wireless nodes to exhaust their energy much earlier than expected, and also produces collisions and interference. A fuzzy-based system was designed to increase the reliability of attack detection [25]. Wireless sensor networks are likewise designed to support various real-time applications. To provide energy-efficient transmission, a congestion control mechanism operating at an optimized rate has been proposed. The rate-based congestion control algorithm builds on cluster routing to minimize energy consumption, and the rate control process reduces the end-to-end delay, improving network lifetime over long simulation periods [26].

To secure downlink multicast communication in edge-envisioned advanced metering infrastructure networks [27], a lightweight elliptic curve signcryption technique based on ciphertext-policy attribute-based encryption was proposed. Classic secret-key methods maintain security by extending key lengths [28], but this raises the computational cost as technology and cryptographic processing advance; consequently, designing better encryption algorithms is a sound way to secure multicast communication. In the presence of multiple eavesdroppers, an intelligent reflecting surface (IRS) [29] can aid a secure wireless powered communication network (WPCN), in which the transmitter harvests energy from a power station (PS) and uses it to multicast information to many IoT devices. Image security can also be enhanced using artificial neural networks: convolutional neural networks (CNNs) are useful for extracting features and information from hyperspectral images [30], and the deep spatial-spectral global reasoning network [31] takes both local and global information into account for hyperspectral image denoising.
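The structural sketch of the four IBE algorithms promised above is given here. Only the interface is shown; the cryptographic internals (e.g., a pairing-based construction such as Boneh-Franklin) are deliberately elided, so the bodies are placeholders rather than a working scheme.

```python
# Interface-only sketch of Setup / Key-Gen / Encryption / Decryption.
class Params: ...        # public parameters published by the PKG
class MasterSecret: ...  # MSK, held only by the PKG
class SecretKey: ...     # SK_ID, the per-identity secret key

class PKG:
    """Private key generator: runs Setup and Key-Gen."""

    def setup(self, security_parameter: int) -> Params:
        """Randomized; outputs Params and stores the master secret key MSK."""
        self.msk = MasterSecret()
        return Params()

    def key_gen(self, identity: str) -> SecretKey:
        """Randomized; derives SK_ID from (ID, MSK, Params)."""
        return SecretKey()

def encrypt(message: bytes, identity: str, params: Params) -> bytes:
    """Randomized; the identity string (e.g., an email) acts as the public key."""
    raise NotImplementedError("placeholder for the underlying IBE scheme")

def decrypt(ciphertext: bytes, sk_id: SecretKey, params: Params) -> bytes:
    """Deterministic; recovers m from C using the identity's secret key."""
    raise NotImplementedError("placeholder for the underlying IBE scheme")
```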
Trust-based key management [32] is used to accomplish secure and efficient wireless multicast communication and can be applied to secure the destination-sequenced distance vector (DSDV), optimized link state routing (OLSR) [33], and ad hoc on-demand distance vector (AODV) [34] routing protocols. For device-to-device communication in wireless systems, delay, memory, and hardware resource utilization [35, 36] are major concerns. This has been observed across different communication topologies [37], such as Zigbee over IEEE 802.15.4 [38, 39], wireless sensor networks, network-on-chip communication, wireless monitoring of plant information, and security. Users require wireless connectivity regardless of their geographic location, so mobile ad hoc networks (MANETs) [40] are more popular than ever, yet they are also increasingly vulnerable to security threats. MANETs must use a secure manner of communication and transmission, which is a difficult and time-consuming requirement, and researchers have worked specifically on the security challenges in MANETs to enable safe transmission and communication. Because traffic congestion is widespread in multimedia networks, a fuzzy adaptive prediction solution for data transmission congestion [41] has been developed to increase network stability. In this paper, a fuzzification-defuzzification approach is proposed to support multicast communication and cryptography under different parameters of a wireless communication system.
## 3. Proposed Work
In this paper, we propose an implementation of key distribution based on a fuzzy set of rules to generate random keys. The methods are based on logical AND, logical OR, and combined AND-OR rules. ANNs are used in cryptography to generate strong ciphers with little overhead. The main aim of this research is to build an encryption system based on fuzzy logic that secures confidentiality, availability, and integrity (the CIA attributes) in the key management of wireless multicast communication; the principles of symmetric cryptography with fuzzy-based rules are applied to encrypt information. As observed in previous work, the fuzzy IBE scheme is sensitive and offers security only against selective-ID attacks in very few models, although it remains secure as long as the identity is hashed before use. Currently, no fuzzy IBE scheme is available that is indistinguishable under an adaptive chosen-ciphertext attack (CCA2). Therefore, a new fuzzy IBE scheme is suggested that achieves CCA2 security with public key parameters whose size does not depend on the number of attributes associated with an identity.

The research focuses on secured wireless communication using fuzzy logic-based, high-speed symmetric key cryptographic key management methods that address the main issues of computational safety, power reduction, and low memory use in multicast communication while also covering the CIA requirements. Whereas conventional cryptography works on digital values, i.e., 0 or 1, the proposed methods are based on fuzzy values of the key distribution parameters, such as initial, mid, low, and high, which offer more accurate constraints for the security pillars. Conventional public key cryptography used in wireless multicast communication provides an equivalent level of security with higher overheads, whereas the fuzzy-based approach reduces computation and storage overheads. In comparison with previous fuzzy IBE schemes, our scheme has short parameters and a tight reduction simultaneously. The method offers shorter key computation time, reduced power consumption, and limited memory usage without compromising the CIA attributes.
### 3.1. Fuzzy Implementation
Fuzzy logic is based on approximate, human-like reasoning. To accomplish security, the algorithm utilizes linguistic variables and features a flexible key structure of up to 128 bits; the client can define a properly formatted key that is then fixed as the secret key. The possible values are not only the digital "0" and "1" but also intermediate ones [18]. Going beyond the two Boolean values, fuzzy logic may accept judgments such as "yes," "it is conceivable," "of course," "we cannot say," "not possible," and "definitely not," which helps in dealing with uncertainty in various areas. The procedure is summarized in Algorithm 1.

Algorithm 1: Fuzzy implementation algorithm.
Step 1 (variable declaration): select the most prominent variables that affect key distribution (parties/principals, shared secret keys, nonces, timestamps, etc.).
Step 2 (fuzzification): the central step of the method, which splits into fuzzification and defuzzification; crisp input values are mapped to membership degrees, and the fuzzy output is mapped back to a crisp value.
Step 3 (rule implementation): rules are prepared from the logical AND of each variable involved and its impact on the final predicted value, and likewise from the logical OR of each variable and its impact.
Step 4 (conversion to a graph): the resulting graph shows the rise and fall of the final output on a 3D surface.
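Steps 1 and 2 can be sketched with the third-party scikit-fuzzy package, a rough Python counterpart of the MATLAB Fuzzy Logic Toolbox used in Section 4. The 0-10 universes and triangle shapes below are illustrative assumptions of ours, not values fixed by the paper; Step 3 is sketched after Table 2.

```python
# Steps 1-2 of Algorithm 1: declare variables and attach membership
# functions for the linguistic values used in the rule tables.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

u = np.arange(0, 11, 1)  # assumed universe of discourse

# Step 1: the variables that drive key distribution.
parties = ctrl.Antecedent(u, 'parties')
keys = ctrl.Antecedent(u, 'shared_secret_keys')
nonces = ctrl.Antecedent(u, 'nonces')
stamps = ctrl.Antecedent(u, 'timestamps')
policy = ctrl.Consequent(u, 'key_distribution_policy')

# Step 2 (fuzzification): one variable with explicit triangles, the
# rest auto-generated evenly over the universe.
parties['initial'] = fuzz.trimf(u, [0, 0, 5])
parties['normal'] = fuzz.trimf(u, [0, 5, 10])
parties['high'] = fuzz.trimf(u, [5, 10, 10])
keys.automf(names=['min_shared', 'mid_shared', 'high_shared'])
nonces.automf(names=['nonces_min', 'nonces_mid', 'nonces_high'])
stamps.automf(names=['initial_ts', 'mid_ts', 'final_ts'])
policy.automf(names=['less_key', 'mid_key', 'high_key'])
```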
#### 3.1.1. Parameters for Key Distribution in Symmetric Key Perspective
In the symmetric random set approach based on fuzzy logic, all valid keys form a key pool, and each party randomly sets up its own set of keys from the pool. The type of key pool housed at each member is selected appropriately, so two members are very likely to share a key. Our approach takes advantage of this feature: when authentication requires a shared pool, keys are provided for symmetric key distribution, and RSA separates symmetric key distribution by sending a key request, since other parties may also provide the key. The following notation and assumptions are used for key distribution with symmetric key protocols.
(1) Parties/Principals (A, B, S, and E). The two parties who wish to agree on a secret are A and B, S is a trusted third party, and E is an attacker.
(2) Shared secret keys (Kab, Kas, and Kbs). Kab denotes a secret key known only to A and B.
(3) Nonces (M, N, Na, and Nb). Nonces are random numbers; Na denotes a nonce originally produced by principal A.
(4) Timestamps (Ta, Tb, and Ts). Ta is the timestamp produced by A; timestamps are used for synchronization.
#### 3.1.2. Logical AND-OR-Based Rule
The fuzzy rule set for the various key distribution parameters is decided based on AND-OR logic. Each parameter may take any value in the range initial, min, mid, and high. A key distribution policy is observed by setting various combinations of parameter values.
#### 3.1.3. AND Rules-Based Algorithm
(i) If (Parties/Principal is Normal) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(ii) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key) (1)
(iii) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key) (1)
(iv) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Min_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key) (1)
(v) If (Parties/Principal is Initial) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key) (1)
(vi) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(vii) If (Parties/Principal is High) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(viii) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(ix) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is High_key) (1)
(x) If (Parties/Principal is High) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(xi) If (Parties/Principal is Normal) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)

The trailing "(1)" on each rule denotes its weight. This set of AND rules expresses the logical conjunction of all the possible effects on the key distribution policy, as listed in Table 1.

Table 1
Key distribution based on logical AND.
| No. | Parties/Principal | Shared_Secret_Keys | Nonces | Timestamps | Key_distribution_policy |
| --- | --- | --- | --- | --- | --- |
| 1 | Initial | Min_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 2 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 3 | High | High_Shared | Nonces_High | Final_Time_stamp | High_key |
| 4 | Initial | Mid_Shared | Nonces_High | Final_Time_stamp | High_key |
| 5 | Initial | Min_Shared | Nonces_High | Final_Time_stamp | High_key |
| 6 | Initial | High_Shared | Nonces_High | Final_Time_stamp | High_key |
| 7 | Initial | Mid_Shared | Nonces_High | Final_Time_stamp | Mid_key |
| 8 | High | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 9 | High | High_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 10 | High | High_Shared | Nonces_High | Mid_Time_stamp | High_key |
| 11 | High | Mid_Shared | Nonces_Mid | Final_Time_stamp | Mid_key |
| 12 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |

Similarly, a set of OR rules is prepared that shows the logical OR-based combination of all the possible effects on the key distribution policy, as given below.
#### 3.1.4. OR Rules-Based Algorithm
(i) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Min_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is High_key) (1)
(ii) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(iii) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key) (1)
(iv) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(v) If (Parties/Principal is Initial) or (Shared_Secret_Keys is High_Shared) or (Nonces is Nonces_High) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is High_key) (1)
(vi) If (Parties/Principal is Normal) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is High_key) (1)
(vii) If (Parties/Principal is High) or (Shared_Secret_Keys is Min_Shared) or (Nonces is Nonces_High) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Mid_key) (1)
(viii) If (Parties/Principal is High) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key) (1)
(ix) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key) (1)
(x) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key) (1)
(xi) If (Parties/Principal is Initial) or (Shared_Secret_Keys is High_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key) (1)
(xii) If (Parties/Principal is Normal) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key) (1)

Fuzzy logic measures the degree of membership, i.e., the certainty or uncertainty, of the chosen elements of a set. The key distribution rules are decided on this principle, as defined by fuzzy logic for similar cases and listed in Table 2.

Table 2
Key distribution based on logical OR.
| No. | Parties/Principal | Shared_Secret_Keys | Nonces | Timestamps | Key_distribution_policy |
| --- | --- | --- | --- | --- | --- |
| 1 | Initial | Min_Shared | Nonces_Min | Initial_Time_stamp | High_key |
| 2 | Initial | Mid_Shared | Nonces_Min | Initial_Time_stamp | Mid_key |
| 3 | Initial | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 4 | Initial | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 5 | Initial | High_Shared | Nonces_High | Initial_Time_stamp | High_key |
| 6 | Initial | Mid_Shared | Nonces_Mid | Mid_Time_stamp | High_key |
| 7 | High | Min_Shared | Nonces_High | Initial_Time_stamp | Mid_key |
| 8 | High | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 9 | Initial | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 10 | Initial | Mid_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 11 | Initial | High_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 12 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |

The OR- and AND-based fuzzy algorithms are found to be robust, in the sense that they are not very sensitive to a changing environment or to misplaced or forgotten rules. Because fuzzy computational logic is generally much simpler than exact-system logic, it also uses less processing power.
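To make the rule machinery concrete, the sketch below (our own, again with scikit-fuzzy) encodes three representative rules from Tables 1 and 2 and runs one inference pass; the universes, membership shapes, and crisp test inputs are illustrative assumptions, not values fixed by the paper.

```python
# Step 3 of Algorithm 1: build AND/OR rules and run fuzzy inference.
import numpy as np
from skfuzzy import control as ctrl

u = np.arange(0, 11, 1)
parties = ctrl.Antecedent(u, 'parties')
keys = ctrl.Antecedent(u, 'shared_secret_keys')
nonces = ctrl.Antecedent(u, 'nonces')
stamps = ctrl.Antecedent(u, 'timestamps')
policy = ctrl.Consequent(u, 'key_distribution_policy')

parties.automf(names=['initial', 'normal', 'high'])
keys.automf(names=['min_shared', 'mid_shared', 'high_shared'])
nonces.automf(names=['nonces_min', 'nonces_mid', 'nonces_high'])
stamps.automf(names=['initial_ts', 'mid_ts', 'final_ts'])
policy.automf(names=['less_key', 'mid_key', 'high_key'])

rules = [
    # Table 1, row 2 (logical AND of all four antecedents).
    ctrl.Rule(parties['normal'] & keys['mid_shared'] &
              nonces['nonces_mid'] & stamps['mid_ts'], policy['mid_key']),
    # Table 1, row 3 (logical AND).
    ctrl.Rule(parties['high'] & keys['high_shared'] &
              nonces['nonces_high'] & stamps['final_ts'], policy['high_key']),
    # Table 2, row 1 (logical OR).
    ctrl.Rule(parties['initial'] | keys['min_shared'] |
              nonces['nonces_min'] | stamps['initial_ts'], policy['high_key']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
for name in ('parties', 'shared_secret_keys', 'nonces', 'timestamps'):
    sim.input[name] = 5.0  # mid-range crisp inputs
sim.compute()
print(sim.output['key_distribution_policy'])  # defuzzified policy level
```

The defuzzified output plays the role of the key distribution policy level (less/mid/high) that Tables 1 and 2 assign symbolically.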
## 4. Results and Discussions
MATLAB 2018 is used to model and simulate the fuzzy logic control system. Its Fuzzy Logic Toolbox provides a fuzzy controller block in Simulink, together with a fuzzy inference system (FIS) editor, a membership function editor, a rule editor, a rule viewer, and a surface viewer. Simulink is a block-based environment that supports modeling, simulation, and analysis.
### 4.1. Implementation of Fuzzification Rules-Based Models and Results
Fuzzy inference is the process of formulating the mapping from a given input for key distribution to an output using fuzzy logic-based rules. The mapping (key distribution) provides a basic platform for decision-making and pattern recognition. The process of fuzzy inference involves all of the pieces described above: membership functions, logical operations, and if-then rules [26]. After implementing the encryption algorithm, the results are presented at various levels. Figure 4 shows the AND rule-based membership function: four inputs are inserted as parties with the shared-keys command for the key distribution function, and the output is shown as a 3D surface graph. Logical key distribution applies the implication and aggregation of variables. Figure 5 highlights the second structure for key distribution, in which three axes are used: the x-axis, y-axis, and z-axis represent the parties, the shared-key loss of integrity, and the key distribution policy output, respectively. Figure 6 shows the 3D structure for key distribution based on OR rules: the x-axis, y-axis, and z-axis represent the shared secret keys, the parties/principals' loss of integrity, and the constant output of the key distribution policy, respectively.

Figure 4
AND rule member function.Figure 5
Surface evaluation of logical AND membership.Figure 6
Surface evaluation of logical OR rule membership.
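The surface plots in Figures 5 and 6 can be read as sweeping two inputs over a grid while holding the remaining inputs fixed and defuzzifying the aggregated rule outputs at each grid point. The sketch below reproduces that procedure with a weighted average of output singletons; the membership ranges, the three representative rules, and the policy levels are assumptions for illustration only, not the exact MATLAB configuration.

```python
# Sketch of generating a key-distribution-policy surface (as in Figures 5-6):
# sweep two inputs over a grid, fire a few AND rules, and defuzzify with a
# weighted average of singleton outputs. All parameters are illustrative.

def tri(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

LOW = (0.0, 0.0, 0.5)    # "Initial"/"Min" set
MID = (0.25, 0.5, 0.75)  # "Mid" set
HIGH = (0.5, 1.0, 1.0)   # "High" set
POLICY = {"Less_Key": 0.2, "Mid_key": 0.5, "High_key": 0.9}  # output singletons

def policy(parties, shared_keys):
    # Three representative AND rules over the two swept inputs.
    rules = [
        (min(tri(parties, *LOW),  tri(shared_keys, *LOW)),  "High_key"),
        (min(tri(parties, *MID),  tri(shared_keys, *MID)),  "Mid_key"),
        (min(tri(parties, *HIGH), tri(shared_keys, *HIGH)), "Less_Key"),
    ]
    num = sum(w * POLICY[label] for w, label in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else POLICY["Mid_key"]  # neutral fallback

steps = [i / 10 for i in range(11)]
surface = [[policy(x, y) for y in steps] for x in steps]  # z-values of the plot
print(f"policy(0.1, 0.1) = {surface[1][1]:.2f}")
```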
### 4.2. Parameters Used for Fuzzy-Based Model for Key Compromise Prediction
The fuzzy model for the key compromise prediction technique is based on the following policy [26].
#### 4.2.1. Loss of Confidentiality
Access control and encryption of member data follow a fixed format designed to prevent damage to its structure. For example, users must first enroll, and can then gain access based on their proven identity. Authorized users may only be granted access to data in the prescribed format; users who are not authorized are denied access.
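As a minimal illustration of this policy, the sketch below grants access to data only to enrolled users whose identity has been proven; the user store and status values are hypothetical placeholders.

```python
# Hypothetical sketch of the enroll-then-authenticate access policy above.
ENROLLED = {"alice": "proven", "bob": "unproven"}  # placeholder user store

def access(user: str) -> str:
    # Only enrolled users with a proven identity are granted access.
    if ENROLLED.get(user) == "proven":
        return "access granted to data in the prescribed format"
    return "access denied"

print(access("alice"))  # granted
print(access("eve"))    # denied: not enrolled
```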
#### 4.2.2. Loss of Integrity
The general way of detecting a loss of integrity is hashing. In the prescribed format, a digest is computed with a hashing algorithm over a given file or data string and compared against the expected value.
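For example, an integrity check of this kind can be sketched with Python's standard hashlib module; SHA-256 is used here as a representative hashing algorithm, not necessarily the one used in the model.

```python
# Sketch of the integrity check described above: compare the digest of the
# received data string against the expected digest of the original.
import hashlib

data = b"India123"
expected = hashlib.sha256(data).hexdigest()  # digest of the original data

received = b"India123"
ok = hashlib.sha256(received).hexdigest() == expected
print("integrity preserved" if ok else "loss of integrity detected")
```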
#### 4.2.3. Loss of Authentication
Authentication is an important security aspect. Both public key and symmetric key cryptography can provide this service. Symmetric key cryptography with a message authentication code (MAC) only provides evidence that one of the parties associated with the shared key produced the message.
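This limitation can be demonstrated with Python's standard hmac module: either holder of the shared key can compute a valid tag, so a MAC authenticates the message without providing nonrepudiation. The key and message values are illustrative.

```python
# Sketch of symmetric-key message authentication with a MAC. Either holder of
# the shared key can produce a valid tag, so the MAC proves only that one of
# the parties associated with the shared key sent the message.
import hashlib
import hmac

shared_key = b"Manisha@"  # illustrative shared secret between A and B
message = b"India123"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The receiver recomputes the tag with the same key and compares safely.
valid = hmac.compare_digest(
    tag, hmac.new(shared_key, message, hashlib.sha256).digest()
)
print("authenticated" if valid else "authentication failed")
```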
#### 4.2.4. Loss of Nonrepudiation
Nonrepudiation is the assurance that a party cannot later deny the validity of its actions, such as having sent a message.
### 4.3. Surface Generation
Figure 7 shows the 3D structure of surface_1 for key distribution based on the AND-OR model. The x-axis represents the loss of confidentiality, the y-axis represents the loss of integrity, and the z-axis represents the constant key-compromise output. In this segment, two fragments are evaluated: the integrity and the confidentiality of the key loss distribution. It gives the key distribution based on the AND-OR model for the integrity and confidentiality analysis.

Figure 7
Surface_1 for integrity and confidentiality analysis.

In the same way, Figure 8 shows the surface_2 plot of the loss-of-security-key charts from left to right. The distribution key loss is counted with the security parameters from right to left, from the end of the distribution to its beginning. Typically, the distribution ends with two events: confidentiality and authentication. Figure 9 depicts the surface_3 plot, which shows the loss-of-security-key charts from left to right. The distribution key loss is counted with the security parameters from right to left, from the end of the distribution to its beginning. Typically, the distribution ends with two events: confidentiality and nonrepudiation. Figure 10 shows the surface_4 structure for key distribution, in which the x-axis is used for the loss of authentication, the y-axis for the loss of nonrepudiation, and the z-axis as a constant for the compromised key. In this segment, two fragments are considered for evaluation: the authentication and the nonrepudiation of the key loss distribution.

Figure 8
Surface_2 for confidentiality and authentication analysis.Figure 9
Surface_3 for confidentiality and nonrepudiation analysis.Figure 10
Surface_4 for authentication and nonrepudiation analysis.

The data communication is verified among the different nodes under AODV routing, which supports multicast routing in wireless networking for effective cryptographic communication with a secured key. The simulation verifies the encryption and decryption of the plain text with variable key lengths from 8 bits to 128 bits. Test case 1 and test case 2 summarize the test inputs for the input plain text, the ciphertext, and the decrypted text. The communication system supports 100 nodes (M0 to M99). The source nodes take the plain text and the encryption key as inputs and produce the ciphertext as output. At the decryption end, the destination nodes take the ciphertext and the decryption key as inputs and produce the decrypted text as output.
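A minimal sketch of this source-to-destination round trip is shown below. Since the cipher internals are not restated here, a repeating-key XOR stands in purely as a placeholder for the encryption and decryption blocks; it is not the scheme's actual cipher, and the node labels simply follow the M0-M99 convention.

```python
# Sketch of the source/destination round trip described above. A repeating-key
# XOR is a placeholder for the unspecified encryption/decryption blocks.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same routine encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes.fromhex("4D616E6973686140")    # "Manisha@", the 64-bit key
plain = bytes.fromhex("496E646961313233")  # "India123", the 64-bit plain text

cipher = xor_cipher(plain, key)            # computed at source node M9
recovered = xor_cipher(cipher, key)        # computed at destination node M99
assert recovered == plain
print("M9 -> M99 round trip:", recovered.decode("ascii"))
```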
#### 4.3.1. Test Case 1 (64-Bit)
The data communication is verified from the source node M9 to the destination node M99:

- Text_in (64 bits) = "0100100101101110011001000110100101100001001100010011001000110011" (binary) = 496E646961313233 (hexadecimal) = "India123" (ASCII)
- Encryption_Key_Gen (64 bits) = "0100110101100001011011100110100101110011011010000110000101000000" (binary) = 4D616E6973686140 (hexadecimal) = "Manisha@" (ASCII)
- Cipher_text (64 bits) = "0000010000001111000010100000000000001000010110010101001101110011" (binary) = 040F0A0008595373 (hexadecimal) = "□□□□□YSs" (ASCII)
- Decryption_Key_Gen (64 bits) = "0100110101100001011011100110100101110011011010000110000101000000" (binary) = 4D616E6973686140 (hexadecimal) = "Manisha@" (ASCII)
- Decrypted_text = "0100100101101110011001000110100101100001001100010011001000110011" (binary) = 496E646961313233 (hexadecimal) = "India123" (ASCII)
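The encodings above can be cross-checked mechanically; the short snippet below confirms that the 64-bit binary string, the hexadecimal value, and the ASCII rendering of the plain text all describe the same eight bytes.

```python
# Quick check of the Test Case 1 encodings: binary, hexadecimal, and ASCII
# are three views of the same 64-bit plain text.

text_bits = ("01001001" "01101110" "01100100" "01101001"
             "01100001" "00110001" "00110010" "00110011")

value = int(text_bits, 2)
as_hex = f"{value:016X}"
as_ascii = bytes.fromhex(as_hex).decode("ascii")

print(as_hex)    # 496E646961313233
print(as_ascii)  # India123
```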
#### 4.3.2. Test Case 2 (128-Bit)
The data communication is verified from the source node M6 to the destination node M90:

- Text_in (128 bits) = "01000101011011000110010101100011011101000111001001101111011011100110100101100011011100110100000000110001001100100011001100110100" (binary) = 456C656374726F6E6963734031323334 (hexadecimal) = "Electronics@1234" (ASCII)
- Encryption_Key_Gen (128 bits) = "01001101011000010110111001101001011100110110100001100001011110010110000101100100011000010111011001000000001100010011001000110011" (binary) = 4D616E69736861796164617640313233 (hexadecimal) = "Manishayadav@123" (ASCII)
- Cipher_text (128 bits) = "00001000000011010000101100001010000001110001101000001110000101110000100000000111000100100011011001110001000000110000000100000111" (binary) = 080D0B0A071A0E170807123671030107 (hexadecimal) = "□□□□□□□□□□□6q□□□" (ASCII)
- Decryption_Key_Gen (128 bits) = "01001101011000010110111001101001011100110110100001100001011110010110000101100100011000010111011001000000001100010011001000110011" (binary) = 4D616E69736861796164617640313233 (hexadecimal) = "Manishayadav@123" (ASCII)
- Decrypted_text = "01000101011011000110010101100011011101000111001001101111011011100110100101100011011100110100000000110001001100100011001100110100" (binary) = 456C656374726F6E6963734031323334 (hexadecimal) = "Electronics@1234" (ASCII)

The performance of the multicast system is evaluated using different performance indices, such as end-to-end delay, throughput, packet delivery ratio (PDR), and control overhead. Table 3 lists the values of these parameters, and Figure 11 presents the corresponding graph. The analysis shows that the delay increases with the number of nodes, while the packet delivery ratio remains good with optimal values of control overhead and throughput.

Table 3
Performance parameters for AODV.
| Nodes | Delay (sec) | Throughput (bps) | Control overhead | PDR |
| --- | --- | --- | --- | --- |
| 10 | 0.037 | 0.210 | 0.0057 | 0.980 |
| 20 | 0.085 | 0.195 | 0.0058 | 0.978 |
| 30 | 0.091 | 0.181 | 0.0061 | 0.996 |
| 40 | 0.142 | 0.172 | 0.0058 | 0.985 |
| 50 | 0.196 | 0.158 | 0.0048 | 0.983 |
| 60 | 0.289 | 0.152 | 0.0058 | 0.995 |
| 70 | 0.324 | 0.172 | 0.0053 | 0.991 |
| 80 | 0.349 | 0.191 | 0.0040 | 0.986 |
| 90 | 0.397 | 0.197 | 0.0035 | 0.975 |
| 100 | 0.410 | 0.225 | 0.0065 | 0.989 |

Figure 11
Multicast nodes variations and parameters.
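The stated trend can be verified directly from the tabulated values; the sketch below, using the rows of Table 3, confirms that the delay grows monotonically with the node count while the mean packet delivery ratio stays near 0.986.

```python
# Trend check over the rows of Table 3: delay grows with the node count
# while the packet delivery ratio stays high.

rows = [  # (nodes, delay_s, throughput_bps, control_overhead, pdr)
    (10, 0.037, 0.210, 0.0057, 0.980), (20, 0.085, 0.195, 0.0058, 0.978),
    (30, 0.091, 0.181, 0.0061, 0.996), (40, 0.142, 0.172, 0.0058, 0.985),
    (50, 0.196, 0.158, 0.0048, 0.983), (60, 0.289, 0.152, 0.0058, 0.995),
    (70, 0.324, 0.172, 0.0053, 0.991), (80, 0.349, 0.191, 0.0040, 0.986),
    (90, 0.397, 0.197, 0.0035, 0.975), (100, 0.410, 0.225, 0.0065, 0.989),
]

delays = [r[1] for r in rows]
pdrs = [r[4] for r in rows]

print("delay monotonically increasing:", delays == sorted(delays))  # True
print(f"mean PDR: {sum(pdrs) / len(pdrs):.3f}")                     # ~0.986
print(f"max delay at {rows[-1][0]} nodes: {rows[-1][1]} s")
```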
## 5. Conclusions
The research work provides a novel way of key distribution using a fuzzy-based cryptography model. To anticipate the key distribution in symmetric key cryptography, the AND logic is created first and rule sets are prepared; the OR logic is then created using the same parameters. We used a general data set of 20-25 values per variable to formulate the generic rules. 3D surface graphs have been created based on these models, and they are useful in demonstrating the challenges of the associated factors and their final effect on key distribution. Various security features are analyzed through the implementation of these factors. The cryptographic encryption and decryption of the data are verified successfully over AODV routing in wireless communication. The larger key size provides greater security: 64-bit and 128-bit key encryption and decryption are demonstrated with 64-bit and 128-bit plain text. The multicast system supports data interchange among 100 nodes with minimum overhead and delay and maximum throughput. In the future, we plan to integrate the system with field-programmable gate array (FPGA) hardware supporting a larger key size and more data, and to increase the number of nodes to evaluate the performance of the system for large scalable networks and efficient multicast communication.
---
*Source: 1011407-2022-03-19.xml*
## Abstract
The service providers are aiming to provide multicast applications, primarily in the area of content delivery and secure wireless networks, due to the increased adoption of network systems and demand for secured wireless networks communication. Cryptography enables the users to send information across insecure networks using data encryption and decryption with key management. The research paper proposes a unique way of safeguarding network systems using cryptographic keys, as well as a fuzzy-based technique for improving security by reducing symmetric and asymmetric key overhead. To enable efficient communication, fuzzy-based rules with security triads and cryptographic key management methods are used. When the key distribution is decentralized, security implementation becomes more difficult, and multiple types of attacks are possible. Fuzzy logic-based key management methods are used in addition to offering a novel technique for secure cryptography systems. The novelty of the work is that the simulation work is also carried out to verify the data in on-demand distance vector (AODV) multicast wireless routing that supports 100 nodes with network performance parameters such as delay, control overhead, throughput, and packet delivery ratio. The system supports up to 128-bit key embedded with 128-bit plain data in cryptographic encryption and decryption.
---
## Body
## 1. Introduction
The safety and security of communication have become now the most important aspect because of the requirement of privacy and authentication in wireless multicast communications [1]. Group communication is based on broadcast or multicast technologies, such as internet protocol multicast that offers efficient transmission of group messages using encryption, signatures, authentication, and integrity, comparable to secure two-party communication (STPC) [2]. Cryptographic technologies are used to secure in-group communication. Multicast is a packet transmission method that sends a data packet to a large number of people [3]. A duplicate copy of the package is sent to everyone. Over the last few years, varieties of technologies have emerged that take advantage of new possibilities in the form of a new basic structure for key distribution and key creation in cryptography [4]. In in-group communication, multiple messages transmission is required at the same time to transmit in multicast groups of senders and receivers at reduced bandwidth requirements. The key distribution must ensure to users that various channels are not allowed to unauthorized users and unauthorized access to use the medium. It may access only when the users are fully authorized in the term of security [5]. In in-group communication, there is always a possibility of users being together and separated through the network system anytime means any of them may join the group or may leave the group communication [6]. Nothing like unicast communication, this communication link ends with the disappearance of a group member.The distribution cycle involved in secure group communication is depicted in Figure1. To ensure the security of group communication in multicast communication, group communication information should be restricted whenever a member leaves or joins the group. The members will require new keys to do so. Multicasting is a very unique communication technique that facilitates group communications and applications, in which data is sent to a group of users at the same time while maintaining high security and using fewer network resources. As a result, most group-oriented applications, such as software delivery, multiple users video conferencing, and remote learning, are expected to become more practical shortly than previously announced network systems [7].Figure 1
Distribution cycle.Secure communication and efficient key management bring the requirements of a cryptographic key management system. A highly protected network system, information, data, and nodes are required for security. Security is the most important feature, which is required in the development of a network system. The network security depends on the key distribution and the policy for the cryptosystem. Network security vulnerabilities emerge from various poor improvement practices, the new method of attacks, and unsecured connections between node-to-node network-based systems. Confidentiality is one of the most vital factors of data and information that are transferred to the node. It is also an important factor of a secure network with the keys distribution concept. The estimation of a secured network has played an important role in transferring, sharing, and accessing information at various nodes. Somehow, key distribution time is also the most appropriate stage to estimate the security of the network because this stage is the first step towards secure communication. This has a positive impact on the overall security, cost, and efforts. Cryptographic systems are also needed to understand how various components of a network interact with each other to secure and enhance the reliability of key distribution during the passing of information [8].Some experts emphasized the adaptive key management and privacy-protection aggregation scheme with revocation of user data in the smart communication to prevent the appearance of nontrusted nodes. In particular, they examine a light collection scheme to enable aggregate certification first, which protects the nontrusted aggregator from revelations of personal user data. Furthermore, a proposal for an adaptive key management system with efficient repeal, in which users can update their encryption keys automatically if any user is not included or is out of the system. End key time is resolved to stand up to the user to adaptive key management. Security analysis shows that at the same time, forward and backward secrecy is taken under consideration for performance evaluation [9].The research work proposes a fuzzy rules-based secure and lightweight scalable multicast network system. Due to the wireless network and dynamic nature, secure communication in any multicast communication such as mobile ad hoc network (MANET) and vehicular ad hoc network (VANET) is highly important. The security function should have the capability to effectively manage any of the multicast networks. The main factors are credibility, integrity, and availability [10]. Authentication in t4erms of security is the capacity that depends on whether a peer unit in an association is the one that evidences its presence, or the data is used to determine its origin. Survival of network service depends on symmetric or asymmetric. The first is based on a shared secret key between two nodes that allows for safe communication, and the second is based on two separate sorts of keys, one private and the other public. The public key is used for encryption, and it is made public. Decryption is done mostly with the private key. Asymmetric cryptography necessitates arranging more resources than symmetric cryptography [11]. From any aspect, security is built on three pillars, as indicated in Figure 2.Figure 2
Factors for network security.The essential underpinnings of information security include parameters like confidentiality, integrity, and availability (CIA). Each security control and vulnerability can be analyzed using one or more of these basic ideas. Any security measure must appropriately address the entire CIA triad to be called comprehensive and complete.
## 2. Related Work
A large variety of multicast applications is available in wireless networks, but security is the main concern [12]. Moreover, the lack of security safeguards in multicast fields is a hindrance. The information can be shared by enabling access management cryptography in multicast applications. To encrypt group information, a shared key, also known as a traffic encryption key or a group key, is employed [13]. These keys are only accessible to authorized users; thus, they can only enter in groups. As a result, key management is an important part of secured wireless multicast. When ordinary text is encrypted, the key changes it to ciphertext, and vice versa when decrypted. Algorithms employ keys in a variety of ways. In practice, public cryptographic algorithms are widely used in traditional cryptocurrencies because of the difficulties in key distributions. The distribution of secret keys in a medium is required in such a manner that it does not affect any kind of information whether available, privately, or publicly, which is very important in both aspects as depicted in Figure 3.Figure 3
Cryptology segment.A cryptographer attempts to create more and more sophisticated means of transmitting sensitive information, but hackers and code breakers work furiously to crack the system. System security is possible with the help of cryptography. This process of obtaining any information and the process of influencing the system using decryption [14] are an endless extended process.
### 2.1. Network and Keys Distributions
In wireless multicast communication, the three most significant aspects of key management are key generation [15], key sharing, and key storage. Cryptography merely employs a mixed perspective to establish effective controls, particularly in hierarchical architecture. However, symmetric key and public key cause a huge exhibition center of cryptography. It is still attractive to give the impression of being a symmetric key-based improvement. Network topologies can be split into two types based on the motions for key management that are available: the hierarchical and nonhierarchical architecture of nodes [16]. A hierarchical network is usually the result of a symmetric key-based management protocol, which is a reasonable choice of nodes in the network. Experts recommend haphazard ways of key management in hierarchical networks, with no guarantees of successful key installation but the risk of a compromised node capacity. The network protocols require large storage space for key storage at each node [17]. Key distribution (KD) effectively supports a hierarchical network. KD makes it easier to generate and manage less flexible groups, since unlimited keys are used in the schemes which are based on the encryption methods and the basis of their key installation system. This plan is based on the Merkel Identity [18]. Through this scheme, one can implement information communication directly, and with it, any subgroup reduces the communication capacity, making the multicast communication scheme better and in the correct format [19].
### 2.2. Public Key Cryptography
In the 1970s, there were primarily two types of public key schemes discovered: the Diffie-Hellman agreement in 1975 and digital signature plans in 1977, Rivest, Shamir, and Edelman (RSA). A discrete logarithm problem underpins the D-H agreement scheme. The RSA encryption algorithm is based on a whole-number factorization problem, such as a number “n” is the result of two primes, “p” and “q” that are discovered. The hardness of the number factorization problem is crucial for security. After El-Gamal, public key encryption and signature schemes were matched by the integer factor in 1984 RSA, and then, elastic charging engine (ECE) came into existence [20].
#### 2.2.1. Public Key Encryption
A public key encryption (PKI) polynomial is a kind of algorithm.
#### 2.2.2. Key-Gen (1λ)
A private key generator (PKG) takes a random key generation algorithm that uses a security parameter1^λ as the input and the secret key (SK) and public key (PK) output.
#### 2.2.3. Encryption (m, PK)
Random encryption algorithm takes the message, PK input, and output ciphertext “C” from the public key.
#### 2.2.4. Decryption (C, SK)
The deterministic polynomial-time algorithm takes a ciphertextC and secret key SK of the receiver as input and outputs ciphertext “C.”
### 2.3. Identity-Based Cryptography
An identity-based encryption (IBE) system can identify public key users with the public key system such as an email address [21].
#### 2.3.1. Encryption Scheme Model
The first three-step can be random but not final, and it is deterministic. An identity-based encryption (IBE) scheme includes four steps of algorithms:(1)
Setup (1λ) is powered by a private key generator (PKG) that is a random polynomial-time algorithm for which input is 1λ(2)
Key-Gen (ID, MSK, and Params) is powered by PKG, a secret key Params output related to a random polynomial-time algorithm that identifies the master secret key secret and public parameter secret inputs(3)
Encryptionm is a random polyalgorithm that takes the message m, the public key of sender pkPKG, and the public parameters Params of input and outputs ciphertext “C.”(4)
Decryption (C, SKID, and Pa) is a deterministic polynomial-time algorithm that takes a ciphertext C [22]The route efficiency to convey data to so many nodes in a network was determined using multicast communication based on an artificial neural network (ANN). Multicasting in MANETS handles concerns of security and quality of service (QoS), making this an excellent field for ANN implementation. The relationship between past, current, and future route discoveries of the distinct nodes in the mobility range can be discovered using ANN. The author proposed an innovative and practical use of ANN for secure multicast communication with supporting nodes. ANN consists of variable inputs used to determine the optimum number of neurons for the hidden layer by selecting the multicasting and supporting a node-routing function. The proposed model was based on the feedforward neural network (FFN) and backpropagation algorithms [23].Fuzzy-based policies are also used to enhance the performance of the On-demand multicast routing protocol. The main objective is to establish a small, high-quality, and efficient forwarding group. Hence, the packet delivery rate also increases up to 40% and reduces the average end-to-end delay by about 35% [24]. There are several mechanisms of detection in distributed denial of service (DDoS) attacks over a multicast network. This attack affects the ongoing communication in the multicast network while also causing the wireless nodes to exhaust their energy much earlier than expected. This attack also results in a collision and minimal interference. A fuzzy-based system was designed to increase the reliability of attack detection [25]. Wireless sensor networks are also designed to provide various real-time applications. For providing energy-efficient transmissions, a congestion control mechanism is proposed at an optimized rate. The rate-based congestion control algorithm is based on cluster routing to offer minimum energy consumption. The rate control process reduces the end-to-end delay to improve network lifetime for a large simulation period [26].To secure downlink multicast communication in edge-envisioned advanced metering infrastructure networks [27], a lightweight elliptic curve signcryption technique based on cipher text-policy feature-based encryption was proposed. The classic secret method maintains security by extending the length of keys [28], but it also raises the difficulty of calculation with the advancement of technology and cryptographic processing technology. As a result, creating a better encryption algorithm is a good way to ensure multicast communication. In the presence of multiple eavesdroppers, an intelligent reflecting surface (IRS) [29] is aided with the secure wireless powered communication network (WPCN), in which the transmitter uses the energy from a power station (PS), and that energy was used to multicast the transmit information to many IoT devices. Surface image security can be enhanced using artificial neural networks. Convolutional neural networks (CNNs) are useful to extract the features and information from hyperspectral images [30]. The deep spatial-spectral global reasoning network [31] takes into account both local and global information for hyperspectral images noise removal. 
Trust-based key management [32] is used to accomplish secure and efficient wireless multicast communication which can be applied for the security of destination-sequenced distance vector (DSDV), optimized link state routing (OLSR) [33], and ad hoc on-demand distance vector (AODV) [34] routing protocols. For device-to-device communication in the wireless system, the delay, memory, and hardware resources utilization [35, 36] are a major concern. It has been identified in different topological communication [37] such as in Zigbee IEEE 802.14 [38, 39], wireless sensor network, network-on-chip communication, wireless monitoring of plant information, and security. Users require wireless connectivity regardless of their geographic location; hence, mobile ad hoc networks are gaining popularity at an all-time high. Mobile ad hoc networks [40] are becoming more vulnerable to security threats. MANETs must use a secure manner of communication and transmission, which is a difficult and time-consuming task. Researchers worked specifically on the security challenges in MANETs to enable safe transmission and communication. Fuzzy adaptive data transmission congestion prediction [41] is used to increase network stability since traffic congestion is widespread in multimedia networks. A fuzzy adaptive prediction solution for data transmission congestion has been developed in multimedia networks. The unique approach of fuzzification-defuzzification has been proposed in the paper to support multicast communication and cryptography with different parameters in the wireless communication system.
## 2.1. Network and Keys Distributions
In wireless multicast communication, the three most significant aspects of key management are key generation [15], key sharing, and key storage. Cryptography merely employs a mixed perspective to establish effective controls, particularly in hierarchical architecture. However, symmetric key and public key cause a huge exhibition center of cryptography. It is still attractive to give the impression of being a symmetric key-based improvement. Network topologies can be split into two types based on the motions for key management that are available: the hierarchical and nonhierarchical architecture of nodes [16]. A hierarchical network is usually the result of a symmetric key-based management protocol, which is a reasonable choice of nodes in the network. Experts recommend haphazard ways of key management in hierarchical networks, with no guarantees of successful key installation but the risk of a compromised node capacity. The network protocols require large storage space for key storage at each node [17]. Key distribution (KD) effectively supports a hierarchical network. KD makes it easier to generate and manage less flexible groups, since unlimited keys are used in the schemes which are based on the encryption methods and the basis of their key installation system. This plan is based on the Merkel Identity [18]. Through this scheme, one can implement information communication directly, and with it, any subgroup reduces the communication capacity, making the multicast communication scheme better and in the correct format [19].
## 2.2. Public Key Cryptography
In the 1970s, there were primarily two types of public key schemes discovered: the Diffie-Hellman agreement in 1975 and digital signature plans in 1977, Rivest, Shamir, and Edelman (RSA). A discrete logarithm problem underpins the D-H agreement scheme. The RSA encryption algorithm is based on a whole-number factorization problem, such as a number “n” is the result of two primes, “p” and “q” that are discovered. The hardness of the number factorization problem is crucial for security. After El-Gamal, public key encryption and signature schemes were matched by the integer factor in 1984 RSA, and then, elastic charging engine (ECE) came into existence [20].
### 2.2.1. Public Key Encryption
A public key encryption (PKI) polynomial is a kind of algorithm.
### 2.2.2. Key-Gen (1λ)
A private key generator (PKG) takes a random key generation algorithm that uses a security parameter1^λ as the input and the secret key (SK) and public key (PK) output.
### 2.2.3. Encryption (m, PK)
Random encryption algorithm takes the message, PK input, and output ciphertext “C” from the public key.
### 2.2.4. Decryption (C, SK)
The deterministic polynomial-time algorithm takes a ciphertextC and secret key SK of the receiver as input and outputs ciphertext “C.”
## 2.2.1. Public Key Encryption
A public key encryption (PKI) polynomial is a kind of algorithm.
## 2.2.2. Key-Gen (1λ)
A private key generator (PKG) takes a random key generation algorithm that uses a security parameter1^λ as the input and the secret key (SK) and public key (PK) output.
## 2.2.3. Encryption (m, PK)
Random encryption algorithm takes the message, PK input, and output ciphertext “C” from the public key.
## 2.2.4. Decryption (C, SK)
The deterministic polynomial-time algorithm takes a ciphertextC and secret key SK of the receiver as input and outputs ciphertext “C.”
## 2.3. Identity-Based Cryptography
An identity-based encryption (IBE) system can identify public key users with the public key system such as an email address [21].
### 2.3.1. Encryption Scheme Model
The first three-step can be random but not final, and it is deterministic. An identity-based encryption (IBE) scheme includes four steps of algorithms:(1)
Setup (1λ) is powered by a private key generator (PKG) that is a random polynomial-time algorithm for which input is 1λ(2)
Key-Gen (ID, MSK, and Params) is powered by PKG, a secret key Params output related to a random polynomial-time algorithm that identifies the master secret key secret and public parameter secret inputs(3)
Encryptionm is a random polyalgorithm that takes the message m, the public key of sender pkPKG, and the public parameters Params of input and outputs ciphertext “C.”(4)
Decryption (C, SKID, and Pa) is a deterministic polynomial-time algorithm that takes a ciphertext C [22]The route efficiency to convey data to so many nodes in a network was determined using multicast communication based on an artificial neural network (ANN). Multicasting in MANETS handles concerns of security and quality of service (QoS), making this an excellent field for ANN implementation. The relationship between past, current, and future route discoveries of the distinct nodes in the mobility range can be discovered using ANN. The author proposed an innovative and practical use of ANN for secure multicast communication with supporting nodes. ANN consists of variable inputs used to determine the optimum number of neurons for the hidden layer by selecting the multicasting and supporting a node-routing function. The proposed model was based on the feedforward neural network (FFN) and backpropagation algorithms [23].Fuzzy-based policies are also used to enhance the performance of the On-demand multicast routing protocol. The main objective is to establish a small, high-quality, and efficient forwarding group. Hence, the packet delivery rate also increases up to 40% and reduces the average end-to-end delay by about 35% [24]. There are several mechanisms of detection in distributed denial of service (DDoS) attacks over a multicast network. This attack affects the ongoing communication in the multicast network while also causing the wireless nodes to exhaust their energy much earlier than expected. This attack also results in a collision and minimal interference. A fuzzy-based system was designed to increase the reliability of attack detection [25]. Wireless sensor networks are also designed to provide various real-time applications. For providing energy-efficient transmissions, a congestion control mechanism is proposed at an optimized rate. The rate-based congestion control algorithm is based on cluster routing to offer minimum energy consumption. The rate control process reduces the end-to-end delay to improve network lifetime for a large simulation period [26].To secure downlink multicast communication in edge-envisioned advanced metering infrastructure networks [27], a lightweight elliptic curve signcryption technique based on cipher text-policy feature-based encryption was proposed. The classic secret method maintains security by extending the length of keys [28], but it also raises the difficulty of calculation with the advancement of technology and cryptographic processing technology. As a result, creating a better encryption algorithm is a good way to ensure multicast communication. In the presence of multiple eavesdroppers, an intelligent reflecting surface (IRS) [29] is aided with the secure wireless powered communication network (WPCN), in which the transmitter uses the energy from a power station (PS), and that energy was used to multicast the transmit information to many IoT devices. Surface image security can be enhanced using artificial neural networks. Convolutional neural networks (CNNs) are useful to extract the features and information from hyperspectral images [30]. The deep spatial-spectral global reasoning network [31] takes into account both local and global information for hyperspectral images noise removal. 
Trust-based key management [32] is used to accomplish secure and efficient wireless multicast communication which can be applied for the security of destination-sequenced distance vector (DSDV), optimized link state routing (OLSR) [33], and ad hoc on-demand distance vector (AODV) [34] routing protocols. For device-to-device communication in the wireless system, the delay, memory, and hardware resources utilization [35, 36] are a major concern. It has been identified in different topological communication [37] such as in Zigbee IEEE 802.14 [38, 39], wireless sensor network, network-on-chip communication, wireless monitoring of plant information, and security. Users require wireless connectivity regardless of their geographic location; hence, mobile ad hoc networks are gaining popularity at an all-time high. Mobile ad hoc networks [40] are becoming more vulnerable to security threats. MANETs must use a secure manner of communication and transmission, which is a difficult and time-consuming task. Researchers worked specifically on the security challenges in MANETs to enable safe transmission and communication. Fuzzy adaptive data transmission congestion prediction [41] is used to increase network stability since traffic congestion is widespread in multimedia networks. A fuzzy adaptive prediction solution for data transmission congestion has been developed in multimedia networks. The unique approach of fuzzification-defuzzification has been proposed in the paper to support multicast communication and cryptography with different parameters in the wireless communication system.
## 2.3.1. Encryption Scheme Model
The first three-step can be random but not final, and it is deterministic. An identity-based encryption (IBE) scheme includes four steps of algorithms:(1)
Setup (1λ) is powered by a private key generator (PKG) that is a random polynomial-time algorithm for which input is 1λ(2)
Key-Gen (ID, MSK, and Params) is powered by PKG, a secret key Params output related to a random polynomial-time algorithm that identifies the master secret key secret and public parameter secret inputs(3)
Encryptionm is a random polyalgorithm that takes the message m, the public key of sender pkPKG, and the public parameters Params of input and outputs ciphertext “C.”(4)
Decryption (C, SKID, and Pa) is a deterministic polynomial-time algorithm that takes a ciphertext C [22]The route efficiency to convey data to so many nodes in a network was determined using multicast communication based on an artificial neural network (ANN). Multicasting in MANETS handles concerns of security and quality of service (QoS), making this an excellent field for ANN implementation. The relationship between past, current, and future route discoveries of the distinct nodes in the mobility range can be discovered using ANN. The author proposed an innovative and practical use of ANN for secure multicast communication with supporting nodes. ANN consists of variable inputs used to determine the optimum number of neurons for the hidden layer by selecting the multicasting and supporting a node-routing function. The proposed model was based on the feedforward neural network (FFN) and backpropagation algorithms [23].Fuzzy-based policies are also used to enhance the performance of the On-demand multicast routing protocol. The main objective is to establish a small, high-quality, and efficient forwarding group. Hence, the packet delivery rate also increases up to 40% and reduces the average end-to-end delay by about 35% [24]. There are several mechanisms of detection in distributed denial of service (DDoS) attacks over a multicast network. This attack affects the ongoing communication in the multicast network while also causing the wireless nodes to exhaust their energy much earlier than expected. This attack also results in a collision and minimal interference. A fuzzy-based system was designed to increase the reliability of attack detection [25]. Wireless sensor networks are also designed to provide various real-time applications. For providing energy-efficient transmissions, a congestion control mechanism is proposed at an optimized rate. The rate-based congestion control algorithm is based on cluster routing to offer minimum energy consumption. The rate control process reduces the end-to-end delay to improve network lifetime for a large simulation period [26].To secure downlink multicast communication in edge-envisioned advanced metering infrastructure networks [27], a lightweight elliptic curve signcryption technique based on cipher text-policy feature-based encryption was proposed. The classic secret method maintains security by extending the length of keys [28], but it also raises the difficulty of calculation with the advancement of technology and cryptographic processing technology. As a result, creating a better encryption algorithm is a good way to ensure multicast communication. In the presence of multiple eavesdroppers, an intelligent reflecting surface (IRS) [29] is aided with the secure wireless powered communication network (WPCN), in which the transmitter uses the energy from a power station (PS), and that energy was used to multicast the transmit information to many IoT devices. Surface image security can be enhanced using artificial neural networks. Convolutional neural networks (CNNs) are useful to extract the features and information from hyperspectral images [30]. The deep spatial-spectral global reasoning network [31] takes into account both local and global information for hyperspectral images noise removal. 
Trust-based key management [32] is used to accomplish secure and efficient wireless multicast communication which can be applied for the security of destination-sequenced distance vector (DSDV), optimized link state routing (OLSR) [33], and ad hoc on-demand distance vector (AODV) [34] routing protocols. For device-to-device communication in the wireless system, the delay, memory, and hardware resources utilization [35, 36] are a major concern. It has been identified in different topological communication [37] such as in Zigbee IEEE 802.14 [38, 39], wireless sensor network, network-on-chip communication, wireless monitoring of plant information, and security. Users require wireless connectivity regardless of their geographic location; hence, mobile ad hoc networks are gaining popularity at an all-time high. Mobile ad hoc networks [40] are becoming more vulnerable to security threats. MANETs must use a secure manner of communication and transmission, which is a difficult and time-consuming task. Researchers worked specifically on the security challenges in MANETs to enable safe transmission and communication. Fuzzy adaptive data transmission congestion prediction [41] is used to increase network stability since traffic congestion is widespread in multimedia networks. A fuzzy adaptive prediction solution for data transmission congestion has been developed in multimedia networks. The unique approach of fuzzification-defuzzification has been proposed in the paper to support multicast communication and cryptography with different parameters in the wireless communication system.
## 3. Proposed Work
In the paper, we propose the implementation of key distribution based on a fuzzy set of rules to generate random keys. These methods are based on logical AND, logical OR, and logical AND-OR rules. ANN is used in cryptography, used to generate strong cipher, and offers less overhead. The main aim of the research work is to build an encryption system based on fuzzy logic to secure confidentiality, availability, and integrity in the key management of wireless multicast communication. The principles of symmetric cryptography with fuzzy-based rules are applied to encrypt information. As we studied in the previous work, it was observed that the fuzzy IBE scheme is sensitive and offers security only for selective-ID attacks in very few models. However, this scheme is secure as long as one hashes the identity before using it. Currently, there is no fuzzy IBE available that is indistinguishable under an adaptive ciphertext attack (CCA2) secure. Therefore, a new fuzzy IBE scheme is suggested to achieve CCA2 security based on public key parameters whose size is not dependent on the number of attributes associated with an identity.The research work is focused on secured wireless communication using fuzzy logic-based high-speed symmetric key cryptographic key management methods that have been proposed to addresses the main issues like computational safety, power reduction, and less memory in multicast communication and also covers CIA.Though conventional methods of cryptography work on the digital values, i.e., 0 or 1, here proposed methods are based on fuzzy values of key distribution parameters like initial, mid, low, and high, which offers more accurate constraints for security pillars. Though conventional cryptography methods are a sort of public key cryptography used in wireless multicast communication that provides an equivalent level of security with higher overheads, fuzzy-based offer reduced computation and storage overheads. In comparison to the previous fuzzy IBE schemes, our scheme has short parameters and a tight reduction simultaneously. This method offers a shorter computational time for keys, reduced power consumption, and limited usage of memory without compromising the CIA attributes.
### 3.1. Fuzzy Implementation
Fuzzy logic is based on critical thinking. To accomplish security, the algorithm utilizes variables. It features a diverse key structure of up to 128-bit. The client can define the key in the correct format in it that is fixed as a secret key. It includes a method that is similar to human reasoning, and possible digital values are “0,” “1,” and intermediate [18]. If we do so and do not believe the two Boolean values, fuzzy logic may accept any of them as “yes,” “it is conceivable,” “of course,” “we cannot say,” “not possible,” and “definitely not.” It helps in dealing with the uncertainty of various areas.Algorithm 1: Fuzzy implementation algorithm.
Step 1 (variables declarations): in this step, we will select the most prominent variable that affects the key distribution (parties/principals, shared secret keys, etc.)Step 2 (fuzzification): this is the most important step of our method. This step is itself divided into two parts, i.e., fuzzification and defuzzification. This will help us to convert the fractional values into “0” and “1” valuesStep 3 (rule implementations): rule preparation is based on the logical AND of each variable involve and its impact on the final predicted value. Same as that logical OR, each variable and its impact on the final predicted value are involvedStep 4 (convert to graph): this graph helps show the rise and fall of the final output on the 3D surface
#### 3.1.1. Parameters for Key Distribution in Symmetric Key Perspective
In the fuzzy logic-based symmetric random set approach, all valid keys form a key pool, and each party randomly draws its own set of keys from this pool. The type of key pool housed by each member is selected appropriately, so any two members are very likely to share at least one key. Our approach exploits this feature: when two parties authenticate against a shared pool, those keys serve for symmetric key distribution; otherwise, an RSA-based exchange supplements symmetric key distribution by sending a key request, since other parties may also provide the key. The following notation and assumptions are used for key distribution with symmetric key protocols.

(1) Parties/principals (A, B, S, and E): the two parties who wish to agree on a secret are A and B, S is a trusted third party, and E is an attacker.

(2) Shared secret keys (Kab, Kas, and Kbs): Kab denotes a secret key known only to A and B; Kas and Kbs are the keys that A and B, respectively, share with S.

(3) Nonces (M, N, Na, and Nb): nonces are random numbers; Na denotes a nonce originally produced by principal A.

(4) Timestamps (Ta, Tb, and Ts): Ta is the timestamp produced by A; timestamps are used for synchronization.
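For concreteness, the quantities named in this notation can be mirrored in code. This is a hedged sketch using only the Python standard library; it generates the named values but does not reproduce any particular message flow from the paper.

```python
# Sketch of the protocol notation above (standard library only).
# Names A, B, S follow the paper; the values generated are illustrative.
import secrets
import time
from dataclasses import dataclass

@dataclass
class Principal:
    name: str                     # e.g. "A", "B", or trusted party "S"

def fresh_nonce(bits: int = 64) -> int:
    """Na, Nb, ...: unpredictable random numbers, each used once."""
    return secrets.randbits(bits)

def timestamp() -> float:
    """Ta, Tb, Ts: wall-clock timestamps used for synchronization."""
    return time.time()

A, B, S = Principal("A"), Principal("B"), Principal("S")
Na = fresh_nonce()                # nonce originated by principal A
Ta = timestamp()                  # timestamp produced by A
Kab = secrets.token_bytes(16)     # shared secret known only to A and B
```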
#### 3.1.2. Logical AND-OR-Based Rule
The fuzzy rule set for the various key distribution parameters is decided on the basis of AND-OR logic. Each parameter may take any value in the range initial, min, mid, or high. A key distribution policy is obtained by setting various combinations of parameter values.
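Under standard fuzzy semantics, an AND rule fires with the minimum of its antecedent membership degrees and an OR rule with the maximum. The sketch below illustrates this for one rule from each set; the numeric membership degrees are assumptions for illustration, not values taken from the paper.

```python
# Firing strengths of AND/OR rules under standard fuzzy semantics
# (min for AND, max for OR); the rules mirror entries in Tables 1 and 2.

def and_rule(memberships: list[float]) -> float:
    """An AND rule fires with the minimum membership degree."""
    return min(memberships)

def or_rule(memberships: list[float]) -> float:
    """An OR rule fires with the maximum membership degree."""
    return max(memberships)

# Rule (i) of the AND set: all four antecedents at their 'mid' degrees
# (degrees below are illustrative assumptions).
parties, keys, nonces, timestamps = 0.7, 0.6, 0.9, 0.5
print(and_rule([parties, keys, nonces, timestamps]))  # 0.5 -> Mid_key degree
print(or_rule([parties, keys, nonces, timestamps]))   # 0.9 under OR semantics
```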
#### 3.1.3. AND Rules-Based Algorithm
(i) If (Parties/Principal is Normal) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

(ii) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key)

(iii) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key)

(iv) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Min_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key)

(v) If (Parties/Principal is Initial) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is High_key)

(vi) If (Parties/Principal is Initial) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_High) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is Mid_key)

(vii) If (Parties/Principal is High) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

(viii) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

(ix) If (Parties/Principal is High) and (Shared_Secret_Keys is High_Shared) and (Nonces is Nonces_High) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is High_key)

(x) If (Parties/Principal is High) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Final_Time_stamp), then (Key_distribution_policy is Mid_key)

(xi) If (Parties/Principal is Normal) and (Shared_Secret_Keys is Mid_Shared) and (Nonces is Nonces_Mid) and (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

This set of AND rules captures the logical combination of all the possible effects on the key distribution policy, as listed in Table 1.
Table 1: Key distribution based on logical AND.

| # | Parties/Principal | Shared_Secret_Keys | Nonces | Timestamps | Key_distribution_policy |
|---|---|---|---|---|---|
| 1 | Initial | Min_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 2 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 3 | High | High_Shared | Nonces_High | Final_Time_stamp | High_key |
| 4 | Initial | Mid_Shared | Nonces_High | Final_Time_stamp | High_key |
| 5 | Initial | Min_Shared | Nonces_High | Final_Time_stamp | High_key |
| 6 | Initial | High_Shared | Nonces_High | Final_Time_stamp | High_key |
| 7 | Initial | Mid_Shared | Nonces_High | Final_Time_stamp | Mid_key |
| 8 | High | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 9 | High | High_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 10 | High | High_Shared | Nonces_High | Mid_Time_stamp | High_key |
| 11 | High | Mid_Shared | Nonces_Mid | Final_Time_stamp | Mid_key |
| 12 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |

Similarly, a set of OR rules is prepared that captures the logical OR-based combination of all the possible effects on the key distribution policy, as given below.
#### 3.1.4. OR Rules-Based Algorithm
(i) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Min_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is High_key)

(ii) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Mid_key)

(iii) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key)

(iv) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

(v) If (Parties/Principal is Initial) or (Shared_Secret_Keys is High_Shared) or (Nonces is Nonces_High) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is High_key)

(vi) If (Parties/Principal is Normal) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is High_key)

(vii) If (Parties/Principal is High) or (Shared_Secret_Keys is Min_Shared) or (Nonces is Nonces_High) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Mid_key)

(viii) If (Parties/Principal is High) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key)

(ix) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key)

(x) If (Parties/Principal is Initial) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key)

(xi) If (Parties/Principal is Initial) or (Shared_Secret_Keys is High_Shared) or (Nonces is Nonces_Min) or (Timestamps is Initial_Time_stamp), then (Key_distribution_policy is Less_Key)

(xii) If (Parties/Principal is Normal) or (Shared_Secret_Keys is Mid_Shared) or (Nonces is Nonces_Mid) or (Timestamps is Mid_Time_stamp), then (Key_distribution_policy is Mid_key)

Fuzzy logic measures the degree of membership, i.e., the certainty or uncertainty, of the chosen elements of a set. The key distribution rules for similar cases are defined accordingly by the fuzzy logic, as listed in Table 2.
Table 2: Key distribution based on logical OR.

| # | Parties/Principal | Shared_Secret_Keys | Nonces | Timestamps | Key_distribution_policy |
|---|---|---|---|---|---|
| 1 | Initial | Min_Shared | Nonces_Min | Initial_Time_stamp | High_key |
| 2 | Initial | Mid_Shared | Nonces_Min | Initial_Time_stamp | Mid_key |
| 3 | Initial | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 4 | Initial | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |
| 5 | Initial | High_Shared | Nonces_High | Initial_Time_stamp | High_key |
| 6 | Initial | Mid_Shared | Nonces_Mid | Mid_Time_stamp | High_key |
| 7 | High | Min_Shared | Nonces_High | Initial_Time_stamp | Mid_key |
| 8 | High | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 9 | Initial | Mid_Shared | Nonces_Mid | Initial_Time_stamp | Less_Key |
| 10 | Initial | Mid_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 11 | Initial | High_Shared | Nonces_Min | Initial_Time_stamp | Less_Key |
| 12 | Normal | Mid_Shared | Nonces_Mid | Mid_Time_stamp | Mid_key |

The OR- and AND-based fuzzy algorithms prove robust in the sense that they are not very sensitive to a changing environment or to misplaced or missing rules. Because fuzzy computational logic is generally much simpler than exact-system logic, it requires less processing power.
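To close the loop from fired rules to a crisp key distribution policy, a Mamdani-style aggregation with centroid defuzzification can be sketched as follows. The output membership functions for Less_Key/Mid_key/High_key and the firing strengths are assumptions for illustration; the paper does not publish them.

```python
# End-to-end sketch: aggregate fired rules, then defuzzify by centroid.
# Output membership functions are assumed triangles on [0, 1].
import numpy as np

z = np.linspace(0.0, 1.0, 201)
less_key = np.clip(1.0 - z / 0.5, 0.0, 1.0)                 # peak at 0
mid_key  = np.clip(1.0 - np.abs(z - 0.5) / 0.5, 0.0, 1.0)   # peak at 0.5
high_key = np.clip((z - 0.5) / 0.5, 0.0, 1.0)               # peak at 1

# Example firing strengths (assumed), e.g. from the rule-evaluation sketch.
fired = [(0.1, less_key), (0.5, mid_key), (0.3, high_key)]

# Mamdani aggregation: clip each consequent by its firing strength,
# then take the pointwise maximum across rules.
aggregate = np.max([np.minimum(s, mf) for s, mf in fired], axis=0)

centroid = float(np.sum(z * aggregate) / np.sum(aggregate))
print(f"crisp key-distribution policy level: {centroid:.3f}")
```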
## 4. Results and Discussion
MATLAB 2018 is used to model and simulate the fuzzy logic control system. Its Fuzzy Logic Toolbox provides a fuzzy controller block in Simulink, together with a fuzzy inference system (FIS) editor, a membership function editor, a rule editor, a rule viewer, and a surface viewer. Simulink is a block-based environment that supports modeling, simulation, and analysis.
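For readers without MATLAB, a roughly equivalent workflow exists in the third-party scikit-fuzzy package. The sketch below is an assumption-laden analogue, not a port of the authors' Simulink model: the universe ranges, the automatically generated membership functions, and the single rule are illustrative only.

```python
# Rough open-source analogue of the MATLAB FIS workflow, using the
# third-party scikit-fuzzy package (pip install scikit-fuzzy).
import numpy as np
from skfuzzy import control as ctrl

nonces = ctrl.Antecedent(np.linspace(0, 1, 101), 'nonces')
keys = ctrl.Antecedent(np.linspace(0, 1, 101), 'shared_secret_keys')
policy = ctrl.Consequent(np.linspace(0, 1, 101), 'key_distribution_policy')

for var in (nonces, keys, policy):
    var.automf(3, names=['low', 'mid', 'high'])  # auto triangular MFs

# One AND rule in the spirit of Table 1, row 3.
rule = ctrl.Rule(nonces['high'] & keys['high'], policy['high'])

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem([rule]))
sim.input['nonces'] = 0.8
sim.input['shared_secret_keys'] = 0.9
sim.compute()
print(sim.output['key_distribution_policy'])  # crisp policy level in [0, 1]
```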
### 4.1. Implementation of Fuzzification Rules-Based Models and Results
Fuzzy inference is the process of formulating the mapping from a given key distribution input to an output using fuzzy logic-based rules. This mapping (the key distribution) provides a basic platform for decision-making and pattern recognition. Fuzzy inference involves all the pieces described above: membership functions, logical operations, and if-then rules [26]. After implementing the encryption algorithm, the results are presented at several levels. Figure 4 shows the AND rule-based membership function: four inputs (parties, shared keys, nonces, and timestamps) drive the key distribution function, and the output is rendered as a 3D surface graph. Logical key distribution applies implication and aggregation over the variables. Figure 5 highlights the second key distribution structure, which uses three axes: the x-axis, y-axis, and z-axis represent parties, shared-key loss of integrity, and the key distribution policy output, respectively. Figure 6 shows the 3D structure for key distribution based on the OR rules: the x-axis, y-axis, and z-axis represent shared secret keys, loss of integrity of parties/principals, and the key distribution policy output, respectively.

Figure 4: AND rule membership function.
Figure 5: Surface evaluation of logical AND membership.
Figure 6: Surface evaluation of logical OR rule membership.
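Surface plots of this kind can be approximated with matplotlib. The sketch below is hypothetical: it renders the min-based AND aggregation of two inputs as a 3D surface, which is the general shape such rule surfaces take, not a reproduction of the authors' exact figures.

```python
# Hypothetical re-creation of a rule surface like Figures 5-6.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 50)          # parties/principal level
y = np.linspace(0, 1, 50)          # shared-secret-key strength
X, Y = np.meshgrid(x, y)
Z = np.minimum(X, Y)               # AND rule: policy strength = min of inputs

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(X, Y, Z, cmap='viridis')
ax.set_xlabel('parties/principal')
ax.set_ylabel('shared secret keys')
ax.set_zlabel('key distribution policy')
plt.show()
```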
### 4.2. Parameters Used for Fuzzy-Based Model for Key Compromise Prediction
The fuzzy model for the key compromise prediction technique is based on the following policy [26].
#### 4.2.1. Loss of Confidentiality
Access control and encryption of member data follow a fixed format designed to prevent damage to the data's structure. For example, users must first enroll, after which they can access data on the basis of their proven identity. Authorized users are granted access to data only in the specified format; unauthorized users are denied access.
#### 4.2.2. Loss of Integrity
Integrity is generally verified by checking the closeness of hash values against a fixed reference. In a detailed format, the estimate is computed by running a hashing algorithm over a given file or data string and comparing the result with the expected digest.
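As a concrete illustration of hash-based integrity checking (standard library only; the data string is a placeholder):

```python
# Integrity check via hashing, as described above.
import hashlib

data = b"key distribution record"
expected = hashlib.sha256(data).hexdigest()   # stored reference digest

received = b"key distribution record"
ok = hashlib.sha256(received).hexdigest() == expected
print("integrity intact" if ok else "integrity lost")
```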
#### 4.2.3. Loss of Authentication
Authentication is an important security aspect, and both public key and symmetric key cryptography can provide it. Symmetric key cryptography with a message authentication code (MAC) only provides evidence that one of the parties holding the shared key produced the message.
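A minimal sketch of symmetric message authentication with an HMAC (standard library; the key and message contents are placeholders):

```python
# Symmetric message authentication with an HMAC. As noted above, a valid
# tag only proves that the sender holds the shared key.
import hashlib
import hmac

shared_key = b"Kab-shared-secret"            # known only to A and B
message = b"nonce=42;timestamp=1650000000"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# Receiver recomputes the tag and compares in constant time.
check = hmac.new(shared_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, check)
```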
#### 4.2.4. Loss of Nonrepudiation
Nonrepudiation is the assurance that a party cannot later deny the validity of its actions or messages.
### 4.3. Surface Generation
Figure 7 shows the 3D structure of surface_1 for key distribution based on the AND-OR model. The x-axis represents the loss of confidentiality, the y-axis the loss of integrity, and the z-axis the (constant) key compromise level. In this segment, two fragments are evaluated: the integrity and the confidentiality of the key loss distribution. The surface gives an AND-OR-based key distribution view for integrity and confidentiality analysis.

Figure 7: Surface_1 for integrity and confidentiality analysis.

In the same way, Figure 8 shows the surface_2 plot of security key loss, read graphically from left to right. The key distribution loss is counted against the security parameters from right to left, from the end of the distribution back to its beginning; here the distribution ends with two events, confidentiality and authentication. Figure 9 depicts the surface_3 plot of security key loss, again read from left to right with the loss counted from right to left, from the end of the distribution back to its beginning; here the distribution ends with confidentiality and nonrepudiation. Figure 10 shows the surface_4 structure for key distribution, in which the x-axis represents loss of authentication, the y-axis loss of nonrepudiation, and the z-axis the constant key compromise level; two fragments, the authentication and the nonrepudiation of the key loss distribution, are evaluated in this segment.

Figure 8: Surface_2 for confidentiality and authentication analysis.
Figure 9: Surface_3 for confidentiality and nonrepudiation analysis.
Figure 10: Surface_4 for authentication and nonrepudiation analysis.

Data communication is verified among the different nodes under AODV routing, which supports multicast routing in the wireless network for effective cryptographic communication with the secured key. The simulation verifies encryption and decryption of plain text with variable key lengths from 8 to 128 bits. Test cases 1 and 2 summarize the test inputs: input plain text, ciphertext, and decrypted text. The communication system supports 100 nodes (M0 to M99). Source nodes take the plain text and encryption key as inputs and produce the ciphertext as output; at the receiving end, destination nodes take the ciphertext and decryption key as inputs and produce the decrypted text as output.
#### 4.3.1. Test Case 1 (64-Bit)
The data communication is verified from source node M9 to destination node M99:

- Text_in (64 bits) = "0100100101101110011001000110100101100001001100010011001000110011" (binary) = 496E646961313233 (hexadecimal) = "India123" (ASCII)
- Encryption_Key_Gen (64 bits) = "0100110101100001011011100110100101110011011010000110000101000000" (binary) = 4D616E6973686140 (hexadecimal) = "Manisha@" (ASCII)
- Cipher_text (64 bits) = "0000010000001111000010100000000000001000010110010101001101110011" (binary) = 040F0A0008595373 (hexadecimal) = "□□□□□YSs" (ASCII, nonprintable bytes shown as □)
- Decryption_Key_Gen (64 bits) = "0100110101100001011011100110100101110011011010000110000101000000" (binary) = 4D616E6973686140 (hexadecimal) = "Manisha@" (ASCII)
- Decrypted_text = "0100100101101110011001000110100101100001001100010011001000110011" (binary) = 496E646961313233 (hexadecimal) = "India123" (ASCII)
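The binary/hexadecimal/ASCII readings above can be reproduced mechanically. The sketch below also XORs the plaintext and key bytes for comparison; note that this XOR is only an observation about the published values (they agree with a byte-wise XOR in all but the fifth byte), not the authors' cipher, whose internals the paper does not disclose.

```python
# Reproducing the binary -> hex -> ASCII readings of Test Case 1,
# plus a byte-wise XOR comparison (an observation, not the paper's cipher).
text_bits = ("01001001011011100110010001101001"
             "01100001001100010011001000110011")
key_bits  = ("01001101011000010110111001101001"
             "01110011011010000110000101000000")

text = int(text_bits, 2).to_bytes(8, "big")
key  = int(key_bits, 2).to_bytes(8, "big")
print(text.hex().upper(), text.decode("ascii"))  # 496E646961313233 India123
print(key.hex().upper(), key.decode("ascii"))    # 4D616E6973686140 Manisha@

xor = bytes(t ^ k for t, k in zip(text, key))
print(xor.hex().upper())  # 040F0A0012595373 -- matches the published
                          # ciphertext 040F0A0008595373 except byte 5
```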
#### 4.3.2. Test Case 2 (128-Bit)
The data communication is verified from source node M6 to destination node M90:

- Text_in (128 bits) = "01000101011011000110010101100011011101000111001001101111011011100110100101100011011100110100000000110001001100100011001100110100" (binary) = 456C656374726F6E6963734031323334 (hexadecimal) = "Electronics@1234" (ASCII)
- Encryption_Key_Gen (128 bits) = "01001101011000010110111001101001011100110110100001100001011110010110000101100100011000010111011001000000001100010011001000110011" (binary) = 4D616E69736861796164617640313233 (hexadecimal) = "Manishayadav@123" (ASCII)
- Cipher_text (128 bits) = "00001000000011010000101100001010000001110001101000001110000101110000100000000111000100100011011001110001000000110000000100000111" (binary) = 080D0B0A071A0E170807123671030107 (hexadecimal) = "□□□□□□□□□□□6q□□□" (ASCII, nonprintable bytes shown as □)
- Decryption_Key_Gen (128 bits) = "01001101011000010110111001101001011100110110100001100001011110010110000101100100011000010111011001000000001100010011001000110011" (binary) = 4D616E69736861796164617640313233 (hexadecimal) = "Manishayadav@123" (ASCII)
- Decrypted_text = "01000101011011000110010101100011011101000111001001101111011011100110100101100011011100110100000000110001001100100011001100110100" (binary) = 456C656374726F6E6963734031323334 (hexadecimal) = "Electronics@1234" (ASCII)

The performance of the multicast system is evaluated using several performance indices: end-to-end delay, throughput, packet delivery ratio (PDR), and control overhead. Table 3 lists the parameter values, and Figure 11 presents the corresponding graph. The delay increases with the number of nodes, while the packet delivery ratio remains good with near-optimal values of control overhead and throughput.
Table 3: Performance parameters for AODV.

| Nodes | Delay (sec) | Throughput (bps) | Control overhead | PDR |
|---|---|---|---|---|
| 10 | 0.037 | 0.210 | 0.0057 | 0.980 |
| 20 | 0.085 | 0.195 | 0.0058 | 0.978 |
| 30 | 0.091 | 0.181 | 0.0061 | 0.996 |
| 40 | 0.142 | 0.172 | 0.0058 | 0.985 |
| 50 | 0.196 | 0.158 | 0.0048 | 0.983 |
| 60 | 0.289 | 0.152 | 0.0058 | 0.995 |
| 70 | 0.324 | 0.172 | 0.0053 | 0.991 |
| 80 | 0.349 | 0.191 | 0.0040 | 0.986 |
| 90 | 0.397 | 0.197 | 0.0035 | 0.975 |
| 100 | 0.410 | 0.225 | 0.0065 | 0.989 |
Figure 11: Multicast node variations and parameters.
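The qualitative claims about Table 3 can be checked numerically from the published values alone; a small sketch:

```python
# Quick numerical check of the Table 3 trends using the published values.
import numpy as np

nodes = np.arange(10, 101, 10)
delay = np.array([0.037, 0.085, 0.091, 0.142, 0.196,
                  0.289, 0.324, 0.349, 0.397, 0.410])
pdr   = np.array([0.980, 0.978, 0.996, 0.985, 0.983,
                  0.995, 0.991, 0.986, 0.975, 0.989])

r = np.corrcoef(nodes, delay)[0, 1]
print(f"delay-vs-nodes correlation: {r:.3f}")   # strongly positive
print(f"mean PDR: {pdr.mean():.3f}")            # ~0.986
```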
## 5. Conclusions
This research work provides a novel way of distributing keys using a fuzzy-based cryptography model. To anticipate key distribution in symmetric key cryptography, the AND logic is created first and its rule sets are prepared; then, using the same parameters, the OR logic is created. A general data set of 20-25 values per set was used to formulate the generic rules. 3D surface graphs were generated from these models; they are useful for demonstrating the interplay of the associated factors and their final effect on key distribution. Various security features are analyzed as a result of implementing these factors. The cryptographic encryption and decryption of data were verified successfully over AODV routing in wireless communication. A larger key size provides greater security; 64-bit and 128-bit key encryption and decryption were verified with 64-bit and 128-bit plain text, respectively. The multicast system supports data interchange among 100 nodes with minimal overhead and delay and maximum throughput. In the future, we plan to integrate the system with field-programmable gate array (FPGA) hardware supporting larger key sizes and data volumes. The number of nodes can also be increased to evaluate the performance of the system for large, scalable networks and efficient multicast communication.
---
*Source: 1011407-2022-03-19.xml* | 2022 |
# Clinical Effects of Primary Nursing on Diabetic Nephropathy Patients Undergoing Hemodialysis and Its Impact on the Inflammatory Responses
**Authors:** Yujuan Guo; Qi Song; Yuhong Cui; Caihong Wang
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011415
---
## Abstract
Objective. To assess the clinical effects of primary nursing on diabetic nephropathy patients undergoing hemodialysis and its impact on inflammatory responses. Methods. Between July 2019 and April 2021, 80 patients with diabetic nephropathy who underwent hemodialysis in our institution were recruited and assigned at a ratio of 1 : 1 to receive either routine nursing (routine group) or primary nursing (primary group). The outcome measures included nursing outcomes, inflammatory factor levels, and psychological status. Results. Primary nursing resulted in lower levels of blood creatinine, fasting glucose, urea nitrogen, and proteinuria versus routine nursing (P<0.05). Patients receiving primary nursing showed significantly lower levels of interleukin (IL)-6, high-sensitivity C-reactive protein (hs-CRP), and tumor necrosis factor-alpha (TNF-α) versus those given routine nursing (P<0.05). The patients in the primary group had significantly lower scores on the self-rating anxiety scale (SAS) and self-rating depression scale (SDS) versus those in the routine group (P<0.05). Conclusion. Primary nursing improves the renal function of diabetic nephropathy patients undergoing hemodialysis, reduces the inflammatory response, and eliminates their negative emotions, which shows great potential for clinical application.
---
## Body
## 1. Introduction
In recent years, with improvements in people’s living standards, the incidence of diabetes mellitus has been on the rise [1]. Diabetic nephropathy is one of the common comorbidities of diabetes mellitus, and its incidence has also been increasing [2]. The main cause of diabetic nephropathy is untimely and ineffective control of diabetes: chronic hyperglycemia causes sustained pressure-related damage to the renal vasculature, which overloads blood filtration in the kidneys and thus produces nephropathy [3]. Once the kidney’s metabolic function declines beyond a certain level, the disease progresses to uremia, which requires dialysis treatment or kidney transplantation [4]. Diabetic nephropathy seriously compromises patients’ quality of life. Hemodialysis can effectively improve the quality of life of patients with diabetic nephropathy with a high safety profile; however, patients are predisposed to various complications after hemodialysis treatment, which seriously impair their physiological and psychological health and, in serious cases, even threaten their lives [5]. A large body of clinical research has demonstrated the importance of active and effective nursing measures for diabetic nephropathy patients undergoing hemodialysis [6]. Although the primary nursing care modality has been applied in numerous departments and diseases, reports on diabetic nephropathy patients undergoing hemodialysis remain scant. In the present study, 80 patients with diabetic nephropathy undergoing hemodialysis in our institution were recruited to assess the clinical effects of primary nursing on these patients and its impact on inflammatory responses.
## 2. Materials and Methods
### 2.1. Baseline Data
Between July 2019 and April 2021, 80 patients with diabetic nephropathy who underwent hemodialysis in our institution were recruited and assigned at a ratio of 1 : 1 to either a routine group or a primary group. The baseline characteristics of the routine group (22 males, 18 females, mean age of 57.67 ± 3.48 years, mean disease duration of 14.21 ± 2.25 years, and mean dialysis time of 12.66 ± 3.59 months) were comparable with those of the primary group (23 males, 17 females, mean age of 57.28 ± 3.56 years, mean disease duration of 14.13 ± 2.31 years, and mean dialysis time of 12.87 ± 3.45 months) (P>0.05) (Table 1). This study was reviewed and approved by the Medical Ethics Committee of the Qingdao Hospital of Traditional Chinese Medicine, No. 928797.

Table 1
Comparison of baseline data (n (%)).

| Characteristic | Routine group (n = 40) | Primary group (n = 40) | t or χ² | P value |
|---|---|---|---|---|
| Gender (male/female) | 22/18 | 23/17 | 0.051 | 0.822 |
| Mean age (years) | 57.67 ± 3.48 | 57.28 ± 3.56 | 0.495 | 0.622 |
| Mean disease duration (years) | 14.21 ± 2.25 | 14.13 ± 2.31 | 0.157 | 0.876 |
| Mean dialysis time (months) | 12.66 ± 3.59 | 12.87 ± 3.45 | −0.267 | 0.79 |
### 2.2. Inclusion and Exclusion Criteria
Inclusion criteria: patients without other vital organ disorders and without a history of psychiatric disorders or language and cognitive impairment were included in this study.

Exclusion criteria: patients with a history of psychiatric disorders or recent use of antidepressants, with malignant tumors or hematological or immune system disorders, with congenital genetic disorders, who refused to participate in this study, with poor compliance, or who were unable to follow up were excluded from this study.
### 2.3. Methods
(1) Patients in the routine group received routine care. The nursing staff recorded the patients’ nutrition, blood pressure, blood glucose, and other relevant indicators and promptly informed the physicians of any adverse reactions. The patients and their families were given health education about the disease and instructed to follow a reasonable diet and take appropriate exercise.

(2) Patients in the primary group received primary nursing. (1) Nursing staff educated patients about hypoglycemic drugs and taught them the correct methods of insulin injection and storage to ensure correct and reasonable use of the drugs. The patients were provided with health education about diabetic nephropathy to help them fully understand their disease [7]. (2) Patients on hemodialysis are prone to organ damage and, in severe cases, even blindness, which may lead to negative emotions such as anxiety and depression and to loss of confidence in treatment. Therefore, strengthened communication and timely psychological guidance helped improve patients’ psychological status and relieve negative emotions, thereby safeguarding the treatment effect. The patients were informed of the approach, process, necessity, and importance of hemodialysis, which facilitated their active cooperation with treatment. (3) The nursing staff formulated an individualized, reasonable diet plan according to each patient’s condition and assessed nutritional status by calculating daily intake from the patient’s urine volume. Insulin is usually used to control blood glucose in patients with diabetic nephropathy, but compromised insulin catabolism and clearance predisposes these patients to accumulation of exogenous insulin; blood glucose therefore requires close monitoring so that the insulin dosage can be adjusted. At any sign of hypoglycemia, the patient was given food as soon as possible [8]. (4) Nursing staff provided targeted guidance on complication prevention, and patients with complications were treated with drugs that protect renal function. Antihypertensive drugs were given as prescribed by the doctor if intractable hypertension occurred during hemodialysis. In the event of nausea, pallor, or cold sweats during dialysis, oxygen therapy was given, the patient’s blood flow was reduced, and ultrafiltration was discontinued; plasma and hypertonic saline infusions were administered if necessary. (5) The puncture site was cared for to avoid infection, and patients with long-term indwelling catheters were managed according to the requirements of aseptic practice.

In addition, both groups were given traditional Chinese nursing. Patients with diabetic nephropathy suffer from dryness, heat, loss of energy, and deficiency of both Qi and Yin. The Guanyuan, Sanyinjiao, and Zusanli acupoints were massaged according to the doctor’s instructions to replenish Qi and Yin and alleviate palpitations and irritability. For those with spleen and kidney yang deficiency, the ward was ventilated regularly to ensure sufficient sunlight, and hot compresses were applied to the waist, knees, and abdomen. For abdominal distension, moxibustion was performed at the Pishu and bilateral Shenshu acupoints.
### 2.4. Outcome Measures
(1) Nursing outcomes: the nursing outcomes included blood creatinine, fasting glucose, urea nitrogen, and proteinuria. Blood creatinine, urea nitrogen, and proteinuria levels were determined before and after nursing using an immunoenzymatic assay on a fully automated biochemical analyzer. Fasting blood glucose was determined using the Myriad BS-350 automatic biochemical analyzer with its matching reagents. The lower the levels of the above indexes, the better the nursing outcome.

(2) Inflammatory response indexes: the main indexes were interleukin (IL)-6, high-sensitivity C-reactive protein (hs-CRP), and tumor necrosis factor-alpha (TNF-α). Fasting elbow venous blood (5 ml) was collected before and after the intervention, and the levels of IL-6, hs-CRP, and TNF-α were determined by ELISA, performed according to the kit instructions. The lower the levels of these inflammatory indexes, the better the nursing outcome.

(3) Negative emotions: the self-rating anxiety scale (SAS) and self-rating depression scale (SDS) were used to assess psychological status. SAS: <50 points indicate no anxiety, 50–59 mild anxiety, 60–69 moderate anxiety, and ≥70 severe anxiety. SDS: <53 points indicate no depression, 53–62 mild depression, 63–72 moderate depression, and ≥73 severe depression. The lower the score, the better the patient’s psychological status (a small scoring sketch follows this list).
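As an illustration of the cutoffs above (hypothetical code, not part of the study’s procedures), a raw scale score can be mapped to a severity label as follows:

```python
# Illustrative classification of SAS/SDS scores by the cutoffs listed above.
def classify(score: float, cutoffs: tuple[float, float, float]) -> str:
    """Map a scale score to a severity label given (mild, moderate, severe) cutoffs."""
    mild, moderate, severe = cutoffs
    if score < mild:
        return "none"
    if score < moderate:
        return "mild"
    if score < severe:
        return "moderate"
    return "severe"

SAS_CUTOFFS = (50, 60, 70)  # anxiety: <50 none, 50-59 mild, 60-69 moderate, >=70 severe
SDS_CUTOFFS = (53, 63, 73)  # depression: <53 none, 53-62 mild, 63-72 moderate, >=73 severe

print(classify(56.83, SAS_CUTOFFS))  # "mild" (e.g., a baseline mean SAS score)
```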
### 2.5. Statistical Analysis
SPSS 22.0 was used for data analysis. Measurement data were expressed as mean ± standard deviation (x̄ ± s) and compared using the independent-samples t-test. Count data were expressed as number of cases (rate) and analyzed using the chi-square test. Differences were considered statistically significant at P<0.05.
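As a sketch of these analyses, the reported Table 1 statistics can be reproduced from the summary data with SciPy (assuming SciPy is available; this is an illustration, not the study’s original SPSS workflow). The reported gender chi-square matches when no Yates continuity correction is applied.

```python
# Reproducing two entries of Table 1 from summary statistics (SciPy assumed).
from scipy.stats import ttest_ind_from_stats, chi2_contingency

# Independent-samples t-test for mean age:
# 57.67 +/- 3.48 vs 57.28 +/- 3.56 years, n = 40 per group.
t, p = ttest_ind_from_stats(mean1=57.67, std1=3.48, nobs1=40,
                            mean2=57.28, std2=3.56, nobs2=40)
print(f"age: t = {t:.3f}, P = {p:.3f}")  # ~ t = 0.495, P = 0.622 (Table 1)

# Chi-square test for gender (routine: 22 M / 18 F; primary: 23 M / 17 F).
chi2, p, dof, _ = chi2_contingency([[22, 18], [23, 17]], correction=False)
print(f"gender: chi2 = {chi2:.3f}, P = {p:.3f}")  # ~ chi2 = 0.051, P = 0.822
```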
## 3. Results
### 3.1. Nursing Outcomes
Primary nursing resulted in lower levels of blood creatinine, fasting glucose, urea nitrogen, and proteinuria versus routine nursing (P<0.05) (Table 2).

Table 2
Comparison of nursing outcomes (x̄ ± s).

| Group | n | Blood creatinine (μmol/L), before | Blood creatinine (μmol/L), after | Fasting glucose (mmol/L), before | Fasting glucose (mmol/L), after |
|---|---|---|---|---|---|
| Routine group | 40 | 353.41 ± 12.78 | 237.56 ± 11.10 | 13.57 ± 1.76 | 9.32 ± 1.31 |
| Primary group | 40 | 353.27 ± 12.59 | 221.52 ± 10.13 | 13.69 ± 1.72 | 7.11 ± 1.02 |
| t value | — | 0.049 | 6.751 | −0.308 | 8.419 |
| P value | — | 0.961 | <0.001 | 0.759 | <0.001 |

| Group | n | Urea nitrogen (mmol/L), before | Urea nitrogen (mmol/L), after | Proteinuria (g/24 h), before | Proteinuria (g/24 h), after |
|---|---|---|---|---|---|
| Routine group | 40 | 18.53 ± 2.27 | 12.56 ± 2.36 | 2.85 ± 1.14 | 2.82 ± 1.05 |
| Primary group | 40 | 18.42 ± 2.36 | 8.63 ± 2.60 | 2.91 ± 1.03 | 1.11 ± 0.59 |
| t value | — | 0.212 | 7.079 | −0.247 | 8.98 |
| P value | — | 0.833 | <0.001 | 0.806 | <0.001 |
### 3.2. Inflammatory Response Indices
Patients in the primary group had significantly lower levels of IL-6, hs-CRP, and TNF-α than those in the routine group (P<0.05) (Table 3).

Table 3
Comparison of inflammatory response indices (x̄ ± s).

| Group | n | IL-6 (pg/ml), before | IL-6 (pg/ml), after | hs-CRP (mg/L), before | hs-CRP (mg/L), after | TNF-α (pg/ml), before | TNF-α (pg/ml), after |
|---|---|---|---|---|---|---|---|
| Routine group | 40 | 2.34 ± 0.81 | 1.62 ± 0.57 | 10.83 ± 2.67 | 8.35 ± 2.10 | 64.83 ± 9.82 | 56.28 ± 5.77 |
| Primary group | 40 | 2.27 ± 0.85 | 1.11 ± 0.34 | 10.92 ± 2.59 | 6.11 ± 1.68 | 64.80 ± 9.79 | 45.32 ± 5.21 |
| t value | — | 0.377 | 4.86 | −0.153 | 5.268 | 0.014 | 8.916 |
| P value | — | 0.707 | <0.001 | 0.879 | <0.001 | 0.989 | <0.001 |
### 3.3. SAS and SDS Scores
The patients receiving primary nursing showed significantly lower scores of SAS and SDS than those given routine nursing (P<0.05) (Table 4).

Table 4
Comparison of SAS and SDS scores (x̄ ± s).

| Group | n | SAS, before | SAS, after | SDS, before | SDS, after |
|---|---|---|---|---|---|
| Routine group | 40 | 56.83 ± 3.45 | 36.56 ± 3.58 | 56.89 ± 3.76 | 39.28 ± 3.31 |
| Primary group | 40 | 56.77 ± 3.59 | 22.13 ± 3.15 | 56.92 ± 3.72 | 20.56 ± 2.87 |
| t value | — | 0.076 | 19.139 | −0.036 | 27.025 |
| P value | — | 0.94 | <0.001 | 0.971 | <0.001 |
## 4. Discussion
Diabetic nephropathy is one of the major complications of diabetes, and diabetic nephropathy complicated by end-stage renal disease is the main cause of death in diabetic patients. Hemodialysis is an effective treatment for diabetic nephropathy, but prolonged hemodialysis may lead to inflammation, hypoproteinemia, muscle protein depletion, and other nutritional deficiencies, which reduce patients’ immunity, aggravate the inflammatory response, and compromise the prognosis [9]. A related study by Lee et al. [10] revealed that effective nursing measures for patients with diabetic nephropathy are essential for reducing clinical mortality and improving patients’ quality of survival [11]. Patient-oriented primary nursing is appreciated in clinical practice: patients are given systematic, targeted nursing measures to reduce the incidence of complications and are instructed in psychological management, medication, diet, and health education [12]. Clinical studies have shown that primary nursing facilitates recovery and lays a good foundation for rehabilitation. Diabetic nephropathy is usually detected only in the terminal stage of the disease because its early-stage symptoms are insidious, which complicates treatment [13]. Currently, proteinuria, blood creatinine, and urea nitrogen are mostly determined in clinical practice by serological tests of renal function. Proteinuria levels reflect the severity of kidney damage and early nephropathy. Blood creatinine, a product of muscle metabolism, can be used to inspect renal function. Urea nitrogen, a nitrogenous plasma compound that is filtered by the glomerulus and excreted from the body, is generally used to gauge the severity of the disease. Fasting glucose can be used to assess the patient’s glycemic status [14].

The results of the present study showed that primary nursing resulted in lower levels of blood creatinine, fasting glucose, urea nitrogen, and proteinuria versus routine nursing, indicating that primary nursing is effective in improving renal function and controlling blood glucose [15], which may be attributed to the scientific, reasonable, and targeted disease guidance and diet of primary nursing. Ren et al. [16] found that inflammatory factors such as interleukin (IL)-6, high-sensitivity C-reactive protein (hs-CRP), and tumor necrosis factor-α (TNF-α) in patients with diabetic nephropathy accelerate renal vascular sclerosis, which reduces vascular elasticity and stimulates the proliferation of glomerular mesangial cells, thereby aggravating the disease. Thipsawat et al. [17] stated that controlling the inflammatory response and proteinuria in patients with diabetic nephropathy helps alleviate impaired renal function. Vlachou et al. [18] demonstrated that effective nursing interventions in patients with diabetic nephropathy treated with hemodialysis mitigated the inflammatory response and improved the prognosis. Moreover, primary nursing herein was associated with significantly lower levels of IL-6, hs-CRP, and TNF-α versus routine nursing, suggesting effective alleviation of the inflammatory response after the primary nursing intervention.
The reason may be that primary nursing closely monitors the patient’s condition and medication, which allows a rapid first response to any adverse event [19]. In addition, sound physical and mental health contributes to better treatment outcomes [20]. Here, the patients receiving primary nursing showed significantly lower SAS and SDS scores versus those given routine nursing, indicating that primary nursing is effective in relieving patients’ negative emotions. The reason may be that primary nursing provides timely comfort and encouragement to help patients resolve negative emotions [21]. It is worth noting that patients experience negative psychological changes after falling ill; the task of psychological nursing is to improve the patient’s psychological state and help them adapt to their new status, which benefits treatment and recovery. However, important limitations should be outlined: the sample comprised a select and relatively small number of participants, which compromises the generalizability of the findings.
## 5. Conclusion
Primary nursing improves the renal function of diabetic nephropathy patients undergoing hemodialysis, reduces the inflammatory response, and eliminates the negative emotions of patients, which shows great potential for clinical application.
---
*Source: 1011415-2022-08-09.xml* | 2022 |
# The Effect of Nursing Intervention Model Using Mobile Nursing System on Pregnancy Outcome of Pregnant Women
**Authors:** Yang Lu
**Journal:** Journal of Healthcare Engineering
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011595
---
## Abstract
Owing to recent advances in technology, specifically mobile phones, these devices can be used in hospitals to monitor or speed up various activities related to doctors and nurses. Various mechanisms have been presented in the literature, but none of them has considered the effectiveness of this technology in a proper mobile nursing system designed specifically for pregnant women. Therefore, in this paper, we explore the effect of an intervention model based on a mobile nursing system on the pregnancy outcomes of pregnant women. An Android-based mobile nursing monitoring system was adopted to monitor and transmit human physiological data through physiological parameter monitoring equipment and to continuously monitor the physiological parameters of pregnant women; if a pregnant woman’s physiological data were abnormal, timely nursing intervention was implemented. In this study, 266 pregnant women in the electronic records (E-records) were selected as research objects and divided into two groups according to the intervention method. Pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on the mobile nursing system. The results showed that the incidence of each adverse pregnancy outcome indicator in group B was significantly lower than that in group A, and the difference was statistically significant (P<0.05). The nursing intervention model based on the mobile nursing system can thus effectively improve pregnancy outcomes. The mobile nursing system can help nursing staff detect abnormalities in pregnant women during pregnancy and give effective nursing measures in time, which helps improve pregnancy outcomes, reduce the probability of adverse pregnancy outcomes, ensure the safety of puerperae and newborns, and lessen delivery risk factors.
---
## Body
## 1. Introduction
There are many factors that cause adverse pregnancy outcomes, arising from the mutual influence of parental, genetic, lifestyle, environmental, and psychological factors [1]. With the gradual rise in their social status, women can pursue more education and work and are no longer confined to the family. As a result, the ages of marriage and childbearing have been postponed, and the number of pregnant women of advanced age has increased year by year. Many studies have proved that rising age leads to an increased incidence of pregnancy complications and comorbidities, which greatly increases the risks of childbirth and the probability of adverse pregnancy outcomes such as macrosomia and low-birth-weight babies [2]. With the development of modern society, living standards have improved, bringing unreasonable diets such as high-calorie and high-protein food intake. In addition, because pregnant women should not be too active during pregnancy, more and more of them cannot control their weight gain well. Studies have confirmed a close relationship between a pregnant woman’s weight gain and pregnancy outcome: obese women are more likely to develop gestational diabetes or hypertension during pregnancy, and the probability of cesarean section is greatly increased [3, 4]. With the rapid development of industrialized society, environmental pollution has become increasingly serious, which has also increased the probability of adverse pregnancy outcomes such as miscarriage and newborn defects [5]. Adverse pregnancy outcomes carry a risk of maternal and infant death, pose major public health and social problems for the quality of newborn populations in various countries, and place huge economic and mental pressure on families and society.

Placenta previa, postpartum hemorrhage, placental abruption, and premature rupture of membranes are all important factors that cause premature birth [6]. Studies have shown that advanced age, smoking, drug abuse, multiple pregnancy, curettage, and multiple abortions are all risk factors for placenta previa [7]. High blood pressure, smoking, drug use, and advanced age in puerperae are also important causes of placental abruption [8]. Deficiencies of trace elements such as zinc, lack of vitamin C, and reproductive tract infections can all lead to premature rupture of membranes [9]. Besides such bodily factors as soft birth canal injury and weak uterine contractions, repeated curettage, excessive and overly frequent births, poor physical fitness, and other chronic systemic diseases in pregnant women may all cause postpartum hemorrhage [10]. To reduce these adverse pregnancy outcomes, the Chinese government and local medical and healthcare departments have begun to pay attention to and strengthen healthcare for pregnant women [11]. With strengthened publicity and continuous improvement of healthcare during pregnancy, people have begun to pay more attention to it, health awareness has gradually increased, and compliance has improved. However, the incidence of adverse pregnancy outcomes such as newborn defects, low birth weight, and pregnancy or childbirth complications has not been found to decrease significantly.
Therefore, nursing intervention during pregnancy has gradually attracted the attention of medical staff as a way to reduce adverse pregnancy outcomes and improve population quality [12]. Pregnant women usually go to the hospital for checkups only at regular intervals, as their circumstances allow. However, for pregnant women, especially those in the third trimester, vital signs need to be detected in real time so that help can be given when abnormalities occur.

Modern telemedicine technology involves many fields, such as software technology, computer networks, medical image processing, communication technology, and electronic information technology. The rapid development of hospital informatization has made the implementation of the mobile nursing system possible. The mobile nursing system covers the entire process from preadmission through hospitalization to discharge. With patient satisfaction as the core, it provides medical staff and managers with a process-based, informatized, paperless, automated, and intelligent one-stop nursing management platform [13]. The mobile nursing system can record patient vital signs in real time and directly record and display the nursing diagnoses and nursing measures implemented after admission, effectively reducing medical accidents. The computer automatically generates electronic nursing records, qualitatively changing the nursing diagnosis and treatment process, allowing nurses to dispense, check, and execute more efficiently and making the whole nursing process paperless. In addition, it can store patients’ basic information in time, optimize the medical process, minimize the medical error rate, and greatly lessen the work intensity. Electronic medical records can be accessed and written on the move, medical decision-making becomes scientific and intelligent, and the quality assessment standards for medical staff become digital.

Therefore, the Android-based mobile nursing monitoring system used here is a form of remote health monitoring. Physiological parameter monitoring equipment monitors and transmits human physiological data, tracking parameters such as the user’s heart rate, blood pressure, and body temperature in real time; if a user’s physiological parameters are abnormal, an alarm can be issued in time. This approach was compared against pregnant women who underwent only routine physical examinations during pregnancy, without nursing intervention, to explore its effect on pregnancy outcome. In this study, 266 pregnant women in the electronic records (E-records) were selected as research objects and divided into two groups according to the intervention method.
Pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on the mobile nursing system.

The remainder of the paper is organized as follows. Section 2 describes the Android-based mobile nursing monitoring system in detail, along with its various components. Section 3 reports the experimental results and observations, which confirm the strong performance of the system. Section 4 provides a detailed discussion, followed by brief concluding remarks in Section 5. Finally, the references are listed.
## 2. Android-Based Mobile Nursing Management System
### 2.1. Research Objects
In this study, the clinical data of 266 pregnant women were extracted from the electronic record (E-record) system. The following were excluded: (i) pregnant women who chose induced labor or abortion because of family planning restrictions or personal or family factors after pregnancy; (ii) pregnant women with mental illness who were unaware of their prepregnancy health care, had poor compliance, or were unwilling to cooperate with the investigation. According to the intervention method, the patients were divided into two groups: pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on the mobile nursing system.
### 2.2. Mobile Nursing System
In this study, a remote nursing monitoring system based on the Android platform was adopted. It provided remote guidance monitoring and physiological data monitoring, offered pregnant women medical interaction, realized a seamless connection and all-round medical monitoring services between hospital and home, and helped nurses give timely nursing interventions (Figure 1).

Figure 1: The remote nursing monitoring system based on the Android platform.

First, the user's heart rate, blood pressure, and body temperature were acquired by the remote medical monitoring system through various physiological data collection terminals. The collected signals were then digitized and transmitted over WiFi to the Android mobile phone monitoring terminal (MPMT). The Android MPMT processed, monitored, and stored the detected physiological data such as body temperature and blood pressure. In addition, the MPMT forwarded the acquired data, via the wire beam electrode (WBE) service platform, to the service system of the remote monitoring center, so that the center could monitor these data in real time. Users, their relatives, and medical staff could log in at any time to view the trends of current or past physiological data and other related service information.
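To make the terminal-side workflow concrete, the following is a minimal Python sketch of the kind of abnormality check the MPMT is described as performing. The threshold values, field names, and the check_vitals helper are illustrative assumptions rather than details taken from the actual system.

```python
# A minimal sketch of the MPMT-side abnormality check described above.
# The limits, field names, and structure are illustrative assumptions,
# not values or interfaces taken from the paper's system.
ALARM_LIMITS = {  # (low, high) bounds per physiological parameter
    "heart_rate": (50, 110),    # beats per minute
    "systolic_bp": (90, 140),   # mmHg
    "body_temp": (36.0, 37.5),  # degrees Celsius
}

def check_vitals(sample: dict) -> list:
    """Return the names of parameters that fall outside their limits."""
    alarms = []
    for name, (low, high) in ALARM_LIMITS.items():
        value = sample.get(name)
        if value is not None and not (low <= value <= high):
            alarms.append(name)
    return alarms

reading = {"heart_rate": 118, "systolic_bp": 128, "body_temp": 36.8}
if alarms := check_vitals(reading):
    # In the real system, this is where the alarm would be raised and the
    # data forwarded to the remote monitoring center for the nursing staff.
    print("alarm:", alarms)
```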
### 2.3. Related Algorithms for Physiological Parameter Collection
Because the monitoring system detects the user's physiological data remotely, unlike a hospital, where patients can be helped at any time, it is essential to predict the patient's physiological state in real time. For example, if a cardiac abnormality in a heart disease patient is not detected in time, it may lead to sudden death. If the system can predict a patient's future physical condition in time, nurses have enough time to prepare solutions and are not overwhelmed when problems arise. The prediction system not only reduces the risk to patients but also reduces the pressure on nurses and effectively improves the nursing effect. Therefore, a time series prediction algorithm was applied in this study to predict patients' blood pressure, electrocardiograph (ECG), and other conditions.

At this stage, time series prediction methods fall roughly into two major categories: statistical methods and artificial intelligence methods [14, 15]. They can also be divided into global models and local models according to differences in modeling principle and structure [16] (Figure 2). Statistical methods comprise qualitative and quantitative prediction [17]. Qualitative prediction draws on existing theory and knowledge and obtains results by intuition; the accuracy of its conclusions is therefore largely determined by the subjective judgment of the forecaster, so the accuracy of the prediction results is poor. Quantitative prediction establishes and computes a mathematical model from existing experimental data to obtain predictions. Compared with qualitative prediction, its accuracy is substantially better, so it has attracted more attention.

Figure 2: Comparison of prediction principles between the global model (a) and the local model (b).
#### 2.3.1. Global Model Prediction
Statistical methods represented by the autoregressive integrated moving average (ARIMA) model fit the time series by determining the model order and coefficients; the fitted model is then used to predict the future trend, which works well for linear systems [18]. The specific statistical methods are as follows.

(1) Autoregressive (AR) Model. The AR model is a dynamic description of a stationary random process and is suitable for predicting linear time series data. In this model, the current value $y_t$ of the time series is represented by a linear combination of the $p$ most recent historical values of the sequence plus a white noise disturbance $a_t$:

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + a_t. \tag{1}$$

Equation (1) is the $p$-order AR model AR($p$), where $a_t$ is the white noise disturbance, a sequence with mean 0 and variance $\sigma_n^2$, and $\phi_i$ ($i = 1, 2, \ldots, p$) are the model coefficients. By definition, the AR model regresses the sequence on its own historical data.

(2) Moving Average (MA) Model. In the MA model, the current value of the time series is represented by a linear combination of the white noise disturbances at past moments and the disturbance at the current moment:

$$y_t = a_t - \theta_1 a_{t-1} - \theta_2 a_{t-2} - \cdots - \theta_q a_{t-q}. \tag{2}$$

Equation (2) is the $q$-order MA model MA($q$), where the weighting factors $\theta_i$ ($i = 1, 2, \ldots, q$) are the model coefficients. The MA model is always stationary and can likewise be used to predict linear time series.

(3) Autoregressive Moving Average (ARMA) Model. ARMA($p, q$) combines the two:

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + a_t - \theta_1 a_{t-1} - \theta_2 a_{t-2} - \cdots - \theta_q a_{t-q}. \tag{3}$$

As equation (3) shows, the ARMA model contains the AR and MA models as special cases. It is generally used to model linear, stationary time series; because the modeling is easy to understand and easy to operate, it is widely used in many fields. In reality, however, most environments are not stationary, or the sequence data are nonlinear, and since ARMA is a linear model it cannot meet the accuracy requirements of such data.

(4) Autoregressive Integrated Moving Average (ARIMA) Model. The structural model of ARIMA($p, d, q$) can be expressed as

$$\phi(B)\,\nabla^d y_t = \theta(B)\,a_t, \tag{4}$$

where $\nabla = 1 - B$ is the difference operator, the nonstationary time series $y_t$ is turned into the stationary series $\nabla^d y_t$ by $d$-order differencing, $\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$, and $\theta(B) = 1 - \theta_1 B - \cdots - \theta_q B^q$. Equation (4) shows that when $d = 0$, the ARIMA($p, d, q$) model reduces to the ARMA($p, q$) model. The process of ARIMA time series prediction is shown in Figure 3.

Figure 3: The process of ARIMA time series prediction.

The ARIMA model improves on the ARMA model by applying differencing to handle nonstationary data. Although it is applicable within a certain range, it is strongly tied to the characteristics of the application data and cannot adapt to some complex nonlinear data.

With the continuous advancement of modern machine learning, artificial intelligence methods have gradually emerged and are now widely used in many fields of time series prediction [19]. Compared with traditional statistical methods, these methods have obvious advantages in processing time series with complex nonlinearities, and in some cases they can build the mapping between input and output without human intervention. The least squares support vector machine (LS-SVM) method used in this study is briefly described below.

LS-SVM is mostly used for function estimation and nonlinear regression. The regression model in the initial weight space is

$$y(x) = \omega^T \phi(x) + b, \tag{5}$$

where $\omega$ is the weight vector, $b$ is the bias term, and $\phi(x)$ is the mapping from the input space to a high-dimensional feature space. The LS-SVM optimization in the initial space is

$$\min_{\omega, b, e} J(\omega, e) = \frac{1}{2}\omega^T\omega + \frac{1}{2}\gamma \sum_{k=1}^{N} e_k^2, \tag{6}$$

subject to the constraints

$$y_k = \omega^T \phi(x_k) + b + e_k, \quad k = 1, \ldots, N, \tag{7}$$

where $y_k$ is the output value, $\gamma$ is the regularization parameter penalizing the error term, and $e_k$ is the regression error of the $k$-th training sample.

Applying the Lagrangian function to this optimization gives

$$L(\omega, b, e, a) = J(\omega, e) + \sum_{k=1}^{N} a_k \left( y_k - \omega^T \phi(x_k) - b - e_k \right), \tag{8}$$

where $a_k$ is a Lagrange multiplier; the inequality-free problem is solved by setting the partial derivatives to zero according to the KKT conditions:

$$\frac{\partial L}{\partial \omega} = 0 \Rightarrow \omega = \sum_{k=1}^{N} a_k \phi(x_k), \qquad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{k=1}^{N} a_k = 0, \qquad \frac{\partial L}{\partial e_k} = 0 \Rightarrow \gamma e_k - a_k = 0, \qquad \frac{\partial L}{\partial a_k} = 0 \Rightarrow y_k - \omega^T \phi(x_k) - b - e_k = 0. \tag{9}$$

Eliminating $\omega$ and $e_k$ yields the linear system

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & K + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \tag{10}$$

where $\mathbf{1} = (1, 1, \ldots, 1)^T$ is an $N \times 1$ vector, $a = (a_1, a_2, \ldots, a_N)^T$, and $K$ is the kernel matrix. The LS-SVM prediction model is then

$$\hat{y}(x) = \sum_{k=1}^{N} a_k K(x, x_k) + b, \tag{11}$$

where $\hat{y}(x)$ is the output of the prediction model, $K(x, x_k) = \phi(x)^T \phi(x_k)$, and the radial basis kernel $K(x, x_k) = \exp\left(-\|x - x_k\|^2 / \sigma^2\right)$ was used.
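As an illustration of equations (5)-(11), the following is a minimal numpy sketch of LS-SVM regression: the dual variables a and the bias b come from solving the linear system of equation (10), and predictions follow equation (11) with the RBF kernel. The synthetic data and hyperparameter values are assumptions for demonstration only, not settings from the study.

```python
# A minimal numpy sketch of LS-SVM regression per equations (5)-(11);
# data and hyperparameters are synthetic placeholders.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # K(x, x_k) = exp(-||x - x_k||^2 / sigma^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    N = len(y)
    K = rbf_kernel(X, X, sigma)
    # Assemble the (N+1) x (N+1) system of equation (10)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # dual variables a, bias b

def lssvm_predict(X_new, X_train, a, b, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ a + b  # equation (11)

X = np.linspace(0, 6, 60).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(0).standard_normal(60)
a, b = lssvm_fit(X, y)
print(lssvm_predict(np.array([[3.0]]), X, a, b))  # close to sin(3.0)
```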
### 2.4. Local Model Prediction
The main purpose of the local model is to find the nearest-neighbor set of the query sample, and its focus is the choice of similarity measure: an appropriate measure ensures that the local model achieves a good prediction effect. In this study, the multitask LS-SVM (MTLS-SVM) algorithm was combined with the K-means algorithm, joining semisupervised and supervised learning. The input values were clustered by K-means so that adjacent samples fell into the same cluster, and each cluster was trained and simulated by the MTLS-SVM algorithm to obtain the predicted output. This approach not only makes up for traditional algorithms' weakness in mining sequence information but also avoids the poor prediction accuracy caused by ignoring the correlation between sequences. At the same time, it reduces the dimensionality handled by the support vector machine and effectively improves operating efficiency. The algorithm mainly comprises three parts: data preprocessing, task construction, and model training (Figure 4).

Figure 4: The time series prediction model based on K-means and MTLS-SVM.

The data preprocessing is as follows. Suppose the time series $y_1, y_2, \ldots, y_n, y_{n+1}, \ldots, y_t$ is sampled at equal time intervals, $t$ is the current time, and the prediction task is to estimate the value at time $t + h$ from the historical samples:

$$y_{t+h} = f(y_t, y_{t-1}, \ldots, y_{t-n+1}), \tag{12}$$

where $h$ is the prediction horizon ($h = 1$ means single-step prediction and $h > 1$ means multistep prediction), $f$ is the prediction model, and $n$ is the delay (window) length. The collected observations $y_1, y_2, \ldots, y_t$ are clustered by the K-means algorithm to construct the data set samples.

For the task construction (Figures 4 and 5), single or multiple tasks are built on the time series; combining multitask learning with time series prediction exploits the close relationship between adjacent time points. Previously, single-task learning predicted the value at time $t + h$ ($h \geq 1$) from the observations at $n$ times, so the model output was a single value. The multitask learning method jointly represents the prediction outputs at $t + 1, t + 2, \ldots, t + k$, which produces an inductive bias and improves the accuracy of prediction.

Figure 5: Multitask construction in time series.

Applying the multitask learning method only requires building one multioutput prediction model, from which the output values of $k$ tasks are obtained at a time. According to the prediction target, the appropriate $k$ tasks are selected, and a corresponding training set is constructed for each task.

The model training is as follows. The training sets of the $k$ learning tasks and the MTLS-SVM model are trained synchronously. Because these tasks predict adjacent time values simultaneously, they are parallel and closely related; throughout training, the tasks constrain each other, achieving an inductive bias that improves the model's ability to predict unknown data (Algorithm 1).

Algorithm 1: Time series prediction based on MTLS-SVM.
Input: time series $y_1, y_2, \ldots, y_n, y_{n+1}, \ldots, y_t$
Output: estimated value $y_{t+h}$ at a future moment
(1) Data preprocessing: cluster the original time series $y$ with the K-means algorithm, determine the horizon $h$ required by the prediction task, and construct the data set samples.
(2) Determine $k$ learning tasks according to the target estimates $t + 1, t + 2, \ldots, t + k$ ($h \leq k$); each task outputs its corresponding estimate, the data set is clustered by the K-means algorithm task by task, and the $k$ classes form the training sets.
(3) Train the set of $k$ tasks and the MTLS-SVM model at the same time to obtain the final prediction model.
(4) Feed the test samples of the $k$ tasks into the trained MTLS-SVM model to obtain $k$ estimates simultaneously, and select the required estimate $y_{t+h}$ from them.

After the required estimate is selected, it can be compared with the measured data to assess the prediction accuracy of the algorithm; a simplified sketch of this pipeline is given below.
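The following sketch illustrates the shape of this local-model pipeline under simplifying assumptions: windows are built per equation (12), K-means groups similar windows, and scikit-learn's KernelRidge, which shares the LS-SVM's regularized kernel regression form, stands in for the (multitask) LS-SVM. The data, window length, and parameter values are synthetic placeholders, not the study's settings.

```python
# A simplified, single-task sketch of the K-means + kernel-regression
# pipeline; KernelRidge is used as a stand-in for the LS-SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

def make_windows(series, n, h):
    """Build (X, y) pairs per equation (12): n lagged values -> value h steps ahead."""
    X, y = [], []
    for t in range(n - 1, len(series) - h):
        X.append(series[t - n + 1 : t + 1])
        y.append(series[t + h])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

X, y = make_windows(series, n=10, h=1)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# One predictor per cluster of similar windows (the "local" models)
models = {}
for c in range(3):
    mask = km.labels_ == c
    models[c] = KernelRidge(kernel="rbf", gamma=0.5).fit(X[mask], y[mask])

# Predict: route the latest window to its cluster's model
x_new = series[-10:].reshape(1, -1)
c = km.predict(x_new)[0]
print("forecast:", models[c].predict(x_new)[0])
```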
### 2.5. Statistical Methods
SPSS 22.0 was used for statistical analysis, and descriptive statistics were computed for the pregnancy outcomes of each group. The data of the two groups were compared using analysis of variance, and P < 0.05 was considered statistically significant.
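For readers reproducing the analysis outside SPSS, a one-way ANOVA between two groups can be run, for example, with scipy; the scores below are made-up placeholders, not study data.

```python
# A hedged illustration of the between-group comparison: the study used
# SPSS 22.0; scipy's one-way ANOVA is the equivalent test, shown here on
# illustrative outcome scores for groups A and B.
from scipy import stats

group_a = [3, 2, 4, 3, 5, 2, 4]  # illustrative values only
group_b = [1, 2, 1, 0, 2, 1, 1]

f_stat, p_value = stats.f_oneway(group_a, group_b)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant")
```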
## 3. Results and Evaluation
In this section, the evaluation metrics and the performance of the proposed system are discussed in detail.
### 3.1. Algorithm Model Experiment Results
To verify the practicability of the algorithm and obtain accurate experimental results, the case data of 95 patients diagnosed with heart disease and hypertension were extracted. After data collection and processing, the K-means algorithm was used for clustering. An appropriate value of k was determined first so that the results achieved high accuracy: the results for k = 2, 3, 4, and 5 were simulated on different subsets, as shown in Figures 6-9.

Figure 6: Accuracy test results when k = 2.
Figure 7: Accuracy test results when k = 3.
Figure 8: Accuracy test results when k = 4.
Figure 9: Accuracy test results when k = 5.

Data sets 1, 2, and 3 contained 198, 427, and 562 samples, respectively. The results indicate that when the data set was small, prediction accuracy was best at k = 2, and when the data set was large, prediction accuracy was best at k = 3.
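As a hedged illustration of how candidate k values might be screened, the sketch below uses the silhouette score on random stand-in data of the same size as data set 1; the original study instead compared prediction accuracy across subsets, so this is a proxy, not the reported procedure.

```python
# Screening candidate k values with the silhouette score (a proxy for the
# accuracy comparison reported above); the data here is a random stand-in.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.default_rng(0).standard_normal((198, 10))  # size of data set 1
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```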
### 3.2. Comparison of Pregnancy Outcome Results
The pregnancy outcome indicators of the two groups are compared in Figure 10. The incidence of each pregnancy outcome indicator in group A was significantly higher than that in group B, and the difference was statistically significant (P < 0.05).

Figure 10: Comparison of pregnancy outcomes of pregnant women (∗P < 0.05 compared with the control group).
## 4. Discussion
Poor pregnancy outcomes result from a variety of factors before and during pregnancy and seriously threaten the safety of mothers and babies; a poor prognosis places huge pressure on the family and society. Poor pregnancy outcomes include complications during childbirth, complications during pregnancy, induction of labor, and premature birth caused by various pathological factors during pregnancy [20]. Studies around the world have shown that nursing intervention can effectively reduce the probability of adverse pregnancy outcomes and is an effective way to protect the health of pregnant women during pregnancy and after delivery [12, 21]. However, traditional nursing interventions cannot guarantee continuous attention to the physical condition of pregnant women. Therefore, this study explored the impact of a nursing intervention model based on the mobile nursing system on pregnancy outcomes, providing data support for further improving health care during pregnancy and reducing the occurrence of undesirable pregnancy outcomes.

In this article, the K-means algorithm and the multitask LS-SVM algorithm were combined with the human physiological parameter monitoring system: K-means clustered the data, the model was then trained by the multitask LS-SVM algorithm, and accurate results were obtained in predicting patients' physiological parameters such as ECG and blood pressure. In addition, pregnancy outcomes were compared between pregnant women who received no special nursing intervention and those given nursing intervention based on the mobile nursing system. For the outcome indicators of placenta previa, premature delivery, fetal distress, postpartum hemorrhage, premature rupture of membranes, urinary tract infection, polyhydramnios, and cesarean section, the probability of each adverse pregnancy outcome in group B was significantly lower than in group A, and the difference was statistically significant (P < 0.05). This shows that the nursing intervention model based on the mobile nursing system can effectively improve pregnancy outcomes. Preterm birth refers to delivery at 28 or more but fewer than 37 completed weeks of gestation. In this study, preterm birth accounted for 8.4% of all adverse pregnancy outcomes among puerperae, within the previously reported average incidence of 5%-15% in China. Postpartum hemorrhage means blood loss of more than 500 mL within 24 hours after delivery and typically accounts for about 2%-3% of all deliveries; in this study it accounted for 9.4%. This may be related to the increase in the cesarean section rate, which in this study was as high as 54.1%.
## 5. Conclusion
In this paper, we explored the effect of an intervention model based on the mobile nursing system on the pregnancy outcomes of pregnant women. An Android-based mobile nursing monitoring system was adopted to monitor and transmit human physiological data through physiological parameter monitoring equipment and to continuously monitor the physiological parameters of pregnant women; if a pregnant woman's physiological health data became abnormal, timely nursing intervention was implemented. In this study, 266 pregnant women in the electronic records (E-records) were selected as research objects and divided into two groups according to the intervention method: pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on the mobile nursing system. In summary, this study compensated for the previous inability to monitor pregnant women in real time and respond effectively to emergencies. Many factors affect the outcome of a pregnancy; only by doing a good job in pregnancy health care, discovering pregnancy risk factors in time, and giving corresponding nursing interventions can adverse pregnancy outcomes be effectively reduced, the health of pregnant women and newborns be ensured, and the birth quality of the population be improved.
---

*Source: 1011595-2022-02-23.xml*

**Title:** The Effect of Nursing Intervention Model Using Mobile Nursing System on Pregnancy Outcome of Pregnant Women

**Author:** Yang Lu

**Journal:** Journal of Healthcare Engineering
(2022)

**Category:** Medical & Health Sciences

**Publisher:** Hindawi

**License:** CC BY 4.0
http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1011595
## 1. Introduction
There are many factors that cause bad pregnancy outcome, which are caused by the mutual influence of multiple factors such as parents, heredity, personal living habits, environment, and psychology [1]. With the gradual rise of social status, women have the possibility of receiving more education or work and are no longer confined to the family. Therefore, the marriage and childbearing ages of women have been postponed, and the elderly pregnant women have increased year by year. Many studies have proved that the rise of age leads to an increase in the incidence of pregnancy complications and complications, which greatly increases the risk of birth and the probability of adverse pregnancy outcomes, such as large babies and low birth weight babies [2]. With the development of modern society, people’s living standards have improved, and there are unreasonable diets such as high-calorie and high-protein food intake. In addition, because pregnant women should not be too active during pregnancy, more and more pregnant women cannot control their own weight gain well. Studies have confirmed that there is a close relationship between the growth of pregnant women’s own weight and pregnancy outcome. Obese people are more likely to develop gestational diabetes or hypertension during pregnancy, and the probability of cesarean section is greatly increased [3, 4]. With the rapid development of modern industrialized society, environmental pollution has become more and more serious, which has also led to the increasing probability of pregnant women’s bad pregnancy outcomes, such as miscarriage and defects in newborns [5]. The occurrence of various bad pregnancy outcomes is accompanied by the risk of maternal and child death, which has had a huge impact on the public health and social problems of the quality of newborn populations in various countries, and it has brought huge economic and mental pressures to the family and society.Placenta previa, postpartum hemorrhage, placental abruption, and premature rupture of membranes are all important factors that cause premature birth [6]. Studies have shown that advanced age, smoking, drug abuse, multiple pregnancy, curettage, and multiple abortions are all risk factors for placenta previa [7]. Puerperae, high blood pressure, smoking, drug use, and advanced age are also important reasons for placental abruption [8]. Lack of trace elements such as zinc in the body, lack of vitamin C, and reproductive tract infections can all lead to premature rupture of membranes [9]. In addition to the body’s own factors such as soft birth canal injury and weak uterine contractions, multiple curettage, excessive births, overfrequency, poor physical fitness, and other chronic systemic diseases in pregnant women may all cause postpartum hemorrhage [10]. In order to reduce the occurrence of the abovementioned bad pregnancy outcomes, the Chinese government and local medical and health care departments have begun to pay attention to and strengthen the health care of pregnant women during pregnancy [11]. With the strengthening of publicity and the continuous improvement of health care during pregnancy, people have begun to pay more attention to health care during pregnancy, health awareness has gradually increased, and compliance has also improved. However, the incidence of adverse pregnancy outcomes such as newborns defects, low birth weight, pregnancy, or childbirth complications has not been found to be significantly reduced. 
Therefore, pregnancy nursing intervention has gradually been paid attention to by medical staff in order to reduce adverse pregnancy outcomes and improve population quality [12]. However, pregnant women usually go to the hospital for pregnancy checkups on a regular basis due to conditions or other influences. However, for pregnant women, especially pregnant women in the third trimester of pregnancy, it is necessary to detect vital signs in real time, so that they can be helped when the bodies are abnormal.The technology and modern telemedicine involve many industries such as software technology, computer network, medical image processing, communication technology, and electronic information technology. The rapid development of hospital informatization has made the implementation of the mobile nursing system possible. The mobile nursing system covers the entire process from the patient entering the hospital, through the preadmission, hospitalization, and discharge. With the development of patient satisfaction as the core, it provides medical staff and managers with a process-based, informatized, paperless, automated, and intelligent one-stop nursing system management platform [13]. The mobile nursing system can record patient vital signs in real time and directly record and display the implementation of nursing-related diagnosis and nursing measures after admission, effectively reducing the occurrence of medical accidents. The computer automatically generates the electronic version of the nursing record, which makes the whole process of nursing diagnosis and treatment a qualitative change, allowing nurses to dispense, check, and execute more efficiently, and making the whole nursing process “paperless.” In addition, it can store the basic information of patients in time, realize the optimal medical process, minimize the medical error rate, and lessen the work intensity maximally. The electronic medical record can be moved and written, the medical decision-making is scientific and intelligent, and the quality assessment standard of the medical staff is digital.Therefore, mobile nursing monitoring system based on Android this time is a way of health monitoring in remote mode. Physiological parameter monitoring equipment monitors and transmits human physiological data, real-time monitoring the physiological data such as heart rate, blood pressure, and body temperature of the user. If the physiological parameters of users are abnormal, an alarm can be issued in time. It was compared with pregnant women who have not undergone nursing intervention and only have routine physical examinations during pregnancy to explore the effect of this method on pregnancy outcome. In this paper, we have explored the effect of the intervention model based on the mobile nursing system on the pregnancy outcome of pregnant women. In this study, an Android-based mobile nursing monitoring system was adopted to monitor and transmit the human physiological data through physiological parameter monitoring equipment and continuously monitor the physiological parameter data of pregnant women. If the physiological health data of the pregnant woman was abnormal, it had to implement timely nursing intervention. In this study, 266 pregnant women in the electronic records (E-records) were selected as the research objects and divided into two groups according to the intervention method. 
Pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on mobile nursing system.The remaining sections of the paper are managed and represented accordingly as given below.In subsequent section, the android based mobile nursing monitoring system is described and presented in detailed along with its various portions or sections. Experimental results and observations were reported in Section3 of the manuscript. These results have verified the exceptional performance of the android based mobile nursing system. A detailed discussion section is provided, which is followed by comprehensive and brief concluding remarks sections. Finally, reference materials are provided.
## 2. Android Based Mobile Nursing Management System
### 2.1. Research Objects
In this study, the clinical data of 266 pregnant women were extracted from the electronic record (E-record) system. When the data were extracted, it was necessary to exclude I. pregnant women who were selected to induce labor or abortion due to family planning restrictions, personal or family factors after pregnancy; II. pregnant women who suffered from mental illness who did not know their prepregnancy healthcare or had poor compliance and were unwilling to cooperate with the investigation. According to the treatment method of the patients, they were divided into two groups: pregnant women in group A received routine physical examination during pregnancy, while those in group B received nursing intervention based on mobile nursing system.
### 2.2. Mobile Nursing System
In this study, a remote nursing monitoring system based on the Android platform was adopted, which was functioned with remote guidance monitoring and physiological data monitoring to provide pregnant women with medical interaction, realize seamless connection and all-round medical monitoring services between hospitals and families, and help nurses give timely nursing interventions (Figure1).Figure 1
The remote nursing monitoring system based on the android platform.In the beginning, the heart rate, blood pressure, and body temperature of user were detected by the remote medical monitoring system through various physiological data collection terminals. After that, the collected physiological data were performed with D/B conversion, and the wireless WiFi method was used to communicate with the Android mobile phone monitoring terminal (MPMT). The Android MPMT processed the detected physiological data such as body temperature and blood pressure and then monitored and stored these data. In addition, the Android MPMT transmitted the acquired data to the service system of the remote monitoring center through interaction with the wire beam electrode (WBE) service platform of the remote monitoring center, so that the remote monitoring center can monitor these data in real time. Users, their relatives, and medical staff can log in at any time and view the general trend of current or past physiological data and various other related service information.
### 2.3. Related Algorithms for Physiological Parameter Collection
Because the monitoring system remotely detects the physiological data of the user, unlike a hospital that can help patients solve problems at any time, it is extremely necessary to predict the physiological state of the patient in real time. For example, if the heart abnormality cannot be found in time for heart disease patients, it may lead to sudden death. If the system can predict the physical condition of future patients in time, it can give nurses enough time to prepare solutions so as not to overwhelm nurses when problems arise. The prediction system not only reduces the risk of patients, but also reduces the pressure on nurses and effectively improves the nursing effect. Therefore, a time series prediction algorithm was applied in this study to predict the blood pressure, electrocardiograph (ECG), and other conditions of patients.At this stage, time series prediction methods roughly include two major categories: statistical methods and artificial intelligence methods [14, 15]. In addition, it can be divided into global model and local model according to the difference of modeling principle and structure [16] (Figure 2). Statistical methods can be divided into two methods: qualitative and quantitative prediction [17]. The qualitative prediction method is based on the existing relevant theories and knowledge and obtains the prediction results based on intuition. Therefore, the accuracy of its conclusions is mostly determined by the subjective thoughts of the judges, so the accuracy of the prediction results is poor. Quantitative prediction is based on existing experimental data to establish and calculate a certain mathematical model to obtain prediction data. Compared with qualitative prediction, the accuracy of prediction results has been effectively improved, so it has also attracted more attention.Figure 2
Figure 2: Comparison of prediction principles between the global model and the local model. (a) Global model. (b) Local model.
#### 2.3.1. Global Model Prediction
Statistical methods, represented by the autoregressive integrated moving average (ARIMA) family, fit a time series by determining the model order and model coefficients; the fitted model is then used to predict the future trend, which works well for linear systems [18]. The specific models are as follows.

(1) Autoregressive (AR) Model. The AR model is a dynamic description of a stationary random process and is suitable for predicting linear time series data. The current value $y_t$ of the series is represented by a linear combination of the $p$ most recent historical values plus a white noise disturbance $a_t$:

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + a_t. \tag{1}$$

Equation (1) defines the $p$-order AR model AR($p$), where $a_t$ is white noise with mean 0 and variance $\sigma_n^2$, and $\phi_i$ ($i = 1, 2, \ldots, p$) are the model coefficients. By definition, the AR model regresses the series on its own historical data.

(2) Moving Average (MA) Model. In the MA model, the current value of the series is represented by a linear combination of the current white noise disturbance and the disturbances at past moments:

$$y_t = a_t + \theta_1 a_{t-1} + \theta_2 a_{t-2} + \cdots + \theta_q a_{t-q}. \tag{2}$$

Equation (2) defines the $q$-order MA model MA($q$), where the weighting factors $\theta_i$ ($i = 1, 2, \ldots, q$) are the model coefficients. The MA model can likewise be used to predict linear time series.

(3) Autoregressive Moving Average (ARMA) Model. ARMA($p, q$) combines the two:

$$y_t = \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + a_t + \theta_1 a_{t-1} + \cdots + \theta_q a_{t-q}. \tag{3}$$

As equation (3) shows, the ARMA model contains an AR part and an MA part. It is generally used to model linear, stationary time series; because the modeling is easy to understand and operate, it is widely used in many fields. In reality, however, most environments are not stationary, or the sequence data are nonlinear, and since ARMA is a linear model, it cannot meet the accuracy requirements of such data.

(4) Autoregressive Integrated Moving Average (ARIMA) Model. The structural model of ARIMA($p, d, q$) is

$$\phi(B)\nabla^d y_t = \theta(B) a_t, \tag{4}$$

where $\nabla = 1 - B$ is the difference operator, a nonstationary series $y_t$ is turned into the stationary series $\nabla^d y_t$ by $d$-order differencing, $\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$, and $\theta(B) = 1 - \theta_1 B - \cdots - \theta_q B^q$. Equation (4) shows that when $d = 0$, the ARIMA($p, d, q$) model reduces to the ARMA($p, q$) model. The process of ARIMA time series prediction is shown in Figure 3.

Figure 3: The process of the ARIMA time series prediction.
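As a concrete illustration of the workflow in Figure 3, the sketch below fits an ARIMA($p, d, q$) model with Python's statsmodels and produces an $h$-step forecast. The synthetic series and the order (2, 1, 1) are illustrative assumptions, not values used in the study.

```python
# Minimal ARIMA(p, d, q) forecasting sketch using statsmodels. The series
# here is synthetic; in the monitoring system it would be a stream of
# sampled physiological values (e.g., systolic blood pressure).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic nonstationary series: a drifting random walk.
series = np.cumsum(rng.normal(0.1, 1.0, 200))

# d = 1 differences the series once to make it stationary, matching
# equation (4): phi(B) * (1 - B)^d * y_t = theta(B) * a_t.
model = ARIMA(series, order=(2, 1, 1)).fit()

# h-step-ahead forecast (here h = 5).
print(model.forecast(steps=5))
```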
The ARIMA model improves on the ARMA model by applying differencing to handle nonstationary data. Although it is applicable within a certain range, it remains strongly tied to the character of the data and cannot adapt to some complex nonlinear data.

With the continuous advance of modern machine learning, artificial intelligence methods have gradually emerged and are now widely used in time series prediction [19]. Compared with traditional statistical methods, they have obvious advantages in processing time series with complex nonlinearities, and for some data they can build the input-output mapping without human intervention. Among these widely used methods, the least squares support vector machine (LS-SVM) is briefly described here.

LS-SVM is mostly used for function estimation and nonlinear regression. The regression model in the initial weight space is

$$y(x) = \omega^T \phi(x) + b, \tag{5}$$

where $\omega$ is the weight vector, $b$ is the bias term, and $\phi(x)$ is the mapping from the input space to a high-dimensional feature space. The LS-SVM optimization in the initial space is

$$\min_{\omega, b, e} J(\omega, e) = \frac{1}{2}\omega^T\omega + \frac{1}{2}\gamma \sum_{k=1}^{N} e_k^2, \tag{6}$$

subject to the constraints

$$y_k = \omega^T \phi(x_k) + b + e_k, \quad k = 1, \ldots, N. \tag{7}$$

In equations (6) and (7), $y_k$ is the output value, $\gamma$ is the regularization parameter penalizing the error term, and $e_k$ is the regression error of the $k$-th training sample. Applying the Lagrangian function to this optimization gives

$$L(\omega, b, e; a) = J + \sum_{k=1}^{N} a_k \left( y_k - \omega^T \phi(x_k) - b - e_k \right), \tag{8}$$

where $a_k$ is a Lagrange multiplier. Setting the partial derivatives to zero according to the KKT conditions transforms the problem into equality constraints:

$$\frac{\partial L}{\partial \omega} = 0 \Rightarrow \omega = \sum_{k=1}^{N} a_k \phi(x_k), \quad \frac{\partial L}{\partial b} = 0 \Rightarrow \sum_{k=1}^{N} a_k = 0, \quad \frac{\partial L}{\partial e_k} = 0 \Rightarrow a_k = \gamma e_k, \quad \frac{\partial L}{\partial a_k} = 0 \Rightarrow y_k - \omega^T \phi(x_k) - b - e_k = 0. \tag{9}$$

Eliminating $\omega$ and $e_k$ yields the linear system

$$\begin{bmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & K + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \tag{10}$$

where $\mathbf{1} = (1, 1, \ldots, 1)^T$ is an $N \times 1$ vector, $a = (a_1, a_2, \ldots, a_N)^T$, and $K$ is the kernel matrix with entries $K_{jk} = \phi(x_j)^T \phi(x_k)$. The LS-SVM prediction model is therefore

$$\hat{y}(x) = \sum_{k=1}^{N} a_k K(x, x_k) + b, \tag{11}$$

where $\hat{y}(x)$ is the output of the prediction model and $K(x, x_k) = \phi(x)^T \phi(x_k)$; the radial basis kernel $K(x, x_k) = \exp\left(-\|x - x_k\|^2 / \sigma^2\right)$ was used.
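Equations (5)-(11) translate almost line for line into numpy. The following minimal sketch (our illustration, not the authors' code) builds the RBF kernel matrix, solves the linear system of equation (10), and predicts with equation (11); the $\gamma$ and $\sigma$ values and the toy sine data are assumptions.

```python
# Minimal LS-SVM regression sketch implementing equations (5)-(11).
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # K(x, x_k) = exp(-||x - x_k||^2 / sigma^2), the kernel of equation (11).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sigma ** 2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Assemble and solve the linear system of equation (10):
    # [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y].
    N = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b and Lagrange multipliers a_k

def lssvm_predict(X_train, a, b, X_new, sigma=1.0):
    # Equation (11): y_hat(x) = sum_k a_k K(x, x_k) + b.
    return rbf_kernel(X_new, X_train, sigma) @ a + b

# Toy usage: regress a noisy sine wave.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 6.0, 60)[:, None]
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, 60)
b, a = lssvm_fit(X, y)
print(lssvm_predict(X, a, b, np.array([[1.5], [3.0]])))
```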
### 2.4. Local Model Prediction
The main purpose of the local model is to find the nearest-neighbor set of a query sample, so its focus is the choice of similarity measure: an appropriate measure ensures that the local model predicts well. In this study, the multitask LS-SVM (MTLS-SVM) algorithm and the K-means algorithm were adopted, combining semisupervised and supervised learning. The input values were clustered with the K-means algorithm so that adjacent samples fell into the same cluster, and each cluster was then trained and simulated with the MTLS-SVM algorithm to obtain the predicted output. This method not only makes up for the weakness of traditional algorithms in mining sequence information, but also avoids the poor prediction accuracy caused by ignoring intersequence correlation. At the same time, it reduces the dimensionality handled by the support vector machine and effectively improves operating efficiency. The algorithm mainly comprises three parts: data preprocessing, task construction, and model training (Figure 4).

Figure 4: The time series prediction model based on K-means and MTLS-SVM.
The data preprocessing was as follows. Suppose the time series $y_1, y_2, \ldots, y_n, y_{n+1}, \ldots, y_t$ is sampled at a fixed interval, $t$ is the current time, and the task is to estimate the value at time $t + h$ from the historical samples:

$$y_{t+h} = f(y_t, y_{t-1}, \ldots, y_{t-n+1}). \tag{12}$$

In equation (12), $h$ is the prediction step length ($h = 1$ means single-step prediction and $h > 1$ multistep prediction), $f$ is the prediction model, and $n$ is the delay (window) length. The collected observations $y_1, y_2, \ldots, y_n, y_{n+1}, \ldots, y_t$ were clustered with the K-means algorithm to construct the data set samples.

For the task construction (Figures 4 and 5), either a single task or multiple tasks can be built on the time series; combining multitask learning with time series prediction exploits mainly the close relationship between adjacent time points. The traditional single-task approach predicts the value at time $t + h$ ($h \geq 1$) from the observations at $n$ times, producing a single model output. The multitask approach jointly represents the prediction outputs at $t + 1, t + 2, \ldots, t + k$, which acts as an inductive bias and improves prediction accuracy.

Figure 5: Multitask construction in time series.
Applying the multitask learning method requires only one multioutput prediction model, from which the output values of the $k$ tasks are obtained at once. According to the prediction target, appropriate $k$ tasks were selected, and a corresponding training set was constructed for each task.

The model training was as follows. The training sets of the $k$ learning tasks were used to train the MTLS-SVM model jointly. Because the tasks predict neighboring time values simultaneously, they are parallel and closely related; throughout training they constrain one another, producing an inductive-bias effect that improves the model's ability to predict unknown data (Algorithm 1).

Algorithm 1: Time series prediction based on MTLS-SVM.
Input: time series $y_1, y_2, \ldots, y_n, y_{n+1}, \ldots, y_t$. Output: estimated value $y_{t+h}$ at a future moment.

(1) Data preprocessing: cluster the original series $y$ with the K-means algorithm, determine the corresponding $h$ according to the prediction target, and construct the data set samples.

(2) Determine $k$ learning tasks according to the target estimates $t + 1, t + 2, \ldots, t + k$ ($h \leq k$). Each task outputs its corresponding estimate; the data set is clustered by the K-means algorithm one class at a time, and the $k$ classes form the training sets.

(3) Train the set of $k$ tasks and the MTLS-SVM model simultaneously to obtain the final prediction model.

(4) Feed the corresponding test samples of the $k$ tasks into the trained MTLS-SVM model to obtain $k$ estimates at once, and select the required estimate $y_{t+h}$ from them.

After the required estimate was selected, it was compared with the measured data to assess the prediction accuracy of the algorithm in this study.
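The steps of Algorithm 1 can be approximated with off-the-shelf components, as in the sketch below. scikit-learn's KernelRidge is used here as a stand-in for MTLS-SVM: it solves a closely related regularized kernel system with multiple outputs but does not couple the tasks the way MTLS-SVM does. The window length n, the number of tasks k, the cluster count, and the synthetic series are all assumptions.

```python
# Simplified sketch of the local-model pipeline of Section 2.4: cluster
# lagged samples with K-means, then train one multioutput kernel model
# per cluster to predict t+1 ... t+k jointly.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

def make_dataset(series, n=8, k=3):
    # Equation (12)-style windows: n lagged inputs -> next k outputs.
    X, Y = [], []
    for i in range(n, len(series) - k + 1):
        X.append(series[i - n:i])
        Y.append(series[i:i + k])
    return np.array(X), np.array(Y)

rng = np.random.default_rng(2)
series = np.sin(np.arange(400) * 0.1) + rng.normal(0.0, 0.05, 400)
X, Y = make_dataset(series)

# Steps (1)-(2): cluster the lagged samples into local regions.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step (3): train one multioutput kernel model per cluster.
models = {c: KernelRidge(kernel="rbf", alpha=0.1).fit(
              X[km.labels_ == c], Y[km.labels_ == c])
          for c in range(3)}

# Step (4): route a query to its cluster and predict t+1 ... t+k jointly.
x_query = X[-1:]
c = int(km.predict(x_query)[0])
print(models[c].predict(x_query))  # k joint estimates
```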
### 2.5. Statistical Methods
SPSS 22.0 was used for the statistical analysis, with descriptive statistics computed for the pregnancy outcomes of each group. Data were compared between the two groups using analysis of variance, and P<0.05 was considered statistically significant.
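As a minimal stand-in for the SPSS workflow (not the study's actual script), the between-group comparison can be reproduced with scipy; the two arrays below are placeholders, not the study data.

```python
# Minimal scipy equivalent of the between-group ANOVA described above;
# the arrays are placeholders, not the study's data.
from scipy.stats import f_oneway

group_a = [1, 0, 0, 1, 0, 1, 0, 0]  # e.g., an outcome indicator per woman
group_b = [0, 0, 0, 1, 0, 0, 0, 0]

f_stat, p_value = f_oneway(group_a, group_b)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # significant if p < 0.05
```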
## 3. Results and Evaluation
This section presents the evaluation metrics and the performance of the proposed system in detail.
### 3.1. Algorithm Model Experiment Results
To verify the practicability of the proposed algorithm and obtain accurate experimental results, the case data of 95 patients diagnosed with heart disease and hypertension were extracted. After data collection and processing, the K-means algorithm was used for clustering. An appropriate value of k was determined first so that the results would show higher accuracy. Accuracy was simulated on different subsets for k = 2, 3, 4, and 5, as shown in Figures 6-9.

Figure 6: Accuracy test results when k = 2.

Figure 7: Accuracy test results when k = 3.

Figure 8: Accuracy test results when k = 4.

Figure 9: Accuracy test results when k = 5.

Data sets 1, 2, and 3 contained 198, 427, and 562 samples, respectively. The results show that when the data set was small, prediction accuracy was best at k = 2, and when the data set was large, prediction accuracy was best at k = 3.
### 3.2. Comparison on Pregnancy Outcome Results
The pregnancy outcome indicators of the two groups are compared in Figure 10. The incidence of adverse pregnancy outcome indicators in group A was significantly higher than that in group B, and the difference was statistically significant (P<0.05).

Figure 10: Comparison of pregnancy outcomes (∗P<0.05 compared with the control group).
## 4. Discussion
Poor pregnancy outcome is the result of a variety of factors before and during pregnancy and seriously threatens the safety of mothers and babies. Some pregnant women have a poor prognosis, which places a huge burden on the family and society. Poor pregnancy outcomes include various complications during childbirth and during pregnancy, induction of labor, and premature birth caused by various pathological factors during pregnancy [20]. Studies around the world have shown that nursing intervention can effectively reduce the probability of adverse pregnancy outcomes and is an effective way to protect the health of pregnant women during pregnancy and after delivery [12, 21]. However, traditional nursing interventions cannot guarantee continuous attention to the physical condition of pregnant women. Therefore, this study explored the impact of a nursing intervention model based on the mobile nursing system on pregnancy outcome, to provide data support for further improving healthcare during pregnancy and to reduce the occurrence of undesirable pregnancy outcomes.

In this article, the K-means algorithm and the multitask LS-SVM algorithm were combined with the human physiological parameter monitoring system: the K-means algorithm clustered the data, the multitask LS-SVM algorithm then trained the model, and accurate results were obtained in predicting patients' physiological parameters such as ECG and blood pressure. In addition, pregnancy outcomes were compared between pregnant women who received no special nursing intervention and those given the nursing intervention based on the mobile nursing system. For the outcome indicators of placenta previa, premature delivery, fetal distress, postpartum hemorrhage, premature rupture of membranes, urinary tract infection, polyhydramnios, and cesarean section, the probability of each adverse pregnancy outcome in group B was significantly lower than that in group A, and the difference was statistically significant (P<0.05). This shows that the nursing intervention model based on the mobile nursing system can effectively improve pregnancy outcome. Preterm birth refers to delivery at a gestational age of at least 28 weeks but less than 37 weeks. In our results, preterm birth accounted for 8.4% of all adverse pregnancy outcomes, roughly consistent with the average incidence of 5%-15% previously reported in China. Postpartum hemorrhage means that a pregnant woman loses more than 500 mL of blood within 24 hours after delivery, with a reported prevalence of about 2%-3% of all deliveries; in this study it accounted for 9.4%. This may be related to the increased cesarean section rate, which reached 54.1% in this study.
## 5. Conclusion
In this paper, we explored the effect of an intervention model based on the mobile nursing system on the pregnancy outcomes of pregnant women. An Android-based mobile nursing monitoring system was adopted to acquire and transmit human physiological data through physiological parameter monitoring equipment and to monitor the physiological parameters of pregnant women continuously; if a pregnant woman's physiological data became abnormal, timely nursing intervention was implemented. In this study, 266 pregnant women in the electronic records (E-records) were selected as research objects and divided into two groups according to the intervention method: pregnant women in group A received routine physical examinations during pregnancy, while those in group B received the nursing intervention based on the mobile nursing system. In summary, this approach makes up for the inability of routine care to monitor pregnant women in real time and to respond effectively to emergencies. Many factors during pregnancy can affect its outcome; only by doing a good job of pregnancy healthcare, discovering pregnancy risk factors in time, and giving corresponding nursing interventions can the occurrence of adverse pregnancy outcomes be effectively reduced, the health of pregnant women and newborns be ensured, and the birth quality of the population be improved.
---
*Source: 1011595-2022-02-23.xml* | 2022 |
# Comparative Evaluation of Adaptation of Esthetic Prefabricated Fiberglass and CAD/CAM Crowns for Primary Teeth: Microcomputed Tomography Analysis
**Authors:** Ece Irem Oguz; Tuğba Bezgin; Ayse Isıl Orhan; Kaan Orhan
**Journal:** BioMed Research International
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1011661
---
## Abstract
Adaptation is an important factor for the clinical success of restorations. However, no studies are available evaluating the adaptation of primary crowns. The aim of this study was to compare the adaptation of crowns fabricated by CAD/CAM technology versus prefabricated fiberglass primary crowns. Typodont maxillary central, canine, and mandibular molar teeth were prepared to serve as master dies after the size of Figaro crowns was determined (n=10). Master dies were scanned with an intraoral scanner, and 10 identical CAD/CAM crowns were fabricated from resin-ceramic blocks. Figaro and CAD/CAM crowns were placed on the corresponding master dies and scanned via micro-CT. Three-dimensional volumetric gap measurements were performed to evaluate the overall adaptation. A total of 255 location-based linear measurements were allocated into 4 categories: marginal, cervical-axial, middle-axial, and occlusal. Statistical analyses were performed with factorial ANOVA, repeated measure ANOVA, and LSD tests (α=0.05). CAD/CAM crowns showed significantly lower overall and location-based gap measurements than Figaro crowns regardless of tooth number (p<0.05). For all groups, mean marginal discrepancies were lower than occlusal measurements (p<0.05). Both crown types showed higher marginal gaps for molar teeth than for canine and central incisors with no significant difference between them (p>0.05). CAD/CAM-fabricated crowns showed better marginal and internal adaptation than prefabricated Figaro crowns.
---
## Body
## 1. Introduction
Early childhood caries (ECC) is defined as the presence of decay and decay-related filled or lost tooth surfaces in one or more teeth of children aged 71 months or younger [1, 2]. ECC begins with white lesions along the margin of the maxillary primary incisors and can progress rapidly, leading to the destruction of the crown [1, 3]. Besides esthetic, nutrition, and phonation problems, ECC may have detrimental effects on general health [4]. If treatment for ECC is delayed, serious disorders such as pain, dysfunction, negative effects on growth and development, psychological problems, and a decrease in quality of life may occur [1, 2, 5].

Depending on the progression of the disease, different treatment modalities for ECC can be applied, from preventive techniques to crown restorations [4]. Primary teeth with widespread crown damage have been successfully treated with stainless steel crowns (SSCs) for many years [6, 7]. However, SSCs cannot meet the esthetic expectations of paediatric patients and parents [8, 9]. Restorations that satisfy these increasing expectations have been obtained through developments in technology and esthetic material science for crowns in paediatric dentistry [8, 10, 11]. Veneered SSCs, composite strip crowns, and prefabricated zirconia crowns were the first materials introduced to accomplish an esthetic outcome [10–12]. The most preferred esthetic crowns nowadays are prefabricated zirconia crowns, which are available from different manufacturers. Their most important advantage is that gingival and plaque indices are lower among these crowns than among other crown types [8]. However, these crowns have certain disadvantages: (i) they are very technique sensitive, and (ii) they require excessive tooth preparation to provide a passive fit [12, 13].

One of the newly launched materials intended to overcome such disadvantages is the Figaro crown, made of fiberglass [14]. Figaro crowns are tooth colored and, with their flex-fit technology, require less tooth reduction than paediatric preformed zirconia crowns [14]. They are less technique sensitive than both composite strip crowns and zirconia crowns, with a placement technique similar to that of an SSC [15]. However, a previous study reported failures in crown retention, fracture resistance, and color stability for Figaro crowns compared to SSCs after a 6-month clinical evaluation period [14].

Another notable route to esthetics in paediatric dentistry is computer-aided design and computer-aided manufacturing (CAD/CAM) technology. Developments in CAD/CAM techniques have enabled the production of esthetic and functional restorations for both permanent and primary dentition [16]. Customized crowns can be manufactured chairside by using CAD/CAM in a single appointment. Among the wide variety of blocks available for CAD/CAM, resin-ceramic blocks stand out with features advantageous for primary dentition: their low hardness prevents wear of the opposing dentition, and their low modulus of elasticity absorbs functional stresses [17, 18]. Another beneficial outcome of the low modulus of elasticity is reported to be accurate adaptation of the restoration [18]. The marginal and internal adaptations are critical factors that determine the success of the restoration.
While marginal misfit is related to cement dissolution, microleakage, plaque accumulation, secondary caries, and periodontal disease, internal misfit is associated with poor mechanical retention and reduced fracture strength [7, 18–20]. The adaptation may vary depending on the restorative material or the production method of the restoration [20, 21]. No study to date has focused on the adaptation of crowns applied on primary teeth. Therefore, the present in vitro study was aimed at comparing the adaptation of two types of esthetic paediatric crowns, prefabricated fiberglass and custom-made resin-ceramic crowns, for primary teeth. The null hypothesis tested was that prefabricated fiberglass and CAD/CAM crowns would not differ in terms of adaptation.
## 2. Materials and Methods
This study has followed the CRIS guidelines for in vitro studies as discussed in the 2014 concept note.
### 2.1. Master Die Preparation
The marginal and internal adaptations of CAD/CAM crowns and fiberglass primary crowns were compared by using microcomputed tomography. A sample size of 10 per group was determined based on a power analysis (expected difference = 0.01, standard error of the mean = 23.85, α = 0.05, 1 − β = 0.8) [21]; 10 identical fiberglass crowns (Figaro crowns, Size XS; Figaro Crowns Inc., Minnesota, US) were selected considering the size of typodont primary central incisor (#51), canine (#53), and molar (#75) teeth prior to preparations, as suggested by the manufacturer (n = 10). Typodont teeth were placed on a typodont model (Frasaco Dental Model, AK-6; Frasaco GmbH, Tettnang, Germany) and prepared by the same operator (TB) according to the manufacturer's preparation guide and suggestions for Figaro crowns [22]. The finished preparations and the seating of the chosen Figaro crowns were approved by 2 operators (EİO and TB). The margin lines of the 3 master dies were marked with a permanent marker.
### 2.2. CAD/CAM Process
The prepared #51, #53, and #75 master dies were placed on the typodont model one by one and digitized with an intraoral scanner (CEREC Omnicam; Dentsply Sirona, York, US). To replicate the external form of the fiberglass crowns, the "biocopy" tool of the CEREC software (SW 4.6, Dentsply Sirona) was used. For this purpose, prefabricated fiberglass crowns were placed on the corresponding master dies and scanned with the CEREC Omnicam. The scanning process took approximately 5 min for each tooth. Preparation margins were drawn with the "automatic margin finder" tool, and deviations from the marked margin line were corrected manually. The die spacer parameter was set as 120 μm for all teeth, and the software automatically designed virtual crowns based on the scans of the fiberglass crowns. Ten CAD/CAM crowns for each master die (#51, #53, and #75) were milled from resin-ceramic blocks (CERASMART 270; GC Dental Products, Tokyo, Japan) using a clinical-type milling unit (CEREC MC XL; Dentsply Sirona) (N = 30). The milling time of each crown was about 10 min. The sample size and test groups of the study are presented in Table 1.
Table 1: Test groups of the study.

| Tooth number | CAD/CAM (n) | Prefabricated fiberglass (n) |
| --- | --- | --- |
| 51 | 10 | 10 |
| 53 | 10 | 10 |
| 75 | 10 | 10 |
### 2.3. Micro-CT Evaluation
Figaro and CAD/CAM crowns were placed on the corresponding master dies one by one with finger pressure until complete seating, maintained in that position under an axial load of 5 kg for 10 min in a seating pressure device, and fixed with parafilm (Parafilm M film; Bemis Company, Inc., Oshkosh, WI, US). The master dies were scanned with and without crowns using a high-resolution desktop micro-CT (Bruker Skyscan 1275, Kontich, Belgium). Each stabilized specimen was positioned perpendicular to the X-ray beam to ensure standardized positioning in the scanning tube and scanned under the following conditions: source settings of 100 kVp and 100 mA, 0.5 mm Al/Cu filter, 10.1 μm pixel size, 0.5° rotation step over 360°, and an integration time of 10 min. The mean scanning time for each specimen was about 1 hour. Air calibration of the detector was done before each scan to minimize ring artifacts. Beam-hardening correction and input of optimal contrast limits according to the manufacturer's instructions were carried out based on the former scanning and reconstruction.

Visualization and quantitative measurements were performed with NRecon (ver. 1.6.10.4, SkyScan, Kontich, Belgium), DataViewer (version 1.5.6.2, SkyScan), and CTAn (version 1.17.7.2, SkyScan) software. For the reconstruction parameters, ring artifact correction and smoothing were fixed at zero, and the beam artifact correction was set at 30%. First, the reconstructed images were superimposed with the DataViewer software. The scans of the master die alone were used as a reference for the standardization of the measurement points. The master die images without a crown (reference) and with a crown (target) were superimposed, generating a subtraction volume image. This image represented the entire area and volume of the gap between the crown and the master die. The CTAn software was then used for three-dimensional (3D) volumetric gap measurements (mm³) to evaluate the overall adaptation.

A semiautomatic global thresholding (binarization) process was applied with the CTAn software to distinguish the gap from other structures by processing the range of grey levels and to obtain imposed images of black and white pixels only. In this procedure, a Gaussian low-pass filter for noise reduction and an automatic segmentation threshold were used. Then, 5 fixed regions of interest (ROI) of the same dimensions (1.5 × 1.5 mm for the central and canine teeth and 2.0 × 2.0 mm for the molar tooth) were determined separately for each master die and each slice so as to include the crown entirely. Forty equidistant vertical cuts from the axial images were made in the mesiodistal direction. This procedure ensured the standardization of the location-based measurements. Seventeen measurement points were determined, and 85 measurements were made from the 5 predesignated ROIs. The observer repeated the measurement at each point 3 times, and the mean values of all measurements were included in the statistical analysis. The observer also performed the study twice with an interval of 2 weeks to detect intraobserver variability. In total, 255 measurements were made for each crown.
These 2D linear measurements (μm) were allocated to 4 location categories: marginal (absolute marginal discrepancy: the average of the linear distances from the finish line of the preparation to the outer margin of the restoration) [21, 23], cervical-axial (the average of the horizontal gap measurements in the cervical third of the axial walls), middle-axial (the average of the horizontal gap measurements in the middle third of the axial walls), and incisal/occlusal (the average of the vertical gap measurements on the incisal/occlusal surface). The reconstructed images were also processed in SkyScan CTVox (ver. 3.3.0, SkyScan) for visualization (Figures 1(a)–1(f)).
Figure 1: Representative micro-CT images of crowns applied on the corresponding dies. (a) CAD/CAM crown for #51; (b) Figaro crown for #51; (c) CAD/CAM crown for #53; (d) Figaro crown for #53; (e) CAD/CAM crown for #75; (f) Figaro crown for #75.
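To clarify how the volumetric and linear gap figures relate to the voxel data, here is a minimal numpy sketch. It assumes the superimposition and subtraction steps have already produced a binarized gap stack, as in the DataViewer/CTAn workflow described above; the toy stack and the chosen measurement point are illustrative, not study data.

```python
# Minimal sketch of the gap quantification, assuming the subtraction step
# has already yielded a binarized stack (True = gap voxel). It is not the
# CTAn workflow itself, only the counting arithmetic behind it.
import numpy as np

VOXEL_MM = 0.0101  # 10.1 um isotropic voxel size from the scan settings

def volumetric_gap_mm3(gap_stack: np.ndarray) -> float:
    # 3D overall adaptation: gap volume = gap-voxel count x voxel volume.
    return float(gap_stack.sum()) * VOXEL_MM ** 3

def linear_gap_um(gap_slice: np.ndarray, row: int) -> float:
    # 2D linear measurement at one point: run length of gap pixels along
    # a row of one vertical cut, converted to micrometers.
    return float(gap_slice[row].sum()) * VOXEL_MM * 1000.0

# Toy stack: 40 vertical cuts of 100 x 100 voxels with a thin artificial gap.
stack = np.zeros((40, 100, 100), dtype=bool)
stack[:, 50:53, 20:80] = True
print(volumetric_gap_mm3(stack))    # overall gap, mm^3
print(linear_gap_um(stack[0], 51))  # one location-based measurement, um
```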
### 2.4. Statistical Analysis
To assess intraobserver reliability, the Wilcoxon matched-pairs signed-rank test was used for the repeated measurements, and the mean values of these measurements were taken as the final data. The normality of the data was verified using the Shapiro-Wilk test (p>0.05). The overall volumetric gap measurements were analyzed with factorial analysis of variance (ANOVA) and least significant difference (LSD) tests. Location-based linear measurement data were evaluated with repeated-measure ANOVA and LSD tests. The statistical analyses were performed using R v.3.5.3 (Microsoft Corporation, Redmond, WA, US) (α=0.05).
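A minimal scipy sketch of the reliability and normality checks described above follows; the paired readings are placeholders rather than the study's measurements.

```python
# Minimal scipy stand-in for the reliability and normality checks;
# the paired gap readings below are placeholders, not study data.
from scipy.stats import wilcoxon, shapiro

first_pass  = [52.1, 48.7, 55.3, 60.2, 47.9, 51.4]  # gap readings, week 0
second_pass = [51.8, 49.2, 54.9, 60.5, 48.1, 51.0]  # same points, week 2

stat, p = wilcoxon(first_pass, second_pass)  # intraobserver difference
print(f"Wilcoxon p = {p:.3f}")               # p > 0.05: no observer drift

w, p_norm = shapiro(first_pass)              # normality of the final data
print(f"Shapiro-Wilk p = {p_norm:.3f}")      # p > 0.05: normality holds
```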
## 2.1. Master Die Preparation
The marginal and internal adaptations of CAD/CAM crowns and fiberglass primary crowns were compared by using microcomputed tomography. A sample size of 10 per group was determined based on a power analysis (expecteddifference=0.01, standarderrorofthemean=23.85, α=0.05, 1−β=0.8) [21]; 10 identical fiberglass crowns (Figaro crowns, Size XS; Figaro Crowns Inc., Minnesota, US) were selected considering the size of typodont primary central incisor (#51), canine (#53), and molar (#75) teeth prior to preparations as suggested by the manufacturer (n=10). Typodont teeth were placed on a typodont model (Frasaco Dental Model, AK-6; Frasaco GmbH, Tettnang, Germany) and prepared by the same operator (TB) according to the manufacturer’s preparation guide and suggestions for Figaro crowns [22]. The finished preparations and seating of the chosen Figaro crowns were approved by 2 operators (EİO and TB). The margin lines of the 3 master dies were marked by using a permanent marker.
## 2.2. CAD/CAM Process
The prepared #51, #53, and #75 master dies were placed on the typodont model one by one and digitized with an intraoral scanner (CEREC Omnicam; Dentsply Sirona, York, US). To replicate the external form of fiberglass crowns, the “biocopy” tool of the CEREC software (SW 4.6, Dentsply Sirona) was used. For this purpose, prefabricated fiberglass crowns were placed on the corresponding master dies and scanned with the CEREC Omnicam. The scanning process took approximately 5 min for each tooth. Preparation margins were drawn by the “automatic margin finder” tool, and deviations from the marked margin line were corrected manually. The die spacer parameter was set as 120μm for all teeth, and the software automatically designed virtual crowns based on the scans of the fiberglass crowns. Ten CAD/CAM crowns for each master die (#51, #53, and #75) were milled from resin-ceramic blocks (CERASMART 270; GC Dental Products, Tokyo, Japan) by using a clinical type milling unit (CEREC MC XL; Dentsply Sirona) (N=30). The milling time of each crown was about 10 min.The sample size and test groups of the study are presented in Table1.Table 1
Test groups of the study.
Crown typeTooth numberCAD/CAM (n)Prefabricated fiberglass (n)511010531010751010
## 2.3. Micro-CT Evaluation
Figaro and CAD/CAM crowns were placed on the corresponding master dies one by one with finger pressure until complete seating, maintained in that position under an axial load of 5 kg for 10 min in a seating pressure device, and were fixed with a parafilm (Parafilm M film; Bemis Company, Inc., Oshkosh, WI, US). The master dies were scanned with and without crowns by using a high-resolution desktop micro-CT (Bruker Skyscan 1275, Kontich, Belgium). Each stabilized specimen was positioned perpendicularly to the X-ray beam to ensure standardized positioning in the scanning tube and scanned with the following conditions: beam current at 100 kVp, 100 mA, 0.5 mm Al/Cu filter, 10.1μm pixel size, rotation at 0.5 step, and 360° within an integration time of 10 min. The mean scanning time for each specimen was about 1 hour. Air calibration of the detector was done before each scan to minimize the ring artifacts. Beam-hardening correction and input of optimal contrast limits according to the manufacturer’s instructions were carried out based on the former scanning and reconstruction.Visualization and quantitative measurements were utilized by using NRecon (ver. 1.6.10.4, SkyScan, Kontich, Belgium), DataViewer (version 1.5.6.2, SkyScan), and CTAn (version 1.17.7.2, SkyScan) software. For the reconstruction parameters, ring artifact correction and smoothing were fixed at zero, and the beam artifact correction was set at 30%. First, the reconstructed images were superimposed with the DataViewer software. The scans of the master die alone were used as a reference for the standardization of the measurement points. The master die images without a crown (reference) and with a crown (target) were superimposed, generating a volume of subtracting image. This image represented the entire area and volume of the gap between the crown and the master die. Then, the CTAn software was used for the 3-dimensional (3D) volumetric gap measurements (mm3) to evaluate the overall adaptation.A semiautomatic global thresholding (binarization) process was applied with CTAn software to distinguish the gap from other structures by processing the range of grey levels and to obtain imposed images of black and white pixels only. In this procedure, a Gaussian low-pass filter for noise reduction and an automatic segmentation threshold was used. Then, 5 fixed regions of interest (ROI) with the same dimensions (1.5×1.5mm for the central and canine teeth and 2.0×2.0mm for molar tooth) were determined separately for each master die and for each slice to include the crown entirely. Forty equidistant vertical cuts from axial images were made in the mesiodistal direction. This procedure ensured the standardization of the location-based measurements. Seventeen measurement points were determined, and 85 measurements were done from 5 predesignated ROIs. Moreover, the observer repeated the measurements for each point 3 times. The mean values of all measurements were noted and were included in the statistical analysis. The observer also performed the study twice with an interval of 2 weeks to detect intraobserver variability. In total, 255 measurements were done for each crown. 
These 2D linear measurements (μm) were allocated into 4 location categories as follows: marginal (absolute marginal discrepancy: the average of the linear distances from the finish line of the preparation to the outer margin of the restoration) [21, 23], cervical-axial (the average of horizontal gap measurements performed in the cervical third of the axial walls), middle-axial (the average of horizontal gap measurements performed in the middle third of the axial walls), and incisal/occlusal discrepancies (the average of vertical gap measurements performed in the incisal/occlusal surface). The reconstructed images were also processed in SkyScan CTVox (ver. 3.3.0, SkyScan) for visualization (Figures 1(a)–1(f)).Figure 1
Representative micro-CT images of crowns applied on the corresponding dies. (a) CAD/CAM crown for #51; (b) Figaro crown for #51; (c) CAD/CAM crown for #53; (d) Figaro crown for #53; (e) CAD/CAM crown for #75; (f) Figaro crown for #75.
## 2.4. Statistical Analysis
To assess intraobserver reliability, the Wilcoxon matched-pairs signed-rank test was applied to the repeated measurements, and the mean values of these measurements were taken as the final data. The normality of the data was verified using the Shapiro-Wilk test (p>0.05). The overall volumetric gap measurements were statistically analyzed with factorial analysis of variance (ANOVA) and least significant difference (LSD) tests, and the location-based linear measurement data were evaluated with repeated-measures ANOVA and LSD tests. The statistical analyses were performed using R v.3.5.3 (Microsoft Corporation, Redmond, WA, US) (α=0.05).
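For readers who want to trace the analysis, the following Python sketch mirrors this sequence of tests (the study itself used R); the data frame and session values are toy stand-ins, not the study data, and the LSD post hoc step is omitted.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy long-format data: one overall gap value (mm^3) per crown specimen
df = pd.DataFrame({
    "gap":   [3.1, 2.9, 5.2, 3.0, 2.8, 5.1, 8.1, 5.9, 12.0, 8.2, 5.8, 11.9],
    "tooth": ["51", "53", "75"] * 4,
    "crown": ["cadcam"] * 6 + ["figaro"] * 6,
})

# Intraobserver reliability: Wilcoxon matched-pairs signed-rank test between
# the two measurement sessions (paired toy values)
session1 = [3.10, 2.90, 5.20, 8.10, 5.90, 12.00]
session2 = [3.05, 2.95, 5.15, 8.15, 5.85, 11.95]
print(stats.wilcoxon(session1, session2))

# Normality check, then factorial ANOVA on gap ~ tooth * crown type
print(stats.shapiro(df["gap"]))
model = ols("gap ~ C(tooth) * C(crown)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```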
## 3. Results
Repeated measurements indicated no significant intraobserver difference (p>0.05). Overall intraobserver consistency between the two measurement sessions was 92.6%, and all measurements were found to be highly reproducible.

Factorial ANOVA results and descriptive statistics for the overall gap measurements are shown in Tables 2 and 3, respectively. CAD/CAM crowns showed lower overall mean gaps than fiberglass crowns irrespective of tooth number (p<0.05). Both crown types showed the highest volumetric gap for #75 (p<0.05). The lowest overall volumetric gap for fiberglass crowns was obtained for the central incisor (p<0.05), whereas for CAD/CAM crowns, no statistical difference was found between the central incisor and the canine (p>0.05).

Table 2
Factorial ANOVA results for overall gap measurements.
| Source | SS | df | MS | F | p value |
|---|---|---|---|---|---|
| Tooth number | 185.777 | 2 | 92.888 | 279.999 | <.0001 |
| Crown type | 365.585 | 1 | 365.585 | 1102.003 | <.0001 |
| Tooth number ∗ crown type | 39.124 | 2 | 19.562 | 58.967 | <.0001 |

SS: sum of squares; df: degrees of freedom; MS: mean squares.

Table 3
Mean and standard deviations (±SD) for overall gap measurements (mm3).
| Tooth number | CAD/CAM | Prefabricated fiberglass |
|---|---|---|
| 51 | 3.08 (0.38) Aa | 8.15 (0.74) Ab |
| 53 | 2.93 (0.48) Aa | 5.82 (0.61) Bb |
| 75 | 5.15 (0.56) Ba | 11.99 (0.61) Cb |

Different superscript uppercase letters (A, B, C) in the same column and different superscript lowercase letters (a, b) in the same line indicate a statistically significant difference (p<0.05).

Repeated-measures ANOVA results for the linear measurements showed that the interactions between tooth number, crown type, and measurement location were significant (p<0.001) (Table 4). Considering the differences between the location-based measurements for a given tooth and crown type (Table 5), all groups showed lower mean gap values for the margin than for the occlusal surface (p<0.05). Both crown types applied on #51 and CAD/CAM crowns applied on #53 showed similar mean values for the marginal discrepancy and the cervical-axial location (p>0.05), whereas the other groups showed lower gap measurements for the marginal discrepancy than for the cervical-axial location (p<0.05). Regardless of crown type, middle-axial and incisal gap measurements were comparable for #51 and #53 (p>0.05). The highest gap measurement for both crown types was obtained for the occlusal surface of #75 (p<0.05).

Table 4
Repeated measure ANOVA results for linear gap measurements.
| Source | SS | df | MS | F | p |
|---|---|---|---|---|---|
| Tooth number | 9859436 | 2 | 4929718 | 315.871 | <.0001 |
| Crown type | 2867286 | 1 | 2867286 | 183.721 | <.0001 |
| Tooth number ∗ crown type | 258328 | 2 | 129164 | 8.276 | <.0001 |
| Measurement location | 6145911 | 3 | 2048637 | 255.888 | <.0001 |
| Measurement location ∗ tooth number | 4239697 | 6 | 706616 | 88.261 | <.0001 |
| Measurement location ∗ crown type | 150605 | 3 | 50202 | 6.271 | <.0001 |
| Measurement location ∗ tooth number ∗ crown type | 474306 | 6 | 79051 | 9.874 | <.0001 |

SS: sum of squares; df: degrees of freedom; MS: mean squares.

Table 5
Mean and standard deviations (±SD) for location-based gap measurements (μm).
| Tooth number | Crown type | Location | Mean (SD) | Range |
|---|---|---|---|---|
| 51 | CAD/CAM | Marginal | 197.8 (30.7) A | 157.89–261.41 |
| 51 | CAD/CAM | Cervical-axial | 235.9 (19.79) AB | 199.80–257.1 |
| 51 | CAD/CAM | Middle-axial | 297.03 (49.12) BC | 230.15–393.29 |
| 51 | CAD/CAM | Incisal | 356.11 (103.31) C | 241.72–613.87 |
| 51 | Figaro | Marginal | 336.53 (22.59) A | 297.76–366.87 |
| 51 | Figaro | Cervical-axial | 400.44 (48.5) AB | 295.27–464.57 |
| 51 | Figaro | Middle-axial | 454.01 (85.37) B | 362.44–621.71 |
| 51 | Figaro | Incisal | 445.21 (180.73) B | 288.36–763.2 |
| 53 | CAD/CAM | Marginal | 170.98 (22.11) A | 148–218.21 |
| 53 | CAD/CAM | Cervical-axial | 213.09 (17.87) A | 189.52–249.88 |
| 53 | CAD/CAM | Middle-axial | 301.44 (23.45) B | 276.43–342.69 |
| 53 | CAD/CAM | Incisal | 371.41 (50.91) B | 284.39–459.88 |
| 53 | Figaro | Marginal | 313.17 (32.39) A | 252.95–373.85 |
| 53 | Figaro | Cervical-axial | 454.26 (40.31) B | 386.98–501.17 |
| 53 | Figaro | Middle-axial | 580.43 (64.71) C | 496.07–676.08 |
| 53 | Figaro | Incisal | 590.84 (129.51) C | 461.73–897.04 |
| 75 | CAD/CAM | Marginal | 295.17 (26.48) A | 241.77–327.34 |
| 75 | CAD/CAM | Cervical-axial | 538.55 (74.31) B | 418.21–651.8 |
| 75 | CAD/CAM | Middle-axial | 672.18 (115.99) C | 533.05–881.19 |
| 75 | CAD/CAM | Occlusal | 1043.47 (254.81) D | 673.32–1589.18 |
| 75 | Figaro | Marginal | 520.55 (47.55) A | 461.54–613.36 |
| 75 | Figaro | Cervical-axial | 668.74 (55.39) B | 587.72–747.95 |
| 75 | Figaro | Middle-axial | 933.65 (74.73) C | 795.15–1041.24 |
| 75 | Figaro | Occlusal | 1618.56 (239.58) D | 1358.85–2131.47 |

Different superscript uppercase letters (A, B, C, D) in the same column indicate a statistically significant difference (p<0.05). μm: micrometer.

Gap measurements for CAD/CAM crowns were lower than those for fiberglass crowns regardless of location and tooth number (p<0.05), with the exception of #51, for which comparable incisal gap measurements were found for CAD/CAM and fiberglass crowns (p>0.05) (Figure 2).

Figure 2
Comparison of gap measurements of the different crown types based on location for each tooth. The asterisks (∗) indicate no statistical difference between groups (p>0.05).

When the gap measurements obtained for the different teeth were compared, #75 showed higher gap measurements than #51 and #53 irrespective of location and for both crown types (p<0.05). Considering fiberglass crowns, #53 showed higher middle-axial and incisal gaps than #51 (p<0.05). However, no significant differences were found between #51 and #53 at the other locations for either CAD/CAM or fiberglass crowns (p>0.05).
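As a hedged illustration of how Table 5-style location summaries could be assembled from the raw linear measurements, the short pandas sketch below groups toy values by tooth, crown type, and location; the numbers are loosely modeled on the reported means and are not the study data.

```python
import pandas as pd

# Toy long-format linear measurements (micrometres); the real data would hold
# 255 values per crown across the predesignated ROIs
measurements = pd.DataFrame({
    "tooth":    ["75"] * 8,
    "crown":    ["cadcam"] * 4 + ["figaro"] * 4,
    "location": ["marginal", "marginal", "occlusal", "occlusal"] * 2,
    "gap_um":   [295.2, 300.1, 1043.5, 990.0, 520.6, 515.0, 1618.6, 1650.2],
})

# Mean, SD, and range per tooth / crown type / location, as in Table 5
summary = (
    measurements
    .groupby(["tooth", "crown", "location"])["gap_um"]
    .agg(mean="mean", sd="std", low="min", high="max")
    .round(2)
)
print(summary)
```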
## 4. Discussion
The adaptation of a crown is as important for primary teeth as for the permanent dentition, considering that poor-fitting crowns may cause secondary caries or gingivitis [23]. This in vitro study compared the adaptation of CAD/CAM resin-ceramic and prefabricated fiberglass primary crowns by calculating overall, marginal, and internal gaps via micro-CT. The results showed significant differences between the gap measurements for CAD/CAM and fiberglass crowns in both the overall and the location-based evaluations. Therefore, the null hypothesis, that the adaptation of fiberglass and resin-ceramic CAD/CAM crowns would be similar, was rejected.

Adaptation can be evaluated by measuring the gap between the restoration and the preparation with various methods, such as direct microscopic measurement [18, 24], the silicone replica technique [18], virtual seating of the crown and die using their 3D scan data in reverse engineering software [20], and, as the newest technique, micro-CT imaging [21]. Precise linear 2D and volumetric 3D measurements can be performed with micron-level precision using micro-CT, which has been recommended as an innovative and nondestructive method for the in vitro evaluation of adaptation [25]. Micro-CT allows a great number of measurement points with close sectioning of the specimen, which ensures the reliability of the results [25]. In the present study, 5 ROIs were determined with equal sectioning of the slices, and 255 measurements per specimen were performed from 17 standardized points to ensure a comprehensive evaluation of the internal and marginal adaptation of the crowns.

The present study compared the adaptation of two different esthetic crown types for primary teeth. Figaro crowns are composed of fiberglass, aramid, carbon, and quartz filaments embedded within a composite resin material [14]. The combination of these materials provides flexibility, which enables slight elastic deformation while placing the crown on the prepared tooth [26]. This flex-fit technology allows minimal tooth reduction, unlike zirconia esthetic crowns, which require extensive preparation to compensate for their lack of flexibility [13, 14, 26]. Because the recommended passive fit of zirconia crowns must be ensured, retention and adaptation problems are frequently encountered [13]. Based on these considerations, zirconia esthetic crowns were not included in this study. SSCs are the most appropriate restorative materials in paediatric dentistry [8]; in addition, the tooth preparation is minimal, and they adapt to the tooth with a flexible fit. However, SSCs were not included as a control group because micro-CT does not allow the scanning of materials with a high atomic number, such as metals [27]. On the other hand, CAD/CAM crowns were fabricated from resin-ceramic blocks considering their advantages for primary teeth and their similarity in composition to Figaro crowns. Therefore, custom-made CAD/CAM crowns were included as the control group.

In the present study, the overall adaptation was evaluated based on the 3D volumetric analysis and should be regarded as the total cement space [21, 28]. Location-based linear 2D measurements, however, provide data indicative of increased cement thickness at particular internal measurement points, as well as of marginal adaptation [28]. CAD/CAM crowns showed better adaptation than Figaro crowns for both overall and location-based gap measurements and for all teeth.
CAD/CAM crowns were designed based on the scans of Figaro crowns, and both crown types had identical outer forms. However, the internal contours differed, as Figaro crowns have prefabricated, nonanatomical, standardized inner surfaces while the CAD/CAM crowns were custom-made. Micro-CT images of the mandibular molar showed that the CAD/CAM crown had rounded inner corners in harmony with the preparation outline (Figure 1(e)). In contrast, right-angled internal corners that did not fit the preparation outline were observed at the axio-occlusal transition areas of the Figaro crown (Figure 1(f)). Therefore, according to the present findings, it can be suggested that although Figaro crowns allow the restoration to adapt to the prepared tooth by flex-fitting, custom-made crowns fabricated with CAD/CAM technology provide better adaptation for primary teeth.

The uniformity of the gap between the preparation and the crown is important to ensure the retention form as well as fracture strength [21, 29]. Overall adaptation gives a general overview of the entire gap between the preparation and the crown; to evaluate uniformity, however, location-based analysis is essential [21]. Previous studies reported that increased gap spaces at the axial walls and the occlusal surface may reduce resistance to fracture [21, 29, 30]. Considering location-based adaptation, all groups showed a tendency toward increasing gap measurements from the marginal region to the occlusal surface. This finding corroborates previous studies that reported the highest location-based gap measurements at the occlusal surface [31, 32]. For CAD/CAM restorations, the diameter and shape of the milling tools might limit the machining ability, which would adversely affect the internal adaptation, especially at the occlusal surface [29]. For the Figaro crowns, on the other hand, frictional contacts in the cervical region that exceeded the crowns' flexibility limit may have prevented proper seating, resulting in an increased occlusal gap. Since the disadvantages of an excessive occlusal gap include stress concentration and restoration fracture, clinicians should be cautious about occlusal adaptation when restoring primary teeth with Figaro or CAD/CAM crowns [19, 32].

Previous studies reported that the marginal gap values of ceramic crowns may range from 50 to 200 μm [18, 20, 21]. Only the CAD/CAM crowns fabricated for the central incisor and the canine were within these limits. The marginal gap for CAD/CAM molar crowns was above 200 μm, yet lower than the marginal gap for Figaro crowns. All Figaro crowns exhibited marginal gap values exceeding the clinically acceptable range irrespective of tooth number. In the present study, the preparation design recommended for Figaro crowns was used for all teeth, and optical impressions of the same prepared teeth were obtained to fabricate the CAD/CAM crowns. Therefore, for both crown types, marginal adaptation was evaluated on a knife-edge margin, which has been reported to result in the greatest marginal discrepancy compared with other margin designs [24]. Furthermore, the present study evaluated marginal adaptation based on the absolute marginal discrepancy, which considers both the horizontal and vertical directions [20]. The margin design and the marginal adaptation evaluation method employed in the present study may therefore explain the high marginal gap values.
Furthermore, micro-CT images of the Figaro crowns showed an overextension of the outer margin line, which would have increased the gap measurements for the absolute marginal discrepancy (Figures 1(b), 1(d), and 1(f)). Based on these findings, CAD/CAM crowns may be preferred over Figaro crowns considering the clinical significance of marginal adaptation.

In the present in vitro study, consistency of the results was ensured by standardized test conditions. Location-based gap measurements were performed using the same ROIs and measurement points for all scans. To carry out the micro-CT measurements under the same conditions, the CAD/CAM crowns were fabricated by scanning the same preparations on which the Figaro crowns were adapted; the preparations were therefore standardized for both groups. Also, to eliminate differences in crown geometry, the CAD/CAM crowns were designed from the scans of the Figaro crowns by utilizing the “biocopy” tool of the CEREC software. Nevertheless, limitations exist, as in any in vitro study. Gap measurements were performed without cementation, which may influence the fit of the restoration [24]. However, if the crowns had been cemented on the corresponding master dies, the adaptation evaluation would have had to be performed on different preparations; to use a single standardized master die for each tooth, the adaptation evaluation was performed without cementation. In addition, intraoral conditions such as soft tissue, saliva, and gingival fluid may affect the quality of the digital impression and thus the adaptation. Further in vivo studies are warranted to evaluate the applicability of CAD/CAM and Figaro crowns in paediatric dentistry and the effect of intraoral variables on adaptation.
## 5. Conclusion
In this study, microcomputed tomography was used for the first time to evaluate the adaptation of crowns for primary teeth, and the results showed that resin-ceramic CAD/CAM crowns had better overall, marginal, and internal adaptation than prefabricated fiberglass primary crowns for all primary teeth.

All crowns showed lower gap measurements at the marginal region than at the occlusal surface, which is important for the clinical prognosis. A modality to define clinically acceptable adaptation parameters for crowns applied to primary teeth can be developed based on the findings of this study.
---
*Source: 1011661-2021-09-26.xml* | 1011661-2021-09-26_1011661-2021-09-26.md | 34,750 | Comparative Evaluation of Adaptation of Esthetic Prefabricated Fiberglass and CAD/CAM Crowns for Primary Teeth: Microcomputed Tomography Analysis | Ece Irem Oguz; Tuğba Bezgin; Ayse Isıl Orhan; Kaan Orhan | BioMed Research International
(2021) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1011661 | 1011661-2021-09-26.xml | ---
## 3. Results
Repeated measurements indicated no significant intraobserver difference for the observer (p>0.05). Overall intraobserver consistency was rated at 92.6% between the two measurements, and all measurements were found to be highly reproducible.Factorial ANOVA results and descriptive statistics for overall gap measurements are shown in Tables2 and 3, respectively. CAD/CAM crowns showed lower overall mean gaps than fiberglass crowns irrespective of the tooth number (p<0.05). Both crown types showed the highest volumetric gap for #75 (p<0.05). The lowest overall volumetric gap for fiberglass crowns was obtained for the central incisor (p<0.05), whereas for CAD/CAM crowns, no statistical difference was found between central and canine incisors (p>0.05).Table 2
Factorial ANOVA results for overall gap measurements.
SSdfMSFp valueTooth number185.777292.888279.999<.0001Crown type365.5851365.5851102.003<.0001Toothnumber∗crowntype39.124219.56258.967<.0001SS: sum of squares; df: degree of freedom; MS: mean squares.Table 3
Mean and standard deviations (±SD) for overall gap measurements (mm3).
Crown typeTooth numberCAD/CAMPrefabricated fiberglass513.08 (0.38)Aa8.15 (0.74)Ab532.93 (0.48)Aa5.82 (0.61)Bb755.15 (0.56)Ba11.99 (0.61)CbDifferent superscript uppercase letters (A, B, C) in the same column and different superscript lowercase letters (a, b) in the same line indicate statistically significant difference (p<0.05).Repeated ANOVA results for linear measurements showed that the interactions between tooth number, crown type, and measurement location were significant (p<0.001) (Table 4). Considering the differences between the location-based measurements for a certain tooth and crown type (Table 5), all groups showed lower mean gap values for the margin than that for the occlusal surface (p<0.05). Both crown types applied on #51 and CAD/CAM crowns applied on #53 showed similar mean values for the marginal discrepancy and cervical-axial location (p>0.05), whereas the other groups showed lower gap measurements for marginal discrepancy than that for the cervical-axial location (p<0.05). Regardless of the crown type, middle-axial and incisal gap measurements were comparable for #51 and #53 (p>0.05). The highest gap measurement for both crown types was obtained for the occlusal surface of #75 (p<0.05).Table 4
Repeated measure ANOVA results for linear gap measurements.
SSdfMSFpTooth number985943624929718315.871<.0001Crown type286728612867286183.721<.0001Toothnumber∗crowntype25832821291648.276<.0001Measurement location614591132048637255.888<.0001Measurementlocation∗toothnumber4239697670661688.261<.0001Measurementlocation∗crowntype1506053502026.271<.0001Measurementlocation∗toothnumber∗crowntype4743066790519.874<.0001SS: sum of squares; df: degree of freedom; MS: mean squares.Table 5
Mean and standard deviations (±SD) for location-based gap measurements (μm).
Tooth numberCrown typeLocationMean (SD)Range51CAD/CAMMarginal197.8 (30.7)A157.89-261.41Cervical-axial235.9 (19.79)AB199.80-257.1Middle-axial297.03 (49.12)BC230.15-393.29Incisal356.11 (103.31)C241.72-613.87FigaroMarginal336.53 (22.59)A297.76-366.87Cervical-axial400.44 (48.5)AB295.27-464.57Middle-axial454.01 (85.37)B362.44-621.71Incisal445.21 (180.73)B288.36-763.253CAD/CAMMarginal170.98 (22.11)A148-218.21Cervical-axial213.09 (17.87)A189.52-249.88Middle-axial301.44 (23.45)B276.43-342.69Incisal371.41 (50.91)B284.39-459.88FigaroMarginal313.17 (32.39)A252.95-373.85Cervical-axial454.26 (40.31)B386.98-501.17Middle-axial580.43 (64.71)C496.07-676.08Incisal590.84 (129.51)C461.73-897.0475CAD/CAMMarginal295.17 (26.48)A241.77-327.34Cervical-axial538.55 (74.31)B418.21-651.8Middle-axial672.18 (115.99)C533.05-881.19Occlusal1043.47 (254.81)D673.32-1589.18FigaroMarginal520.55 (47.55)A461.54-613.36Cervical-axial668.74 (55.39)B587.72-747.95Middle-axial933.65 (74.73)C795.15-1041.24Occlusal1618.56 (239.58)D1358.85-2131.47Different superscript uppercase letters (A, B, C, D) in the same column indicate statistically significant difference (p<0.05). μm: micrometer.Gap measurements for CAD/CAM crowns were lower than fiberglass crowns regardless of the location and tooth number (p<0.05) with an exception of #51 for which comparable incisal gap measurements were found for CAD/CAM and fiberglass crowns (p>0.05) (Figure 2).Figure 2
Comparison of gap measurements of different crown types based on location for each tooth. The asterisks (∗) indicate no statistical difference between groups (p>0.05).When the gap measurements obtained for different teeth were compared, #75 showed higher gap measurements than #51 and #53 irrespective of the location and for both crown types (p<0.05). Considering fiberglass crowns, #53 showed higher middle-axial and occlusal gaps than #51 (p<0.05). However, no significant differences were found between #51 and #53 at other locations either for CAD/CAM or for fiberglass crowns (p>0.05).
## 4. Discussion
The adaptation of a crown is of importance for primary teeth as well as permanent dentition, considering that poor-fitting crowns may cause secondary caries or gingivitis [23]. This in vitro study compared the adaptation of CAD/CAM resin-ceramic and prefabricated fiberglass primary crowns by calculating overall, marginal, and internal gaps via micro-CT. The results showed significant differences between the gap measurements for CAD/CAM and fiberglass crowns concerning both overall and location-based evaluations. Therefore, the null hypothesis suggesting that the adaptation of fiberglass and resin-ceramic CAD/CAM crowns would be similar was rejected.The adaptation can be evaluated by measuring the gap between the restoration and preparation with various methods such as direct microscopic measurement [18, 24], silicone replica technique [18], virtual seating of the crown and die by using their 3D scan data via reverse engineering software [20], and, as the newest technique, micro-CT imaging [21]. Precise linear 2D and volumetric 3D measurements can be performed in micron-level precision by using micro-CT, which was recommended as an innovative and nondestructive method for the in vitro evaluation of the adaptation [25]. Micro-CT allows for a great number of measurement points with close sectioning of the specimen, which ensures the reliability of the results [25]. In the present study, 5 ROIs were determined with equal sectioning in slices, and 255 measurements for each specimen were performed from 17 standardized points to ensure a comprehensive evaluation of internal and marginal adaptation of the crowns.The present study compared the adaptation of two different esthetic crown types for primary teeth. Figaro crowns are composed of fiberglass, aramid, carbon, and quartz filaments embedded within a composite resin material [14]. The combination of these materials brings flexibility which enables a slight elastic deformation while placing the crown on the prepared tooth [26]. This flex-fit technology allows minimal tooth reduction, unlike zirconia esthetic crowns which require excessive preparation to compensate for the lack of flexibility [13, 14, 26]. To ensure the passive fit of zirconia crowns as recommended, retention and adaptation problems are frequently encountered [13]. Based on these considerations, zirconia esthetic crowns were not included in this study. SSCs are the most appropriate restorative materials in paediatric dentistry [8]. In addition, the tooth preparation is minimal, and their adaptation is flex-fit. However, SSCs were not included in this study as a control group because micro-CT does not allow the scanning of materials with high atomic number such as metals [27]. On the other side, CAD/CAM crowns were fabricated from resin-ceramic blocks considering the advantages for primary teeth and the similarity in composition to Figaro crowns. Therefore, custom-made CAD/CAM crowns were included as the control group.In the present study, the overall adaptation was evaluated based on the 3D volumetric analysis and should be regarded as the total cement space [21, 28]. However, location-based linear 2D measurements provide data indicative of increased cement thickness at particular internal measurement points, as well as marginal adaptation [28]. CAD/CAM crowns showed better adaptation than Figaro crowns for both overall and location-based gap measurements and for all teeth. 
CAD/CAM crowns were designed based on the scans of Figaro crowns, and both crown types had identical outer forms. However, the internal contours were different as Figaro crowns have prefabricated, nonanatomical, and standardized inner surfaces while CAD/CAM crowns were custom-made. Micro-CT images for the mandibular molar showed that the CAD/CAM crown had rounded inner corners which were in harmony with the preparation outline (Figure 1(e)). On the other hand, right-angled internal corners that did not fit the preparation outline at the axioocclusal transition areas of the Figaro crown were observed (Figure 1(f)). Therefore, according to the present findings, it can be suggested that despite Figaro crowns allowing the restoration to adapt on the prepared tooth with flex-fitting, custom-made crowns fabricated with CAD/CAM technology provide better adaptation for primary teeth.The uniformity of the gap between the preparation and the crown is important to ensure the retention form as well as fracture strength [21, 29]. Overall adaptation gives a general overview of the entire gap between the preparation and the crown; however, to evaluate the uniformity, location-based analysis is essential [21]. Previous studies reported that increased gap spaces at axial walls and the occlusal surface may reduce resistance to fracture [21, 29, 30]. Considering location-based adaptation, all groups showed a tendency for increased gap measurements from the marginal region to the occlusal surface. This finding corroborates with previous studies that reported the highest location-based gap measurements for the occlusal surface [31, 32]. For CAD/CAM restorations, the diameter and shape of the milling tools might limit the machining ability which would adversely affect the internal adaptation, especially at the occlusal surface [29]. On the other hand, frictional contacts that exceeded the flexibility limit of the Figaro crowns in the cervical region may have prevented proper fitting, resulting in an increased occlusal gap. Since disadvantages related to excessive occlusal gap include stress concentration and restoration fractures, clinicians should be cautious about occlusal adaptation when restoring primary teeth with Figaro or CAD/CAM crowns [19, 32].Previous studies reported that the marginal gap values of ceramic crowns may range from 50 to 200μm [18, 20, 21]. Only CAD/CAM crowns fabricated for central and canine incisors were within these limits. The marginal gap for CAD/CAM molar crowns was above 200 μm, yet lower than the marginal gap for Figaro crowns. All Figaro crowns exhibited marginal gap values exceeding the clinically acceptable range irrespective of tooth number. In the present study, the preparation design recommended for Figaro crowns was performed for all teeth, and optical impressions of the same prepared teeth were obtained to fabricate CAD/CAM crowns. Therefore, for both crown types, marginal adaptations were evaluated for knife-edge margin which was reported to result in the greatest marginal discrepancy compared to other margin designs [24]. Furthermore, the present study evaluated marginal adaptation based on the absolute marginal discrepancy which considers both the horizontal and vertical directions [20]. Marginal design and marginal adaptation evaluation method employed in the present study may be the reason for high marginal gap values. 
Furthermore, micro-CT images of the Figaro crowns showed an overextension in the outer margin line which would have increased the gap measurements for the absolute marginal discrepancy (Figures 1(b), 1(d), and 1(f)). Based on these findings, CAD/CAM crowns may be preferred over Figaro crowns considering the clinical significance of marginal adaptation.In the present in vitro study, consistency of the results was ensured with standardized test conditions. Location-based gap measurements were performed by using the same ROIs and measurement points for all scans. To implement micro-CT measurements under the same conditions, CAD/CAM crowns were fabricated by scanning the same preparations on which Figaro crowns were adapted. Therefore, the preparations were standardized for both groups. Also, to eliminate differences in crown geometry, CAD/CAM crowns were designed based on the scans of Figaro crowns by utilizing the “biocopy” tool of the CEREC software. Nevertheless, limitations exist as in any in vitro study. Gap measurements were executed without cementation which may influence the fit of the restoration [24]. However, if the crowns were cemented on the corresponding master dies, the adaptation evaluation should have been performed on different preparations. To use a single standardized master die for each tooth, adaptation evaluation was performed without cementation. In addition, intraoral conditions such as soft tissue, saliva, and gingival fluid may affect the quality of the digital impression, thus adaptation. Further in vivo studies are warranted to evaluate the applicability of the CAD/CAM and Figaro crowns in paediatric dentistry and the effect of intraoral variables on the adaptation.
## 5. Conclusion
In this study, microcomputed tomography was first used to evaluate the adaptation of crowns for primary teeth, and the results showed that resin ceramic CAD/CAM crowns showed better overall, marginal, and internal adaptation compared to prefabricated fiberglass primary crowns for all primary teeth.All crowns showed lower gap measurements at the marginal region compared to the occlusal surface, which is important for the clinical prognosis. A modality to define the clinically acceptable adaptation parameters for crowns applied to primary teeth can be developed based on the findings of this study.
---
*Source: 1011661-2021-09-26.xml* | 2021 |
# The Relationship between Body Composition and Bone Mineral Density of Female Workers in A Unit of Tai’an
**Authors:** Yan Wang; Siqi Wang; Zhengxiu Chen; Zhangshen Ran
**Journal:** Computational and Mathematical Methods in Medicine
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1011768
---
## Abstract
Objective. To explore the relationship between body composition and bone mineral density (BMD) of female workers in a university in Tai’an. Methods. This study randomly selected 90 female employees of a university in Tai’an. Body composition was measured with a body composition analyzer (InBody 770), and lumbar bone mineral density was measured by dual-energy X-ray absorptiometry (DEXA). The data were analyzed with SPSS 22.0 statistical software. Results. With increasing body mass index (BMI), the BMD of female lumbar spines 1-4 (L1-4) increased gradually. Spearman correlation analysis showed that BMI, skeletal muscle mass, upper limb muscle mass, trunk muscle mass, lower limb muscle mass, and whole-body phase angle were positively correlated with L1-4 BMD, whereas age was negatively correlated with L1-4 BMD. Linear regression analysis showed that age was a negative factor for L1-4 BMD, and skeletal muscle mass, especially lower limb muscle mass, was a protective factor against abnormal bone mass. Conclusions. Lower limb muscle mass is a protective factor for female BMD. Strengthening physical exercise to improve lower limb muscle mass is conducive to the prevention of female osteoporosis.
---
## Body
## 1. Introduction
The aging population in China has been growing dramatically, which has become a heavy burden for government at all levels. In addition to common mental diseases such as Alzheimer’s disease and Parkinson’s disease [1, 2], increasing attention is being paid to physical diseases.

Osteopenia/osteoporosis (OP) is a metabolic bone disease mainly caused by aging and associated with gradual changes in body composition [3–6]. Through long-standing bone mass reduction and destruction of the bone microstructure, OP makes the appendicular skeleton increasingly brittle and prone to fracture. There are many known and potential driving factors influencing OP, for instance, vitamin D intake, hormone levels, and sex-specific metabolic diseases [4]. Thus, the whole progression is entirely different between men and women [7, 8]. The population with abnormal bone mass among women is significantly larger than that among men [9], especially in postmenopausal women [4, 10]. As a research hotspot, the association between bone mineral density (BMD) and aging, body weight, nutritional status, smoking, alcohol, physical activity, etc. has been widely studied [11–13]. However, the research conclusions regarding what contributes to adequate maintenance of BMD have not reached a consensus [14–23]. Reid et al. took the lead in demonstrating that fat mass (FM), one main component of body weight, is closely related to bone density in premenopausal women [19]. The following year, the Framingham study stated that weight or BMI, as a load factor, is much more strongly associated with BMD in elderly women and men [24]. Further, Dimitri et al. believed, based on their study, that obesity is protective against fracture [25]. For postmenopausal women, Mpalaris and colleagues proposed a U-shaped curve to describe the relationship between BMI and fracture risk [26]. For children, neither the diagnostic criteria nor the therapeutic decisions and assessments of therapeutic efficacy have reached agreement [8, 25, 27–32]. Worse still, the poorly understood relationship between body composition and BMD affects the diagnosis and treatment of patients with other complex diseases [33–38].

The study conducted here is aimed at checking whether different assemblies of body components may play different roles in bone metabolism. Further study of the correlation between body composition and bone health should help guide women in preventing and treating osteoporosis and improving their quality of life.
## 2. Objects and Methods
### 2.1. Research Objects
Female employees of a university who had routine physical check-ups in the examination center of the Second Affiliated Hospital of Shandong First Medical University from January 2021 to May 2021 were initially recruited as the research subjects. They had not participated in formal vocational physical exercise and voluntarily accepted DEXA monitoring of bone mineral density as well as body composition analysis. Finally, 90 cases were included, aged 37-85 years, with an average age of (54.84±7.88) years. This study was approved by the ethics committee of the Second Affiliated Hospital of Shandong First Medical University, and all subjects signed informed consent. The specific inclusion criteria were as follows: (1) no disease affecting bone metabolism (thyroid disease, Cushing syndrome, etc.); (2) no use of drugs affecting bone metabolism and body composition (glucocorticoids, estrogen, thyroid hormone, parathyroid hormone, calcitonin, bisphosphonates, etc.); (3) no serious liver or kidney disease; (4) no alcohol consumption; and (5) no history of tumor.
### 2.2. General Data Collection
Subjects were measured while fasting, with shoes and headwear removed and wearing light clothing; weight (kg) and height (cm) were measured, and BMI was calculated as weight (kg) divided by the square of height (m2). Based on the BMI grouping standard formulated by the Working Group on Obesity in China (WGOC) and the guidelines in references [3, 5], subjects with 18.5≤BMI<24 kg/m2 formed the normal weight group, while those with BMI≥24 kg/m2 formed the overweight/obese group.
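A minimal sketch of this computation and grouping, assuming only the thresholds quoted above (the function names are illustrative, not from the study):

```python
# BMI = weight (kg) / height (m)^2, with WGOC-based grouping as described
def bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

def wgoc_group(b: float) -> str:
    if 18.5 <= b < 24:
        return "normal weight"
    if b >= 24:
        return "overweight/obese"
    return "underweight"  # not analyzed as a separate group in this study

b = bmi(65.37, 158.94)  # cohort mean weight and height from Table 1
print(f"BMI = {b:.2f} kg/m^2 -> {wgoc_group(b)}")
```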
### 2.3. Body Composition Test
The InBody 770 body composition analyzer was used to test the body composition of the subjects. The test indexes included waist-hip ratio, muscle mass (kg), fat mass (kg), body fat percentage (%), upper limb muscle mass (kg), trunk muscle mass (kg), lower limb muscle mass (kg), upper limb fat mass (kg), trunk fat mass (kg), lower limb fat mass (kg), whole-body phase angle (°), and visceral fat area (cm2).
### 2.4. Bone Mineral Density Test
According to the Chinese expert consensus on the diagnosis of osteoporosis [39], the bone mineral density of lumbar spines 1-4 (L1-4) was measured with a dual-energy X-ray bone densitometer (Primus, OSTO, Korea) [40].
### 2.5. Statistical Analysis
SPSS v22.0 statistical software was used for data analysis, and measurement data are expressed as mean±standard deviation (x̄±s). The two groups were compared with the t-test. Spearman correlation analysis was used to assess the correlation between body composition and bone mineral density, and the relationship between bone mineral density and body composition was further analyzed by linear regression. Differences were considered statistically significant at P<0.05.
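The same correlation and regression steps can be sketched in Python (the study used SPSS); the eight-subject data frame below is a hypothetical stand-in for the real 90-subject dataset:

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Toy stand-in for the per-subject measurements
data = pd.DataFrame({
    "age":               [45, 52, 60, 68, 49, 57, 63, 71],
    "lower_limb_muscle": [13.5, 12.9, 11.8, 10.9, 13.1, 12.2, 11.5, 10.4],
    "l14_bmd":           [1.15, 1.08, 0.97, 0.88, 1.10, 1.01, 0.94, 0.85],
})

# Spearman correlation between a body composition index and L1-4 BMD
rho, p = stats.spearmanr(data["lower_limb_muscle"], data["l14_bmd"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# Linear regression of L1-4 BMD on age and lower limb muscle mass
fit = smf.ols("l14_bmd ~ age + lower_limb_muscle", data=data).fit()
print(fit.params)
```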
## 3. Results
### 3.1. General Information
The screened 90 women were finally included in this study: age (54.84±7.88) years, height (158.94±5.30) cm, weight (65.37±10.41) kg, and BMI (25.86±3.87) kg/m². The waist-hip ratio and body fat percentage of the study group were higher than the normal level; the body composition is detailed in Table 1.
Table 1: General information about body composition of female employees.

| Project | Minimum | Maximum | Mean (x̄) | Standard deviation (s) |
|---|---|---|---|---|
| Age (y) | 37 | 85 | 54.84 | 7.88 |
| Height (cm) | 146.1 | 171.3 | 158.94 | 5.3 |
| Weight (kg) | 46.5 | 101.8 | 65.37 | 10.41 |
| BMI (kg/m²) | 19.3 | 39.8 | 25.86 | 3.87 |
| Waist-hip ratio | 0.83 | 1.11 | 0.94 | 0.061 |
| Skeletal muscle mass (kg) | 15.8 | 28.8 | 21.99 | 2.85 |
| Body fat (kg) | 13.2 | 51.9 | 24.68 | 7.4 |
| Percentage of body fat (%) | 25.6 | 52.7 | 37.15 | 6.05 |
| Upper limb muscle mass (kg) | 2.74 | 6.47 | 4.24 | 0.73 |
| Trunk muscle mass (kg) | 14.1 | 25.2 | 18.92 | 2.25 |
| Lower limb muscle mass (kg) | 8.37 | 15.97 | 12.34 | 1.77 |
| Upper limb fat (kg) | 1.6 | 11.3 | 3.75 | 1.66 |
| Trunk fat (kg) | 6.1 | 25.6 | 12.6 | 3.81 |
| Lower limb fat (kg) | 4 | 13.1 | 7.11 | 1.84 |
| Visceral fat area (cm²) | 54.4 | 262.6 | 127.68 | 43.83 |
| Whole-body phase angle (°) | 3.6 | 5.9 | 4.88 | 0.47 |
| Lumbar 1-4 bone mineral density (g/cm²) | 0.566 | 1.539 | 1.007 | 0.191 |
### 3.2. Comparison of Bone Mineral Density with Different BMI
According to the BMI grouping standard formulated by the WGOC, the population was divided into the normal weight group and the overweight/obesity group. Comparing the L1-4 lumbar spine bone mineral density between the two groups shows that the bone mineral density of the overweight/obesity group is significantly higher than that of the normal weight group (see Table 2 for details).
Table 2: Comparison of bone mineral density of different BMI groups.

| BMI group | n | Mean | Standard deviation | t | P |
|---|---|---|---|---|---|
| Normal weight group | 33 | 0.943 | 0.207 | -2.008 | <0.05 |
| Overweight group | 37 | 1.039 | 0.195 | | |
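As a quick sanity check (not taken from the paper), the t statistic in Table 2 can be recomputed from the summary statistics alone, assuming a pooled-variance two-sample t-test:

```python
import math

n1, m1, s1 = 33, 0.943, 0.207  # normal weight group
n2, m2, s2 = 37, 1.039, 0.195  # overweight/obese group

# Pooled standard deviation for an equal-variance two-sample t-test.
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
print(round(t, 3))  # ~ -2.0; the paper reports -2.008 (difference due to rounded summary stats)
```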
### 3.3. Correlation Analysis and Multiple Regression Analysis between Body Composition and Bone Mineral Density
Spearman correlation analysis showed that age was negatively correlated with L1-4 BMD, while BMI, skeletal muscle mass, upper limb muscle mass, trunk muscle mass, lower limb muscle mass, and whole-body phase angle were positively correlated with L1-4 BMD (see Table 3 for details).
Table 3: Correlation analysis between body composition and bone mineral density.

| Project | r | P |
|---|---|---|
| Age (y) | -0.43 | <0.01 |
| BMI (kg/m²) | 0.251 | <0.05 |
| Waist-hip ratio | -0.041 | >0.05 |
| Skeletal muscle mass (kg) | 0.453 | <0.01 |
| Body fat (kg) | 0.152 | >0.05 |
| Percentage of body fat (%) | 0.059 | >0.05 |
| Upper limb muscle mass (kg) | 0.388 | <0.01 |
| Trunk muscle mass (kg) | 0.373 | <0.01 |
| Lower limb muscle mass (kg) | 0.419 | <0.01 |
| Upper limb fat (kg) | 0.138 | >0.05 |
| Trunk fat (kg) | 0.144 | >0.05 |
| Lower limb fat (kg) | 0.176 | >0.05 |
| Visceral fat area (cm²) | 0.049 | >0.05 |
| Whole-body phase angle (°) | 0.208 | <0.05 |
### 3.4. Multiple Regression Analysis of Body Composition and Bone Mineral Density
Two linear regression models were fitted, treating skeletal muscle either as a whole or stratified into upper limb, trunk, and lower limb muscle. In the first model, L1-4 bone mineral density was the dependent variable, and age, skeletal muscle mass, and whole-body phase angle were the independent variables. This model shows that age is a negative influencing factor of bone mineral density, while skeletal muscle mass is a protective factor of BMD (see Table 4 for details).
Table 4: Linear regression analysis of body composition and bone mineral density.

| Project | Standardized β | P value |
|---|---|---|
| Age (y) | -0.355 | <0.01 |
| Skeletal muscle mass (kg) | 0.35 | <0.01 |

In the second model, taking L1-4 BMD as the dependent variable and age, upper limb muscle mass, trunk muscle mass, lower limb muscle mass, and whole-body phase angle as independent variables, linear regression analysis shows that age is a negative influencing factor of BMD and lower limb muscle mass is a protective factor of BMD (see Table 5 for details).
Table 5: Linear regression analysis of body composition and bone mineral density.

| Project | r value | P value |
|---|---|---|
| Age (y) | -0.333 | <0.01 |
| Lower limb muscle mass (kg) | 0.618 | <0.01 |
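A hedged sketch of how such standardized regression coefficients can be obtained (illustrative only, not the authors' SPSS procedure; it reuses the hypothetical `df` and column names from the earlier snippet):

```python
import statsmodels.api as sm

# Z-score the variables so the fitted slopes are standardized betas.
cols = ["age", "lower_limb_muscle_kg", "l1_4_bmd"]
z = (df[cols] - df[cols].mean()) / df[cols].std()

X = sm.add_constant(z[["age", "lower_limb_muscle_kg"]])
model = sm.OLS(z["l1_4_bmd"], X).fit()
print(model.params)   # standardized coefficients (cf. Table 5)
print(model.pvalues)  # corresponding P values
```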
## 3.1. General Information
The screened 90 women were finally included in this study, age (54.84±7.88) years, height (158.94±5.30) cm, weight (65.37±10.41) kg, and BMI (25.86±3.87) kg/m2. The waist hip ratio and body fat percentage of the study group were higher than the normal level, and the body composition was (see Table 1 for details).Table 1
General information about body composition of female employees.
ProjectMinimumMaximumMean (−x)Standard deviation (s)Age (y)378554.847.88Height (cm)146.1171.3158.945.3Weight (kg)46.5101.865.3710.41BMI(kg/m2)19.339.825.863.87Waist hip ratio0.831.110.940.061Skeletal muscle mass (kg)15.828.821.992.85Body fat (kg)13.251.924.687.4Percentage of body fat (%)25.652.737.156.05Upper limb muscle mass (kg)2.746.474.240.73Trunk muscle mass (kg)14.125.218.922.25Lower limb muscle mass (kg)8.3715.9712.341.77Upper limb fat (kg)1.611.33.751.66Trunk fat (kg)6.125.612.63.81Lower limb fat (kg)413.17.111.84Visceral fat area (cm2)54.4262.6127.6843.83Whole-body phase angle (°)3.65.94.880.47Lumbar 1-4 bone mineral density (g/cm2)0.5661.5391.0070.191
## 3.2. Comparison of Bone Mineral Density with Different BMI
According to the BMI grouping standard formulated by WGOC, the population is divided into the normal weight group and overweight/obesity group. By comparing the bone mineral density of lumbar spines L1-4 between the two groups, it can be seen that the bone mineral density of the overweight/obesity group is significantly higher than that of the normal weight group (see Table2 for details).Table 2
Comparison of bone mineral density of different BMI.
BMInMeanStandard deviationtPNormal weight group330.9430.207-2.008<0.05Overweight group371.0390.195
## 3.3. Correlation Analysis and Multiple Regression Analysis between Body Composition and Bone Mineral Density
Spearman correlation analysis showed that age was negatively correlated with L1-4 BMD, and skeletal muscle, upper limb muscle, trunk muscle, lower limb muscle, and whole-body phase angle were negatively correlated with L1-4 BMD (see Table3 for details).Table 3
Correlation analysis between body composition and bone mineral density.
ProjectrPAge (y)-0.43<0.01BMI (kg/m2)0.251<0.05Waist hip ratio-0.041>0.05Skeletal muscle mass (kg)0.453<0.01Body fat (kg)0.152>0.05Percentage of body fat (%)0.059>0.05Upper limb muscle mass (kg)0.388<0.01Trunk muscle mass (kg)0.373<0.01Lower limb muscle mass (kg)0.419<0.01Upper limb fat (kg)0.138>0.05Trunk fat (kg)0.144>0.05Lower limb fat (kg)0.176>0.05Visceral fat area (cm2)0.049>0.05Whole-body phase angle (°)0.208<0.05
## 3.4. Multiple Regression Analysis of Body Composition and Bone Mineral Density
Skeletal muscle, including upper limb muscle, trunk muscle, and lower limb muscle, was used as independent variables for multiple linear regression analysis. L1-4 bone mineral density is the dependent variable and age, and skeletal muscle and whole-body phase angle are the independent variables. Multiple linear regression analysis shows that age is the negative influencing factor of bone mineral density, and skeletal muscle is the protective factor of BMD (see Table4 for details).Table 4
Linear regression analysis of body composition and bone mineral density.
ProjectStandardβ valueP valueAge (y)-0.355<0.01Skeletal muscle mass (kg)0.35<0.01Taking L1-4 BMD as dependent variable and age, and upper limb muscle, trunk muscle, lower limb muscle, and whole-body phase angle as independent variables, linear regression analysis shows that age is a negative influencing factor of BMD, and lower limb muscle is a protective factor of BMD (see Table5 for details).Table 5
Linear regression analysis of body composition and bone mineral density.
Projectr valueP valueAge (y)-0.333<0.01Lower limb muscle mass (kg)0.618<0.01
## 4. Discussion
The outcomes of body composition analysis mainly provide the proportions of water, muscle, fat, and inorganic salts in the total body mass. These values can be used to evaluate the nutritional status, energy metabolism, and overall health of the body and can guide the diagnosis, treatment, and prognosis of a variety of diseases [41]. The present study suggests that lower limb muscle mass is a protective factor that keeps BMD within the normal range in females working in the city of Tai'an. Since as early as the 1990s, the relationship between BMI, body weight, and bone health has been studied extensively, but the conclusions of different studies have not reached a consensus. Evans et al. illustrated that the areal BMD of the whole body, lumbar spine, hip, tibia, and radius of obese adults was higher than that of normal-weight adults, and that cortical and trabecular bone number was significantly increased [17]. Yang and Shen also showed that the BMD of the spine and femoral neck increased with BMI and hip circumference [42]. However, Compston et al. reported an increased risk of ankle and femoral fractures in postmenopausal obese women [43]. The proportions of body components vary among different populations, even at the same body mass, so BMI or body weight alone cannot truly reflect individual composition and health status. In recent years, studies have shown that muscle mass and adipose tissue can affect bone mineral density. In our previous studies on male body composition and bone mineral density, we found that lean mass (LM) and lean body mass index (LBMI) play a decisive role in bone mineral density, whereas in overweight/obese people, percentage body fat (%BF) and fat mass index (FMI) are negative factors of bone mineral density [44]. A study of men over 50 years old showed that FMI is a protective factor of lumbar spine BMD, while LBMI and fat-free mass index (FFMI) are protective factors of hip BMD [11]. Limb muscle is a protective factor of bone strength in the female population, and subcutaneous adipose tissue is a risk factor for the decline of bone strength; this conclusion is not affected by menopause. The risk of osteoporosis increases with the decrease of total fat and lean tissue in postmenopausal women. Previous studies therefore suggest that body composition plays different roles in the bone health of different populations [10, 45]. In this study, there was no significant correlation between fat/obesity indexes and bone mineral density in the female population; the effect of adipose tissue on bone health is still controversial and needs further research. Most studies have not further stratified whole-body muscle; however, a positive correlation between muscle and BMD has been found [46]. Lower limb muscle mass is more conducive to the prevention of osteoporosis than whole-body muscle or upper limb muscle. The possible protective mechanisms of muscle for bone health are as follows: (1) the gravity generated by muscle tissue through its own weight and the stress generated by muscle contraction on bone promote bone growth and development and increase BMD and bone strength.
(2) Muscle produces insulin-like growth factor 1, fibroblast growth factor 21, myostatin, irisin, and other factors, which act on mesenchymal stem cells, osteoblasts, and osteoclasts through paracrine signaling to regulate bone metabolism [47, 48]. The limitations of this study are as follows. First, the sample size is relatively small, which may bias the results. Second, this study measured only lumbar bone mineral density and therefore cannot determine the correlation between bone mineral density and body composition at other sites. In future research, the sample size can be expanded to confirm the reliability of these results, and bone mineral density monitoring can be extended to the femur, forearm, and whole body to further verify the correlation between body composition and the bone mineral density of various parts of the body. There is a significant correlation between human body composition and bone health, but there are few large-sample studies on the correlation between bone health and body composition in different populations. In clinical practice, more attention should be paid to repeated measurements [42] and to data analyses involving body composition in further studies, so as to improve patients' quality of life.
---
*Source: 1011768-2022-02-08.xml*
# REE Geochemistry of Euphrates River, Turkey
**Authors:** Leyla Kalender; Gamze Aytimur
**Journal:** Journal of Chemistry
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1012021
---
## Abstract
The study area is located on the Euphrates River at 38°41′32.48″N–38°14′24.10″N latitude and 39°56′4.59″E–39°8′13.41″E longitude. The Euphrates is the longest river in Western Asia. The lithological units observed from the bottom to the top are the Permo-Triassic Keban Metamorphites, Late Cretaceous Kömürhan Ophiolites, Upper Cretaceous Elazığ Magmatic Complex, Middle Eocene Maden Complex and Kırkgeçit Formation, Upper Paleocene and Lower Eocene Seske Formation, Upper Miocene-Pliocene Karabakır and Çaybağı Formations, Palu Formation, and Holocene Euphrates River sediments. The geochemical studies show that the 87Sr/86Sr and 143Nd/144Nd isotopic compositions in the Euphrates River bank sediments are 0.7053, 0.7048, and 0.7057 and 0.512654, 0.512836, and 0.512775, respectively. These values indicate mixing of both carbonate-rich shallow marine sediment and felsic-mafic rocks from the Elazığ Magmatic Complex into the stream sediments. The positive εNd(0) values (0.35, 3.9, and 2.7) are higher downstream in the studied sediments due to weathering of the mafic volcanic rocks. The chondrite, NAS, and UCC normalized patterns show that the REE compositions of the Euphrates River sediments are higher than chondrite composition but close to NAS and UCC. The river sediments in the tectonic zone and the weathered granodioritic rocks of the Elazığ Magmatic Complex affect upstream water compositions.
---
## Body
## 1. Introduction
A number of researchers have studied the Nd-Sr isotopic and trace element geochemistry of river sediments and soils as tracers of clastic sources. The geochemical characterization and Sr-Nd isotopic fingerprinting of sediments in any fluvial system can be done using radiogenic isotopic compositions [1–5]. Rare earth element (REE) compositions have been studied in stream sediments [6–8] and in the chemical weathering of drainage systems [9–12]. A number of researchers have studied the REE composition of both river sediments and river water and discovered that heavy REE concentrations are higher in river sediments than in suspended matter in river water. They also indicated that shale-, Upper Continental Crust-, and chondrite-normalized REE patterns allow chemical weathering of source rocks in the continental crust, erosion, and terrigenous fluviatile sediment sources to be distinguished using the REE compositions of rivers [13–18]. Leybourne et al. [19] indicated that Ce and Eu can be redox sensitive and can be used to determine redox conditions. Yang et al. [8] stated that source rock composition is a more important factor affecting REE composition than weathering processes. There are fewer studies on the Euphrates River. Kalender and Bölücek [20] studied the stream sediments north of the Keban Dam Lake in the Eastern Anatolian district and demonstrated that more REEs are transported with Fe- and Mn-rich oxides and fine size fraction sediments (e.g., clay minerals) via adsorption. Kalender and Çiçek Uçar [21] indicated that the calculated enrichment factor values of the heavy REE are higher than those of the light REE in the Geli stream sediments; the Geli stream is a tributary of the Euphrates River. Rivers carry weathered rock products from the continents to dam lakes, natural lakes, and the sea. Thus, this study focuses on the REE concentrations in river bank sediments from the initial point of the Euphrates River (10 kilometers upstream of the Keban Dam) to the Karakaya Dam Lake. In order to evaluate the distribution of REE along the flowing direction of the Euphrates River, the sediment sources and lithological controls were identified using Upper Continental Crust (UCC), North American Shale (NAS), and chondrite (Ch) normalized REE patterns (all average values taken from [8, 22–25]). Goldstein and Jacobsen [13] studied REE compositions in river water and found that major rivers have LREE-enriched patterns relative to NAS and that negative Ce anomalies occur at high pH. This paper presents, for the first time, REE concentrations and source rock compositions of the Euphrates River sediments and waters.
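To make the normalization step concrete, the sketch below divides a sample's REE concentrations by reference chondrite abundances. It is illustrative only: the chondrite values are approximate CI-chondrite-like numbers, not the reference set actually used in this paper (taken from [8, 22–25]), and should be swapped for that set in any real use.

```python
# Approximate CI-chondrite REE abundances (ppm); illustrative values only.
CHONDRITE = {"La": 0.237, "Ce": 0.613, "Nd": 0.457, "Sm": 0.148,
             "Eu": 0.056, "Gd": 0.199, "Yb": 0.161, "Lu": 0.0246}

# Mean Euphrates River sediment concentrations (ppm) from Table 1 below.
sample = {"La": 8.57, "Ce": 18.11, "Nd": 7.70, "Sm": 2.14,
          "Eu": 0.57, "Gd": 2.20, "Yb": 1.11, "Lu": 0.15}

# Chondrite-normalized pattern: values well above 1 reproduce the
# "higher than chondrite" observation from the abstract.
normalized = {el: sample[el] / CHONDRITE[el] for el in sample}
for el, value in normalized.items():
    print(f"{el}: {value:.1f} x chondrite")
```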
## 2. Geology
The Euphrates River is located in the Eastern Anatolian district of Turkey, on the active East Anatolian fault zone. The East Anatolian fault zone is seismically one of the most active regions in the world and lies within the Mediterranean Earthquake Zone, a complicated deformation area formed by the continental collision between the African-Arabian and Eurasian continents. These deformations involve thrust faults, suture zones, active strike-slip and normal faults, and basin formations arising from these faults. The Euphrates River is formed by the mixing of the Karasu River and the Murat River 10 kilometers upstream of the Keban Dam. According to Frenken [27], the Euphrates River length is 1100 km from Palu to the Red Sea (Figures 1(a) and 1(b)). The studied sediments were sampled along approximately 50 kilometers of the Euphrates River length. The main stratigraphic units found in the Euphrates River basin range from the Permo-Triassic Keban Metamorphites to the Plio-Quaternary Palu Formation (Figure 1(b)). The Keban Metamorphites outcrop on both the right and left banks of the Euphrates River. On the regional scale, the Keban Metamorphites are represented by marble, recrystallized limestone, calc-schist, metaconglomerate, and calc-phyllite in the study area [28]. Additionally, in the study area, the Upper Cretaceous Elazığ Magmatic Complex consists of volcanic rocks (basalt, andesite, pillow lava, dacite, and volcanic breccia), subvolcanic rocks (aplite, microdiorite, and dolerite), and plutonic rocks (diorite, tonalite, granodiorite, granite, and monzonite). The magmatic rocks are overlain by recrystallized limestone of the Permo-Triassic Keban Metamorphites [29]. The Late Cretaceous Kömürhan Ophiolites are part of the southeast Anatolian ophiolite belt, which formed in a suprasubduction zone within the southern Neo-Tethys [30]. The Kömürhan Ophiolites consist of dunite, layered and isotropic gabbros, plagiogranite, a sheeted dyke complex, andesitic and basaltic rocks, and volcanosedimentary rocks. The unit is observed in the Karakaya Dam Lake area (Figure 1(b)). Upper Paleocene and Lower Eocene massive limestones are named the Seske Formation, which is characterized by interbedded clastic and carbonate rocks. The stratigraphic position of the Seske Formation indicates the extent to which Neo-Tethys was controlled by the tectonic and topographic features of the region during the Eocene [31, 32]. The Seske Formation is observed along the Karakaya Dam Lake (Figure 1(b)). The Middle Eocene Maden Complex is composed of basaltic and andesitic rocks together with limestone, conglomerates, sandstones, and mudrocks (marl) [29, 33–35]. The Middle Eocene Kırkgeçit Formation consists of marine conglomerates, marls, and limestone from the bottom to the top [36]. The Alibonca Formation was deposited after the closure of the Neo-Tethys Ocean in Mesozoic time; the unit is composed of sandy limestone and marls, which are observed in the Keban Dam Lake area [37] (Figure 1(b)). The Upper Miocene-Pliocene Karabakır and Çaybağı Formations consist of sandstone, mudrock, marls, tuff, and basaltic rocks [36]; the units are observed northeast of the Keban Dam Lake (Figure 1(b)). The Çaybağı Formation was named by Türkmen [38] after the Çaybağı township east of Elazığ. The Pliocene-Quaternary units were named the Palu Formation by Kerey and Türkmen [39].
The units consist of Quaternary alluvial and fluvial deposits along the level bank of the Murat River, which is the initial point of the Euphrates River (Figures 1(a) and 1(b)).

Figure 1: (a) Location map. (b) Geology map of the Euphrates River and the sampling sites (from 1 to 90; the geology map was modified from Herece et al. [26]).
## 3. Analytical Methods
### 3.1. Climatic Information
Maximum flows of the Euphrates River occurred from February through April, whereas minimum flows occurred from August through October. The annual mean rainfall during the study period was 372 mm, and the air temperature varied between 15.21°C (Elazığ) and 16.31°C (Malatya), with the highest and lowest temperatures of 34°C and −10°C, respectively, between 1992 and 2001. The continental climate of the Euphrates Basin is a subtropical plateau climate (data taken from reports by the Elazığ Meteorology Department). The Euphrates River sediment samples were collected in September, the period of minimum flow of the Euphrates River water. The effect of surface erosion processes on the chemical composition of the river bed sediments was also taken into consideration; the sediment samples were therefore taken from locations close to the center of the river bed.
### 3.2. Sampling Sites
In this study, orienting studies could not be performed in advance to develop the sampling methods, suitable particle sizes, and chemical analysis methods. Ninety river sediment and water samples were taken from the right side of the river bed along the direction of the river water flow, because this side is suitable for sampling the river bed morphology (Figure 1(b)).
### 3.3. Sample Preparing for Analysis
The samples were taken in September, when the water flow rate was low. To exclude overly large particles, the samples were first passed through a sieve with a hole diameter of 2 mm (BS 10 mesh). 2 kg of river sediment was taken at 250–500 m intervals along the Euphrates River (from 10 kilometers upstream of the Keban Dam to the Karakaya Dam, 50 kilometers), placed into plastic bags, numbered, and dried at room temperature. After drying, the samples were sifted through different sieve dimensions to determine the particle size fractions suitable for analysis (−200 mesh). In studies of metal and REE concentrations in sediments, many researchers prefer the fine-medium sand to silt grain size (−74 μm), as it shows very high concentrations, higher than −80 mesh (−180 μm) [8, 21, 40, 41]. To minimize the grain size dependence of heavy metal concentrations, the −74 μm fraction, representing medium-fine sand to silt, was used in the present study. Mechanical wet sieving was performed to separate the −74 μm sediment fraction from the bulk samples. Fifteen to twenty grams of each sample was freeze-dried for 8-9 hours. The sieved fractions were placed in clean porcelain bowls and dried at room temperature.
### 3.4. Chemical Analysis
The Elazığ Magmatic Complex includes zirconium- and titanium-bearing minerals [30], which are commonly known to contain amounts of REE. A decomposition method using HF was therefore preferred, because of its convenience for the decomposition of silicate minerals: HF is an efficient disintegration agent for the decomposition of zirconium silicate and apatite sourced from granitoids in nature, although it causes the loss of silica as SiF4 and part of the titanium as TiF4. Bulk samples were ground and sieved through a 200-mesh (about 0.074 mm pore size) stainless steel sieve for chemical analysis. 0.5 g of each sediment sample was leached with 90 mL HCl-HNO3-HF at 95°C for 1 hour, diluted to 150 mL, and then analyzed by ICP-OES. The extraction method of Saito et al. [42] was used to obtain the maximum REE concentration at 9.95 mL of 0.01–1 M nitric acid aqueous solution (pH < 4). Standard DS5 was used for the sediment analyses. The sediment samples were analyzed for REE at Acme Analytical Laboratories Ltd., Canada, by Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES). The river water samples have less than 0.1% total dissolved solids and were analyzed by ICP-OES at the Bureau Veritas environmental lab, Maxxam Analytics. The isotopic measurements were made at the Middle East Technical University (Ankara, Turkey) following the protocol of Köksal and Göncüoğlu [43]; the TLM-ARG-RIL-02 methods were adopted. An 80 mg aliquot was taken for analysis of Sr and Nd isotope ratios. The samples were dissolved in beakers in 4 mL of 52% HF at 160°C on the hotplate over four days. The samples were dried on the hotplate using 2.5 N HCl, and chemical separation of Sr on ionic chromatographic columns was prepared using 2 mL bis(ethylhexyl) phosphate and Bio-Rad AG50 W-X8, 100–200 mesh. After chemical separation of Sr, the REE fraction was collected using 6 N HCl. Sr isotopes were measured using a single Ta-activator with Re filament and 0.005 N H3PO4 [44]. 87Sr/86Sr ratios were corrected for mass fractionation by normalizing to 86Sr/88Sr = 0.1194, and the strontium standard (NBS 987) was measured more than 2 times. The chemical separation of Nd from the REE was made in a teflon column using 0.22 N HCl and 2 mL bis(ethylhexyl) phosphate. 143Nd/144Nd data were normalized to 146Nd/144Nd = 0.7219, and the neodymium standard (0.511848 ± 5) was measured more than 2 times.
## 4. Results
### 4.1. Nd-Sr Isotope Compositions of the Euphrates River Sediments
REE concentrations of the Euphrates River sediments from Keban Dam to Karakaya Dam are presented in Table 1, together with the isotopic compositions of the studied sediments and of sediments from different origins, summary statistical values, and the REE compositions of the Mississippi and Amazon River sediments. The 75 μm (200 mesh) size fraction of the Euphrates River sediment samples (A43, A61, and A89) was analyzed for 87Sr/86Sr and 143Nd/144Nd ratios (Figure 2 and Table 1). The study shows that the 87Sr/86Sr and 143Nd/144Nd isotopic compositions are 0.7053, 0.7048, and 0.7057 and 0.512654, 0.512836, and 0.512775, respectively. εNd(0) values were calculated using

$$\varepsilon_{\mathrm{Nd}}(0) = \left[ \frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{measured}}}{0.512636} - 1 \right] \times 10^{4} \quad (1)$$

(see [48, 49]).
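A minimal numeric check of equation (1) against the reported values (illustrative, not the authors' code):

```python
# CHUR reference value used in equation (1).
CHUR_143ND_144ND = 0.512636

def epsilon_nd(measured: float) -> float:
    """epsilon_Nd(0) = (measured / CHUR - 1) * 1e4, as in equation (1)."""
    return (measured / CHUR_143ND_144ND - 1) * 1e4

# Measured 143Nd/144Nd ratios for samples A43, A61, and A89 (Table 1(b)).
for sample, ratio in [("A43", 0.512654), ("A61", 0.512836), ("A89", 0.512775)]:
    print(sample, round(epsilon_nd(ratio), 2))
# -> A43 0.35, A61 3.9, A89 2.71, matching the reported 0.35, 3.9, and 2.7
```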
Table 1: Summary statistical values of REE from Euphrates River sediments. Density values (g/cm³) taken from Gupta and Krishnamurthy [45]. *Sediments from Sholkovitz [46]. Calculated εNd and 1000/Sr data from Martin and McCulloch [4]; **Burke et al. [1]; ***Bussy [2]; ****Akgül et al. [47]; +Mensel et al. [48].
(a) REE concentrations (N = 90):

| LREE | Density | Max./min. values | Euphrates River (ppm) | *Mississippi River | *Amazon River | HREE | Density | Max./min. values | Euphrates River (ppm) | *Mississippi River | *Amazon River |
|---|---|---|---|---|---|---|---|---|---|---|---|
| La | 6.14 | 3.4/15.9 | 8.57 | 60.8 | 349 | Tb | 8.23 | 0.18/0.47 | 0.33 | — | — |
| Ce | 8.16 | 7.5/34.4 | 18.11 | 125.4 | 707 | Dy | 8.55 | 0.63/3.12 | 2.17 | 7.46 | 39.7 |
| Pr | 6.77 | 0.88/4.46 | 2.22 | — | — | Ho | 8.79 | 0.17/0.63 | 0.43 | — | — |
| Nd | 7.00 | 5.39/12.1 | 7.70 | 56.4 | 355 | Er | 9.06 | 0.43/1.67 | 1.20 | 4.94 | 21.7 |
| Sm | 7.52 | 0.82/4.05 | 2.14 | — | — | Tm | 9.32 | 0.05/0.28 | 0.17 | — | — |
| Eu | 5.24 | 0.23/0.82 | 0.57 | 2.11 | 10.9 | Yb | 6.96 | 0.29/1.59 | 1.11 | 3.94 | 20.5 |
| Gd | 7.90 | 0.44/3.65 | 2.20 | 9.86 | 46.3 | Lu | 9.84 | 0.02/0.23 | 0.15 | 0.47 | 3.02 |
| Mean | | | 5.93 | — | — | Mean | | | 0.79 | — | — |
| St. deviation | | | 6.18 | — | — | St. deviation | | | 0.74 | — | — |
(b) Isotopic composition of the studied sediments and some examples from the world:

| Sample code | 143Nd/144Nd | 87Sr/86Sr | εNd(0) | 1000/Sr |
|---|---|---|---|---|
| A43 | 0.512654 | 0.7053 | 0.35 | 16.1 |
| A61 | 0.512836 | 0.7048 | 3.9 | 13.74 |
| A89 | 0.512775 | 0.7057 | 2.7 | 13.17 |
| Basaltic soil+ | 0.511783 | 0.705603 | 2.26 | 3.9 |
| Metased. soil+ | 0.512501 | 0.709646 | −2.67 | 6.57 |
| Metagreywackes+ | 0.512847 | 0.705374 | 4.08 | 2.49 |
| Sediments+ | 0.512805 | 0.704583 | 3.26 | 4.0 |
| **Jurassic-Cretaceous sediments | — | 0.707–0.708 | — | — |
| ***Arve River sediments | — | 0.728 | — | Related to Mont-Blanc granite |
| | — | 0.704 | — | Carbonate-rich rocks |
| ****Elazığ Magmatites, Upper Cretaceous | 0.512414–0.512851 | 0.706022–0.708451 | 7.5 | Diorites and granites |
| | — | | 3.5 | — |
Distribution of REE and the determined sample sites (A43, A61, and A89) for 87Sr/86Sr and 143Nd/144Nd isotopic analysis from the Euphrates River sediments.

The calculated εNd(0) values are 0.35, 3.9, and 2.7. Table 1 shows the radiogenic isotope compositions of the Euphrates River sediments, 87Sr/86Sr and 143Nd/144Nd, together with calculated εNd(0) data for sediments from different origins. The 143Nd/144Nd and 87Sr/86Sr isotope compositions of the Euphrates River sediments suggest that the REE pattern of the river sediments changes systematically with the compositions of the rocks in the drainage area. The range of 87Sr/86Sr ratios found in the Euphrates River sediments may be explained by comparison with the metagreywackes and sediments of Table 1 (0.705374 and 0.704583, resp.). The calculated 1000/Sr ratios are 16.1 (A43), 13.74 (A61), and 13.17 (A89), and Nd concentrations are 5.98 (A43), 6.45 (A61), and 8.40 (A89) ppm, while La concentrations are 9.8 (A43), 7.8 (A61), and 4.5 (A89) ppm. The comparison of the 1000/Sr and La values indicates that the contribution of weathered felsic igneous rocks decreases downstream along the Euphrates River, while that of mafic igneous rocks increases.
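As a consistency check, the εNd(0) values of equation (1) and the Sr concentrations implied by the 1000/Sr ratios can be reproduced directly from Table 1(b). Below is a minimal sketch, assuming 1000/Sr is expressed per ppm of Sr:

```python
CHUR_ND = 0.512636  # present-day CHUR 143Nd/144Nd, as in equation (1)

# sample: (measured 143Nd/144Nd, 1000/Sr), from Table 1(b)
samples = {
    "A43": (0.512654, 16.10),
    "A61": (0.512836, 13.74),
    "A89": (0.512775, 13.17),
}

for name, (nd_ratio, inv_sr) in samples.items():
    eps_nd = (nd_ratio / CHUR_ND - 1) * 1e4  # equation (1)
    sr_ppm = 1000 / inv_sr                   # implied Sr concentration
    print(f"{name}: eps_Nd(0) = {eps_nd:+.2f}, Sr ~ {sr_ppm:.0f} ppm")
# Prints +0.35, +3.90, and +2.71, matching the reported 0.35, 3.9, and 2.7.
```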
### 4.2. REE Results of the Euphrates River Sediments
Large variations are observed in the distribution of the REE in the Euphrates River sediments (Table 1). La concentrations range from 3.4 to 15.9 ppm, while Lu concentrations range from 0.02 to 0.23 ppm. Mean light REE concentrations are 7.5 times higher than mean heavy REE concentrations, as verified in the short check below. Ce concentrations range from 7.5 to 34.4 ppm. The plots in Figure 2 indicate that La and Ce show the highest concentrations along the flow direction of the Euphrates River. The REE concentrations decrease at the A37 sample site because of the nearby thrust zone; the circulation of mixing waters therefore probably influenced the REE concentrations in the studied river sediments. The REE plots for the Euphrates River sediments in this study exhibit a small degree of variation, and the abundance of the REE in the river sediments probably reflects the mineralogical character of the regional rocks. The Figure 2 plots indicate that the variations of the light REE along the flow direction are higher up to the A51 sample site, except at the A34, A35, A36, and A37 sites. The heavy REE concentrations increase from the A51 sample site to A90 because of the mineralogical character of the mafic volcanic and metaophiolitic rocks.
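The stated 7.5-fold LREE-over-HREE enrichment follows directly from the mean concentrations in Table 1(a):

```python
mean_lree, mean_hree = 5.93, 0.79  # mean LREE and HREE in ppm, Table 1(a)
print(round(mean_lree / mean_hree, 1))  # -> 7.5
```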
### 4.3. REE Results of the Euphrates River Water
REE concentrations of the Euphrates River waters are presented in Table 3, together with water data for the Amazon, Indus, Mississippi, and Ohio Rivers from Goldstein and Jacobsen [13]. La concentrations range from 0.02 to 2.63 ppb, and Ce concentrations range from 0.12 to 5.43 ppb. The highest Ce and La values in the water samples are observed at the A34 site, where Pr, Sm, Gd, Dy, Er, and Yb also reach their highest values (0.07, 0.07, 0.07, 0.06, 0.03, and 0.03 ppb, resp.). However, the lowest Ce and La concentrations in the studied sediments were observed at the same site. The results indicate that the circulation of mixing water along the tectonic zones can be a much more important control on the absolute abundance of the REE in river sediments than weathering of the regional rocks.
## 5. Discussion
The results suggest that the tectonic zone is an important factor controlling the abundance of the REE in the Euphrates River sediments. Moreover, the study indicates that the REE compositions depend on the type of weathered regional rocks, together with subtropical climatic influences, water circulation, riverbed morphology, and secular variation. The comparison of the 87Sr/86Sr compositions of the Euphrates River sediments with those of basaltic and metasedimentary soils from Sholkovitz [46] in Table 1 shows that the studied sediments have basaltic and metagreywacke character (Figure 3). Bussy [2] obtained a 87Sr/86Sr ratio of 0.728 for the Arve River sediments, which are related to the Mont-Blanc granites, whereas the 87Sr/86Sr ratio of carbonate-rich rocks is 0.704 (Table 1). The lower 87Sr/86Sr ratios (0.7053, 0.7048, and 0.7057 at sample sites A43, A61, and A89, resp.) of the river sediments in the Keban Dam Lake area, downstream of the contact between the Permo-Triassic shallow marine metasediments and the Upper Cretaceous crystalline rocks of the Elazığ Magmatic Complex, are caused by mixing of these two sources. The 143Nd/144Nd isotopic compositions determined in the studied river sediments are 0.512654, 0.512836, and 0.512775 (Table 1), values similar to the composition of metagreywackes. The 143Nd/144Nd ratios of granodiorites in the Upper Cretaceous Elazığ Magmatites range from 0.512414 to 0.512851 [47] (Table 1). These comparisons indicate that weathered felsic igneous rocks contribute more to the river sediments at the A43 sample site, because of the Upper Cretaceous granitic rocks of the Elazığ Magmatic Complex, than at the A61 and A89 sample sites, whose compositions reflect weathered basaltic igneous rocks derived from the Upper Cretaceous basic volcanic rocks of the same complex. Figure 3 shows that neodymium isotope ratios in the New England fold belt change from positive εNd(0) values in Tertiary basalt to negative values in granitoids and metapelitic rocks [48]. The positive εNd(0) values reflect the contribution of weathering of the basic volcanic rocks to the isotope compositions of the downstream sediments. Thus, regional and local studies indicate that the Euphrates River sediments have different isotope compositions owing to mixing of different weathered source rocks (e.g., the Permo-Triassic Keban Metamorphites and the granodioritic and basic volcanic rocks of the Upper Cretaceous Elazığ Magmatic Complex). The Euphrates River sediments have the lowest average REE concentrations, while the Mississippi and Amazon River sediments yield the highest values. Table 1 shows that the average LREE (La, Ce, Nd, Eu, and Gd) and HREE (Dy, Er, Yb, and Lu) compositions of the Euphrates River sediments are lower than those of the Mississippi River sediments by factors of 3.63 to 8.33 and lower than those of the Amazon River sediments by factors of 18.34 to 39.52. Light and middle REE enrichment in river sediments may be related to apatite-rich rocks [52, 53]. However, Kalender and Çiçek Uçar [21] suggested that the calculated enrichment factors of the heavy REE exceed those of the light REE in the tributaries of the Euphrates River because of the Fe-Mn oxyhydroxide adsorption capacity for HREE, linked to the basic volcanic rocks that source the HREE. Yang et al. [8] stated that zircon contributes to the bulk HREEs in sediments because of the relatively high abundance of HREEs in zircon. The lithologic units along the Euphrates River bed contribute to the bulk LREE concentrations as a result of apatite-rich, zircon-poor Upper Cretaceous granodiorites. The mean La/Yb ratio of 7.72 in the studied sediment samples indicates a high erosional rate, because La may be removed from the crustal source via weathering, in agreement with the findings of Obaje et al. [18]. The distribution trend of the REE along the flow direction of the Euphrates River at sample locations A32, A33, A34, A35, and A36 indicates that the thrust zone between the Upper Cretaceous Elazığ Magmatic Complex and the Permo-Triassic Keban Metamorphites lowers the REE composition of the river sediments through mixing water circulation. Average LREE concentrations upstream are higher than average HREE concentrations downstream because of the shallow marine metasediments and felsic magmatic rocks (Permo-Triassic Keban Metamorphites and felsic rocks of the Upper Cretaceous Elazığ Magmatites) exposed upstream, whereas the basic volcanic rocks (Maden Complex and Kömürhan metaophiolites) are exposed downstream along the flow direction of the Euphrates River. Obaje et al. [18] and Ramesh et al. [54] stated that positive Ce anomalies are related to the formation of Ce4+ and Ce hydroxides, terrigenous input, and diagenetic conditions. The positive Ce anomalies may indicate hydromorphic distribution of the REE in the river sediments. According to some researchers, Eu anomalies receive less contribution from felsic magmatic rock weathering. The negative Eu anomalies (<0.01 detection limit) indicate that the REE compositions in the river sediments reflect a contribution from felsic magmatic rock weathering rather than from terrigenous input and basic magmatic rock weathering. A lower river flow velocity may be responsible for the homogeneous REE distribution, in agreement with Obaje et al. [18]. The average chondrite-normalized values indicate LREE and HREE enrichment, with summarized statistical values ranging from 26.77 to 7.35 and from 7.23 to 4.68, respectively (Table 2).
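Because Table 1 gives the raw mean concentrations and Table 2 the normalized means, the reference abundances used for normalization can be back-calculated as raw mean divided by normalized mean. The sketch below does this for a few elements as an internal consistency check; the authoritative NAS, UCC, and chondrite values remain those of the sources cited in Table 2.

```python
# Raw Euphrates means in ppm (Table 1) and chondrite-normalized means (Table 2).
raw = {"La": 8.57, "Ce": 18.11, "Nd": 7.70, "Yb": 1.11}
norm_ch = {"La": 26.77, "Ce": 20.12, "Nd": 14.26, "Yb": 6.18}

for el in raw:
    # Implied chondrite abundance = raw concentration / normalized value.
    print(f"{el}: implied chondrite value ~ {raw[el] / norm_ch[el]:.2f} ppm")
```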
The chondrite-normalized patterns illustrate that the REE composition of the Euphrates River sediments differs from the chondrite composition, while the REE enrichment is close to NAS and UCC. The REE normalized values of the Nigerian Gora River from Obaje et al. [18] are shown in Table 2. The Ce/Ch. value is higher in the Euphrates River sediments (20.12) than in the Nigerian Gora River sediments (17.96) because of sulfide-rich mineralization in the studied area, especially the Keban polymetallic mine deposit. Ramesh et al. [54] revealed that positive Ce anomalies indicate terrigenous input, depositional environment, and diagenetic conditions owing to the formation of Ce4+ and stable Ce hydroxides. According to Nielsen et al. [55], Tl and Ce may be controlled by residual sulfide and clinopyroxene, respectively, during mantle melting, owing to their highly different ionic charges and radii. Luo et al. [56] indicated that while the adsorption ability of REE on colloidal particles decreases from light (La) to heavy (Lu), the complexation of REE with carbonate increases, and larger colloidal particles have a stronger ability to adsorb Ce released by weathering of granitic rocks. However, both LREE and HREE concentrations are lower than NAS and UCC (Figures 4 and 5). The LREE patterns show that the Sm and Eu patterns are close to 1 and the Gd pattern is higher than 1, unlike La, Ce, Pr, and Nd (Figures 4(a)–4(g)). NAS- and UCC-normalized Gd/Yb ratios >1 indicate that apatite from granodioritic rocks contributed to the river sediment REE compositions [57–61]; a short check of this ratio is given below. According to Leybourne and Johannesson [60], Eu is mobilized during hydromorphic transport compared to Sm and Gd; this study, however, reveals that Sm and Eu are mobilized more than Gd, so that Gd enrichment is observed in the river sediments. The HREE patterns are enriched relative to chondrite, but the NAS- and UCC-normalized HREE patterns are close to 1 (Figures 5(a)–5(g)). All of the river sediment REE compositions display enrichment relative to the chondrite REE composition.
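The Gd/Yb statement can be verified from the normalized means in Table 2, since the ratio of two normalized values equals the normalized Gd/Yb ratio. A short check under that reading:

```python
gd = {"NAS": 0.45, "UCC": 0.58}  # normalized Gd means, Table 2
yb = {"NAS": 0.36, "UCC": 0.51}  # normalized Yb means, Table 2

for ref in gd:
    print(f"(Gd/Yb)_{ref} = {gd[ref] / yb[ref]:.2f}")  # 1.25 and 1.14, both > 1
```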
Table 2

North American Shale (NAS), upper continental crust (UCC), and chondrite (Ch.) values from Yang et al. [8], Condie [22], Taylor et al. [23], and Sholkovitz [50, 51], and normalized REE summary statistical values (ppm) in Euphrates River sediments; ∗Obaje et al. [18].

| LREE | Mean | St. dev. | ∗Nigerian Gora River | HREE | Mean | St. dev. | ∗Nigerian Gora River |
|---|---|---|---|---|---|---|---|
| La/NAS | 0.28 | 0.08 | — | Tb/NAS | 0.39 | 0.11 | — |
| La/UCC | 0.29 | 0.09 | — | Tb/UCC | 0.52 | 0.15 | — |
| La/Ch. | 26.77 | 8.08 | 39.76 | Tb/Ch. | 6.51 | 1.85 | — |
| Ce/NAS | 0.27 | 0.09 | — | Dy/NAS | 0.52 | 0.13 | — |
| Ce/UCC | 0.28 | 0.09 | — | Dy/UCC | 0.62 | 0.15 | — |
| Ce/Ch. | 20.12 | 6.34 | 17.96 | Dy/Ch. | 7.23 | 1.78 | — |
| Pr/NAS | 0.29 | 0.09 | — | Ho/NAS | 0.42 | 0.10 | — |
| Pr/UCC | 0.31 | 0.10 | — | Ho/UCC | 0.53 | 0.13 | — |
| Pr/Ch. | 17.10 | 5.46 | — | Ho/Ch. | 5.74 | 1.41 | — |
| Nd/NAS | 0.28 | 0.06 | — | Er/NAS | 0.42 | 0.10 | — |
| Nd/UCC | 0.30 | 0.06 | — | Er/UCC | 0.52 | 0.13 | — |
| Nd/Ch. | 14.26 | 2.97 | 18.96 | Er/Ch. | 5.73 | 1.39 | — |
| Sm/NAS | 0.38 | 0.11 | — | Tm/NAS | 0.35 | 0.09 | — |
| Sm/UCC | 0.48 | 0.14 | — | Tm/UCC | 0.51 | 0.14 | — |
| Sm/Ch. | 10.20 | 3.00 | — | Tm/Ch. | 5.22 | 1.41 | — |
| Eu/NAS | 0.48 | 0.12 | — | Yb/NAS | 0.36 | 0.09 | — |
| Eu/UCC | 0.65 | 0.17 | — | Yb/UCC | 0.51 | 0.12 | — |
| Eu/Ch. | 7.70 | 1.98 | 404.62 | Yb/Ch. | 6.18 | 1.47 | 25.52 |
| Gd/NAS | 0.45 | 0.12 | — | Lu/NAS | 0.33 | 0.09 | — |
| Gd/UCC | 0.58 | 0.16 | — | Lu/UCC | 0.47 | 0.12 | — |
| Gd/Ch. | 7.35 | 1.98 | 58.50 | Lu/Ch. | 4.68 | 1.23 | — |

Table 3
REE concentrations in river water in ppb; data for the Amazon, Indus, Mississippi, and Ohio Rivers (dissolved load) from Goldstein and Jacobsen [13].

| REE (N = 90) | Euphrates (river water) | Amazon (dissolved load) | Indus (dissolved load) | Mississippi (dissolved load) | Ohio (dissolved load) |
|---|---|---|---|---|---|
| La | 0.25 | 0.074 | 0.0029 | 0.020 | 0.0063 |
| Ce | 0.54 | 0.21 | 0.0024 | 0.010 | 0.010 |
| Pr | 0.04 | — | — | — | — |
| Nd | 0.18 | 0.13 | 0.0032 | 0.020 | 0.011 |
| Sm | 0.04 | 0.034 | 0.00071 | 0.004 | 0.0025 |
| Eu | <0.01 | 0.008 | 0.00022 | 0.001 | 0.0006 |
| Gd | 0.04 | — | 0.050 | — | — |
| Tb | <0.01 | — | — | — | — |
| Dy | 0.04 | 0.031 | 0.036 | 0.0075 | 0.006 |
| Ho | <0.01 | — | — | — | — |
| Er | 0.02 | 0.016 | 0.017 | 0.0065 | 0.005 |
| Tm | <0.01 | — | — | — | — |
| Yb | 0.2 | 0.015 | 0.0014 | — | 0.0036 |
| Lu | <0.01 | — | 0.0021 | — | 0.0006 |

Figure 4
Normalized patterns of the LREE average concentrations (N = 90) of Euphrates River sediments relative to NAS, UCC, and Ch.: (a) La, (b) Ce, (c) Pr, (d) Nd∗ (N = 50), (e) Sm, (f) Eu, and (g) Gd.

Figure 5
Normalized patterns of the HREE average concentrations (N = 90) of Euphrates River sediments relative to NAS, UCC, and Ch.: (a) Tb, (b) Dy, (c) Ho, (d) Er, (e) Tm, (f) Yb, and (g) Lu.

Figure 3
Diagram of Nd versus Sr isotopic composition in rock, soil, and sediment samples from Martin and McCulloch [4]. Lines connect rock and soil samples taken from the same locality. Outlined fields show the isotopic compositions of the New England granitoids, metapelitic rocks, and metagraywackes from the New England fold belt [49].

REE compositions of the Euphrates River water were compared with REE in the dissolved load of the Amazon, Indus, Mississippi, and Ohio River waters from Goldstein and Jacobsen [13] (Table 3). Figure 6 indicates that the REE compositions of the Euphrates River water are higher than the REE compositions in the dissolved load of the Amazon, Indus, Mississippi, and Ohio Rivers; the REE patterns of the Euphrates River water are, however, similar to those of the dissolved load of these rivers. The Yb content of the Indus River water is higher, and the Ce content of the Mississippi River water lower, than in the Euphrates River water. According to Goldstein and Jacobsen [13], high Yb values in suspended materials may be derived from older rocks. Goldberg et al. [62] indicated that Ce depletion in river waters in a high-pH environment may result from preferential removal of Ce4+ onto Fe-Mn oxide coatings on particles. This indicates that the suspended material load in the Euphrates River water is probably higher than in the Mississippi, Ohio, and Indus River waters. Ce anomalies are calculated using the following equation [13]:

$$\mathrm{Ce}^{*}=\frac{3\,\mathrm{Ce}_{\mathrm{NAS}}}{2\,\mathrm{La}_{\mathrm{NAS}}+\mathrm{Nd}_{\mathrm{NAS}}}\quad(2)$$

which, with the NAS-normalized values for the Euphrates River water, gives

$$\mathrm{Ce}^{*}=\frac{3\times0.000024}{2\times0.000015+0.000020}=1.16\quad(3)$$

As shown, the positive Ce anomaly (Ce∗ > 1) supports the fixation of Ce on clays at pH > 7. The calculated NAS-normalized La/Yb values in the Euphrates River water and sediment are 0.098 and 0.77, respectively. It is thus apparent that both the Euphrates River waters and sediments are enriched in heavy REE relative to light REE according to the NAS-normalized patterns.

Figure 6
Distribution of REE in the Euphrates River waters and in the Amazon, Indus, Mississippi, and Ohio River waters; data from Goldstein and Jacobsen [13].
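Equation (2) is a simple function of the NAS-normalized La, Ce, and Nd values, as sketched below. Note that the rounded inputs quoted in equation (3) evaluate to about 1.4; the published Ce∗ of 1.16 presumably reflects unrounded normalized concentrations.

```python
def ce_anomaly(la_nas: float, ce_nas: float, nd_nas: float) -> float:
    """Ce anomaly of equation (2): Ce* = 3*Ce_NAS / (2*La_NAS + Nd_NAS)."""
    return 3 * ce_nas / (2 * la_nas + nd_nas)

# NAS-normalized Euphrates water values as quoted (rounded) in equation (3).
ce_star = ce_anomaly(la_nas=0.000015, ce_nas=0.000024, nd_nas=0.000020)
print(f"Ce* = {ce_star:.2f}")  # Ce* > 1 marks a positive Ce anomaly
```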
## 6. Conclusions
(1) This paper indicates that contributions from three components control the isotopic composition of the studied sediments: (a) Permo-Triassic carbonate-rich metasediments, (b) felsic magmatic rocks, and (c) mafic volcanic rocks of the Upper Cretaceous Elazığ Magmatic Complex.

(2) The average LREE (La, Ce, Nd, Eu, and Gd) and HREE (Dy, Er, Yb, and Lu) compositions of the Euphrates River sediments are lower than the Mississippi River and Amazon River sediment compositions by factors of 3.63 to 8.33 and 18.34 to 39.52, respectively.

(3) The Euphrates River sediments have higher LREE than HREE concentrations; in addition, the thrust zone close to the Euphrates River bed shows low REE compositions because of fast water circulation.

(4) Average LREE concentrations upstream in the Euphrates River are higher than average HREE concentrations downstream because of the felsic magmatic rocks exposed upstream.

(5) The La/Yb ratio (7.72) indicates a high erosion rate, and La may have been added from crustal sources via weathering processes.

(6) The positive Ce and La anomalies indicate both terrigenous input and a contribution of oxidative compounds from sulfide-rich mineralization in the Euphrates River bed sediments.

(7) The chondrite, NAS, and UCC normalized patterns show that the REE compositions of the Euphrates River sediments differ from chondrite but are similar to NAS and UCC.

(8) The Sm and Eu patterns are close to 1, the Gd pattern is higher than 1 (>1), and NAS- and UCC-normalized Gd/Yb ratios greater than 1 indicate that the source of the REE may be the apatite-rich granodioritic rocks of the Elazığ Magmatic Complex; terrigenous sediments and lithological control also strongly influence the Euphrates River sediment REE compositions.

(9) The Euphrates River waters have the highest LREE and HREE values in comparison with the other river waters considered (the Amazon, Indus, Ohio, and Mississippi) because of the regional felsic and mafic lithological units. Owing to the circulation of mixing water, REE concentrations increase in the river water but decrease in the river sediments.
---
*Source: 1012021-2016-05-23.xml* | 1012021-2016-05-23_1012021-2016-05-23.md | 45,640 | REE Geochemistry of Euphrates River, Turkey | Leyla Kalender; Gamze Aytimur | Journal of Chemistry
(2016) | Chemistry and Chemical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1012021 | 1012021-2016-05-23.xml | ---
## Abstract
The study area is located on the Euphrates River at 38°41°32.48′′N–38°14′24.10′′N latitude and 39°56′4.59′′E–39°8°13.41′′E longitude. The Euphrates is the longest river in Western Asia. The lithological units observed from the bottom to the top are Permo-Triassic Keban Metamorphites, Late Cretaceous Kömürhan Ophiolites, Upper Cretaceous Elazığ Magmatic Complex, Middle Eocene Maden Complex and Kırkgeçit Formation, Upper Pliocene and Lower Eocene Seske Formation and Upper Miocene, Pliocene Karabakır and Çaybağı Formations, Palu Formation, and Holocene Euphrates River sediments. The geochemical studies show that87Sr/86Sr and 143Nd/144Nd isotopic compositions in the Euphrates River bank sediments are 0.7053, 0.7048, and 0.7057 and 0.512654, 0.512836, and 0.512775, respectively. These values indicate mixing of both carbonate-rich shallow marine sediment and felsic-mafic rocks from Elazığ Magmatic Complex into the stream sediments. The positive ε
N
d
(
0
) values (0.35, 3.9, and 2.7) are higher downstream in the studied sediments due to weathering of the mafic volcanic rocks. The chondrite, NAS, and UCC normalized patterns show that the REE compositions of the Euphrates River sediments are higher than chondrite composition but close to NAS and UCC. The river sediments in the tectonic zone and the weathered granodioritic rocks of the Elazığ Magmatic complex affect upstream water compositions.
---
## Body
## 1. Introduction
A number of researchers have studied Nd-Sr isotopic and trace element geochemistry of river sediments and soils as tracers of clastic sources. The geochemical characterizations and Sr-Nd isotopic fingerprinting of sediments in any fluvial system can be done using radiogenic isotopic compositions [1–5]. Rare earth elements (REEs) compositions have been studied in stream sediments [6–8] and in chemical weathering of drainage systems [9–12]. A number of researchers have studied REE composition of both river sediments and river water and discovered that heavy REE concentration is higher in the river sediments than in suspended matter in river water. They also indicated that shale, Upper Continental Crust, and chondrite normalized REE patterns showed that chemical weathering from source rocks in the continental crust, erosion, and terrigenous fluviatile sediment sources can be distinguished using the REE compositions of rivers [13–18]. Leybourne et al. [19] indicated that Ce and Eu can be redox sensitive and attributed to determination of redox conditions. Yang et al. [8] stated that source rock composition is a more important factor affecting REE composition than weathering processes. There are fewer studies on the Euphrates River. Kalender and Bölücek [20] studied the stream sediments in the north Keban Dam Lake in the Eastern Anatolian district and demonstrated that more REEs are transported with Fe and Mn rich oxides and fine size fraction sediments (e.g., clay minerals) via adsorption. Kalender and Çiçek Uçar [21] indicated that the calculated enrichment factor values of the heavy REE are more than those of light REE in the Geli stream sediments. Geli stream is a tributary of the Euphrates River. Rivers carry the weathered rock products from the continents to the dam lakes, natural lakes, and the sea. Thus, this study focuses on the REE concentrations in river bank sediments from the initial point of the Euphrates River (10 kilometers upstream of Keban Dam) to Karakaya Dam Lake. In order to evaluate distribution of REE along the flowing direction of the Euphrates River, the sediment sources and lithological controls were identified using Upper Continental Crust (UCC), North American Shale (NAS), and Chondrite (Ch) normalized REE patterns (all average values taken from [8, 22–25]). Goldstein and Jacobsen [13] studied REE compositions in river water, and major rivers have LREE enriched patterns relative to the NAS, and negative Ce anomalies occur at high pH. This paper firstly presents REE concentrations and includes source rock composition of the Euphrates River sediments and waters.
## 2. Geology
The Euphrates River is located in the Eastern Anatolian district in Turkey and is located on the active tectonic zone of the East Anatolian fault zone. The East Anatolian fault zone is seismically one of the most active regions in the world and is located within the Mediterranean Earthquake Zone, which is a complicated deformation area that was formed by the continental collision between the African-Arabian and Eurasian continents. These deformations involve thrust faults, suture zones, and active strike slip and normal faults, as well as basin formations arising from these faults. The Euphrates River is located on the East Anatolian fault zone and is formed by the mixing of the Karasu River and Murat River 10 kilometers upstream of the Keban Dam. According to Frenken [27], the Euphrates River length is 1100 km from Palu to the Red Sea (Figures 1(a) and 1(b)). The studied sediments were sampled along approximately 50 kilometers of the Euphrates River length. The main stratigraphic units found in the Euphrates River basin range from the Permo-Triassic Keban Metamorphites to Plio-Quaternary Palu Formation (Figure 1(b)). The Keban Metamorphites outcrop on both right and left banks of the Euphrates River. Considering the regional scale, the Keban Metamorphites are represented by marble, recrystallized limestone, calc-schist, metaconglomerate, and calc-phyllite in the study area [28]. Additionally, in the study area, the Upper Cretaceous Elazığ Magmatic Complex consists of volcanic rocks (basalt, andesite, pillow lava, dacite, and volcanic breccia), subvolcanic rocks (aplite, microdiorite, and dolerite), and plutonic rocks (diorite, tonalite, granodiorite, granite, and monzonite). The magmatic rocks are overlain by recrystallized limestone of the Permo-Triassic Keban Metamorphites [29]. The Late Cretaceous Kömürhan Ophiolites are part of the southeast Anatolian ophiolite belt which are formed in a suprasubduction zone within the southern Neo-Tethys [30]. The Kömürhan Ophiolites consist of dunite, layered and isotropic gabbros, plagiogranite, sheet dyke complex, andesitic and basaltic rocks, and volcanosedimentary rocks. The unit is observed in the Karakaya Dam Lake area (Figure 1(b)). Upper Paleocene and Lower Eocene massive limestones are named the Seske Formation which is characterized by interbedded clastic and carbonate rocks. The stratigraphic position of the Seske Formation indicates the extent of how Neo-Tethys was controlled by the tectonic and topographic features of the region during the Eocene [31, 32]. The Seske Formation is observed along the Karakaya Dam Lake (Figure 1(b)). Middle Eocene Maden Complex is composed of basaltic and andesitic rocks and limestone, conglomerates, sandstones, and mudrocks (marl) [29, 33–35]. Middle Eocene Kırkgeçit Formation consists of marine conglomerates, marls, and limestone from the bottom to the top [36]. The Alibonca Formation was deposited after closure of the Neo-Tethys Ocean in Mesozoic time. The unit is composed of sandy limestone and marls which are observed in the Keban Dam Lake area [37] (Figure 1(b)). Upper Miocene-Pliocene Karabakır and Çaybağı Formations consist of sandstone, mudrock, marls, tuff, and basaltic rocks [36]. The units are observed in the northeast of the Keban Dam Lake (Figure 1(b)). The Çaybağı Formation is named by Türkmen [38] at the Çaybağı township in the east of Elazığ. Pliocene-Quaternary units are named the Palu Formation by Kerey and Türkmen [39]. 
The units consist of quaternary alluvial and fluvial deposits along the level bank of the Murat River which is the initial point of the Euphrates River (Figures 1(a) and 1(b)).Figure 1
(a) Location map. (b) Geology map of the Euphrates River and the sampling sites (from 1 to 90, the geology map was modified from Herece et al. [26]).
(a)
(b)
## 3. Analytical Methods
### 3.1. Climatic Information
Maximum flows of the Euphrates River occurred from February through April, whereas minimum flows occurred from August through October. The annual mean rainfall during that period was 372 mm, and the air temperature varied between 15.21°C (Elazığ) and 16.31°C (Malatya) with the highest and the lowest temperature of 34°C and minus 10, respectively, between 1992 and 2001. The continental climate of the Euphrates Basin is a subtropical plateau climate (data was taken from reports by the Elazığ Meteorology Department). The Euphrates River sediment samples were collected in September, the period of minimum flow of the Euphrates River water. It is also good to take into consideration the effect on chemical compositions of the river bed sediment of processes erosion of the earth’s surface. The sediment samples were taken from locations close to the center of the river bed in consideration of the erosional process and chemical composition of the river bed sediments.
### 3.2. Sampling Sites
In this study, directing studies could not be performed in advance of the development of sample taking methods and suitable particle size and chemical analysis methods. Ninety river sediments and water samples were taken from the right side of the river bed along the direction of the river water flow because it is suitable for sampling of river bed morphology (Figure1(b)).
### 3.3. Sample Preparing for Analysis
The samples were taken in September, when the water flow rate was low. In order to prevent the existence of particles in the sediment samples that were too large, the samples were sifted through sieve with hole diameter of 2 mm (BS10 mesh) so as to obtain the suitable particle sizes. 2 Kg of the river sediment samples was taken at 250–500 m intervals along the Euphrates River (from 10 kilometers upstream of the Keban Dam to Karakaya Dam, 50 kilometers) and placed into plastic bags, numbered, and dried at room temperature. After drying, the samples were sifted to different sieve dimensions in order to determine the particle size fractions suitable for analysis (−200 mesh). In the study of some metals and REE concentrations in sediments, many researchers prefer the grain size of fine-medium sand to silt (−74μm) as it shows very high concentrations, higher than −80 mesh (−180 μm) [8, 21, 40, 41]. In order to minimize the grain size dependencies of heavy metals, concentrations of −75 μm fraction, representing medium fine sand to silt, were used in the present study. Mechanical wet sieving was performed to separate the −74 μm sediment fraction from the bulk samples. Fifteen to twenty grams of each sample was freeze dried for 8 and 9 hours. The sieved fractions were placed in clean porcelain bowls and dried at room temperature.
### 3.4. Chemical Analysis
Elazığ Magmatic Complex includes zirconium- and titanium-bearing minerals [30]. As commonly known, zirconium- and titanium-bearing minerals contain amounts of REE. This method was preferred due to its convenience for the decomposition of the silicate minerals using HF. HF is an efficient disintegration agent used for the decomposition of zirconium silicate and apatite source from granitoids in nature. This decomposition method causes the loss of silica as SiF4 and a part of titanium as TiF4. Bulk samples were ground and sieved through 200-mesh (about 0.074 mm pore size) stainless steel sieve for chemical analysis. 0.5 g of each sediment sample was leached with 90 mL HCl-HNO3-HF at 95°C for 1 hour, diluted to 150 mL, and then analyzed by ICP-OES. The extraction method used by Saito et al. [42] was used to obtain the maximum REE concentration at 9.95 mL 0.01–1 m nitric acid aqueous solution (pH < 4). Standard DS5 was used for the sediment analyses. The sediment samples were analyzed for REE in Acme Analytical Laboratories Ltd., Canada, by Inductively Coupled Plasma-Optical Emission Spectrometer (ICP-OES). The river water samples have less than 0.1% total dissolved solids, and these analyses were used by ICP-OES at Bureau Veritas environmental lab, Maxxam Analytics. The isotopic measurements were made at the Middle East Technical University (Ankara, Turkey) following the protocol of Köksal and Göncüoğlu [43]. TLM-ARG-RIL-02 methods were adopted. An 80 mg aliquot was taken for analysis of Sr and Nd isotope ratios. The samples were dissolved in beakers in a 4 mL 52% HF at 160°C on the hotplate along four days. The samples were dried on the hotplate using 2.5 N HCl and 2 mL bis(ethylhexyl) phosphate using Bio-Rad AG50 W-X8, 100–200 mesh, and chemical seperation of Sr ionic chromatographic columns was prepared. After chemical separation of Sr, REE fractionation was collected using 6 N HCl. Sr isotopes were measured using a single Ta-activator with Re filament and 0.005 N H3PO4 [44]. 87Sr/86Sr ratios were corrected for mass fractionation by normalizing to 86Sr/88Sr = 0.1194, and strontium standard (NBS 987) was measured more than 2 times. The chemical separation of Nd from REE was made in a teflon column using 0.22 N HCl and 2 mL bis(ethylhexyl) phosphate. 143Nd/144Nd data were normalized by 146Nd/144Nd = 0.7219, and neodymium standard (0.511848
±
5) was measured more than 2 times.
## 3.1. Climatic Information
Maximum flows of the Euphrates River occurred from February through April, whereas minimum flows occurred from August through October. The annual mean rainfall during that period was 372 mm, and the air temperature varied between 15.21°C (Elazığ) and 16.31°C (Malatya) with the highest and the lowest temperature of 34°C and minus 10, respectively, between 1992 and 2001. The continental climate of the Euphrates Basin is a subtropical plateau climate (data was taken from reports by the Elazığ Meteorology Department). The Euphrates River sediment samples were collected in September, the period of minimum flow of the Euphrates River water. It is also good to take into consideration the effect on chemical compositions of the river bed sediment of processes erosion of the earth’s surface. The sediment samples were taken from locations close to the center of the river bed in consideration of the erosional process and chemical composition of the river bed sediments.
## 3.2. Sampling Sites
In this study, directing studies could not be performed in advance of the development of sample taking methods and suitable particle size and chemical analysis methods. Ninety river sediments and water samples were taken from the right side of the river bed along the direction of the river water flow because it is suitable for sampling of river bed morphology (Figure1(b)).
## 3.3. Sample Preparing for Analysis
The samples were taken in September, when the water flow rate was low. In order to prevent the existence of particles in the sediment samples that were too large, the samples were sifted through sieve with hole diameter of 2 mm (BS10 mesh) so as to obtain the suitable particle sizes. 2 Kg of the river sediment samples was taken at 250–500 m intervals along the Euphrates River (from 10 kilometers upstream of the Keban Dam to Karakaya Dam, 50 kilometers) and placed into plastic bags, numbered, and dried at room temperature. After drying, the samples were sifted to different sieve dimensions in order to determine the particle size fractions suitable for analysis (−200 mesh). In the study of some metals and REE concentrations in sediments, many researchers prefer the grain size of fine-medium sand to silt (−74μm) as it shows very high concentrations, higher than −80 mesh (−180 μm) [8, 21, 40, 41]. In order to minimize the grain size dependencies of heavy metals, concentrations of −75 μm fraction, representing medium fine sand to silt, were used in the present study. Mechanical wet sieving was performed to separate the −74 μm sediment fraction from the bulk samples. Fifteen to twenty grams of each sample was freeze dried for 8 and 9 hours. The sieved fractions were placed in clean porcelain bowls and dried at room temperature.
## 3.4. Chemical Analysis
Elazığ Magmatic Complex includes zirconium- and titanium-bearing minerals [30]. As commonly known, zirconium- and titanium-bearing minerals contain amounts of REE. This method was preferred due to its convenience for the decomposition of the silicate minerals using HF. HF is an efficient disintegration agent used for the decomposition of zirconium silicate and apatite source from granitoids in nature. This decomposition method causes the loss of silica as SiF4 and a part of titanium as TiF4. Bulk samples were ground and sieved through 200-mesh (about 0.074 mm pore size) stainless steel sieve for chemical analysis. 0.5 g of each sediment sample was leached with 90 mL HCl-HNO3-HF at 95°C for 1 hour, diluted to 150 mL, and then analyzed by ICP-OES. The extraction method used by Saito et al. [42] was used to obtain the maximum REE concentration at 9.95 mL 0.01–1 m nitric acid aqueous solution (pH < 4). Standard DS5 was used for the sediment analyses. The sediment samples were analyzed for REE in Acme Analytical Laboratories Ltd., Canada, by Inductively Coupled Plasma-Optical Emission Spectrometer (ICP-OES). The river water samples have less than 0.1% total dissolved solids, and these analyses were used by ICP-OES at Bureau Veritas environmental lab, Maxxam Analytics. The isotopic measurements were made at the Middle East Technical University (Ankara, Turkey) following the protocol of Köksal and Göncüoğlu [43]. TLM-ARG-RIL-02 methods were adopted. An 80 mg aliquot was taken for analysis of Sr and Nd isotope ratios. The samples were dissolved in beakers in a 4 mL 52% HF at 160°C on the hotplate along four days. The samples were dried on the hotplate using 2.5 N HCl and 2 mL bis(ethylhexyl) phosphate using Bio-Rad AG50 W-X8, 100–200 mesh, and chemical seperation of Sr ionic chromatographic columns was prepared. After chemical separation of Sr, REE fractionation was collected using 6 N HCl. Sr isotopes were measured using a single Ta-activator with Re filament and 0.005 N H3PO4 [44]. 87Sr/86Sr ratios were corrected for mass fractionation by normalizing to 86Sr/88Sr = 0.1194, and strontium standard (NBS 987) was measured more than 2 times. The chemical separation of Nd from REE was made in a teflon column using 0.22 N HCl and 2 mL bis(ethylhexyl) phosphate. 143Nd/144Nd data were normalized by 146Nd/144Nd = 0.7219, and neodymium standard (0.511848
±
5) was measured more than 2 times.
## 4. Results
### 4.1. Nd-Sr Isotope Compositions of the Euphrates River Sediments
REE concentrations of the Euphrates River sediments from Keban Dam to Karakaya Dam are presented in Table1. The result of isotopic composition of the studied sediments and those from different origin, summarized statistical values, and REE compositions of the Mississippi and Amazon River sediments are presented in Table 1. The 75 μm (200 mesh) size fraction from the Euphrates River sediment samples (A43, A61, and A89) was analyzed for 87Sr/86Sr and 143Nd/144Nd ratios (Figure 2 and Table 1). The study shows that 87Sr/86Sr and 143Nd/144Nd isotopic compositions have range of 0.7053, 0.7048, and 0.7057 and 0.512654, 0.512836, and 0.512775, respectively. ε
N
d
(
0
) values were calculated using(1)
ε
Nd
0
=
Nd
143
/
Nd
144
Measured
0.512636
-
1
×
10
4 (see [48, 49]).Table 1
Summary statistical values of REE from Euphrates River sediments. Density values (g/cm3) taken from Gupta and Krishnamurthy [45]. ∗Sediments from Sholkovitz [46]. Calculated εNd and 1000/Sr data from Martin and McCulloch [4]; ∗
∗Burke et al. [1]; ∗
∗
∗Bussy [2]; ∗
∗
∗
∗Akgül et al. [47], +Mensel et al. [48].
(a)
LREE
N
=
90
Density
Max./min. values
Euphrates River (ppm)
∗Mississippi River
∗Amazon River
HREE
N
=
90
Density
Max./min. values
Euphrates River (ppm)
∗Mississippi River
∗Amazon River
La
6.14
3.4/15.9
8.57
60.8
349
Tb
8.23
0.18/0.47
0.33
—
—
Ce
8.16
7.5/34.4
18.11
125.4
707
Dy
8.55
0.63/3.12
2.17
7.46
39.7
Pr
6.77
0.88/4.46
2.22
—
—
Ho
8.79
0.17/0.63
0.43
—
—
Nd
7.00
5.39/12.1
7.70
56.4
355
Er
9.06
0.43/1.67
1.20
4.94
21.7
Sm
7.52
0.82/4.05
2.14
—
—
Tm
9.32
0.05/0.28
0.17
—
—
Eu
5.24
0.23/0.82
0.57
2.11
10.9
Yb
6.96
0.29/1.59
1.11
3.94
20.5
Gd
7.90
0.44/3.65
2.20
9.86
46.3
Lu
9.84
0.02/0.23
0.15
0.47
3.02
Mean
5.93
—
—
Mean
0.79
—
—
St. deviation
6.18
—
—
St. deviation
0.74
—
—
(b)
The result of isotopic composition of the studied sediments and some examples in the world
Sample code
143Nd/144Nd
87Sr/86Sr
εNd (0)
1000/Sr
A43
0.512654
0.7053
0.35
16.1
A61
0.512836
0.7048
3.9
13.74
A89
0.512775
0.7057
2.7
13.17
Basaltic soil+
0.511783
0.705603
2.26
3.9
Metased. soil+
0.512501
0.709646
−2.67
6.57
Metagreywackes+
0.512847
0.705374
4.08
2.49
Sediments+
0.512805
0.704583
3.26
4.0
∗
∗Jurassic-Cretaceous sediments
—
0.707–0.708
—
—
∗
∗
∗Arve River sediments
—
0.728
—
Related to Mont-Blanc granite
0.704
—
Carbonate-rich rocks
∗
∗
∗
∗Elazığ Magmatites Upper Cretaceous
0.512414–0.512851
0.706022–0.708451
7.5
Diorites and granites
—
3.5
—Figure 2
Distribution of REE and the determined sample sites (A43, A61, and A89) for87Sr/86Sr and 143Nd/144Nd isotopic analysis from the Euphrates River sediments.The calculatedε
N
d
(
0
) values have range of 0.35, 3.9, and 2.7. Table 1 shows the result of the radiogenic isotope compositions in the Euphrates River sediments, 87Sr/86Sr and 143Nd/144Nd, and calculated ε
N
d
(
0
) data of the sediments from different origins. The isotope compositions ratios of 143Nd/144Nd and 87Sr/86Sr in the Euphrates River sediments suggest that the REE pattern of river sediments changes systematically with the compositions of the rocks in the drainage area. The range of 87Sr/86Sr ratios found in the Euphrates River sediments may be explained using metagreywackes and sediments (0.705374, 0.704835, and 0.704583, resp.) according to Table 1. The calculated 1000/Sr ratios are 16.1 (A43), 13.17 (A61), and 13.74 (A89), and Nd values are 5.98 (A43), 6.45 (A61), and 8.40 (A89) ppm, while La concentrations are 9.8 (A43), 7.8 (A61), and 4.5 (A89) ppm. The comparison with 1000/Sr and La values indicated that the weathering of felsic igneous rocks decreases downstream of the Euphrates River, while it increases in the mafic igneous rocks.
### 4.2. REE Results of the Euphrates River Sediments
Large variations are observed in the distribution of the REE in the Euphrates River sediments (Table1). La concentrations range from 3.4 to 15.9 ppm, while Lu concentrations range from 0.02 to 0.23 ppm. The concentrations of the light REE are 7.5 times higher than heavy REE concentrations. Ce concentrations range from 7.5 to 34.4 ppm. Figure 2 plots indicate that the concentrations of La and Ce are the highest along the flowing direction of the Euphrates River. The concentrations of the REE at the A37 sample site decrease because of the next thrust zone. Thus, the circulation of mixing waters possibly influenced the concentrations of REE in the studied river sediments. The REE plots for the Euphrates River sediments in this study exhibit a small degree of variation. The abundance of the REE in the river sediments is probably due to the mineralogic characterizations of the regional rocks. Figure 2 plots can be explained to mean that the variations of the light REE along the flowing direction are higher until A51 sample site, except for A34, A35, A36, and A37 sites. The heavy REE concentrations increase from A51 sample site to A90 because of the mineralogic characterizations of mafic volcanic and metaophiolitic rocks.
### 4.3. REE Results of the Euphrates River Water
REE concentrations of the Euphrates River waters are presented in Table3. The Amazon, Indus, Mississippi, and Ohio Rivers water data from Goldstein and Jacobsen [13] are also presented in Table 3. La concentrations have range from 0.02 to 2.63 ppb, and Ce concentrations have range from 0.12 to 5.43 ppb. The highest Ce and La values in the water samples are observed at the A34 site, and Pr, Sm, Gd, Dy, Er, and Yb have the highest values, 0.07, 0.07, 0.07, 0.06, 0.03, and 0.03 ppb, respectively, at the same site. However, the lowest Ce and La concentrations in the studied sediments were observed at the same site. The results indicate that the circulation of mixing water along the tectonic zones can be a much more important factor on the absolute abundance of the REE in river sediments than weathering of the regional rocks.
## 4.1. Nd-Sr Isotope Compositions of the Euphrates River Sediments
REE concentrations of the Euphrates River sediments from Keban Dam to Karakaya Dam are presented in Table1. The result of isotopic composition of the studied sediments and those from different origin, summarized statistical values, and REE compositions of the Mississippi and Amazon River sediments are presented in Table 1. The 75 μm (200 mesh) size fraction from the Euphrates River sediment samples (A43, A61, and A89) was analyzed for 87Sr/86Sr and 143Nd/144Nd ratios (Figure 2 and Table 1). The study shows that 87Sr/86Sr and 143Nd/144Nd isotopic compositions have range of 0.7053, 0.7048, and 0.7057 and 0.512654, 0.512836, and 0.512775, respectively. ε
N
d
(
0
) values were calculated using(1)
ε
Nd
0
=
Nd
143
/
Nd
144
Measured
0.512636
-
1
×
10
4 (see [48, 49]).Table 1
Summary statistical values of REE from Euphrates River sediments. Density values (g/cm3) taken from Gupta and Krishnamurthy [45]. ∗Sediments from Sholkovitz [46]. Calculated εNd and 1000/Sr data from Martin and McCulloch [4]; ∗
∗Burke et al. [1]; ∗
∗
∗Bussy [2]; ∗
∗
∗
∗Akgül et al. [47], +Mensel et al. [48].
(a)
LREE
N
=
90
Density
Max./min. values
Euphrates River (ppm)
∗Mississippi River
∗Amazon River
HREE
N
=
90
Density
Max./min. values
Euphrates River (ppm)
∗Mississippi River
∗Amazon River
La
6.14
3.4/15.9
8.57
60.8
349
Tb
8.23
0.18/0.47
0.33
—
—
Ce
8.16
7.5/34.4
18.11
125.4
707
Dy
8.55
0.63/3.12
2.17
7.46
39.7
Pr
6.77
0.88/4.46
2.22
—
—
Ho
8.79
0.17/0.63
0.43
—
—
Nd
7.00
5.39/12.1
7.70
56.4
355
Er
9.06
0.43/1.67
1.20
4.94
21.7
Sm
7.52
0.82/4.05
2.14
—
—
Tm
9.32
0.05/0.28
0.17
—
—
Eu
5.24
0.23/0.82
0.57
2.11
10.9
Yb
6.96
0.29/1.59
1.11
3.94
20.5
Gd
7.90
0.44/3.65
2.20
9.86
46.3
Lu
9.84
0.02/0.23
0.15
0.47
3.02
Mean
5.93
—
—
Mean
0.79
—
—
St. deviation
6.18
—
—
St. deviation
0.74
—
—
(b)
The result of isotopic composition of the studied sediments and some examples in the world
Sample code
143Nd/144Nd
87Sr/86Sr
εNd (0)
1000/Sr
A43
0.512654
0.7053
0.35
16.1
A61
0.512836
0.7048
3.9
13.74
A89
0.512775
0.7057
2.7
13.17
Basaltic soil+
0.511783
0.705603
2.26
3.9
Metased. soil+
0.512501
0.709646
−2.67
6.57
Metagreywackes+
0.512847
0.705374
4.08
2.49
Sediments+
0.512805
0.704583
3.26
4.0
∗
∗Jurassic-Cretaceous sediments
—
0.707–0.708
—
—
∗
∗
∗Arve River sediments
—
0.728
—
Related to Mont-Blanc granite
0.704
—
Carbonate-rich rocks
∗
∗
∗
∗Elazığ Magmatites Upper Cretaceous
0.512414–0.512851
0.706022–0.708451
7.5
Diorites and granites
—
3.5
—Figure 2
Distribution of REE and the determined sample sites (A43, A61, and A89) for87Sr/86Sr and 143Nd/144Nd isotopic analysis from the Euphrates River sediments.The calculatedε
N
d
(
0
) values have range of 0.35, 3.9, and 2.7. Table 1 shows the result of the radiogenic isotope compositions in the Euphrates River sediments, 87Sr/86Sr and 143Nd/144Nd, and calculated ε
N
d
(
0
) data of the sediments from different origins. The isotope compositions ratios of 143Nd/144Nd and 87Sr/86Sr in the Euphrates River sediments suggest that the REE pattern of river sediments changes systematically with the compositions of the rocks in the drainage area. The range of 87Sr/86Sr ratios found in the Euphrates River sediments may be explained using metagreywackes and sediments (0.705374, 0.704835, and 0.704583, resp.) according to Table 1. The calculated 1000/Sr ratios are 16.1 (A43), 13.17 (A61), and 13.74 (A89), and Nd values are 5.98 (A43), 6.45 (A61), and 8.40 (A89) ppm, while La concentrations are 9.8 (A43), 7.8 (A61), and 4.5 (A89) ppm. The comparison with 1000/Sr and La values indicated that the weathering of felsic igneous rocks decreases downstream of the Euphrates River, while it increases in the mafic igneous rocks.
## 4.2. REE Results of the Euphrates River Sediments
Large variations are observed in the distribution of the REE in the Euphrates River sediments (Table1). La concentrations range from 3.4 to 15.9 ppm, while Lu concentrations range from 0.02 to 0.23 ppm. The concentrations of the light REE are 7.5 times higher than heavy REE concentrations. Ce concentrations range from 7.5 to 34.4 ppm. Figure 2 plots indicate that the concentrations of La and Ce are the highest along the flowing direction of the Euphrates River. The concentrations of the REE at the A37 sample site decrease because of the next thrust zone. Thus, the circulation of mixing waters possibly influenced the concentrations of REE in the studied river sediments. The REE plots for the Euphrates River sediments in this study exhibit a small degree of variation. The abundance of the REE in the river sediments is probably due to the mineralogic characterizations of the regional rocks. Figure 2 plots can be explained to mean that the variations of the light REE along the flowing direction are higher until A51 sample site, except for A34, A35, A36, and A37 sites. The heavy REE concentrations increase from A51 sample site to A90 because of the mineralogic characterizations of mafic volcanic and metaophiolitic rocks.
## 4.3. REE Results of the Euphrates River Water
REE concentrations of the Euphrates River waters are presented in Table3. The Amazon, Indus, Mississippi, and Ohio Rivers water data from Goldstein and Jacobsen [13] are also presented in Table 3. La concentrations have range from 0.02 to 2.63 ppb, and Ce concentrations have range from 0.12 to 5.43 ppb. The highest Ce and La values in the water samples are observed at the A34 site, and Pr, Sm, Gd, Dy, Er, and Yb have the highest values, 0.07, 0.07, 0.07, 0.06, 0.03, and 0.03 ppb, respectively, at the same site. However, the lowest Ce and La concentrations in the studied sediments were observed at the same site. The results indicate that the circulation of mixing water along the tectonic zones can be a much more important factor on the absolute abundance of the REE in river sediments than weathering of the regional rocks.
## 5. Discussion
The results suggest that the tectonic zone is an important factor in controlling the abundance of the REE concentrations in the Euphrates River sediments. Moreover, the study indicates that the REE compositions are dependent upon the type (subtropical climatic influences, water circulation, riverbed morphology, and secular variation) of weathered regional rocks. The comparison of87Sr/86Sr compositions of the Euphrates River sediments with those from basaltic soil and metasedimentary soil from Sholkovitz [46] in Table 1 shows that the studied sediments have basaltic and metagreywackes characterization (Figure 3). Bussy [2] obtained 0.728 for 87Sr/86Sr ratio in the Arve River sediments which are related to the Mont-Blanc granites (Table 1). However, this study reveals that 87Sr/86Sr ratio in carbonate-rich rocks is 0.704. The lower 87Sr/86Sr ratios (0.7053, 0.7048, and 0.7057 obtained for the sample sites A43, A61, and A89, resp.) from river sediments in the Keban Dam Lake area, downstream of the contact between the Permo-Triassic shallow marine metasediments and the Upper Cretaceous magmatic crystalline complex of the Elazığ Magmatic Complex, are caused by mixing of the two sources. 143Nd/144Nd isotopic compositions in the studied river sediments determined have range values of 0.512650, 51283640, and 512775 (Table 1), and the values are similar to the composition of metagreywackes. 143Nd/144Nd isotopic composition ratios have range values 0.512414 to 0.512851 for granodiorites in Upper Cretaceous Elazığ Magmatites [47] (Table 1). These comparisons indicate that the weathered felsic igneous rocks in the river sediments are greater at the A43 sample site because of Upper Cretaceous granitic rocks from Elazığ Magmatic Complex than at the A61 and A89 sample sites. These sites have weathered basaltic igneous rocks compositions due to Upper Cretaceous basic volcanic rocks of the Elazığ Magmatic Complex. Figure 3 shows that neodymium isotope ratios in the New England fold belt change from positive ε
N
d
(
0
) values in tertiary basalt to negative values in granitoids and metapelitic rocks [48]. The positive ε
N
d
(
0
) values reflected the contribution of the isotope compositions in the downstream sediments due to weathering of the basic volcanic rocks. Thus, regional and local studies indicate that the Euphrates River sediments have different isotope composition due to mixing of different weathered source rocks (e.g., Permo-Triassic Keban Metamorphites and Upper Cretaceous Elazığ Magmatic Complex, granodioritic and basic volcanic rocks). The Euphrates River sediments have the lowest average REE concentrations, while the Mississippi and Amazon River sediments yield the highest values. Table 1 shows that the average LREE (La, Ce, Nd, Eu, and Gd) and HREE (Dy, Er, Yb, and Lu) compositions of the Euphrates River sediments are lower by 8.33 to 3.63 and 39.52 to 18.34 times compared to the REE compositions of the Mississippi River and Amazon River REE compositions. The researchers indicated that the light and middle REE enrichment in river sediments may be related to apatite-rich rocks [52, 53]. However, Kalender and Çiçek Uçar [21] suggested that the calculated enrichment factor values of the heavy REE are more than those for the light REE in the tributaries of the Euphrates River sediments due to Fe-Mn oxyhydroxide adsorption capacity for HREE linked to the basic volcanic rocks source of HREE. Yang et al. [8] stated that zircon contributes to the bulk HREEs in sediments because of the relatively high abundance of HREEs in zircon. The lithologic units along the Euphrates River bed contribute to the bulk of LREEs concentrations as a result of apatite-rich, zircon-poor Upper Cretaceous granodiorites. La/Yb mean ratio of 7.72 in the studied sediment samples indicated high erosional rate because La may be removed from crustal source via weathering process and this is in agreement with the finding of Obaje et al. [18]. The distribution trend of REE along the flowing direction of the Euphrates River at some of the sample locations A32, A33, A34, A35, and A36 indicates that the thrust zone between Upper Cretaceous Elazığ Magmatic Complex and Permo-Triassic Keban Metamorphites influences the decreasing REE composition in the river sediments due to mixing water circulation. Average LREE concentrations upstream have higher-than-average HREE concentration compared to downstream ones due to the shallow marine metasediments and felsic magmatic rocks (Permo-Triassic Keban Metamorphites, felsic rocks from Upper Cretaceous Elazığ Magmatites) upstream. However, the basic volcanic rocks (Maden Complex and Kömürhan metaophiolites) are observed downstream along the Euphrates River flowing direction. Obaje et al. [18] and Ramesh et al. [54] stated that positive Ce anomalies are related to the formation of Ce4+ and Ce hydroxides and terrigenous input and diagenetic conditions. The positive Ce anomalies may indicate hydromorphic distribution of the REE in river sediments. According to some researchers, Eu anomalies have less contribution from felsic magmatic rock weathering. The negative Eu anomalies (<0.01 detection limit) indicate that the REE compositions in the river sediments more or less contribute to felsic magmatic rock weathering compared with terrigenous input and basic magmatic rock weathering. Lower river flowing velocity may be responsible for REE homogenous REE distribution in agreement with Obaje et al. [18]. The average chondrite normalized values indicate LREE and HREE enrichment, and their summarized statistical values range from 26.77 to 7.35 and 7.23 to 4.68, respectively (Table 2). 
The chondrite-normalized patterns illustrate that the REE composition of the Euphrates River sediments differs from the chondrite composition, while the REE values are close to NAS and UCC. The normalized REE values of the Nigerian Gora River from Obaje et al. [18] are shown in Table 2. The Ce/Ch. value is higher (20.12) in the Euphrates River sediments than in the Nigerian Gora River sediments (17.96) because of the sulfide-rich mineralization in the studied area, especially the Keban polymetallic mine deposit. Ramesh et al. [54] revealed that positive Ce anomalies indicate terrigenous input, the depositional environment, and diagenetic conditions due to the formation of Ce4+ and stable Ce hydroxides. According to Nielsen et al. [55], Tl and Ce may be controlled by residual sulfide and clinopyroxene, respectively, during mantle melting, owing to their very different ionic charges and radii. Luo et al. [56] indicated that while the adsorption ability of REE on colloidal particles decreases from light (La) to heavy (Lu), REE-carbonate complexation increases, and larger colloidal particles have a stronger ability to adsorb Ce released by weathering of granitic rocks. Both LREE and HREE concentrations, however, are lower than NAS and UCC (Figures 4 and 5). The LREE patterns show that the Sm and Eu values are close to 1 and the Gd value is above 1, whereas the La, Ce, Pr, and Nd values are below 1 (Figures 4(a), 4(b), 4(c), 4(d), 4(e), 4(f), and 4(g)). Gd/Yb (NAS- and UCC-normalized) ratios >1 indicate that apatite from granodioritic rocks contributed to the river sediment REE compositions [57–61]. According to Leybourne and Johannesson [60], Eu is mobilized during hydromorphic transport relative to Sm and Gd; this study, however, reveals that Sm and Eu are mobilized more than Gd, so Gd enrichment is observed in the river sediments. The HREE patterns are enriched relative to chondrite, but the NAS- and UCC-normalized HREE patterns are close to 1 (Figures 5(a), 5(b), 5(c), 5(d), 5(e), 5(f), and 5(g)). All of the river sediment REE compositions display enrichment relative to the chondrite REE composition.
Table 2
Summary statistics of normalized REE values in Euphrates River sediments; normalizing values for North American Shale (NAS), upper continental crust (UCC), and chondrite (Ch.) from Yang et al. [8], Condie [22], Taylor et al. [23], and Sholkovitz [50, 51]; ∗Nigerian Gora River values from Obaje et al. [18].
| LREE | Mean | St. dev. | ∗Nigerian Gora River | HREE | Mean | St. dev. | ∗Nigerian Gora River |
| --- | --- | --- | --- | --- | --- | --- | --- |
| La/NAS | 0.28 | 0.08 | — | Tb/NAS | 0.39 | 0.11 | — |
| La/UCC | 0.29 | 0.09 | — | Tb/UCC | 0.52 | 0.15 | — |
| La/Ch. | 26.77 | 8.08 | 39.76 | Tb/Ch. | 6.51 | 1.85 | — |
| Ce/NAS | 0.27 | 0.09 | — | Dy/NAS | 0.52 | 0.13 | — |
| Ce/UCC | 0.28 | 0.09 | — | Dy/UCC | 0.62 | 0.15 | — |
| Ce/Ch. | 20.12 | 6.34 | 17.96 | Dy/Ch. | 7.23 | 1.78 | — |
| Pr/NAS | 0.29 | 0.09 | — | Ho/NAS | 0.42 | 0.10 | — |
| Pr/UCC | 0.31 | 0.10 | — | Ho/UCC | 0.53 | 0.13 | — |
| Pr/Ch. | 17.10 | 5.46 | — | Ho/Ch. | 5.74 | 1.41 | — |
| Nd/NAS | 0.28 | 0.06 | — | Er/NAS | 0.42 | 0.10 | — |
| Nd/UCC | 0.30 | 0.06 | — | Er/UCC | 0.52 | 0.13 | — |
| Nd/Ch. | 14.26 | 2.97 | 18.96 | Er/Ch. | 5.73 | 1.39 | — |
| Sm/NAS | 0.38 | 0.11 | — | Tm/NAS | 0.35 | 0.09 | — |
| Sm/UCC | 0.48 | 0.14 | — | Tm/UCC | 0.51 | 0.14 | — |
| Sm/Ch. | 10.20 | 3.00 | — | Tm/Ch. | 5.22 | 1.41 | — |
| Eu/NAS | 0.48 | 0.12 | — | Yb/NAS | 0.36 | 0.09 | — |
| Eu/UCC | 0.65 | 0.17 | — | Yb/UCC | 0.51 | 0.12 | — |
| Eu/Ch. | 7.70 | 1.98 | 404.62 | Yb/Ch. | 6.18 | 1.47 | 25.52 |
| Gd/NAS | 0.45 | 0.12 | — | Lu/NAS | 0.33 | 0.09 | — |
| Gd/UCC | 0.58 | 0.16 | — | Lu/UCC | 0.47 | 0.12 | — |
| Gd/Ch. | 7.35 | 1.98 | 58.50 | Lu/Ch. | 4.68 | 1.23 | — |

Table 3
REE concentrations (ppb) in river water: Euphrates River water (N = 90, this study) and the dissolved loads of the Amazon, Indus, Mississippi, and Ohio Rivers, from Goldstein and Jacobsen [13].
| Element | Euphrates River water (N = 90) | Amazon | Indus | Mississippi | Ohio |
| --- | --- | --- | --- | --- | --- |
| La | 0.25 | 0.074 | 0.0029 | 0.020 | 0.0063 |
| Ce | 0.54 | 0.21 | 0.0024 | 0.010 | 0.010 |
| Pr | 0.04 | — | — | — | — |
| Nd | 0.18 | 0.13 | 0.0032 | 0.020 | 0.011 |
| Sm | 0.04 | 0.034 | 0.00071 | 0.004 | 0.0025 |
| Eu | <0.01 | 0.008 | 0.00022 | 0.001 | 0.0006 |
| Gd | 0.04 | — | 0.050 | — | — |
| Tb | <0.01 | — | — | — | — |
| Dy | 0.04 | 0.031 | 0.036 | 0.0075 | 0.006 |
| Ho | <0.01 | — | — | — | — |
| Er | 0.02 | 0.016 | 0.017 | 0.0065 | 0.005 |
| Tm | <0.01 | — | — | — | — |
| Yb | 0.2 | 0.015 | 0.0014 | — | 0.0036 |
| Lu | <0.01 | — | 0.0021 | — | 0.0006 |

Figure 4
Normalized patterns of the LREE average concentrations (N = 90) of Euphrates River sediments relative to NAS, UCC, and Ch.: (a) La, (b) Ce, (c) Pr, (d) Nd∗ (N = 50), (e) Sm, (f) Eu, and (g) Gd normalized patterns.
Figure 5
Normalized patterns of the HREE average concentrations (N = 90) of Euphrates River sediments relative to NAS, UCC, and Ch.: (a) Tb, (b) Dy, (c) Ho, (d) Er, (e) Tm, (f) Yb, and (g) Lu normalized patterns.
Figure 3
Diagram of Nd versus Sr isotopic composition in rock, soil, and sediment samples from Martin and McCulloch [4]. Lines connect rock and soil samples taken from the same locality. Outlined fields show isotopic compositions of the New England granitoids, metapelitic rocks, and metagraywackes from the New England fold belt [49].

The REE compositions of the Euphrates River water were compared with the REE in the dissolved load of the Amazon, Indus, Mississippi, and Ohio River waters from Goldstein and Jacobsen [13] (Table 3). Figure 6 indicates that the REE compositions of the Euphrates River water are higher than the dissolved-load REE compositions of the Amazon, Indus, Mississippi, and Ohio Rivers. However, the REE patterns of the Euphrates River water are similar to those of the dissolved load in the Amazon, Indus, Mississippi, and Ohio River waters. The Yb content of the Indus River water is higher, and the Ce content of the Mississippi River water is lower, than in the Euphrates River water. According to Goldstein and Jacobsen [13], high Yb values in suspended materials may be derived from older rocks. Goldberg et al. [62] indicated that Ce depletion in river waters in a high-pH environment may result from preferential removal of Ce4+ onto Fe-Mn oxide coatings of particles. This indicates that the suspended-material load in the Euphrates River water is probably greater than in the Mississippi, Ohio, and Indus River waters. Ce anomalies are calculated using the following equation from [13]:

(2) $\mathrm{Ce}^{*} = \dfrac{3\,\mathrm{Ce_{NAS}}}{2\,\mathrm{La_{NAS}} + \mathrm{Nd_{NAS}}}$,

where, with the values for the Euphrates River water,

(3) $\mathrm{Ce}^{*} = \dfrac{3 \times 0.000024}{2 \times 0.000015 + 0.000020} = 1.16$.

As shown, the positive Ce anomaly (Ce∗ > 1) supports the interpretation that Ce may be fixed on clay at pH > 7. Also, the calculated La/Yb (NAS-normalized) values in the Euphrates River water and sediment are 0.098 and 0.77, respectively. It is apparent that both the Euphrates River waters and sediments have heavier REE compositions relative to the light REE according to the NAS-normalized patterns.
Figure 6
Distribution of REE in the Euphrates River waters and in the Amazon, Indus, Mississippi, and Ohio River waters; data from Goldstein and Jacobsen [13].
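For concreteness, the Ce-anomaly calculation of equation (2) can be scripted as below. This is a minimal sketch; the function and variable names are our own, and the inputs are the rounded NAS-normalized abundances quoted in the text.

```python
def ce_anomaly(la_nas: float, ce_nas: float, nd_nas: float) -> float:
    """Ce* = 3*Ce_NAS / (2*La_NAS + Nd_NAS), after Goldstein and Jacobsen [13]."""
    return 3.0 * ce_nas / (2.0 * la_nas + nd_nas)

# NAS-normalized values quoted in the text. With these rounded inputs the ratio
# evaluates to about 1.44; either way Ce* > 1, i.e., a positive Ce anomaly.
print(ce_anomaly(la_nas=0.000015, ce_nas=0.000024, nd_nas=0.000020))
```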
## 6. Conclusions
(1) This paper indicates that contributions from three components can change the isotopic composition of the studied sediments: (a) Permo-Triassic carbonate-rich metasediments, (b) felsic magmatic rocks, and (c) mafic volcanic rocks of the Upper Cretaceous Elazığ Magmatic Complex.
(2) The study indicates that the average LREE (La, Ce, Nd, Eu, and Gd) and HREE (Dy, Er, Yb, and Lu) compositions of the Euphrates River sediments are lower than the Mississippi and Amazon River sediment compositions by factors of 8.33 to 3.63 and 39.52 to 18.34, respectively.
(3) This paper reveals that the Euphrates River sediments have higher LREE than HREE concentrations; in addition, REE compositions are low in the thrust zone close to the Euphrates River bed because of fast mixing-water circulation.
(4) Average LREE concentrations upstream in the Euphrates River are higher than average HREE concentrations downstream, owing to the felsic magmatic rocks upstream.
(5) The La/Yb ratio (7.72) indicates a high erosion rate, and La may have been added from crustal sources via weathering processes.
(6) The positive Ce and La anomalies indicate both terrigenous input and a contribution of oxidative compounds from the sulfide-rich mineralization in the Euphrates River bed sediments.
(7) The chondrite-, NAS-, and UCC-normalized patterns show that the REE compositions of the Euphrates River sediments differ from chondrite but are similar to NAS and UCC.
(8) The Sm and Eu patterns are close to 1 and the Gd pattern is above 1 (>1); Gd/Yb (NAS- and UCC-normalized) ratios greater than 1 indicate that the source of the REE may be the apatite-rich granodioritic rocks of the Elazığ Magmatic Complex. Terrigenous sediments and lithological control also strongly influence the Euphrates River sediment REE compositions.
(9) The Euphrates River waters have the highest values for both LREE and HREE in comparison with the other river waters considered (the Amazon, Indus, Ohio, and Mississippi), owing to the regional felsic and mafic lithological units. Owing to mixing-water circulation, REE concentrations increase in the river water but decrease in the river sediments.
---
*Source: 1012021-2016-05-23.xml* | 2016 |
# Dynamic Price Competition between a Macrocell Operator and a Small Cell Operator: A Differential Game Model
**Authors:** Julián Romero; Angel Sanchis-Cano; Luis Guijarro
**Journal:** Wireless Communications and Mobile Computing
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1012041
---
## Abstract
An economic model is analyzed in which a new supplier implements small cell technology and enters the market to compete with the incumbent service provider. This entrant performs a dynamic reuse of resources to compete with the macrocell service provider. The model is analyzed using game theory as a two-stage game. In the first stage, the service providers play a Stackelberg differential game in which the price is the control variable, the existing provider is the leader, and the new supplier is the follower. In the second stage, users’ behavior is modeled using an evolutionary game that allows predicting population changes under varying conditions. This paper contributes to the introduction of new technologies in the mobile communications market through an analysis of the competition between new small cell service providers (SSPs) and existing service providers, together with the behavior of mobile users. The results show that users get a better service, SSP profits are guaranteed, and SSP entry improves users’ welfare and social welfare.
---
## Body
## 1. Introduction
Mobile communications have experienced enormous growth, and this tendency continues as the number of connected users grows every day. Cisco reports that the number of connected mobile devices will be around 5,500 million in 2020 and that 70% of the world’s population will be connected [1]. The traffic of mobile data generated will increase by 800%, and the bandwidth demand from users will grow accordingly. Mobile communications and their associated services have become an everyday need, reflecting the users’ desire to be connected all the time and everywhere with better and faster connections.

This growth has made mobile communications a very attractive market for providers who wish to make their way and satisfy the great demand from users. Owing to the limitations of the radio spectrum and the unavailability of licenses for new bands, new service providers should seek to implement new technologies which give them a greater market impact. Service providers (SPs) are faced with the need to confront this growth in the number of users and their bandwidth demand, which explains their constant quest for innovation enabling full connectivity everywhere. Despite their efforts, many users experience poor reception in indoor environments. This is due to the present mobile network model, also called the macrocell model, whose signal is significantly attenuated by factors such as distance, climate, and obstacles.

In order to solve the issues related to the increase of mobile data and the attenuation of the signal in indoor environments, various technologies are being developed and integrated into the present model of mobile communications, such as Cognitive Radio Networks and Heterogeneous Networks (HetNets). This paper focuses on the solutions provided by HetNets and, more specifically, on one of their key elements: small cell technology, namely micro-, pico-, and femtocells [2]. This technology has been developed and deployed over recent years using small stations connected to the Internet that capture the signal of users and route the calls towards the mobile network [3], thus achieving significant improvements in data speed, availability, and coverage [4, 5]. The integration of this technology is feasible from a technical point of view insofar as it incorporates improvements to the network, but significant challenges are still pending for its full deployment. One of them is a feasibility study which makes it attractive for service providers in economic terms, considering the necessary improvements in infrastructure [6, 7] and the added value compared with providers who do not implement this technology [8].

The limitations of and challenges to the successful development of a HetNet are discussed in [9], where a theoretical model was proposed to show the effectiveness of incorporating HetNets into the existing network model. In [5, 10] a study was carried out to find out which economic incentive a macrocell service provider (MSP) that implements a femtocell service would obtain. This model shows the feasibility of implementing the service for an existing SP but does not incorporate the entry of a new SP and the resulting market competition. In addition, the behavior of users over time and the dynamic reuse of resources are not studied.
The authors in [11] proposed an economic model in which a small cell service provider (SSP) leases part of its resources to an MSP, so that the MSP increases the capacity of its network. This model applies dynamic control to the resources that the MSP leases, which allows using the resources of the small cell network more efficiently. The model also takes into account the evolutionary behavior of users regarding which network they connect to. In our paper, by contrast, a new service provider (SP) implements small cell technology to compete with the MSP for the users with price as a dynamic control variable, the SSP resources vary over time, and the evolutionary behavior of users regarding subscribing to an SP is analyzed. There are models which study the entry of a new SP, as in [12], where the effects of the entry of an SP that uses femtocells to compete with the MSP are analyzed. The results show that all system agents improve their welfare with the implementation of femtocell technology; however, as it is a static model, it does not consider the evolutionary behavior of users or the dynamic reuse of resources. Additionally, in [13] a similar model is studied where the interactions between an MSP, an SSP, and users are modeled as a dynamic three-stage game, but in this model there is no reuse of existing resources. Many papers analyze economic models that allow the integration of small cell technology into existing mobile telephony networks; all of them conclude that there is an incentive for service providers to improve the quality of service and to increase the capacity of the network as well as the spectral efficiency of the transmission channel.The main contributions of the paper are as follows:(i)
An economic model is developed for analyzing the implementation of small cell technology and the market effects of the entry of a new provider offering new technologies for mobile communications.(ii)
An alternative is proposed for the deployment of mobile networks, which allows increasing the density of users served by small cells through reuse of the excess bandwidth of the clients of the Internet Service Provider (ISP).(iii)
A dynamic model is analyzed which allows delving into the evolution of the users’ behavior and the competition between the SPs when the resources vary.(iv)
A dynamic reuse of resources is employed in order to use the bandwidth left unused by the clients of the ISP’s Internet service more efficiently.(v)
It is demonstrated that users obtain a better quality of service, because the new technology in the market increases the resources and improves the efficiency; in addition, the price competition between SPs reduces the prices charged to the users. All of this improves the users’ welfare.(vi)
The analysis demonstrates the viability of the SSP’s entry. In addition, it is shown that the SSP’s profits increase when its resources increase and when the MSP’s spectral efficiency decreases.The paper is structured as follows. The model is described in Section 2. In Section 3, the game analysis is performed. The numerical results and discussion of scenarios are shown in Section 4. Finally, Section 5 draws the conclusions.
## 2. Model Description
Two operators that provide fixed wireless service (an MSP and an SSP) and a set of N users are considered, as shown in Figure 1. The MSP is a conventional operator and owns a set of base stations (BSs), each serving a macrocell, which together provide full coverage of the service area. The SSP deploys a radio access network (RAN) consisting only of small cells, reusing the resources of an Internet Service Provider (ISP) to provide the mobile service. The coverage areas of the small cells are disjoint, included in the service area of the MSP, and cover only a fraction of the latter. In the sequel, to simplify notation, it is considered without any loss of generality that the RAN of the MSP is composed of a single macrocell. Both SPs compete to serve the users inside the small cells.
Figure 1
Scenario.

The bandwidth available to the MSP is denoted by Bm(t). The SSP deploys a total of K small cells. The ith small cell is referred to as si, and its available bandwidth is obtained by reusing the resources of the ISP in the following way: given that the ISP can offer a bandwidth of Bi and the clients of the Internet service only use Bi(1-ri(t)), the remaining bandwidth can be used to provide the mobile communications service, that is, Bsi(t)=Biri(t), where ri(t) is the available bandwidth fraction shown in Figure 2. The area of the macrocell not overlapped by the coverage area of any small cell is referred to as s0, where Bs0=0. While the MSP holds a license to exploit a spectrum band, the SSP does not hold such a license, but only a generic authorization for providing wireless communications services.
Figure 2
Small cell i.
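As a simple illustration of the reuse rule Bsi(t) = Bi ri(t), the sketch below computes the SSP’s instantaneous bandwidth from the ISP’s idle fraction. The names and the 20 MHz ISP link are our own assumptions for illustration, not values from the paper.

```python
def ssp_bandwidth(b_isp_mhz: float, r_idle: float) -> float:
    """B_si(t) = B_i * r_i(t): bandwidth the ISP's clients leave unused at time t."""
    if not 0.0 <= r_idle <= 1.0:
        raise ValueError("idle fraction must lie in [0, 1]")
    return b_isp_mhz * r_idle

# E.g., a hypothetical 20 MHz ISP link whose clients currently use 48% of it
# leaves 10.4 MHz for the SSP, matching the B_si = 10.4 MHz used in Section 4.1.
print(ssp_bandwidth(20.0, 0.52))
```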
### 2.1. Users
It is assumed that there are N users distributed throughout the coverage area of the MSP. Users make their subscription decisions according to the expected utility and independently from one another. To determine with which SP the users subscribe, the user utility proposed in [14] is used, which integrates the following:(i)
The perceived rate reflects the fact that the higher the rate the user is allocated, the greater the utility; it is obtained in the same way as in [13–15], where the perceived rate depends directly on the perceived spectral efficiency and on the amount of bandwidth subscribed, in such a way that the rate perceived from the MSP in si is θmin·bm, where θmin is the spectral efficiency perceived by a user n from the MSP’s BS, normalized in [0,1], and the rate perceived from the SSP is θsin·bs. However, given that the area of the small cells is relatively small, we can assume that θsin = max(θmn) = 1; therefore, the rate perceived from the SSP can be simplified to bs for all si.(ii)
The amounts of bandwidth subscribed by each user with the MSP and the SSP are bm(t) and bs(t), respectively.(iii)
The payment for the (maximum) achievable rate b affects the utility through a negative exponential function (i.e., $e^{-b\cdot p}$). This effect is similar to the one achieved by a quasilinear utility with a budget constraint, where the payment for the rate is linear (i.e., $-b\cdot p$). Although the latter is a more common model in network economics, it is argued that our proposal reflects more realistically how the spectrum scarcity faced by a service provider is passed on to the user. The payments made by the users for the amount of bandwidth subscribed per time unit with the MSP and the SSP are pm(t)bm(t) and ps(t)bs(t), respectively.

The utilities of a user that subscribes to the MSP or to the SSP are, respectively,

(1) $u_{mi}(\theta_{mi}, b_m, p_m, t) = \theta_{mi}\,b_m(t)\,e^{-p_m(t)\,b_m(t)}$, $\quad u_{si}(b_s, p_s, t) = b_s(t)\,e^{-p_s(t)\,b_s(t)}$.

Given that we consider rational users, they subscribe the bandwidth that maximizes their utility:

(2) $b_m^{*}(t) = \underset{b_m(t)>0}{\arg\max}\; u_{mi}(\theta_m, b_m, p_m, t) = \dfrac{1}{p_m(t)}$, $\quad b_s^{*}(t) = \underset{b_s(t)>0}{\arg\max}\; u_{si}(b_s, p_s, t) = \dfrac{1}{p_s(t)}$;

therefore, the utility of a user that makes the optimal bandwidth decision is

(3) $u_{mi}^{*}(\theta_{mi}, p_m, t) = \dfrac{\theta_{mi}\,e^{-1}}{p_m(t)}$,

(4) $u_{si}^{*}(p_s, t) = u_s^{*}(p_s, t) = \dfrac{e^{-1}}{p_s(t)}$,

where $u_{mi}^{*}$ and $u_s^{*}$ denote the maximum utilities of the users that are in si and subscribe to the MSP and the SSP, respectively. Lastly, the utility perceived by the users who do not subscribe to the service is $u_o^{*}=0$, which is consistent with a user subscribing zero bandwidth.

We define xmi(t) and xsi(t) as the population ratios that subscribe to the MSP and the SSP, respectively, in the small cell si, so the number of users subscribing to the MSP in si is Ni(t)xmi(t) and to the SSP is Ni(t)xsi(t), where Ni is the number of users in si and N = ∑iNi. In addition, the bandwidth demanded cannot be greater than the bandwidth available, where the bandwidth demanded from the MSP and the SSP is, respectively,

(5) $Q_m(p_m, x_m, t) = N(t)\,x_m(t)\,b_m^{*}(t)$, $\quad Q_s(p_s, x_s, t) = N(t)\,x_s(t)\,b_s^{*}(t)$.

These demands are limited by the available bandwidth of the corresponding operator, $N_i(t)\,x_{mi}(t)\,b_m^{*}(t) \le B_m(t)$ and $N_i(t)\,x_{si}(t)\,b_s^{*}(t) \le B_{si}(t)$. From these it is obtained that

(6) $x_{mi}(t) \le \min\!\left(\dfrac{p_m(t)\,B_m(t)}{N_i(t)},\,1\right)$, $\quad x_{si}(t) \le \min\!\left(\dfrac{p_s(t)\,B_{si}(t)}{N_i(t)},\,1\right)$.
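The first-order condition behind (2)–(4) is a one-line computation; the following worked step (ours, consistent with the definitions above) verifies the optimal bandwidth and the resulting utility:

```latex
\frac{\partial u_{mi}}{\partial b_m}
  = \theta_{mi}\, e^{-p_m b_m}\,\bigl(1 - p_m b_m\bigr) = 0
  \quad\Longrightarrow\quad
  b_m^{*} = \frac{1}{p_m},
  \qquad
  u_{mi}^{*} = \theta_{mi}\, b_m^{*}\, e^{-p_m b_m^{*}}
             = \frac{\theta_{mi}\, e^{-1}}{p_m}.
```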
### 2.2. Service Providers
The SPs compete in price for the users. Each SP posts a price per nominal-data-rate unit and time unit, pm(t) and ps(t), for all t in T, in order to maximize its profits over the time horizon T = [0, T]. The instantaneous profits of an SP are defined as the income minus the costs at a time instant, where the income is given by the amount that users pay for all the demanded bandwidth, and the costs are assumed to be zero. The instantaneous profits of the SPs are defined as

(7) $\pi_m(p_m, x_m, t) = p_m(t)\,Q_m(p_m, x_m, t)$, $\quad \pi_s(p_s, x_s, t) = p_s(t)\,Q_s(p_s, x_s, t)$.

The profits of the SPs over a time horizon T are defined as

(8) $\Pi_m(p_m, x_m) = \displaystyle\int_0^T e^{-\rho t}\,\pi_m(p_m, x_m, t)\,dt$,

(9) $\Pi_s(p_s, x_s) = \displaystyle\int_0^T e^{-\rho t}\,\pi_s(p_s, x_s, t)\,dt$,

where $e^{-\rho t}$ is the discount factor, which discounts future payments [16, 17]. The SPs compete against each other to determine the dynamic control strategies in equilibrium, that is, pm∗(t) and ps∗(t), given the profits defined in (8) and (9).
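A minimal numerical sketch of the discounted-profit integrals (8) and (9), using trapezoidal quadrature over a sampled trajectory; the price and demand trajectories here are made-up placeholders, not the equilibrium paths.

```python
import numpy as np

def discounted_profit(t: np.ndarray, pi: np.ndarray, rho: float) -> float:
    """Approximate Pi = int_0^T exp(-rho*t) * pi(t) dt with the trapezoidal rule."""
    f = np.exp(-rho * t) * pi
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

t = np.linspace(0.0, 100.0, 10_001)       # horizon [0, T] with step 0.01, as in Section 4
p = 0.5 + 0.1 * (1.0 - np.exp(-t))        # hypothetical price path p(t)
Q = 400.0 * 0.4 / p                       # demand N*x*b* with b* = 1/p, as in eq. (5)
print(discounted_profit(t, p * Q, rho=0.0))   # rho = 0 as in Table 1
```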
## 3. Game Analysis
The interactions between the SPs and users are analyzed using game theory [18] as a two-stage dynamic game, shown in Figure 3. In the first stage, the SPs play a Stackelberg differential game, where the control variable is the price [19, 20]. Each SP posts a price per nominal-data-rate unit and time unit, which is pm(t) for the MSP and ps(t) for the SSP, where the MSP is the leader in the price choice and the SSP is the follower. In the second stage, the users inside the small cells can subscribe to either the MSP’s or the SSP’s service and pay for a nominal data rate, which is equal to the bandwidth allocated by the SP [5]. The behavior of the users, who play an evolutionary game to choose which SP they subscribe to, is modeled using the replicator dynamic [21, 22], shown in Figure 3. The two-stage game is solved using backward induction to guarantee a subgame perfect equilibrium [18].
Figure 3
Hierarchical dynamic game framework for pricing and SP selection.
### 3.1. Stage II: Evolutionary Game
In the second stage, the decision that users must take is which SP to subscribe to, knowing in advance the prices announced by the SPs (see Figure 3). An evolutionary game is proposed that allows reaching equilibrium solutions, given that players play repeatedly and can adjust their behavior over time by learning on the fly. The evolutionary game is as follows:(i)
Strategies: S={m,s,o}, where m means subscribing to the MSP, s subscribing to the SSP, and o not subscribing to the service.(ii)
Population states: Xi(t)={xmi(t),xsi(t),xoi(t)}, where xmi(t) and xsi(t) are the population ratios that subscribe to the MSP and to the SSP, respectively, at small cell si, and xoi(t)=1-xmi(t)-xsi(t) is the fraction of users not subscribing to the service at small cell si.(iii)
Payoffs: ui(t)={umi∗(t),usi∗(t),uoi∗(t)}, where umi∗(t),usi∗(t) are the users’ utilities perceived for the strategies defined in (3) and (4), respectively, while uoi∗(t)=0.(iv)
Replicator dynamic: it models the evolutionary behavior of the population among its different strategies over time. The replicator dynamic [21, 22] is defined as follows:

(10) $\dot{x}_j(t) = \delta\, x_j(t)\,\big[u_j(\theta_m^n, p_m, t) - U_i(\theta_m^n, p_m, p_s, t)\big]$,

where j ∈ S, δ is the learning rate of the population, which controls the frequency of strategy adaptation for service selection, and $U_i(\theta_m^n, p_m, p_s, t)$ is the average user utility per unit time:

(11) $U_i(\theta_m^n, p_m, p_s, t) = x_{mi}(p_m, t)\,u_{mi}^{*}(\theta_m^n, p_m, t) + x_{si}(p_s, t)\,u_{si}^{*}(p_s, t) + x_{oi}(t)\,u_o^{*}$;

the replicator dynamics within a small cell are subject to the coverage and resource limitations of the SPs shown in (6). This differential equation indicates how the population evolves along the time horizon given the initial state of the population, allowing predictions about the future behavior of the population. The population will evolve to an Evolutionarily Stable Strategy (ESS), which is a stationary state where the population shares no longer change [18, 21].
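A minimal forward-Euler sketch of the replicator dynamic (10)–(11) for a single small cell with fixed prices. The prices here are placeholders (the equilibrium prices would come from the Stage-I game), and the capacity constraints (6) are omitted for brevity.

```python
import numpy as np

def replicator_ess(theta_m, p_m, p_s, delta=0.68, x0=(0.1, 0.1), T=10.0, dt=0.01):
    """Forward-Euler integration of the replicator dynamic (10) in one small cell.
    Strategies: m (MSP), s (SSP), o (no subscription, utility 0)."""
    x_m, x_s = x0
    u_m = theta_m * np.exp(-1.0) / p_m    # eq. (3): utility at optimal bandwidth
    u_s = np.exp(-1.0) / p_s              # eq. (4)
    for _ in range(int(T / dt)):
        U = x_m * u_m + x_s * u_s         # eq. (11); u_o* = 0
        x_m += dt * delta * x_m * (u_m - U)
        x_s += dt * delta * x_s * (u_s - U)
    return x_m, x_s

# Placeholder inputs: theta_m = 0.8 as in Section 4.2; prices are assumptions.
print(replicator_ess(theta_m=0.8, p_m=0.6, p_s=0.7))
```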
### 3.2. Stage I: Stackelberg Differential Game
In the first stage, the SPs anticipate the evolutionary behavior of the population (see Figure 3); based on this, the SPs determine the dynamic prices pm∗(t) and ps∗(t). To analyze the decision making, a Stackelberg differential game with open-loop control [19] is formulated, where the MSP is the leader and the SSP is the follower. The SSP optimal control problem is defined as

(12) $p_s^{*}(t) = \underset{p_s(t)\ge 0}{\arg\max} \displaystyle\int_0^T e^{-\rho t}\,\pi_s(p_s, x_s, t)\,dt$, s.t. $\dot{x}_{si}(t) = \delta\,x_{si}(t)\,[u_{si}(p_s, t) - U_i(\theta_m^n, p_m, p_s, t)]$, $x_j(0) = x_j^0$.

The MSP optimal control problem, given the behavior of the SSP, is

(13) $p_m^{*}(t) = \underset{p_m(t)\ge 0}{\arg\max} \displaystyle\int_0^T e^{-\rho t}\,\pi_m(p_m, x_m, t)\,dt$, s.t. $p_s^{*} = \underset{p_s\ge 0}{\arg\max} \displaystyle\int_0^T e^{-\rho t}\,\pi_s(p_s^{*}, x_s, t)\,dt$, $\dot{x}_{mi}(t) = \delta\,x_{mi}(t)\,[u_{mi}(\theta_m^n, p_m, t) - U_i(\theta_m^n, p_m, p_s^{*}, t)]$, $x_j(0) = x_j^0$.

The open-loop Stackelberg differential game is thus posed as the two constrained optimal control problems (12) and (13), represented in Lagrange form. To solve them, the Pontryagin Maximum Principle is used [23, 24], which provides the necessary optimality conditions for the optimal control problem, since it takes into account the effects of the price chosen by the SPs: a first, immediate effect through the instantaneous value of the SP’s profits, and a second effect on the variation of the population state.

First, for the Lagrange problem of the follower, the Hamiltonian function is defined as follows:

(14) $H_s(p_m, p_s, x_{mi}, x_{si}, \lambda_{sj}, t) = \pi_s(p_s, x_s, t) + \displaystyle\sum_{i=0}^{K}\big[\lambda_{ssi}(t)\,\dot{x}_{si}(t) + \lambda_{smi}(t)\,\dot{x}_{mi}(t)\big]$,

where $\lambda_{sj}(t)$ is the costate variable associated with state j as it moves along the optimal trajectory, with j ∈ S.

The optimal control strategy ps∗(t) of the original problem (12) also maximizes the corresponding Hamiltonian function [23]:

(15) $p_s^{*}(t) = \arg\max H_s(p_m, p_s, x_{mi}, x_{si}, \lambda_{sj}, t)$.

However, (15) is only optimal if the multiplier vector is defined so as to reflect the marginal impact of the state vector on the profits of the SSP:

(16) $\dfrac{\partial H_s^{*}(p_m, p_s^{*}, x_{mi}, x_{si}, \lambda_{sj}, t)}{\partial x_j(t)} = -\dot{\lambda}_{sj}(t) + \rho\,\lambda_{sj}(t)$,

where $H_s^{*}$ is obtained from (15).

In comparison with the SSP, the Hamiltonian function of the MSP takes into account the maximization of the profits of the SSP, the dynamics of the MSP’s costate variables, and the variation of the costates. The MSP Hamiltonian function is defined as

(17) $H_m(p_m, p_s, x_{mi}, x_{si}, \lambda_{mj}, \lambda_{sj}, \alpha_{sj}, t) = \pi_m(p_m, x_m, t) + \displaystyle\sum_{i=0}^{K}\big[\lambda_{mmi}(t)\,\dot{x}_{mi}(t) + \lambda_{msi}(t)\,\dot{x}_{si}(t) + \alpha_{ssi}(t)\,\dot{\lambda}_{ssi}(t) + \alpha_{smi}(t)\,\dot{\lambda}_{smi}(t)\big]$,

where $\lambda_{mj}(t)$ is the costate variable associated with state j along the optimal trajectory and $\alpha_{sj}$ is a variable associated with the effect of the variation of the SSP’s population on the MSP.

Applying the Pontryagin Maximum Principle in the same way as for the SSP, the following optimality conditions are necessary to find the optimal control strategy pm∗(t) of the original problem:

(18) $\dfrac{\partial H_m(p_m, p_s, x_{mi}, x_{si}, \lambda_{mj}, \lambda_{sj}, \alpha_{sj}, t)}{\partial p_m(t)} = 0$,

(19) $\dfrac{\partial H_m^{*}(p_m^{*}, p_s, x_{mi}, x_{si}, \lambda_{mj}, \lambda_{sj}, \alpha_{sj}, t)}{\partial x_j(t)} = -\dot{\lambda}_{mj}(t) + \rho\,\lambda_{mj}(t)$,

(20) $\dfrac{\partial H_m^{*}(p_m^{*}, p_s, x_{mi}, x_{si}, \lambda_{mj}, \lambda_{sj}, \alpha_{sj}, t)}{\partial \lambda_{sj}(t)} = -\dot{\alpha}_{sj}(t) + \rho\,\alpha_{sj}(t)$.

The first step is to solve for the optimal prices that the service providers announce, (15) and (18). These prices are expressed in terms of the state of the population (Xi(t)) and the costates ($\lambda_{mj}$, $\lambda_{sj}$, and $\alpha_{sj}$).

The optimal prices are then substituted into the optimality conditions of the Pontryagin Maximum Principle ((10), (16), (19), and (20)), obtaining the following system of differential equations:

(21) $\dot{x}_j(t) = \delta\,x_j(t)\,[u_j(\theta_m^n, p_m, t) - U_i(\theta_m^n, p_m, p_s, t)]$, $\quad \dot{\lambda}_{sj}(t) = \rho\,\lambda_{sj}(t) - \dfrac{\partial H_s^{*}}{\partial x_j(t)}$, $\quad \dot{\lambda}_{mj}(t) = \rho\,\lambda_{mj}(t) - \dfrac{\partial H_m^{*}}{\partial x_j(t)}$, $\quad \dot{\alpha}_{sj}(t) = \rho\,\alpha_{sj}(t) - \dfrac{\partial H_m^{*}}{\partial \lambda_{sj}(t)}$.

This problem is a two-point boundary value problem (TPBVP), where the initial state of the population (X0) and the final values of the costates, $\lambda_{mj}(T) = \lambda_{sj}(T) = 0$ and $\alpha_{sj}(T) = 0$ for all j ∈ {1, 2, …, K, m, o}, are known. Solving the TPBVP [25], the optimal state and costate vectors along the time horizon are obtained, that is, xj∗(t), λmj∗(t), λsj∗(t), and αsj∗(t). Substituting these optimal vectors into the prices obtained from solving (15) and (18), the equilibrium prices announced by the SPs are obtained.
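The system (21) has initial conditions on the states and terminal conditions on the costates, which is exactly the split-boundary structure that bvp4c-style solvers handle. The full Stackelberg system is lengthy, so the sketch below solves a deliberately simplified scalar analogue (a linear-quadratic toy problem of our own, not the game above) with SciPy's solve_bvp, the counterpart of MATLAB's bvp4c:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy problem: minimize int_0^1 (x^2 + u^2) dt with x' = u and x(0) = 1.
# Pontryagin gives u = -lam/2, so x' = -lam/2, lam' = -2x, with lam(1) = 0.
def rhs(t, y):
    x, lam = y
    return np.vstack((-lam / 2.0, -2.0 * x))

def bc(ya, yb):
    # Initial condition on the state, terminal condition on the costate,
    # mirroring x(0) = x0 and lambda(T) = 0 in the TPBVP above.
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
print(sol.status, sol.y[0, -1])   # status 0 on success; terminal state x(T)
```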
## 4. Results and Discussion
In this section some results are presented in order to illustrate the capabilities of our model and analysis and to provide insight into the system behavior. In order to quantify the viability of the model, the users’ welfare (UW) and social welfare (SW) functions are used. The UW is defined as the aggregate utility of all the users over the time horizon. It allows us to quantify the welfare of the entire population of users and is obtained as

(22) $\mathrm{UW} = \displaystyle\int_0^T \sum_{i=0}^{K}\big[N_i\,x_{mi}(t)\,u_{mi}^{*}(t) + N_i\,x_{si}(t)\,u_{si}^{*}(t)\big]\,dt$,

where $N_i x_{mi}(t) u_{mi}^{*}(t)$ and $N_i x_{si}(t) u_{si}^{*}(t)$ are the utilities received by the subscribing users. The SW is defined as the aggregate utility of all the users and SPs and is obtained as

(23) $\mathrm{SW} = \Pi_m(p_m, x_m) + \Pi_s(p_s, x_s) + \displaystyle\int_0^T \sum_{i=0}^{K}\big[N_i\,x_{mi}(t)\,u_{mi}^{*}(t) + N_i\,x_{si}(t)\,u_{si}^{*}(t)\big]\,dt$.

The numerical resolution of the dynamic decision-making problem was carried out using the function bvp4c of MATLAB [26, 27], which solves the system of differential equations in (21) given the initial state of the population and the final state of the costates. The scenario was evaluated along a time horizon of [0–100] with a step of h=0.01, and the parameters were varied to assess the effects of the spectral efficiency, the bandwidth of the SPs, and the dynamic reuse of the SSP’s resources. The results are obtained assuming a small cell network that covers 100% of the BS area of the MSP. Given that the coverage areas of the small cells are disjoint, $A_m = \sum_i A_{si}$. The system parameter values are those shown in Table 1.
Table 1
Parameter setting.
| Parameter | Value |
| --- | --- |
| N | 400 users |
| Am | 10000 m² |
| Bm | 60 MHz |
| Asi | 2000 m² |
| K | 5 small cells |
| ρ | 0 |
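Given state and utility trajectories (e.g., from the TPBVP solution), the users’ welfare integral (22) can be approximated numerically as below. The trajectories used here are flat placeholders, with K = 5 identical cells of Ni = 80 users as in Table 1; only the quadrature itself is the point of the sketch.

```python
import numpy as np

def users_welfare(t, x_m, x_s, u_m, u_s, n_i=80, k=5):
    """Eq. (22) with K identical small cells: UW = int_0^T K*N_i*(x_m*u_m + x_s*u_s) dt."""
    f = k * n_i * (x_m * u_m + x_s * u_s)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoidal rule

t = np.linspace(0.0, 100.0, 10_001)
x_m = np.full_like(t, 0.5)                      # placeholder population shares
x_s = np.full_like(t, 0.4)
u_m = np.full_like(t, 0.8 * np.exp(-1) / 0.6)   # eq. (3) with theta_m = 0.8, p_m = 0.6
u_s = np.full_like(t, np.exp(-1) / 0.7)         # eq. (4) with p_s = 0.7
print(users_welfare(t, x_m, x_s, u_m, u_s))
```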
### 4.1. Effect of Spectral Efficiency
Figures 4, 5, and 6 show the effect of the spectral efficiency obtained with the MSP by all users (θm) on the scenario, evaluating the evolutionary behavior of users (xm∗(t), xs∗(t), and xo∗(t)), the dynamics of the equilibrium prices (pm∗(t) and ps∗(t)), the total profits of the SPs, the SW, and the UW. The results were obtained with the following parameters in each of the small cells: there are Ni=80 users; all the users inside the small cells perceive on average the same spectral efficiency with the BS of the MSP; the learning rate of the population is δ=0.68; the bandwidth available to the SSP is Bsi=Bs=10.4 MHz in all small cells; and the initial state of the population is xmi∗(0)=0.1, xsi∗(0)=0.1, and xoi∗(0)=0.8 ∀i.
Figure 4
SP’s prices on the time horizon as a function of θm.
(a) pm∗(t). (b) ps∗(t).
Figure 5
Evolutionary behavior of population ratios as a function of θm.
(a) xm∗(t). (b) xs∗(t).
Figure 6
Πm, Πs, SW, and UW as a function of θm.

Figures 4(a) and 4(b) show the evolution of MSP and SSP prices over the time horizon for different values of θm, and the following is observed:(i)
Only the time interval [0,10] is represented graphically, because at time 10 the population almost reaches the ESS; that is, the population is in a stationary strategy and its decisions do not change, which makes the prices remain constant over the interval [10–100].(ii)
The lower θm is, the lower the price announced by the MSP, because it has to compensate for the low spectral efficiency that users receive. Conversely, the lower the spectral efficiency, the higher the price that the SSP can announce, because users perceive a better utility with it.(iii)
The prices increase with time, because users who do not subscribe to the service (xoi∗(0)=0.8) choose to subscribe to the SPs, and as the number of subscribed users increases, the prices announced by the SPs can be higher (the same resources face a greater demand).

Figures 5(a) and 5(b) show the evolutionary behavior of the population subscribing to the MSP and the SSP, respectively. It is observed that the users always prefer the service and therefore evolve over time until they subscribe to an SP; moreover, depending on the perceived spectral efficiency, they prefer to subscribe to one or the other SP, in such a way that the higher θm is, the more the users prefer to subscribe to the MSP. Only the time interval [0–10] is shown because by that time the population reaches or approaches the ESS; therefore, the population is in a stationary strategy.

Figure 6 shows the profits from the perspective of the SPs together with the users’ and social welfare. It is observed that the profits of the MSP increase as θm increases and that the SSP’s profits decrease by the same amount. The users benefit if θm increases, as shown by the UW, which also increases the SW. Additionally, the SW, the UW, and the profits of the SPs are expected to increase if the initial share of users not subscribing to the service is lower, given that 80% of the users initially do not subscribe.
### 4.2. Effect of the Available Bandwidth of the SSP
Figures 7, 8, and 9 show the effect that the available resources of the SSP have on the scenario, evaluating the evolutionary behavior of the users, the dynamics of the equilibrium prices, the total profits of the SPs, the SW, and the UW. The results were obtained with the following parameters in each of the small cells: there are Ni=80 users, who perceive on average the same spectral efficiency θmi=θm=0.8 bits/s/Hz; the initial state of the population is xmi∗(0)=0.1, xsi∗(0)=0.1, and xoi∗(0)=0.8 ∀i; and the learning rate of the population is δ=0.68.
Figure 7
SP’s prices on the time horizon as a function of Bs.
(a) pm∗(t). (b) ps∗(t).
Figure 8
Evolutionary behavior of population ratios as a function of Bs.
(a) xm∗(t). (b) xs∗(t).
Figure 9
Πm, Πs, SW, and UW as a function of Bs.

Figures 7(a) and 7(b) show the behavior of the prices of the MSP and SSP along the evaluated time horizon (T=10) as a function of Bs, and the following is observed:(i)
Only the time interval [0,10] is represented graphically, because at time 10 the population reaches or approaches the ESS; that is, the population is in a stationary strategy and its decisions do not change, which makes the prices remain constant over the interval [10–100].(ii)
The higher Bs is, the lower the price the SPs announce, because the SSP has more resources to compete with, and this forces both SPs to lower their prices.(iii)
The prices increase with time because the number of subscribed users also increases with time, and a greater bandwidth demand has to be satisfied with the same resources.

Figures 8(a) and 8(b) show the evolutionary behavior of the population subscribing to the MSP and the SSP, respectively. It is observed that the users always prefer the service and therefore evolve over time until they subscribe to an SP; in addition, the higher the available resources of the SSP, the more users want to subscribe to it. At T=10 the population reaches or approaches the ESS.

Figure 9 shows the profits from the perspective of the SPs, and the users’ and social welfare are evaluated. It is observed that the profits of the SSP increase with Bs and that the MSP’s profits diminish by the same amount. If the resources in the scenario increase, the SW and the UW increase. Additionally, the SW, the UW, and the SPs’ profits are expected to increase if the initial share of users not subscribing to the service is lower, given that 80% of the users initially do not subscribe.
### 4.3. Effect of Dynamic Reuse of the Resources of SSP
In this section, the available resources of the SSP’s network are modeled on the basis of a traffic study at the University of Washington [28]; the bandwidth available to the SSP at each instant of time is shown in Figure 10 [29].
Figure 10
Available bandwidth of the SSP.

Figures 11(a) and 11(b) show the equilibrium prices for the MSP and SSP, respectively, considering that the SSP’s available bandwidth varies. It is observed that the SSP’s prices are inversely proportional to the available bandwidth and that the prices are displaced by 0.8 due to the spectral efficiency that users perceive. It is also observed that when the learning rate is low (δ=0.1), the price evolves more smoothly; that is, users do not learn as quickly as the resources vary.
Figure 11
SP’s prices on the time horizon.
(a) pm∗(t). (b) ps∗(t).

Figures 12(a) and 12(b) show the evolution of the population distribution for the MSP and SSP, respectively. It is observed that the population of the SSP is directly related to its resources; that is, the higher the resources of the SSP, the larger the population that subscribes to the SSP, and, conversely, the smaller the population that subscribes to the MSP. It is also observed that when the learning rate is low (δ=0.1), the distribution of the population varies more slowly; that is, the population does not learn as fast as the resources vary.
Figure 12
Evolutionary behavior of the population.
(a) xm∗(t). (b) xs∗(t).

Since the SSP reuses resources dynamically, its profits increase as the available resources increase and as the users’ learning rate increases, because a higher learning rate allows the users’ decisions to adjust more quickly to variations in resources. The dynamic reuse of the SSP’s resources also brings more resources into the model, which lowers the SSP’s prices and increases the SW and the UW.
## 5. Conclusions
This paper proposes a business model where a service provider implementing small cell technology (SSP) competes against the existing macrocell provider (MSP). The limitations of this technology were taken into account, such as limited availability and coverage, the dynamic reuse of resources, and the decisions of users and service providers, while considering the influence of each provider on the decisions of its competitor.Game theory enabled us to predict the behavior of users and providers, on the basis that users and providers take the decisions that suit them best, and allowed us to determine the effects of a new provider on the mobile communications market, which are as follows:(i)
The users get a better service due to the fact that the SSP forces the MSP to lower prices, as the SSP increases the spectrum efficiency of users and the resources available in the scenario. In addition, all users would prefer to subscribe to the service, and they will adapt their decisions to subscribe.(ii)
The SSP goes into the communication market well aware that their profits are guaranteed. This is because the SSP is offering better spectrum efficiency and has competitive prices in relation to the MSP, as far as users want to subscribe to the new service.(iii)
The MSP becomes aggrieved by the new SSP in the market because its profits are lower, and the profits are lower as the spectrum efficiency is lower. This suggests that the MSP should improve the value of their services as perceived by the users so their profits do not become affected by competence.(iv)
The SSP’s entry improves the users’ welfare and social welfare.Given the results shown, the viability of providing a new small cells connectivity service by reusing dynamically the excess of bandwidth of the clients of an Internet Service Provider has been demonstrated.
---
*Source: 1012041-2018-05-15.xml*
# Inhibition of Bone Loss by Cissus quadrangularis in Mice: A Preliminary Report
**Authors:** Jameela Banu; Erika Varela; Ali N. Bahadur; Raheela Soomro; Nishu Kazi; Gabriel Fernandes
**Journal:** Journal of Osteoporosis
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101206
---
## Abstract
Women lose bone drastically during and after menopause, leading to osteoporosis, a disease characterized by low bone mass that increases the risk of fractures with minor trauma. Existing therapies mainly reduce bone resorption; however, all existing drugs have severe side effects. Recently, the focus has been on identifying alternative medicines that can prevent and treat osteoporosis with minimal or no side effects. We used Cissus quadrangularis (CQ), a medicinal herb, to determine its effects on bone loss after ovariectomy in C57BL/6 mice. Two-month-old mice were either sham operated or ovariectomized and fed a CQ diet. After eleven weeks, the mice were sacrificed and the long bones were scanned using pQCT and μCT. In the distal femoral metaphysis, femoral diaphysis, and proximal tibia, control mice had decreased cancellous and cortical bone, while CQ-fed mice showed no significant differences in trabecular number, thickness, or connectivity density between sham and OVX mice, except for cortical bone mineral content in the proximal tibia. There were no changes in bone at the tibio-fibular junction between groups. We conclude that CQ effectively inhibited bone loss in the cancellous and cortical bone of the femur and proximal tibia in these mice.
---
## Body
## 1. Introduction
Osteoporosis is a disease associated with aging that causes fragility of bones, making them susceptible to fractures with minor trauma. Bone is a dynamic organ that undergoes lifelong change through remodeling by specialized cells; remodeling is the predominant process after peak bone mass is attained around the third decade. Remodeling is essential for maintaining the skeleton, repairing damaged portions, and removing old bone, as well as for discharging calcium and phosphorus from bone stores to maintain ionic homeostasis in the body. An imbalance in bone remodeling, with increased bone resorption alone or in combination with decreased bone formation, results in a net loss of bone. After attaining peak bone mass during the third decade, humans lose bone at a rate of 0.6 to 1% every year for the rest of their lives; in women, bone loss is drastic around menopause.

Treating and/or preventing bone loss can focus on the overall reduction of bone resorption and/or an increase in bone formation. Currently, several different groups of agents are used to treat and prevent osteoporosis, but the side effects caused by these drugs are severe [1–4]. This has recently led to the search for alternative medicines to treat and prevent osteoporosis.

Different cultures around the world have used herbs for thousands of years to treat several health conditions. One of the herbs that has shown beneficial effects on bone belongs to the Cissus family of plants. Cissus quadrangularis (CQ) is a medicinal herb used in Siddha and Ayurvedic medicine since ancient times in Asia as a general tonic and analgesic, especially for bone fracture healing [5]. Recently, CQ has been linked to several health benefits such as antiobesity [6], reduction of proinflammatory cytokines [7], anti-inflammatory [8], antioxidant [9], antiglucocorticoid [10], and antidiabetic properties [11].

As early as the 1960s, CQ was studied for its beneficial effects on bone fracture healing in young rats, and it was reported that CQ significantly enhances the fracture healing process [12]. That study further demonstrated that in the presence of CQ, bone mineralization takes place much earlier than in its absence [13]. During bone mineralization, the accumulation of mucopolysaccharides precedes the actual mineralization process, and CQ increased mucopolysaccharides at the site of fracture [14]. Udupa and Prasad [15] reported that CQ hastens fracture healing, reducing the total convalescent period by 33% in experimental animals compared to controls. CQ also increased calcium uptake and the mechanical properties of bone in rats compared to controls [15]. More recently, Shirwaikar et al. [16] demonstrated that the mechanical strength of the long bones and lumbar vertebrae increased significantly in ovariectomized rats. Petroleum ether extracts of CQ stimulated osteoblastogenesis and mineralization in bone marrow mesenchymal cells and murine osteoblastic cell lines [17, 18]. In the present study, we determined the effects of CQ on postmenopausal bone loss in the long bones of female mice. We examined the bones using peripheral quantitative computed tomography (pQCT) and microcomputed tomography (μCT).
With pQCT, we determined the effects of CQ on the two bone envelopes (periosteal and endocortical) and on BMD and BMC, while with μCT we determined the microarchitecture of the cancellous bone in the distal femoral metaphysis. The two techniques give different measurements and complement each other. We also measured bone biochemical markers and proinflammatory cytokines in the serum.
## 2. Materials and Methods
### 2.1. Animals
Weanling C57BL/6 female mice were obtained from Jackson Laboratory (Bar Harbor, ME) and maintained in our laboratory animal facility. When the mice were eight weeks of age, they were either sham operated or ovariectomized. After one month, the mice were divided into the following groups and fed the respective diets: Group (1) Lab chow sham (LC S) (n=10); Group (2) Lab chow ovariectomy (LC O) (n=11); Group (3) Cissus quadrangularis sham (CQ S) (n=11); Group (4) CQ ovariectomy (CQ O) (n=11). Mice were maintained on the dietary regimens for eleven weeks and then sacrificed. CQ was purchased from 1fast400 (Northborough, MA) in powder form and mixed into a modified AIN-93 diet at a concentration of 500 mg/kg body weight. Mice were weighed regularly. Blood was collected by retro-orbital bleeding from anesthetized mice, and the tibia and femur were removed and stored for pQCT and μCT densitometry. All animal procedures were done according to the UT Health Science Center at San Antonio IACUC guidelines.
### 2.2. Measurement of Body and Organ Weights
Mice were weight matched at the beginning of the treatment using a CS 200 balance (Ohaus, Pine Brook, NJ, USA). At the time of sacrifice, body weight was recorded. The uterus, peritoneal adipose tissue, liver, spleen, and kidneys were carefully dissected out and weighed on a Mettler balance (Columbus, OH, USA).
### 2.3. Serum Collection and Assays of Biochemical Markers, Proinflammatory Cytokines, and Leptin
Blood was collected by retro-orbital bleeding from anesthetized mice, and serum was obtained by centrifugation at 300× g for 15 minutes at 4°C. Procollagen type 1 amino-terminal propeptide (P1NP), tartrate-resistant acid phosphatase (Trap5b), and ALP levels in the serum were measured using the Rat/Mouse P1NP EIA kit (IDS, Fountain Hills, CA, USA), the Mouse Trap EIA kit (IDS, Fountain Hills, CA, USA), and the QuantiChrom ALP assay kit (BioAssay Systems, Hayward, CA, USA), respectively. Osteocalcin was measured using an IRMA kit (Alpco Diagnostics, Salem, NH, USA). TNF-α, IL-1, and IL-6 were assayed using OptiEIA kits (BD Biosciences Pharmingen, San Diego, CA, USA). Serum leptin was measured using an ELISA kit (Diagnostic Systems Laboratories, Webster, TX, USA).
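For readers unfamiliar with how EIA/ELISA kits report concentrations, the sketch below shows the generic four-parameter logistic (4PL) standard-curve workflow. The standard concentrations and optical densities are invented for illustration; the commercial kits cited above ship with their own validated analysis procedures.

```python
# A minimal sketch of reading concentrations off an ELISA/EIA standard curve
# via a four-parameter logistic (4PL) fit. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a = response at zero dose, d = response at infinite dose,
    # c = inflection point (EC50-like), b = slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([0.1, 0.5, 2.0, 8.0, 32.0, 128.0])    # hypothetical ng/mL
std_od   = np.array([0.05, 0.12, 0.35, 0.80, 1.40, 1.80])  # hypothetical OD

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 8.0, 2.0], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # Invert the 4PL to recover the concentration behind a sample's OD
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(od_to_conc(0.60, *params))  # concentration for a sample OD of 0.60
```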
### 2.4. Peripheral Quantitative Computerized Tomography Densitometry (pQCT)
Cortical and cancellous bone of the distal femoral metaphysis (DFM) and proximal tibial metaphysis (PTM), and pure cortical bone at the femoral diaphysis (FD) and tibia fibula junction (TF), were analyzed by pQCT densitometry using an XCT Research M system (Norland Stratec, Birkenfeld, Germany) as described previously [19, 20]. At the PTM and DFM, both the cancellous bone and the cortical bone surrounding it were scanned and analyzed: five slices were scanned including the growth plate, and one slice, 1 mm distal to the knee joint (PTM) or 1 mm proximal to the knee joint (DFM), was analyzed. The following parameters were determined for both sites: cancellous bone mineral content (Cn BMC), cancellous bone mineral density (Cn BMD), cortical bone area (Ct Ar), cortical BMC (Ct BMC), cortical BMD (Ct BMD), cortical thickness (Ct Th), periosteal perimeter (Peri PM), and endocortical perimeter (Endo PM). One slice was scanned at the FD (mid-diaphysis) and at the TF junction to measure pure cortical bone, and the following parameters were determined: Ct Ar, Ct BMC, Ct BMD, Ct Th, Peri PM, and Endo PM.
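As a rough illustration of how the perimeter-based pQCT outputs relate to one another, the sketch below derives cortical thickness and cortical area from Peri PM and Endo PM under a simple circular-ring approximation. This is a textbook simplification with hypothetical inputs, not the scanner's actual algorithm.

```python
# Illustrative geometry behind the pQCT outputs: if the cortical shell is
# approximated as a circular ring (a common simplification; the scanner's
# own algorithm may differ), Ct Th and Ct Ar follow directly from the
# periosteal and endocortical perimeters. Input values are hypothetical.
import math

def cortical_thickness(peri_pm_mm: float, endo_pm_mm: float) -> float:
    """Ct Th as the difference of the two ring radii: (Peri PM - Endo PM) / (2*pi)."""
    return (peri_pm_mm - endo_pm_mm) / (2.0 * math.pi)

def cortical_area(peri_pm_mm: float, endo_pm_mm: float) -> float:
    """Ct Ar as the annulus area between the periosteal and endocortical circles."""
    r_peri = peri_pm_mm / (2.0 * math.pi)
    r_endo = endo_pm_mm / (2.0 * math.pi)
    return math.pi * (r_peri ** 2 - r_endo ** 2)

print(cortical_thickness(5.2, 3.9))  # ~0.21 mm for these example perimeters
print(cortical_area(5.2, 3.9))       # ~0.94 mm^2
```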
### 2.5. Micro Computerized Tomography (μCT)
Scans of the DFM were done using a high-resolution Xradia μCT 200 scanner (Xradia, Inc., Concord, CA) at 20 μm resolution. All images were acquired using standard parameters: X-ray source at 90 kV, power of 8.0 W, and current of 4.0 μA. Each scan consisted of 181 slices with an exposure time of 30 seconds per slice. The scans were analyzed using Tri/3D Bon (Ratoc System Engineering Co., Ltd., Tokyo, Japan) for parameters including total volume (TV), bone volume (BV), BV/TV, trabecular number (Tb N), trabecular thickness (Tb Th), trabecular separation (Tb Sp), and connectivity density (Conn Den).
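The classical plate-model relations (Parfitt histomorphometry nomenclature) connecting these μCT outputs are sketched below with hypothetical inputs. Dedicated packages such as Tri/3D Bon typically use direct 3D measurements, so this is only a back-of-the-envelope approximation.

```python
# A minimal sketch of the standard plate-model relations behind the uCT
# outputs. Inputs (bone volume, total volume, bone surface) are hypothetical.
def plate_model(bv: float, tv: float, bs: float):
    """bv, tv in mm^3; bs (bone surface) in mm^2."""
    bv_tv = bv / tv               # bone volume fraction (BV/TV)
    tb_th = 2.0 * bv / bs         # trabecular thickness (Tb Th), mm
    tb_n = bv_tv / tb_th          # trabecular number (Tb N), 1/mm
    tb_sp = (1.0 / tb_n) - tb_th  # trabecular separation (Tb Sp), mm
    return bv_tv, tb_th, tb_n, tb_sp

print(plate_model(bv=1.2, tv=8.0, bs=48.0))
# -> (0.15, 0.05, 3.0, ~0.283) for these example numbers
```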
### 2.6. Statistical Analysis
Results are expressed as mean ± SE. Data were analyzed with one-way ANOVA and the unpaired t-test using GraphPad Prism 4 (GraphPad Software Inc., San Diego, CA, USA). P<0.05 was considered significant. The Newman-Keuls multiple comparison test was used to analyze differences between groups for significance.
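A minimal sketch of this statistical workflow on hypothetical data follows. Since the Newman-Keuls test is not available in common Python statistics libraries, Tukey's HSD, a closely related multiple-comparison procedure, stands in for it here.

```python
# Hypothetical group data standing in for, e.g., final body weights (g).
import numpy as np
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
lc_s = rng.normal(22.9, 2.7, 10)
lc_o = rng.normal(27.4, 3.9, 11)
cq_s = rng.normal(26.4, 2.2, 11)
cq_o = rng.normal(29.7, 2.2, 11)

# One-way ANOVA across the four diet/surgery groups
f_stat, p_anova = f_oneway(lc_s, lc_o, cq_s, cq_o)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Unpaired t-test for a single pairwise contrast
t_stat, p_t = ttest_ind(lc_s, lc_o)
print(f"LC S vs LC O: t = {t_stat:.2f}, P = {p_t:.4f}")

# Post hoc multiple comparisons (Tukey HSD standing in for Newman-Keuls)
values = np.concatenate([lc_s, lc_o, cq_s, cq_o])
groups = np.array(["LC S"] * 10 + ["LC O"] * 11 + ["CQ S"] * 11 + ["CQ O"] * 11)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```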
## 3. Results
### 3.1. Effects on Body Weights and Organ Weights
Effects of CQ
Body weight (16%) and adipose tissue weight (148%) significantly increased in CQ S mice, when compared to that of LC S mice (Table 1). Uterus weight did not change between CQ S and LC S mice (Table 1). Body weight (8%) of CQ O mice increased significantly when compared to that of LC O mice. Liver weight increased in CQ S mice but this was not statistically significant when compared to that of LC S mice (Table 1). No significant changes were seen in the weights of the spleen and kidney (Table 1).

Table 1: Effects of Cissus quadrangularis on the body weight and weight of different organs of female ovariectomized C57BL/6 mice.

| Parameter | LC Sham | LC OVX | % difference (LC Sham versus LC OVX) | CQ Sham | CQ OVX | % difference (CQ Sham versus CQ OVX) |
| --- | --- | --- | --- | --- | --- | --- |
| Initial body weight (g) | 19.17 ± 0.42 | 19.26 ± 0.41 | — | 19.15 ± 0.45 | 19.25 ± 0.34 | — |
| Final body weight (g) | 22.86 ± 0.86^a | 27.42 ± 1.19 | 20↑ | 26.41 ± 0.65^b | 29.73 ± 0.66^a | 12↑ |
| Adipose tissue weight (g) | 0.45 ± 0.11^a | 1.14 ± 0.18 | 153↑ | 1.12 ± 0.09^b | 1.50 ± 0.10 | 34*↑ |
| Uterus weight (g) | 0.142 ± 0.011^a | 0.072 ± 0.011 | 51↓ | 0.144 ± 0.015 | 0.043 ± 0.008^c | 30*↓ |
| Liver weight (g) | 1.088 ± 0.043^a | 1.318 ± 0.050 | 21↑ | 1.211 ± 0.043 | 1.330 ± 0.050 | — |
| Spleen weight (g) | 0.086 ± 0.006^a | 0.150 ± 0.018 | 74↑ | 0.116 ± 0.011 | 0.096 ± 0.005^a | — |
| Kidney weight (g) | 0.267 ± 0.012 | 0.303 ± 0.015 | — | 0.275 ± 0.014 | 0.253 ± 0.009 | — |

Data are Mean ± SE. ^a P<0.05 versus LC O; ^b P<0.05 versus LC S; ^c P<0.05 versus CQ S; * P<0.05 versus LC; ↑ = increase; ↓ = decrease.

Effects of Ovariectomy
Lab chow groups. Body weight (20%) and adipose tissue weight (153%) increased significantly in LC O mice, while uterus weight (51%) decreased significantly, when compared to those of LC S mice (Table 1). Liver and spleen weights significantly increased in the LC O mice, by 21% and 74%, respectively, when compared to those of LC S mice (Table 1).

CQ groups. Body weight significantly increased in the CQ O group, although no significant differences were observed in adipose tissue weight between CQ S and CQ O mice (Table 1). Uterus weight (30%) decreased significantly in CQ O mice, when compared to that of CQ S mice (Table 1). No significant differences were observed in liver, spleen, and kidney weights (Table 1). The uterus was carefully examined for any changes, and none were visible.
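For clarity, the "% difference" columns in Tables 1 and 2 follow from simple relative change against the comparison group; the sketch below reproduces two Table 1 entries.

```python
# How the "% difference" columns are obtained: the change of one group
# relative to its comparison group, as a rounded percentage. Checked here
# against two final entries of Table 1.
def pct_difference(reference: float, value: float) -> int:
    """Percent change of `value` relative to `reference`, rounded."""
    return round(100.0 * (value - reference) / reference)

print(pct_difference(22.86, 27.42))  # final body weight, LC Sham -> LC OVX: 20 (20↑)
print(pct_difference(0.45, 1.14))    # adipose weight, LC Sham -> LC OVX: 153 (153↑)
```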
### 3.2. Effects on Serum Bone Biochemical Parameters, Proinflammatory Cytokines, and Leptin
#### 3.2.1. Bone Biochemical Markers
Effects of CQ
Serum P1NP levels (45%) decreased significantly in CQ S mice, when compared to that of LC S mice (Table 2). No significant differences were observed between CQ and LC mice with respect to Trap5b and ALP levels (Table 2).

Table 2: Effects of Cissus quadrangularis on the biochemical markers of bone turnover, proinflammatory cytokines, and leptin in the serum of female ovariectomized C57BL/6 mice.

| Parameter | LC Sham | LC OVX | % difference (LC Sham versus LC OVX) | CQ Sham | CQ OVX | % difference (CQ Sham versus CQ OVX) |
| --- | --- | --- | --- | --- | --- | --- |
| P1NP (ng/mL) | 13.23 ± 1.17^a | 10.44 ± 0.18 | 21↓ | 7.29 ± 0.85^b | 6.55 ± 0.57 | 10↓ |
| Alkaline phosphatase (U/L) | 1.95 ± 0.24^a | 0.93 ± 0.18 | 48↓ | 1.60 ± 0.09 | 2.09 ± 0.57 | 6 |
| Osteocalcin (ng/mL) | 45.78 ± 1.11 | 51.82 ± 7.74 | — | 40.45 ± 2.27 | 34.53 ± 4.49 | — |
| Trap5b (U/L) | 13.53 ± 0.87 | 12.39 ± 0.65 | — | 12.92 ± 0.56 | 13.23 ± 1.46 | — |
| IL-1β (pg/mL) | 418 ± 143 | 580 ± 127 | 39↑ | 66 ± 8^b | 278 ± 30^d | 322*↑ |
| IL-6 (pg/mL) | 818 ± 152 | 888 ± 202 | 9↑ | 387 ± 35^b | 547 ± 77^a | 41↑ |
| TNF-α (pg/mL) | 1.47 ± 0.43 | 2.12 ± 0.45 | 44↑ | 0.76 ± 0.21 | 0.69 ± 0.08^a | 13↓ |
| Leptin (pg/mL) | 169 ± 27^a | 1100 ± 85 | 550↑ | 621 ± 115^b | 1321 ± 147^c | 112↑ |

Data are Mean ± SE. ^a P<0.05 versus LC O; ^b P<0.05 versus LC S; ^c P<0.05 versus CQ S; ^d P<0.08 versus LC O; * P<0.05 versus LC; ↑ = increase; ↓ = decrease.

Effects of Ovariectomy
Lab chow groups. P1NP (21%) and ALP (48%) levels decreased significantly in LC O mice, when compared to those of LC S mice (Table 2). Trap5b levels were not significantly different between LC S and LC O mice (Table 2).
CQ groups. No significant differences were observed in serum P1NP, ALP and Trap5b levels between CQ S and CQ O mice (Table 2).
#### 3.2.2. Proinflammatory Cytokines
Effects of CQ
TNF-α (48%), IL-1β (84%), and IL-6 (53%) levels decreased in the CQ S mice when compared to those of LC S mice (Table 2).

Effects of Ovariectomy
Lab chow groups. Although there was an increase in the cytokines measured in the LC O mice, these increases were not statistically significant between the different groups (Table 2).
CQ groups. No significant differences were observed between CQ S and CQ O mice with respect to proinflammatory cytokines (Table 2).
#### 3.2.3. Leptin Levels
Effects of CQ
Leptin levels (450%) increased significantly in CQ S mice when compared to that of LC S mice (Table 2).

Effects of Ovariectomy
Lab chow groups. Leptin levels (550%) increased in the LC O mice, when compared to that of LC S mice (Table 2).
CQ groups. Leptin levels (112%) increased significantly in the CQ O mice when compared to that of CQ S mice (Table 2).
### 3.3. pQCT Densitometry
#### 3.3.1. Distal Femoral Metaphysis (DFM)
Effects of CQ
CQ S did not change any of the parameters studied when compared to those of LC S fed mice in the distal femoral metaphysis (Figure 1). However, CQ O mice had significantly higher Cn BMC (8%), Cn BMD (34%), Ct BMC (32%), Ct BMD (8%), and Ct Th (30%) when compared to those of LC O mice (Figures 1(A)–1(D) and 2(A)). Endo PM (4%) decreased significantly in CQ O mice when compared to that of LC O mice (Figure 2(C)).

Figure 1: Effects of Cissus quadrangularis on the cancellous and cortical bone parameters of the distal femoral metaphysis and femoral diaphysis of C57BL/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) cancellous BMC; (B) cancellous BMD; (C) cortical BMC; (D) cortical BMD. Black bars represent distal femoral metaphysis; white bars represent femoral diaphysis. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. ^a P<0.05 versus LC O.

Figure 2: Effects of Cissus quadrangularis on the cortical bone thickness and perimeters of the distal femoral metaphysis and femoral diaphysis of C57BL/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) cortical thickness; (B) periosteal perimeter; (C) endocortical perimeter. Black bars represent distal femoral metaphysis; white bars represent femoral diaphysis. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. ^a P<0.05 versus LC O.

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Cn BMC (34%), Cn BMD (36%), Ct BMC (36%), Ct BMD (4%), and Ct Th (26%), and increased Endo PM (6%), when compared to those of LC S mice (Figures 1(A)–1(D), 2(A), and 2(C)).
CQ groups. No significant differences were observed in Cn BMC, Cn BMD, Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 1(A)–1(D) and 2(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 2(C)).
#### 3.3.2. Femoral Diaphysis (FD)
Effects of CQ
In FD, CQ did not change any of the parameters studied, when compared to those of LC S mice (Figures 1(C), 1(D), and 2(A)–2(C)). CQ O mice had significantly higher Ct Th, when compared to those of LC O mice (Figure 2(A)).

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Ct BMC (19%), Ct BMD (8%), and Ct Th (15%), and increased Endo PM (5%), when compared to those of LC S mice (Figures 1(C), 1(D), 2(A), and 2(C)).
CQ groups. No significant differences were observed in the Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 1(A)–1(D) and 2(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 2(C)).
#### 3.3.3. Proximal Tibial Metaphysis (PTM)
Effects of CQ
CQ did not change any of the parameters studied when compared to those of LC S fed mice in the proximal tibial metaphysis (Figures 3(A)–3(D) and 4(A)–4(C)). CQ O mice had significantly higher Ct BMC (48%), Ct BMD (2%), and Ct Th (42%) when compared to those of LC O mice (Figures 3(C)–3(D) and 4(A)). Endo PM decreased in CQ O mice when compared to that of LC O mice, but this decrease was not statistically significant (Figure 4(C)).

Figure 3: Effects of Cissus quadrangularis on the cancellous and cortical bone parameters of the proximal tibial metaphysis and tibia fibula junction of C57BL/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) cancellous BMC; (B) cancellous BMD; (C) cortical BMC; (D) cortical BMD. Black bars represent proximal tibial metaphysis; white bars represent tibia fibula junction. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. ^a P<0.05 versus LC O; ^b P<0.05 versus CQ S; ^c P<0.05 versus LC S.

Figure 4: Effects of Cissus quadrangularis on the cortical bone thickness and perimeters of the proximal tibial metaphysis and tibia fibula junction of C57BL/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) cortical thickness; (B) periosteal perimeter; (C) endocortical perimeter. Black bars represent proximal tibial metaphysis; white bars represent tibia fibula junction. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. ^a P<0.05 versus LC O; ^b P<0.05 versus LC S.

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Cn BMC (32%), Cn BMD (21%), Ct BMC (59%), Ct BMD (2%), Ct Th (65%), and Peri PM (3%), and increased Endo PM (7%), when compared to those of LC S mice (Figures 3(A)–3(D) and 4(A)–4(C)).
CQ groups. Although Cn BMC and Cn BMD decreased in CQ O mice, when compared to those of CQ S mice, these decreases were not statistically significant (Figures 3(A) and 3(B)). Ct BMC (34%), Ct BMD (3%), and Ct Th (30%) levels significantly decreased in CQ O mice when compared to those of CQ S mice (Figures 3(C), 3(D), and 4(A)). Endo PM increased in CQ O mice, but this increase was not statistically significant when compared to that of LC O mice (Figure 4(C)).
#### 3.3.4. Tibia Fibular Junction (TF)
Effects of CQ
CQ did not change any of the parameters studied when compared to those of LC fed mice in the tibia fibular junction of sham and ovariectomized mice, except for Endo PM, which significantly decreased in CQ S mice when compared to that of LC S mice (Figures 3(C), 3(D), and 4(A)–4(C)).

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Ct BMC (13%), Ct BMD (6%), and Ct Th (11%) when compared to those of LC S mice (Figures 3(C), 3(D), and 4(A)). Endo PM in LC O mice showed a small change (−2%) that was not statistically significant (Figure 4(C)).
CQ groups. No significant differences were observed in the Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 3(C), 3(D), and 4(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 4(C)).
### 3.4. μCT Densitometry
Effects of CQ
CQ treatment did not significantly change the trabecular number, trabecular thickness, connectivity density, or BV/TV (Figure 5), but trabecular separation (65%) significantly decreased in CQ S mice when compared to that of LC S mice (Figure 5). There was a 45% increase in Tb N and a 28% increase in connectivity density in CQ S mice, but these increases were not statistically significant when compared to those of LC S mice (Figure 5). CQ O mice had significantly higher trabecular number (353%) and connectivity density (363%) and lower trabecular separation (28%), when compared to those of LC O mice (Figure 5).

Figure 5: Effects of Cissus quadrangularis on the static histomorphometry parameters of the distal femoral metaphysis of C57BL/6 mice after ovariectomy using μCT. Data are Mean ± SE. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. ^a P<0.05 versus LC O; ^b P<0.05 versus LC S; ^c P<0.05 versus CQ S; * P<0.05 versus LC. The figures are representative of the mean values that were obtained.

Effects of Ovariectomy
Lab chow groups. In LC O mice, Tb N (65%), connectivity density (92%), and BV/TV (69%) decreased significantly, when compared to those of LC S mice (Figure 5). Although Tb Th decreased by 20%, this decrease was not statistically significant.
CQ groups. In CQ O mice, Tb Sp (73%) significantly increased, and BV/TV (48%) significantly decreased when compared to those of CQ S mice (Figure 5).
## 4. Discussion
Cissus quadrangularis belongs to the Vitaceae family and is found in South East Asia, where it is edible and used as a vegetable. The plant has been used since ancient times to enhance fracture healing and has several other health benefits, including anti-inflammatory [8], antiglucocorticoid [10], antidiabetic [11], antibacterial [5, 21], and antioxidant properties [9]. It contains triterpenoids [22, 23], steroids [22, 24], stilbenes [25], flavonoids [13], lipids [13], and several catalpols [13]. Interest in natural products for the treatment and prevention of disease is slowly growing in the quest to minimize the severe side effects that existing drugs can cause, and the WHO has endorsed the safe and effective use of such medicines [26].

We studied the effects of dried CQ powder (stems and leaves) in an animal model of postmenopausal bone loss. Although CQ by itself did not increase bone mass, we observed that it decreased bone loss in the distal femoral metaphysis and proximal tibial metaphysis, regions of the long bones that contain both cancellous and cortical bone. Loss of cancellous bone from these regions is typical after ovariectomy and menopause, mainly because endocortical resorption is stimulated. Bone protection in the distal femur and proximal tibia is mainly due to decreased bone resorption at the endocortical bone surface and preservation of trabecular microarchitecture. Cancellous bone at the femur also showed higher trabecular number, connectivity density, and BV/TV in CQ O mice, suggesting that bone resorption is decreased considerably. This is supported by the well-preserved trabecular morphology in the CQ O mice (Figure 5), and the data are in line with reports using CQ ethanol extracts: when young Wistar rats were fed CQ ethanol extracts for three months after ovariectomy, there was restoration of architecture and increased biomechanical properties in the femur [16]. However, this is the first report to show the effects of CQ on bone using densitometric morphometric analyses, including actual BMD and BMC values for the different bone sites. Moreover, we tested several bone sites (femoral and tibial metaphysis as well as femoral and tibial diaphysis), as it is well known that different bone sites do not react to treatment regimens in the same manner [27].

We measured levels of several serum biochemical markers to determine the influence of CQ on the state of bone turnover in these mice. P1NP and ALP were decreased in LC O mice, as expected. With CQ treatment, P1NP was decreased in both sham and OVX mice, which suggests that CQ may be altering the processing of procollagen to collagen; however, there was no difference in P1NP levels between the CQ S and CQ O groups. The ALP measured was total ALP, an indirect marker of increased osteoblast activity, and with CQ treatment, especially in the OVX group, there was increased activity, in line with reports of increased ALP activity in bone marrow cells of rats treated with CQ [18]. We also measured osteocalcin as a direct marker of osteoblast activity and Trap5b as a marker of osteoclast activity. There were no statistical differences in the levels of either of these markers in any of the groups studied; therefore, CQ does not alter the levels of these markers. Based on these results, we suggest that CQ does not change the bone turnover rate in these mice.

The mechanism(s) by which CQ inhibits bone loss is yet to be fully studied.
As a preliminary investigation, we measured a few proinflammatory cytokines. Certain proinflammatory cytokines such as TNF-α, IL-1, IL-6, and IL-11 play a critical role in the bone remodeling process [28, 29], mainly by activating osteoclasts and increasing bone resorption [30–32]. While IL-1 activates NF-κB and MAPKs through TRAF-6, it may also induce PGE2 and the expression of RANKL in osteoblasts [28]. IL-6 is produced by osteoblasts and stimulates the formation of osteoclasts [33]. Interestingly, IL-6 knockout mice do not lose cancellous bone after ovariectomy [33]. In our study, significant decreases in the serum levels of IL-1 (84%), IL-6 (53%), and TNF-α (40%) were observed with the CQ diet. Even in CQ O mice, the levels of IL-6 and TNF-α were significantly lower than those of the LC O mice, and IL-1 levels decreased by 52%. Our results clearly show that CQ alters proinflammatory cytokines, and this may be one of the major pathways used to reduce bone resorption, the characteristic response to ovariectomy.

Based on the literature, CQ has also been used to induce weight loss [6, 34], but in our mice we noticed increased body weight and peritoneal fat. Both studies that reported weight-reducing benefits of CQ were in obese humans. The major difference between those reports and our study is that they used already obese patients, while the mice in our study were not obese to begin with. It may be that a stimulus (obesity-linked proteins) is required for CQ to reduce fat mass, or the increase in fat mass may be specific to mice and not occur in humans. We measured the levels of leptin, a hormone derived from adipose tissue that plays a key role in regulating energy intake and expenditure and also influences bone formation as well as bone resorption [35]. At lower than physiological concentrations, leptin stimulates bone formation and can probably induce apoptosis of osteoclasts [36]. We were not surprised to find increased circulating levels of leptin together with bone loss in the LC O mice, but CQ protected the long bones, suggesting that CQ blocks the bone-resorbing action of leptin. It will be interesting to see whether CQ has any influence on the adipocytes found in the bone marrow: if CQ reduces bone marrow adipocytes, the bone-resorbing properties of leptin will be reduced, since local leptin produced by increased adipocytes in the bone marrow increases bone loss [37]. Therefore, the leptin-CQ interaction needs to be further studied to determine the mechanism by which CQ beneficially influences bone through leptin.

CQ may primarily attenuate bone resorption in OVX mice through the downregulation of proinflammatory cytokines, but this does not rule out the possibility that it also acts through other pathways. There are reports that CQ enhances bone mineralization by accumulating mucopolysaccharides at the site of bone formation [14]. Moreover, CQ is reported to increase calcium uptake and the mechanical properties of bone in rats [15]. Phytochemical analyses of CQ show the presence of high levels of calcium, vitamin C, β-carotene [38, 39], and flavonoids [25]; some of these substances have established beneficial properties on bone. In vitro studies have shown that ethanolic extracts of CQ increased mRNA and proteins related to the bone formation pathway as well as IGF-I, IGF-II, and IGF binding protein [40, 41]. More investigations are necessary to elucidate the mechanism(s) by which CQ influences bone metabolism.
However, it is very encouraging to note that studies using very high doses of CQ (5000 mg/kg body weight) [9] have not reported any toxic side effects. In the present study, we used only 500 mg/kg body weight of CQ and observed that liver, spleen, and kidney weights were not altered significantly, suggesting that CQ may not have any severe side effects.
## 5. Conclusions
We conclude that CQ can reduce OVX-induced bone loss and that it does so in the long bones in a site-specific manner, with greater effects on the cancellous bone of the femur followed by the tibia. CQ probably reduces bone resorption primarily by downregulating proinflammatory cytokines that are often increased after ovariectomy. The beneficial effects of CQ are probably due to the flavonoids present. Although the mechanism(s) by which CQ attenuates ovariectomy-induced bone loss remains to be studied, CQ, being an edible plant with a history of medicinal use, especially in healing bone fractures, may be a good supplement to existing medication for the reversal of postmenopausal bone loss.
---
*Source: 101206-2012-06-21.xml* | 101206-2012-06-21_101206-2012-06-21.md | 56,702 | Inhibition of Bone Loss byCissus quadrangularis in Mice: A Preliminary Report | Jameela Banu; Erika Varela; Ali N. Bahadur; Raheela Soomro; Nishu Kazi; Gabriel Fernandes | Journal of Osteoporosis
(2012) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101206 | 101206-2012-06-21.xml | ---
## Abstract
Women drastically loose bone during and after menopause leading to osteoporosis, a disease characterized by low bone mass increasing the risk of fractures with minor trauma. Existing therapies mainly reduce bone resorption, however, all existing drugs have severe side effects. Recently, the focus is to identify alternative medicines that can prevent and treat osteoporosis with minimal or no side effects. We usedCissus quadrangularis (CQ), a medicinal herb, to determine its effects on bone loss after ovariectomy in C57BL/6 mice. Two-month old mice were either sham operated or ovariectomized and fed CQ diet. After eleven weeks, mice were sacrificed and the long bones scanned using pQCT and μCT. In the distal femoral metaphysis, femoral diaphysis, and proximal tibia, control mice had decreased cancellous and cortical bone, while CQ-fed mice showed no significant differences in the trabecular number, thickness, and connectivity density, between Sham and OVX mice, except for cortical bone mineral content in the proximal tibia. There were no changes in the bone at the tibio-fibular junction between groups. We conclude that CQ effectively inhibited bone loss in the cancellous and cortical bones of femur and proximal tibia in these mice.
---
## Body
## 1. Introduction
Osteoporosis is a disease associated with aging that causes fragility of bones making them susceptible to fractures with minor trauma. Bone is a dynamic organ that undergoes lifelong changes by bone remodeling using specialized cells and is the predominant process after attaining peak bone mass around the third decade. Remodeling is an essential process for maintaining the skeleton by repairing any damaged portions and removal of old bone as well as for discharging calcium and phosphorus from bone stores to maintain ionic homeostasis in the body. An imbalance in the process of bone remodeling where there is increased bone resorption alone or in combination with decreased bone formation results in net loss of bone. After attaining peak bone mass during the third decade, humans start losing bones at the rate of 0.6 to 1% every year for the rest of their lives. In case of women, during menopausal age, there is drastic loss of bone.Treating and/or preventing bone loss can be focused on overall reduction of bone resorption and/or increasing bone formation. Currently, there are several different groups of agents that are used to treat and prevent osteoporosis. But the risks of side effects caused by these drugs are severe [1–4]. This has, recently, led to the search for alternative medicines to treat and prevent osteoporosis.Different cultures around the world have used herbs for thousands of years to treat several health conditions. One of the herbs that have shown beneficial effects on bone belongs to theCissus family of plants. Cissus quadrangularis (CQ) is a medicinal herb used in Siddha and Ayurvedic medicine since ancient times in Asia, as a general tonic and analgesic, especially for bone fracture healing [5]. Recently, CQ has been linked to several health benefits such as antiobesity [6], reduction of proinflammatory cytokines [7], anti-inflammatory [8], antioxidant [9], antiglucocorticoid [10], and antidiabetic properties [11].As early as in the 1960’s, CQ was used to determine its beneficial effects on bone fracture healing in young rats and it has been reported that CQ significantly enhances fracture healing process [12]. The study has further demonstrated that in the presence of CQ, bone mineralization takes place much earlier, when compared to that seen in its absence [13]. During bone mineralization, accumulation of mucopolysaccharides precedes the actual mineralization process and CQ increased mucopolysaccharides at the site of fracture [14]. Udupa and Prasad [15] have reported that CQ hastens fractures by reducing the total convalescent period by 33% in experimental animals when compared to those of the controls. CQ also increased calcium uptake and mechanical properties of bone in rats when compared to that of the controls [15]. More recently, Shirwaikar et al. [16] have demonstrated that the mechanical strength of bones in ovariectomized rats increased, significantly, in the long bones and lumbar vertebra. Petroleum ether extracts of CQ stimulated osteoblastogenesis and mineralization in bone marrow mesenchymal cells and murine osteoblastic cell lines [17, 18]. In the present study, we determined the effects of CQ on postmenopausal bone loss in the long bones of female mice. We tested the bones using peripheral quantitative computed tomography (pQCT) and microcomputed tomography (μCT). 
With pQCT, we determined the effects of CQ on the two envelopes (periosteal and endocortical) and the BMD and BMC measurements, while with μCT, we determined the microarchitecture of the cancellous bone in the distal femoral metaphysis. Both techniques give different measurements and they complement each other. We also tested some bone biochemical markers and proinflammatory cytokines in the serum.
## 2. Materials and Methods
### 2.1. Animals
Weanling C57BL/6 female mice were obtained from Jackson Laboratory (Bar Harbor, ME) and maintained in our laboratory animal facility. When mice were eight weeks of age, they were either sham operated or ovariectomized. After one month, mice were divided into the following groups: Group (1) Lab chow sham (LC S), (n=10); Group (2) Lab chow ovariectomy (LC O), (n=11); Group (3) Cissus quadrangularis sham (CQ S), (n=11); Group (4) CQ ovariectomy (CQ O), (n=11) and fed the respective diets. Mice were maintained in the dietary regimens for eleven weeks and sacrificed. CQ was purchased from 1fast400 (Northborough, MA) in powder form and mixed with modified AIN-93 diet at a concentration of 500 mg/kg b wt. Mice were weighed regularly. Blood was collected by retro-orbital bleeding from anesthetized mice and tibia and femur were removed and stored for pQCT and μCT densitometry. All animal procedures were done according to the UT Health Science Center at San Antonio IACUC guidelines.
### 2.2. Measurement of Body and Organ Weights
Mice were weight matched at the beginning of the treatment using a CS 200 (Ohaus, Pine Brook, NJ, USA) balance. At the time of sacrifice, body weight was recorded. Uterus, peritoneal adipose tissue, liver, spleen, and kidneys were carefully dissected out and weighed using a Mettler Balance (Columbus, OH, USA).
### 2.3. Collection of Blood Serum Biochemical Markers, Proinflammatory Cytokine Assays and Leptin
Blood was collected by retro-orbital bleeding from anesthetized mice and serum was obtained by centrifugation at 300× g for 15 minutes at 4°C. Procollagen type 1 amino terminal propeptide (P1NP), Tartrate Resistant Acid Phosphatase (Trap5b) and ALP levels in the serum were measured using Rat/Mouse P1NP EIA kit (IDS, Fountain Hills, CA, USA), Mouse Trap EIA kit (IDS, Fountain Hills, CA, USA), and Quantichrome ALP assay kit (Bioassay systems, Hayward, CA, USA), respectively. Osteocalcin was measured using IRMA kit (Alpco Diagnostics, Salem, NH, USA). TNF-α, IL-1, and IL-6 were assayed using OptiEIA kits (BD Biosciences Pharmingen, San Diego, CA, USA). Serum leptin was measured using ELISA kit (Diagnostic systems laboratory, Webster, TX, USA).
### 2.4. Peripheral Quantitative Computerized Tomography Densitometry (pQCT)
Cortical and cancellous bone of the distal femoral metaphysis (DFM) and proximal tibial metaphysis (PTM), and pure cortical bone at the femoral diaphysis (FD) and tibia fibula junction (TF), were analyzed by pQCT densitometry using an XCT Research M system (Norland Stratec, Birkenfeld, Germany), as described previously [19, 20]. At the PTM and DFM, both the cancellous bone and the cortical bone surrounding it were scanned and analyzed: 5 slices were scanned including the growth plate, and one slice, 1 mm distal to the knee joint (PTM) or 1 mm proximal to the knee joint (DFM), was analyzed. The following parameters were determined for both sites: cancellous bone mineral content (Cn BMC), cancellous bone mineral density (Cn BMD), cortical bone area (Ct Ar), cortical BMC (Ct BMC), cortical BMD (Ct BMD), cortical thickness (Ct Th), periosteal perimeter (Peri PM), and endocortical perimeter (Endo PM). One slice was scanned at the FD (mid-diaphysis) and at the TF junction to measure pure cortical bone, and the following parameters were determined: Ct Ar, Ct BMC, Ct BMD, Ct Th, Peri PM, and Endo PM.
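Under the simplifying assumption that the periosteal and endocortical surfaces are concentric circles (a common ring model, not necessarily the algorithm used by the XCT software), Ct Th can be derived directly from the two perimeters. A minimal Python sketch with illustrative numbers:

```python
import math

def cortical_thickness_ring_model(peri_pm_mm: float, endo_pm_mm: float) -> float:
    """Cortical thickness (mm) from periosteal and endocortical perimeters,
    assuming concentric circular surfaces (ring model)."""
    r_peri = peri_pm_mm / (2 * math.pi)  # outer (periosteal) radius
    r_endo = endo_pm_mm / (2 * math.pi)  # inner (endocortical) radius
    return r_peri - r_endo

# Illustrative perimeters only, not measured values from this study:
print(f"Ct Th = {cortical_thickness_ring_model(5.0, 3.5):.3f} mm")  # ~0.239 mm
```

This also makes the reported direction of effects intuitive: a smaller Endo PM at a fixed Peri PM implies a thicker cortex.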
### 2.5. Micro Computerized Tomography (μCT)
Scans of the DFM were done using a high-resolution Xradia μCT 200 scanner (Xradia, Inc., Concord, CA) at 20 microns. All images were acquired using standard parameters: an X-ray source of 90 kV, power of 8.0 W, and current of 4.0 μA. Each scan consisted of 181 slices with an exposure time of 30 seconds per slice. The scans were analyzed using Tri/3D Bon (Ratoc System Engineering Co., Ltd., Tokyo, Japan) for parameters including total volume (TV), bone volume (BV), BV/TV, trabecular number (Tb N), trabecular thickness (Tb Th), trabecular separation (Tb Sp), and connectivity density (Conn Den).
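The cancellous parameters listed above follow conventional definitions. As a minimal sketch (not a reimplementation of Tri/3D Bon, whose exact algorithms are not described here), BV/TV can be computed from a segmented binary volume, and Tb Th, Tb N, and Tb Sp can be approximated with the classical plate-model formulas; the voxel-face surface estimate below is deliberately crude:

```python
import numpy as np

def cancellous_indices(mask: np.ndarray, voxel_mm: float = 0.020):
    """Conventional cancellous indices from a binary uCT volume.
    mask: 3D boolean array, True = bone voxel; voxel_mm: isotropic
    voxel size (20 microns, matching the scan resolution above)."""
    tv = mask.size * voxel_mm ** 3        # total volume (mm^3)
    bv = int(mask.sum()) * voxel_mm ** 3  # bone volume (mm^3)
    bv_tv = bv / tv

    # Approximate bone surface by counting exposed voxel faces.
    faces = 0
    for axis in range(3):
        faces += np.count_nonzero(np.diff(mask.astype(np.int8), axis=axis))
    bs = faces * voxel_mm ** 2            # surface area (mm^2)

    tb_th = 2.0 * bv / bs                 # plate model: Tb Th = 2*BV/BS (mm)
    tb_n = bv_tv / tb_th                  # Tb N = (BV/TV)/Tb Th (1/mm)
    tb_sp = 1.0 / tb_n - tb_th            # Tb Sp = 1/Tb N - Tb Th (mm)
    return bv_tv, tb_th, tb_n, tb_sp
```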
### 2.6. Statistical Analysis
Results are expressed as Mean ± SE. Data were analyzed with one-way ANOVA and unpaired t-tests using GraphPad Prism 4 (GraphPad Software Inc., San Diego, CA, USA). P<0.05 was considered significant. The Newman-Keuls multiple comparison test was used to analyze the differences between groups for significance.
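For readers reproducing the statistics outside GraphPad Prism, the sketch below shows an analogous workflow in Python with hypothetical group data. Note that SciPy has no Newman-Keuls test, so Tukey's HSD (a closely related, somewhat more conservative multiple-comparison procedure) stands in for the post hoc step:

```python
import numpy as np
from scipy import stats  # requires SciPy >= 1.8 for tukey_hsd

# Hypothetical final body weights (g); the real data appear in Table 1.
lc_s = np.array([22.1, 23.5, 21.9, 23.0, 22.8])
lc_o = np.array([27.0, 28.1, 26.5, 27.9, 27.6])
cq_s = np.array([26.0, 26.8, 25.9, 26.7, 26.6])
cq_o = np.array([29.2, 30.1, 29.5, 30.0, 29.8])

# One-way ANOVA across the four diet/surgery groups.
f_stat, p_value = stats.f_oneway(lc_s, lc_o, cq_s, cq_o)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Pairwise post hoc comparisons (Tukey's HSD as a Newman-Keuls stand-in).
print(stats.tukey_hsd(lc_s, lc_o, cq_s, cq_o))
```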
## 3. Results
### 3.1. Effects on Body Weights and Organ Weights
Effects of CQ
Body weight (16%) and adipose tissue weight (148%) significantly increased in CQ S mice, when compared to that of LC S mice (Table 1). Uterus weight did not change between CQ S and LC S mice (Table 1). Body weight (8%) of CQ O mice increased significantly when compared to that of LC O mice. Liver weight increased in CQ S mice but this was not statistically significant when compared to that of LC S mice (Table 1). No significant changes were seen in the weights of the spleen and kidney (Table 1).

Table 1
Effects of Cissus quadrangularis on the body weight and weight of different organs of female ovariectomized C57BL/6 mice.

| Parameter | LC Sham | LC OVX | % difference (LC Sham versus LC OVX) | CQ Sham | CQ OVX | % difference (CQ Sham versus CQ OVX) |
|---|---|---|---|---|---|---|
| Initial body weight (g) | 19.17 ± 0.42 | 19.26 ± 0.41 | | 19.15 ± 0.45 | 19.25 ± 0.34 | |
| Final body weight (g) | 22.86 ± 0.86 a | 27.42 ± 1.19 | 20↑ | 26.41 ± 0.65 b | 29.73 ± 0.66 a | 12↑ |
| Adipose tissue weight (g) | 0.45 ± 0.11 a | 1.14 ± 0.18 | 153↑ | 1.12 ± 0.09 b | 1.50 ± 0.10 | 34*↑ |
| Uterus weight (g) | 0.142 ± 0.011 a | 0.072 ± 0.011 | 51↓ | 0.144 ± 0.015 | 0.043 ± 0.008 c | 30*↓ |
| Liver weight (g) | 1.088 ± 0.043 a | 1.318 ± 0.050 | 21↑ | 1.211 ± 0.043 | 1.330 ± 0.050 | |
| Spleen weight (g) | 0.086 ± 0.006 a | 0.150 ± 0.018 | 74↑ | 0.116 ± 0.011 | 0.096 ± 0.005 a | |
| Kidney weight (g) | 0.267 ± 0.012 | 0.303 ± 0.015 | | 0.275 ± 0.014 | 0.253 ± 0.009 | |

Data are Mean ± SE. a: P<0.05 versus LC O; b: P<0.05 versus LC S; c: P<0.05 versus CQ S; *: P<0.05 versus LC; ↑ = increase; ↓ = decrease.

Effects of Ovariectomy
Lab chow groups. Body weight (20%) and adipose tissue weight (153%) increased significantly in LC O mice, while uterus weight (51%) decreased significantly, when compared to those of LC S mice (Table 1). Liver and spleen weights increased significantly in the LC O mice, by 21% and 74%, respectively, when compared to those of LC S mice (Table 1).
CQ groups. Body weight increased significantly in the CQ O group, although no significant differences were observed in adipose tissue weight between CQ S and CQ O mice (Table 1). Uterus weight (30%) decreased significantly in CQ O mice, when compared to that of CQ S mice (Table 1). No significant differences were observed in liver, spleen, or kidney weights (Table 1). The uterus was carefully examined, and no visible changes were seen.
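For clarity, the "% difference" columns in Tables 1 and 2 express the change of the second group relative to the first; a short Python check reproduces the final-body-weight entry for the lab chow groups:

```python
def percent_difference(reference: float, value: float) -> float:
    """Percent change of value relative to reference, as in the tables."""
    return (value - reference) / reference * 100.0

# LC Sham (22.86 g) versus LC OVX (27.42 g) final body weight:
print(round(percent_difference(22.86, 27.42)))  # 20, reported as "20↑"
```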
### 3.2. Effects on Serum Bone Biochemical Parameters, Proinflammatory Cytokines, and Leptin
#### 3.2.1. Bone Biochemical Markers
Effects of CQ
Serum P1NP levels (45%) decreased significantly in CQ S mice, when compared to that of LC S mice (Table 2). No significant differences were observed between CQ and LC mice with respect to Trap5b and ALP levels (Table 2).

Table 2
Effects of Cissus quadrangularis on the biochemical markers of bone turnover, proinflammatory cytokines, and leptin in the serum of female ovariectomized C57BL/6 mice.

| Parameter | LC Sham | LC OVX | % difference (LC Sham versus LC OVX) | CQ Sham | CQ OVX | % difference (CQ Sham versus CQ OVX) |
|---|---|---|---|---|---|---|
| P1NP (ng/mL) | 13.23 ± 1.17 a | 10.44 ± 0.18 | 21↓ | 7.29 ± 0.85 b | 6.55 ± 0.57 | 10↓ |
| Alkaline phosphatase (U/L) | 1.95 ± 0.24 a | 0.93 ± 0.18 | 48↓ | 1.60 ± 0.09 | 2.09 ± 0.57 | 6 |
| Osteocalcin (ng/mL) | 45.78 ± 1.11 | 51.82 ± 7.74 | | 40.45 ± 2.27 | 34.53 ± 4.49 | |
| Trap5b (U/L) | 13.53 ± 0.87 | 12.39 ± 0.65 | | 12.92 ± 0.56 | 13.23 ± 1.46 | |
| IL-1β (pg/mL) | 418 ± 143 | 580 ± 127 | 39↑ | 66 ± 8 b | 278 ± 30 d | 322*↑ |
| IL-6 (pg/mL) | 818 ± 152 | 888 ± 202 | 9↑ | 387 ± 35 b | 547 ± 77 a | 41↑ |
| TNF-α (pg/mL) | 1.47 ± 0.43 | 2.12 ± 0.45 | 44↑ | 0.76 ± 0.21 | 0.69 ± 0.08 a | 13↓ |
| Leptin (pg/mL) | 169 ± 27 a | 1100 ± 85 | 550↑ | 621 ± 115 b | 1321 ± 147 c | 112↑ |

Data are Mean ± SE. a: P<0.05 versus LC O; b: P<0.05 versus LC S; c: P<0.05 versus CQ S; d: P<0.08 versus LC O; *: P<0.05 versus LC; ↑ = increase; ↓ = decrease.

Effects of Ovariectomy
Lab chow groups. P1NP (21%) and ALP (48%) levels decreased significantly in LC O mice, when compared to those of LC S mice (Table 2). Trap5b levels were not significantly different between LC S and LC O mice (Table 2).
CQ groups. No significant differences were observed in serum P1NP, ALP and Trap5b levels between CQ S and CQ O mice (Table 2).
#### 3.2.2. Proinflammatory Cytokines
Effects of CQ
TNF-α (48%), IL-1β (84%), and IL-6 (53%) levels decreased in the CQ S mice when compared to those of LC S mice (Table 2).

Effects of Ovariectomy
Lab chow groups. Although the measured cytokines increased in the LC O mice, these increases were not statistically significant between the groups (Table 2).
CQ groups. No significant differences were observed between CQ S and CQ O mice with respect to pro-inflammatory cytokines (Table 2).
#### 3.2.3. Leptin Levels
Effects of CQ
Leptin levels (450%) increased significantly in CQ S mice when compared to that of LC S mice (Table 2).

Effects of Ovariectomy
Lab chow groups. Leptin levels (550%) increased in the LC O mice, when compared to that of LC S mice (Table 2).
CQ groups. Leptin levels (112%) increased significantly in the CQ O mice when compared to that of CQ S mice (Table 2).
### 3.3. pQCT Densitometry
#### 3.3.1. Distal Femoral Metaphysis (DFM)
Effects of CQ
CQ S did not change any of the parameters studied when compared to those of LC S fed mice in the distal femoral metaphysis (Figure 1). But CQ O mice had significantly higher Cn BMC (8%), Cn BMD (34%), Ct BMC (32%), Ct BMD (8%), and Ct Th (30%) when compared to those of LC O mice (Figures 1(A)–1(D) and 2(A)). Endo PM (4%) decreased significantly in CQ O mice when compared to that of LC O mice (Figure 2(C)).

Figure 1
Effects of Cissus quadrangularis on the cancellous and cortical bone parameters of the distal femoral metaphysis and femoral diaphysis of C57Bl/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) Cancellous BMC; (B) cancellous BMD; (C) cortical BMC; (D) cortical BMD. Black bars represent distal femoral metaphysis; white bars represent femoral diaphysis. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. a: P<0.05 versus LC O.

Figure 2
Effects of Cissus quadrangularis on the cortical bone thickness and perimeters of the distal femoral metaphysis and femoral diaphysis of C57Bl/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) Cortical thickness; (B) periosteal perimeter; (C) endocortical perimeter. Black bars represent distal femoral metaphysis; white bars represent femoral diaphysis. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. a: P<0.05 versus LC O.

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Cn BMC (34%), Cn BMD (36%), Ct BMC (36%), Ct BMD (4%), and Ct Th (26%), and increased Endo PM (6%), when compared to those of LC S mice (Figures 1(A)-1(D), 2(A) and 2(C)).
CQ groups. No significant differences were observed in Cn BMC, Cn BMD, Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 1(A)–1(D) and 2(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 2(C)).
#### 3.3.2. Femoral Diaphysis (FD)
Effects of CQ
In FD, CQ did not change any of the parameters studied, when compared to those of LC S mice (Figures 1(C), 1(D), and 2(A)–2(C)). CQ O mice had significantly higher Ct Th, when compared to those of LC O mice (Figure 2(A)).

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Ct BMC (19%), Ct BMD (8%), and Ct Th (15%), and increased Endo PM (5%), when compared to those of LC S mice (Figures 1(C), 1(D), 2(A), and 2(C)).
CQ groups. No significant differences were observed in the Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 1(A)–1(D) and 2(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 2(C)).
#### 3.3.3. Proximal Tibial Metaphysis (PTM)
Effects of CQ
CQ did not change any of the parameters studied when compared to those of LC S fed mice in the proximal tibial metaphysis (Figures 3(A)–3(D) and 4(A)–4(C)). CQ O mice had significantly higher Ct BMC (48%), Ct BMD (2%), and Ct Th (42%) when compared to those of LC O mice (Figures 3(C)–3(D) and 4(A)). Endo PM decreased in CQ O mice when compared to that of LC O mice, but this decrease was not statistically significant (Figure 4(C)).

Figure 3
Effects of Cissus quadrangularis on the cancellous and cortical bone parameters of the proximal tibial metaphysis and tibia fibula junction of C57Bl/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) Cancellous BMC; (B) cancellous BMD; (C) cortical BMC; (D) cortical BMD. Black bars represent proximal tibial metaphysis; white bars represent tibia fibula junction. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. a: P<0.05 versus LC O; b: P<0.05 versus CQ S; c: P<0.05 versus LC S.

Figure 4
Effects of Cissus quadrangularis on the cortical bone thickness and perimeters of the proximal tibial metaphysis and tibia fibula junction of C57Bl/6 mice after ovariectomy using pQCT. Data are Mean ± SE. (A) Cortical thickness; (B) periosteal perimeter; (C) endocortical perimeter. Black bars represent proximal tibial metaphysis; white bars represent tibia fibula junction. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. a: P<0.05 versus LC O; b: P<0.05 versus LC S.

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Cn BMC (32%), Cn BMD (21%), Ct BMC (59%), Ct BMD (2%), Ct Th (65%), and Peri PM (3%), and increased Endo PM (7%), when compared to those of LC S mice (Figures 3(A)–3(D) and 4(A)–4(C)).
CQ groups. Although Cn BMC and Cn BMD decreased in CQ O mice, when compared to those of CQ S mice, these decreases were not statistically significant (Figures 3(A) and 3(B)). Ct BMC (34%), Ct BMD (3%), and Ct Th (30%) levels significantly decreased in CQ O mice when compared to those of CQ S mice (Figures 3(C), 3(D), and 4(A)). Endo PM increased in CQ O mice, but this increase was not statistically significant when compared to that of LC O mice (Figure 4(C)).
#### 3.3.4. Tibia Fibular Junction (TF)
Effects of CQ
CQ did not change any of the parameters studied when compared to those of LC fed mice in the tibia fibular junction of sham and ovariectomized mice, except for Endo PM, which significantly decreased in CQ S mice when compared to that of LC S mice (Figures 3(C), 3(D), and 4(A)–4(C)).

Effects of Ovariectomy
Lab chow groups. LC O mice had significantly less Ct BMC (13%), Ct BMD (6%), and Ct Th (11%) when compared to those of LC S mice (Figures 3(C), 3(D), and 4(A)). LC O mice showed a small increase in Endo PM (2%), but this increase was not statistically significant (Figure 4(C)).
CQ groups. No significant differences were observed in the Ct BMC, Ct BMD, and Ct Th levels between CQ S and CQ O mice (Figures 3(C), 3(D), and 4(A)). Endo PM increased in CQ O mice but this increase was not statistically significant (Figure 4(C)).
### 3.4. μCT Densitometry
Effects of CQ
CQ treatment did not significantly change the trabecular number, trabecular thickness, connectivity density, or BV/TV (Figure 5), but trabecular separation (65%) significantly decreased in CQ S mice when compared to that of LC S mice (Figure 5). There was a 45% increase in Tb N and a 28% increase in connectivity density in CQ S mice, but these increases were not statistically significant when compared to those of LC S mice (Figure 5). CQ O mice had significantly higher trabecular number (353%) and connectivity density (363%) and lower trabecular separation (28%), when compared to those of LC O mice (Figure 5).

Figure 5
Effects of Cissus quadrangularis on the static histomorphometry parameters of the distal femoral metaphysis of C57Bl/6 mice after ovariectomy using μCT. Data are Mean ± SE. LC S: lab chow sham; LC O: lab chow ovariectomy; CQ S: Cissus quadrangularis sham; CQ O: Cissus quadrangularis ovariectomy. a: P<0.05 versus LC O; b: P<0.05 versus LC S; c: P<0.05 versus CQ S; *: P<0.05 versus LC. The figures are representative of the mean values that were obtained.

Effects of Ovariectomy
Lab chow groups. In LC O mice, Tb N (65%), connectivity density (92%), and BV/TV (69%) decreased significantly, when compared to those of LC S mice (Figure 5). Although Tb Th decreased by 20%, this decrease was not statistically significant.
CQ groups. In CQ O mice, Tb Sp (73%) significantly increased, and BV/TV (48%) significantly decreased when compared to those of CQ S mice (Figure 5).
## 4. Discussion
Cissus quadrangularis belongs to the Vitaceae family and is found in South East Asia, where it is edible and used as a vegetable. This plant has been used since ancient times to enhance fracture healing and has several other reported health benefits, including anti-inflammatory [8], antiglucocorticoid [10], antidiabetic [11], antibacterial [5, 21], and antioxidant properties [9]. The plant contains triterpenoids [22, 23], steroids [22, 24], stilbenes [25], flavonoids [13], lipids [13], and several catalpols [13]. Interest in natural products for the treatment and prevention of disease is steadily growing in the quest to minimize the severe side effects that existing drugs can cause, and the WHO has endorsed the safe and effective use of such medicines [26].

We studied the effects of dried CQ powder (stems and leaves) in an animal model of postmenopausal bone loss. Although CQ by itself did not increase bone mass, we observed that it decreased bone loss in the distal femoral metaphysis and proximal tibial metaphysis, regions of the long bones that contain both cancellous and cortical bone. Loss of cancellous bone from these regions is typical after ovariectomy and menopause, mainly because endocortical resorption is stimulated. Bone protection in the distal femur and proximal tibia is therefore mainly due to decreased bone resorption at the endocortical bone surface and preservation of trabecular microarchitecture. Cancellous bone at the femur also showed higher trabecular number, connectivity density, and BV/TV in CQ O mice, suggesting that bone resorption was decreased considerably. This is supported by the well-preserved trabecular morphology in the CQ O mice (Figure 5), and the data are in line with reports using CQ ethanol extracts: when young Wistar rats were fed CQ ethanol extracts for three months after ovariectomy, there was restoration of architecture and increased biomechanical properties in the femur [16]. However, this is the first report to show the effects of CQ on bone using densitometric and morphometric analyses, including actual BMD and BMC values for the different bone sites. Moreover, we tested several bone sites (femoral and tibial metaphyses as well as femoral and tibial diaphyses), as it is well known that different bone sites do not respond to treatment regimens in the same manner [27].

We measured several serum biochemical markers to determine the influence of CQ on the state of bone turnover in these mice. P1NP and ALP were decreased in LC O mice, as expected. With CQ treatment, P1NP was decreased in both sham and OVX mice, which suggests that CQ may be altering the processing of procollagen to collagen; however, there was no difference in P1NP levels between the CQ S and CQ O groups. The ALP measured was total ALP, an indirect marker of osteoblast activity; with CQ treatment, especially in the OVX group, activity increased, in line with reports of increased ALP activity in bone marrow cells of rats treated with CQ [18]. We also measured osteocalcin as a direct marker of osteoblast activity and Trap5b as a marker of osteoclast activity. There were no statistical differences in the levels of either marker among the groups studied; therefore, CQ does not appear to alter these markers. Based on these results, we suggest that CQ does not change the bone turnover rate in these mice. The mechanism(s) by which CQ inhibits bone loss remains to be fully studied.
As a preliminary investigation, we measured a few proinflammatory cytokines. Certain proinflammatory cytokines, such as TNF-α, IL-1, IL-6, and IL-11, play a critical role in the bone remodeling process [28, 29], mainly by activating osteoclasts and increasing bone resorption [30–32]. While IL-1 activates NF-κB and MAPKs through TRAF-6, it may also induce PGE2 and the expression of RANKL in osteoblasts [28]. IL-6 is produced by osteoblasts and stimulates the formation of osteoclasts [33]. Interestingly, IL-6 knockout mice do not lose cancellous bone after ovariectomy [33]. In our study, significant decreases in the serum levels of IL-1 (84%), IL-6 (53%), and TNF-α (48%) were observed with the CQ diet. Even in CQ O mice, the levels of IL-6 and TNF-α were significantly lower than those of the LC O mice, and IL-1 levels decreased by 52%. Our results show that CQ lowers proinflammatory cytokines, and this may be one of the major pathways by which it reduces bone resorption, the characteristic response to ovariectomy.

Based on the literature, CQ has also been used to induce weight loss [6, 34], but in our mice we observed increased body weight and peritoneal fat. Both studies that reported weight-reducing benefits of CQ were conducted in obese humans. The major difference between those reports and our study is that they enrolled already obese patients, while the mice in our study were not obese to begin with. It may therefore be that a stimulus (obesity-linked proteins) is required for CQ to reduce fat mass, or the increase in fat mass may be specific to mice and not occur in humans. We measured the levels of leptin, an adipose-derived hormone that plays a key role in regulating energy intake and expenditure and also influences bone formation as well as bone resorption [35]. At lower-than-physiological concentrations, leptin stimulates bone formation and can probably induce apoptosis of osteoclasts [36]. We were therefore not surprised to find increased circulating leptin levels together with bone loss in the LC O mice; that CQ could still protect the long bones suggests that CQ blocks the bone-resorbing action of leptin. It will be interesting to see whether CQ has any influence on the adipocytes found in the bone marrow: if CQ reduces marrow adipocytes, the bone-resorbing effect of leptin should also be reduced, since high local leptin concentrations produced by increased marrow adipocytes increase bone loss [37]. Therefore, the leptin-CQ interaction needs to be studied further to determine the mechanism by which CQ beneficially influences bone through leptin.

CQ may primarily attenuate bone resorption in OVX mice through the downregulation of proinflammatory cytokines, but this does not rule out the possibility that it also acts through other pathways. There are reports that CQ enhances bone mineralization by accumulating mucopolysaccharides at the site of bone formation [14]. Moreover, CQ is reported to increase calcium uptake and the mechanical properties of bone in rats [15]. Phytochemical analyses of CQ show the presence of high levels of calcium, vitamin C, β-carotene [38, 39], and flavonoids [25]; some of these substances have established beneficial effects on bone. In vitro studies have shown that ethanolic extracts of CQ increased mRNA and protein levels related to the bone formation pathway, as well as IGF-I, IGF-II, and IGF binding protein [40, 41]. More investigations are necessary to elucidate the mechanism(s) by which CQ influences bone metabolism.
However, it is very encouraging that studies using very high doses of CQ (5000 mg/kg body weight) [9] have not reported any toxic side effects. In the present study, we used only 500 mg/kg body weight of CQ and observed that liver, spleen, and kidney weights were not altered significantly, suggesting that CQ may not have severe side effects.
## 5. Conclusions
We conclude that CQ can reduce OVX-induced bone loss, and it does so in the long bones in a site-specific manner, with greater effects on the cancellous bone of the femur, followed by the tibia. CQ probably reduces bone resorption primarily by downregulating the proinflammatory cytokines that are typically increased after ovariectomy. The beneficial effects of CQ are probably due to the flavonoids present. Although the mechanism(s) by which CQ attenuates ovariectomy-induced bone loss remains to be studied, CQ, being an edible plant with a history of medicinal use, especially in healing bone fractures, may be a good supplement to existing medication for the reversal of postmenopausal bone loss.
---
*Source: 101206-2012-06-21.xml* | 2012 |
# Investigation on the Binding and Conformational Change of All-trans-Retinoic Acid with Peptidyl Prolyl cis/trans Isomerase Pin1 Using Spectroscopic and Computational Techniques
**Authors:** GuoFei Zhu; ShaoLi Lyu; Yang Liu; Chao Ma; Wang Wang
**Journal:** Journal of Spectroscopy
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1012078
---
## Abstract
Binding and conformational change of all-trans-retinoic acid (ATRA) with peptidyl prolyl cis/trans isomerase Pin1 were investigated systematically by spectroscopic and computational techniques under experimentally optimized physiological conditions. The intrinsic fluorescence of Pin1 was quenched through a static quenching mechanism in the presence of ATRA, with binding constants on the order of 10^5 L/mol. Thermodynamic parameters (ΔH = 15.76 kJ/mol and ΔS = 158.36 J/mol·K at 293 K) and computational results illustrated that hydrophobic interactions played a significant role in the binding of ATRA to Pin1, but electrostatic forces, weak van der Waals interactions, and hydrogen bonds cannot be ignored. Circular dichroism, fluorescence spectra, and computational simulations revealed that ATRA interacted with residues Lys63 and Arg69 of Pin1 to affect its conformational changes. Molecular dynamics simulation, principal component analysis, and free energy landscape analysis monitored the dynamical conformational characteristics of ATRA binding to Pin1. All in all, the present research might provide a reference for the development and design of retinoic acid drugs that inhibit the activity of Pin1.
---
## Body
## 1. Introduction
Peptidyl prolyl cis-trans isomerase Pin1 is a unique enzyme that catalyzes the cis-trans isomerization of the phosphorylated serine/threonine-proline (pSer/Thr-Pro) motif and posttranslationally modulates the structure and function of its substrates [1]. For example, Pin1 binds to the pThr286-Pro motif of Cyclin D1 and increases its stability in the nucleus [2]. Moreover, Pin1 binds to the pSer246-Pro motif of β-catenin, inhibits its interaction with adenomatous polyposis coli (APC), and improves its stability and transport to the nucleus [2]. Therefore, Pin1 can regulate multiple cancer-driving signaling pathways, such as the Wnt/β-catenin, PI3K/AKT, and RTK/Ras/ERK pathways, and some physiological processes, such as the cell cycle, apoptosis, and aging [3–5]. Pin1 contains 163 amino acid residues organized into a WW domain and a PPIase domain (Figure 1(a)) [6]. The WW domain recognizes the pSer/Thr-Pro motif of the substrate and passes it to the PPIase domain for catalysis [7, 8]. As a regulatory factor, Pin1 plays an important role in malignant tumors and neurodegenerative diseases, so it is an attractive and valuable drug target [9–11].

Figure 1
The crystal structure of Pin1 (a) and chemical structure of ATRA (b). W11, W34, and W73 are three tryptophan residues.
ATRA, one of the most active metabolites of vitamin A, has broad application prospects in cancer therapy and prevention [12]. ATRA belongs to the retinoid family and is composed of a β-ionone ring and a polyunsaturated side chain with a carboxylic acid group (Figure 1(b)) [13]. A large body of literature has reported the chemotherapeutic and chemopreventive effects of ATRA in hepatocellular carcinoma [14], breast cancer [15], gastric cancer [16], colon cancer [17], and prostate cancer [18]. Indeed, ATRA has become the standard front-line drug for the treatment of acute promyelocytic leukemia (APL) in adults and of neuroblastoma in children [13]. Although ATRA exhibits a wide range of active functions, its application in other tumors is severely limited by side effects, a short half-life, and poor water solubility [12, 19].

Several researchers have shown that ATRA is one of the most potent Pin1 inhibitors available for cancer treatment [12–14]. Notably, ATRA binds to the active pocket of Pin1 and inhibits its biological function, which induces Pin1 degradation in acute promyelocytic leukemia (APL) cells [20]. The combination of arsenic trioxide (ATO) and ATRA has also been reported to safely treat APL by targeting Pin1 [21]. The crystal structure of the Pin1-ATRA complex reveals that ATRA binds to the catalytic domain of Pin1, but the specific inhibitory mechanism, thermodynamic parameters, binding affinity, energy transfer, and conformational changes are still unclear [22].

In the present work, we utilized multiple spectroscopic and computational techniques to explore the dynamical conformational characteristics of ATRA binding to Pin1 in aqueous solution under physiological conditions. The quenching constants (Ksv), binding constants (Ka), the number of binding sites (n), and thermodynamic parameters (ΔH, ΔS, and ΔG) for Pin1 with ATRA were calculated from fluorescence spectra at different temperatures (293 K and 303 K). The conformational changes upon ATRA binding to Pin1 were determined by synchronous fluorescence, three-dimensional (3D) fluorescence, and circular dichroism (CD). The dynamic characteristics of ATRA binding to Pin1 were monitored at the atomic level by molecular dynamics simulations, principal component analysis, and free energy landscape analysis. This study should help in understanding the binding model and inhibition mechanism of the Pin1-ATRA complex.
## 2. Materials and Methods
### 2.1. Materials
The ATRA standard was purchased from Macklin Biochemical Co., Ltd. (Shanghai, China). Yeast extract and peptone were products of AoBox Biotechnology Co., Ltd. (Beijing, China). His-tag purification resin and ultrafiltration spin columns were obtained from Beyotime Biotechnology Co., Ltd. (Shanghai, China). Thermolysin was obtained from Yuanye Biotechnology Co., Ltd. (Shanghai, China). Isopropyl-beta-D-thiogalactopyranoside (IPTG), penicillin, and other reagents were obtained from Solarbio Technology Co., Ltd. (Beijing, China).
### 2.2. Preparation of Pin1
Pin1 was expressed and purified as described before [23, 24]. Briefly, wild type Pin1 (WT-Pin1) and ten alanine mutants (H59A, L61A, K63A, R68A, R69A, C113A, Q131A, M130A, F134A, and H157A) were expressed in E. coli BL21 (DE3) carrying the recombinant plasmid pET-19b-Pin1. The proteins of interest were then purified using His-tag purification resin and ultrafiltration spin columns and kept in Buffer C (25 mM Tris, 200 mM NaCl, pH 7.4). The purities of Pin1 and the mutants were confirmed by SDS-PAGE (purity >90%), and protein concentrations were measured using the Bradford assay.
### 2.3. Spectral Measures
Circular dichroism (CD) measurements were carried out on a Jasco J-815 spectropolarimeter (JASCO, Japan) with a quartz cell of 0.1 cm path length at 298 K. The spectral range, scan speed, and bandwidth were set to 200–250 nm, 20 nm/min, and 1.0 nm, respectively. The concentration of Pin1 (pH 7.4) was kept at 10 μM, and ATRA was gradually added. Every spectrum was the mean of three scans.

The fluorescence measurements were collected on an F-4500 fluorescence spectrophotometer (Hitachi, Japan) with a 1.0 cm quartz cell and a thermostat bath. The concentration of Pin1 (pH 7.4) was kept at 5 μM, and ATRA was gradually added. Fluorescence emission spectra were measured in the wavelength range of 310–400 nm with an excitation wavelength (λex) of 295 nm at 293 K and 303 K. In addition, synchronous fluorescence spectra were recorded in the wavelength ranges of 270–310 nm and 250–310 nm with Δλ (Δλ = λem − λex) set to 15 nm and 60 nm at 293 K, respectively. The three-dimensional (3D) fluorescence spectra were recorded over emission wavelengths of 200–450 nm and excitation wavelengths of 200–350 nm at 293 K. The excitation slit, emission slit, scanning speed, and voltage were set to 5 nm, 5 nm, 1200 nm/min, and 700 V, respectively. Every spectrum was the average of three scans.
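The route from these spectra to the parameters named in the Introduction (Ksv, Ka, n, ΔH, ΔS, ΔG) is the standard one: a Stern-Volmer plot, a double-logarithm plot, and a two-temperature van't Hoff analysis. The following Python sketch uses hypothetical intensities and Ka values; only the equations, not the numbers, reflect this study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical ATRA concentrations (mol/L) and Pin1 fluorescence intensities.
q = np.array([2e-6, 4e-6, 6e-6, 8e-6, 10e-6])
f0, f = 1000.0, np.array([860.0, 755.0, 672.0, 605.0, 550.0])

# Stern-Volmer: F0/F = 1 + Ksv*[Q]; the slope gives Ksv.
ksv, _ = np.polyfit(q, f0 / f, 1)
print(f"Ksv = {ksv:.2e} L/mol")

# Double-logarithm: log((F0-F)/F) = log(Ka) + n*log([Q]).
n, log_ka = np.polyfit(np.log10(q), np.log10((f0 - f) / f), 1)
print(f"Ka = {10**log_ka:.2e} L/mol, n = {n:.2f}")

# van't Hoff with Ka at 293 K and 303 K (hypothetical values):
# ln(Ka2/Ka1) = -(dH/R)*(1/T2 - 1/T1); dG = -R*T*ln(Ka); dS = (dH - dG)/T.
ka1, ka2, t1, t2 = 2.0e5, 2.5e5, 293.0, 303.0
dh = -R * np.log(ka2 / ka1) / (1 / t2 - 1 / t1)
dg1 = -R * t1 * np.log(ka1)
ds = (dh - dg1) / t1
print(f"dH = {dh/1e3:.1f} kJ/mol, dS = {ds:.1f} J/(mol K)")
```

A positive ΔH together with a large positive ΔS, as reported in the Abstract, is the classical thermodynamic signature of hydrophobically driven binding.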
### 2.4. Computational Simulations
Classical molecular dynamics (MD) simulations were performed to study the binding and conformational changes of ATRA with peptidyl prolyl cis/trans isomerase Pin1. The crystal structure of the Pin1-ATRA complex was downloaded from the Protein Data Bank (PDB ID: 4TNS) [22]. PyMol was used to remove water molecules and ions [25]. To reconstruct the wild type sequence, residues Q77 and Q82 were replaced by K77 and K82 using the Swiss-Model server [26]. MD simulations of Pin1 and the Pin1-ATRA complex were performed in GROMACS v4.6.5 with the AMBER99SB all-atom force field [27]. The pdb2gmx and antechamber programs were used to produce the topology files of Pin1 and ATRA, respectively. Pin1 and the Pin1-ATRA complex were solvated in a dodecahedral periodic box of TIP3P water molecules. To keep the systems charge-neutral, Cl− counterions were added to the Pin1 and Pin1-ATRA systems. Energy minimization with the steepest descent algorithm was performed for 5000 steps to correct improper geometries and avoid steric clashes. Then, 100 ps NVT and NPT ensembles were used to equilibrate each system at 300 K and 1 atm. Finally, 50 ns production MD simulations of Pin1 and the Pin1-ATRA complex were run with a 2 fs timestep at 310 K. The g_rms, g_rmsf, g_gyrate, and g_hbond programs were used to analyze the backbone root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rg), and hydrogen bonds, respectively.
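As a rough equivalent of the g_rms and g_gyrate steps, the sketch below computes the backbone RMSD and per-frame radius of gyration with the MDAnalysis library; the topology and trajectory file names are placeholders, since the original analysis used the GROMACS tools named above:

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Placeholder file names for a GROMACS topology/trajectory pair.
u = mda.Universe("pin1_atra.tpr", "pin1_atra.xtc")

# Backbone RMSD relative to the first frame (analogous to g_rms output).
rmsd = rms.RMSD(u, u, select="backbone")
rmsd.run()
print(rmsd.results.rmsd[-1])  # columns: frame, time (ps), RMSD (Angstrom)

# Radius of gyration per frame (analogous to g_gyrate output).
protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for ts in u.trajectory]
print(f"mean Rg = {sum(rg) / len(rg):.2f} Angstrom")
```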
### 2.5. Binding Energy Calculations
The MM/PBSA (Molecular Mechanics/Poisson-Boltzmann Surface Area) approach was used to predict the binding free energy between Pin1 and ATRA. One hundred snapshots were extracted from the last 5 ns of the MD trajectory. The binding free energy (ΔGbind) was estimated as follows [28]:

(1)
$$\Delta G_{\mathrm{bind}} = \Delta G_{\mathrm{complex}} - \left(\Delta G_{\mathrm{protein}} + \Delta G_{\mathrm{ligand}}\right) = \Delta E_{\mathrm{MM}} + \Delta G_{\mathrm{sol}} - T\Delta S,$$
$$\Delta E_{\mathrm{MM}} = \Delta E_{\mathrm{vdw}} + \Delta E_{\mathrm{ele}}, \qquad \Delta G_{\mathrm{sol}} = \Delta G_{\mathrm{polar}} + \Delta G_{\mathrm{nonpolar}},$$

where ΔGcomplex, ΔGprotein, and ΔGligand are the total free energies of the Pin1-ATRA complex, Pin1, and ATRA, respectively. ΔEMM consists of the van der Waals (ΔEvdw) and electrostatic (ΔEele) interaction energies, and ΔGsol consists of the polar (ΔGpolar) and nonpolar (ΔGnonpolar) solvation free energies. TΔS is the entropic contribution, which is neglected because of its high computational cost and low prediction accuracy in the MM/PBSA approach. The binding free energy is therefore expressed as follows:

(2)
$$\Delta G_{\mathrm{bind}} = \Delta E_{\mathrm{vdw}} + \Delta E_{\mathrm{ele}} + \Delta G_{\mathrm{polar}} + \Delta G_{\mathrm{nonpolar}}.$$

The g_mmpbsa program and the Python script MmPbSaStat.py were used to predict the binding energy between Pin1 and ATRA [29].
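To make Equation (2) concrete, the snippet below simply recombines the four per-term energies later reported for the Pin1-ATRA system (Table S1); it illustrates the arithmetic only and is not the g_mmpbsa implementation.

```python
# Illustrative recombination of MM/PBSA terms per Equation (2);
# the entropic term -TdS is neglected, as stated in the text.
def delta_g_bind(dE_vdw, dE_ele, dG_polar, dG_nonpolar):
    """Binding free energy in kJ/mol from its four components."""
    return dE_vdw + dE_ele + dG_polar + dG_nonpolar

# Components reported for the Pin1-ATRA system (Table S1), in kJ/mol.
terms = dict(dE_vdw=-120.70, dE_ele=-121.41, dG_polar=88.04, dG_nonpolar=-13.12)
print(delta_g_bind(**terms))  # -> -167.19 kJ/mol, matching the reported value
```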
### 2.6. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) was used to study the essential motions of biomacromolecules during the MD simulations [30, 31]. The covariance matrix C was constructed from the coordinates of the Cα atoms and diagonalized to obtain the eigenvalues and corresponding eigenvectors from the 50 ns MD trajectories. The mass-weighted covariance matrix of the Cα atoms was calculated as in the following equation [32]:

(3)
$$C_{ij} = M_{ii}^{1/2}\,\left\langle \left(x_i - \langle x_i \rangle\right)\left(x_j - \langle x_j \rangle\right)\right\rangle\, M_{jj}^{1/2},$$

where C is a symmetric 3n × 3n matrix, n is the number of residues, and M is the diagonal mass matrix. The g_covar package was used to construct and diagonalize the covariance matrix of the Cα atoms, and the g_anaeig package was used to project the trajectory onto the eigenvectors. The first and second eigenvectors were taken as principal component 1 (PC 1) and principal component 2 (PC 2), respectively.

The free energy landscape (FEL) is an effective technique for studying conformational changes in terms of different energy states [32, 33]. PC 1 and PC 2 were used to construct a two-dimensional representation of the FEL, in which a free-energy minimum indicates a stable conformation and an energy barrier connecting minima suggests a metastable state. The Gibbs free energy (Gα) was obtained as in the following equation [34]:

(4)
$$G_{\alpha} = -kT \ln\!\left(\frac{P(q_{\alpha})}{P_{\max}(q)}\right),$$

where k, T, P(qα), and Pmax(q) are the Boltzmann constant, the absolute temperature, the probability density of state qα, and the probability of the most probable state, respectively. The Gibbs free energy surface was generated with the g_sham package.
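The numpy sketch below illustrates the PCA/FEL procedure of Equations (3) and (4) on a placeholder trajectory array: it builds the covariance matrix of Cα fluctuations (unit masses assumed, so the M^1/2 factors reduce to a constant), projects each frame onto PC 1 and PC 2, and converts the 2D histogram of projections into a Gibbs energy surface. In practice, g_covar, g_anaeig, and g_sham perform these steps on the GROMACS trajectory directly.

```python
import numpy as np

kB = 0.0083145  # Boltzmann constant, kJ/(mol*K)
T = 310.0       # production simulation temperature, K

# Placeholder trajectory: n_frames x (3 * n_CA) flattened C-alpha coordinates.
traj = np.random.rand(5000, 3 * 163)  # stand-in for a real 50 ns trajectory

# Covariance matrix of Cartesian fluctuations (Equation (3), unit masses).
fluct = traj - traj.mean(axis=0)
C = fluct.T @ fluct / len(traj)            # symmetric 3n x 3n matrix

# Diagonalize; the eigenvectors with the largest eigenvalues are PC 1 and PC 2.
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
proj1 = fluct @ evecs[:, order[0]]         # projection of each frame on PC 1
proj2 = fluct @ evecs[:, order[1]]         # projection of each frame on PC 2

# Free energy landscape over (PC 1, PC 2) via Equation (4): G = -kT ln(P/Pmax).
P, xedges, yedges = np.histogram2d(proj1, proj2, bins=50, density=True)
G = np.full_like(P, np.inf)
nonzero = P > 0
G[nonzero] = -kB * T * np.log(P[nonzero] / P.max())

print("variance captured by PC1+PC2: %.1f%%"
      % (100 * (evals[order[0]] + evals[order[1]]) / evals.sum()))
```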
### 2.7. Drug Affinity Responsive Target Stability (DARTS) Assay
The DARTS assay detects the stabilization of a protein by a bound small molecule, because a protein is less susceptible to proteolysis when drug-bound than when drug-free [35]. ATRA (0.2 mg/mL) was added to Pin1 and its mutants (0.2 mg/mL) and incubated at room temperature for 30 min. The mixtures were then proteolyzed with thermolysin (1 : 1000) at room temperature for 30 min and analyzed by SDS-PAGE.
## 3. Results and Discussion
### 3.1. Fluorescence Spectroscopic Studies
#### 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is a useful and accurate method for monitoring the binding of ligands to a protein and the resulting conformational changes. It is well known that the endogenous fluorescence of a protein arises mainly from its tryptophan (Trp) and tyrosine (Tyr) residues; with an excitation wavelength (λex) of 295 nm, the fluorescence spectra reflect the endogenous fluorescence of the tryptophan residues alone [36, 37]. A previous study revealed that the activity of Pin1 decreases at high temperature, so the fluorescence spectra were measured at 293 K and 303 K [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensity of Pin1 decreased gradually with increasing concentration of ATRA. At an ATRA concentration of 5 μM, the fluorescence quenching rates were 50.60% and 59.46% at 293 K and 303 K, respectively. This shows that ATRA interacted with Pin1 and quenched its intrinsic fluorescence, further supporting earlier reports that ATRA is an effective Pin1 inhibitor [20]. In addition, the maximum emission wavelength (λmax) showed a slight red-shift with increasing ATRA, from 338.2 nm to 342.4 nm at 293 K and from 339.0 nm to 342.4 nm at 303 K, indicating that the binding of ATRA induced a conformational change of Pin1, including a decrease in the hydrophobicity of the microenvironment around the tryptophan residues.

Figure 2
Fluorescence spectra of the Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively. λex = 295 nm, cPin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer plot and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
In general, there are three fluorescence quenching mechanisms: static quenching, dynamic quenching, and a combination of both. The quenching mechanism between Pin1 and ATRA was discriminated using the Stern–Volmer equation [39, 40]:

(5)
$$\frac{F_0}{F} = 1 + K_{\mathrm{sv}}[Q],$$

where F0 and F are the maximum fluorescence intensities of Pin1 in the absence and presence of ATRA, respectively, [Q] is the concentration of ATRA, and Ksv is the quenching constant. Figure 2(c) shows a good linear relationship between F0/F and the ATRA concentration, indicating that the quenching follows either a static or a dynamic mechanism. The literature indicates that a quenching constant (Ksv) that decreases with increasing temperature reflects static quenching, whereas the opposite trend reflects dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 × 10^5 and 1.32 × 10^5 L/mol at 293 K and 303 K, respectively, suggesting that the quenching of Pin1 by ATRA follows a typical static quenching mechanism (a fitting sketch follows Table 1).

Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
| Parameter | 293 K | 303 K |
|---|---|---|
| Ksv (10^5 L/mol) | 1.83 ± 0.01 | 1.32 ± 0.01 |
| R² (Stern–Volmer plot) | 0.993 | 0.996 |
| n | 0.73 ± 0.02 | 0.73 ± 0.03 |
| Ka (10^5 L/mol) | 2.90 ± 0.02 | 2.06 ± 0.01 |
| R² (double logarithmic plot) | 0.998 | 0.996 |
| ΔG (kJ/mol) | −30.64 ± 0.09 | −29.80 ± 0.34 |
| ΔH (kJ/mol) | 15.76 ± 0.88 | 15.76 ± 0.88 |
| ΔS (J/(mol·K)) | 158.36 ± 0.34 | 154.91 ± 0.42 |
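As a worked illustration of the Stern–Volmer fit in Equation (5), the sketch below recovers Ksv by linear regression. The (F0/F, [Q]) pairs are illustrative numbers constructed to be consistent with the 293 K value in Table 1, not the measured spectra.

```python
import numpy as np

# Hypothetical quencher concentrations (mol/L) and F0/F ratios; real
# values would come from the emission maxima in Figure 2.
Q   = np.array([0.5, 1, 2, 3, 4, 5]) * 1e-6           # ATRA, mol/L
F0F = np.array([1.09, 1.18, 1.37, 1.55, 1.73, 1.92])  # illustrative

# Equation (5): F0/F = 1 + Ksv*[Q], so the slope of F0/F vs [Q] is Ksv.
Ksv, intercept = np.polyfit(Q, F0F, 1)
print(f"Ksv = {Ksv:.2e} L/mol (intercept ~ {intercept:.2f}, expected ~1)")
```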
#### 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of a drug for its receptor is one of the key indicators of drug efficacy, and it is closely related to the binding constant (Ka). For a static quenching process, the binding constant was calculated using the following equation [43, 44]:

(6)
$$\log\frac{F_0 - F}{F} = \log K_a + n \log [Q],$$

where Ka and n are the binding constant and the number of binding sites, respectively. As shown in Figure 2(d) and Table 1, the n values are approximately equal to 1 at 293 K and 303 K. This observation is consistent with the reported crystal structure, which shows that Pin1 binds only one ATRA molecule [22]. The Ka values are 2.90 × 10^5 and 2.06 × 10^5 L/mol at 293 K and 303 K, respectively, similar to a previous study and indicative of a high affinity between ATRA and Pin1 [22]. In addition, the Ka values decrease with increasing temperature, in agreement with the Ksv values, further showing that the quenching between Pin1 and ATRA is a static process.

Several binding forces can be involved in the binding of ATRA to Pin1, mainly van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. They were distinguished through thermodynamic parameters obtained from the following equations [45]:

(7)
$$\ln\frac{K_2}{K_1} = \frac{\Delta H}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right), \qquad \Delta G = -RT\ln K_a, \qquad \Delta S = \frac{\Delta H - \Delta G}{T},$$

where ΔH, ΔG, and ΔS are the enthalpy change, binding free energy, and entropy change, respectively, and R and T are the gas constant and the experimental temperature. According to Ross and Subramanian [46], ΔH > 0 and ΔS > 0 indicate hydrophobic interactions; ΔH < 0 and ΔS > 0 indicate electrostatic forces; and ΔH < 0 and ΔS < 0 indicate hydrogen bonds and van der Waals forces. From Table 1, ΔH and ΔS for the binding of ATRA to Pin1 at 293 K are 15.76 kJ/mol and 158.36 J/(mol·K), respectively, suggesting that hydrophobic interactions are the main binding force between ATRA and Pin1. The binding forces are analyzed in more detail in the computational simulations below (Figures 3 and 4). The ΔG values are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process; a numerical sketch of these calculations follows the figure captions below.

Figure 3
Free energy landscape plot (a) and binding models (b)–(g) of the Pin1-ATRA complex during the MD simulation.

Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
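As a numerical check of Equation (7), the sketch below recomputes ΔG and ΔS at 293 K; the inputs are the Ka and ΔH values reported in Table 1.

```python
import math

R = 8.3145           # gas constant, J/(mol*K)
T = 293.0            # experimental temperature, K
Ka = 2.90e5          # binding constant at 293 K (Table 1), L/mol
dH = 15.76e3         # enthalpy change from Table 1, J/mol

dG = -R * T * math.log(Ka)   # Equation (7): dG = -RT ln Ka
dS = (dH - dG) / T           # Equation (7): dS = (dH - dG)/T
print(f"dG = {dG/1000:.2f} kJ/mol, dS = {dS:.2f} J/(mol*K)")
# -> dG ~ -30.64 kJ/mol and dS ~ 158.4 J/(mol*K), matching Table 1
```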
### 3.2. Conformational Studies
#### 3.2.1. Circular Dichroism
The binding of a ligand to its receptor can affect not only the receptor's thermal stability but also its conformation. Circular dichroism (CD) is one of the most widely used techniques for investigating secondary structure changes in a protein-ligand complex [47]. As shown in Figure 5(a), the CD signal of Pin1 decreased with increasing amounts of ATRA, implying that ATRA changed the secondary structure of Pin1 and affected its stability. To quantify this change, the α-helix content of Pin1 was calculated using the following equations [41]:

(8)
$$\mathrm{MRE}_{208} = \frac{\text{observed CD (mdeg)}}{C_p \, n \, l \times 10}, \qquad \alpha\text{-helix}\,(\%) = \frac{-\mathrm{MRE}_{208} - 4000}{33000 - 4000} \times 100,$$

where MRE208 is the mean residue ellipticity (MRE) at 208 nm and Cp, n, and l are the molar concentration of Pin1 (10 μmol/L), the number of amino acid residues (163), and the path length of the cell (0.1 cm), respectively.

Figure 5
ATRA affects the conformation of Pin1. (a) CD spectra of Pin1 in the absence and presence of ATRA. cPin1 = 10 μM; cATRA = 0 (black line), 1 (red line), and 3 (blue line) μM. (b, c) Synchronous fluorescence spectra of Pin1 in the absence and presence of ATRA with Δλ = 15 nm and 60 nm, respectively. cPin1 = 5 μM; cATRA = 0 (black line), 0.5 (red line), 1 (blue line), 2 (magenta line), 3 (olive line), 4 (navy line), and 5 (violet line) μM. (d, e) Three-dimensional fluorescence spectra of Pin1 in the absence and presence of ATRA; cPin1 = 5 μM and cATRA = 0 and 5 μM.
The calculated results showed that the α-helix content of Pin1 declined from 23.33% to 19.89% with increasing amounts of ATRA (a calculation sketch is given below). This loss of helical stability suggests that ATRA altered the hydrogen-bonding networks of Pin1, resulting in partial unfolding of the polypeptide and a change in its secondary conformation [42].
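Equation (8) is a direct calculation; the sketch below applies it with the stated Pin1 parameters (Cp = 10 μM, n = 163 residues, l = 0.1 cm). The input ellipticity is a placeholder chosen to reproduce roughly the ATRA-free helix content.

```python
def alpha_helix_percent(cd_mdeg, cp_molar=10e-6, n_res=163, path_cm=0.1):
    """Equation (8): mean residue ellipticity at 208 nm -> % alpha-helix."""
    mre208 = cd_mdeg / (cp_molar * n_res * path_cm * 10)
    return (-mre208 - 4000) / (33000 - 4000) * 100

# Placeholder ellipticity chosen to give ~23% helix, the value reported
# for free Pin1; the real input is the measured CD signal at 208 nm.
print(f"{alpha_helix_percent(-17.5):.1f} % alpha-helix")
```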
#### 3.2.2. Synchronous Fluorescence Spectroscopy
Synchronous fluorescence spectroscopy can detect changes in the microenvironment of Pin1 after its interaction with ATRA [48]; it mainly reports the conformational characteristics of the Tyr and Trp residues at Δλ = 15 and 60 nm, respectively. As shown in Figures 5(b) and 5(c), the fluorescence intensities of the Tyr and Trp residues declined steadily with increasing amounts of ATRA. At an ATRA concentration of 5 μM and 293 K, the fluorescence quenching rates of the Tyr and Trp residues were 59.78% and 39.47%, respectively. These results demonstrate that ATRA quenched the intrinsic fluorescence of both the Tyr and the Trp residues. It is also noted that the λmax values of the Tyr and Trp spectra displayed slight red-shifts, from 292.4 nm to 293.6 nm and from 280.8 nm to 282.0 nm, respectively. These results, in accordance with the endogenous fluorescence measurements (Figures 2(a) and 2(b)), illustrate that ATRA decreased the hydrophobicity and increased the polarity around the Tyr and Trp residues of Pin1.
#### 3.2.3. Three-Dimensional Fluorescence Spectroscopy
Three-dimensional (3D) fluorescence spectroscopy is widely used to explore the conformational changes of a protein-ligand complex. The 3D fluorescence spectra and characteristic peaks are displayed in Figures 5(d) and 5(e). Pin1 shows four characteristic peaks in both the absence and the presence of ATRA. Peak a and peak b represent the first-order Rayleigh scattering peak (λex = λem) and the second-order Rayleigh scattering peak (2λex = λem), respectively [49]. Peak 1 is the characteristic fluorescence peak of the Tyr and Trp residues, and peak 2 is the characteristic fluorescence peak of the polypeptide backbone, which is associated with the secondary structure. From Figures 5(d) and 5(e), the fluorescence intensities of peak 1 and peak 2 decreased significantly after ATRA bound to Pin1, further indicating that ATRA affected the conformation of Pin1.
### 3.3. Computational Studies
#### 3.3.1. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) is an effective technique for reducing a high-dimensional data set to its principal components, revealing the main motions of a protein-ligand complex [50]. Here, the first 15 eigenvectors accounted for 70.08% of the total variance, in line with a previous report of about 70% [30]. The two most important eigenvectors, principal components 1 and 2 (PC 1 and PC 2), contributed about 35.98% of the total variance of the conformational space.

Based on the PCA results, the conformational changes of the Pin1-ATRA complex were explored through the free energy landscape (FEL) of PC 1 and PC 2. As shown in Figure 3(a), the Pin1-ATRA complex has two broad, deep energy basins separated by energy barriers of 1.0 kJ/mol to 4.0 kJ/mol, indicating that the binding of ATRA to Pin1 may form two stable conformations. In other words, the binding of ATRA to Pin1 causes dynamic conformational transitions, which corresponds well with the spectroscopic observations (Figures 2(a), 2(b), and 5).

According to the FEL results, two representative conformational models are displayed in Figures 3(b)–3(g). From Figures 3(b)–3(d), ATRA interacts with residues H59, L61, K63, R69, C113, M130, Q131, F134, and H157: the carboxylic acid of ATRA forms a critical hydrogen bond with residue R69 and also forms seven Pi-alkyl interactions (a type of hydrophobic interaction), while residues K63, Q131, and R69 interact with ATRA via weak van der Waals contacts and a salt bridge (a type of electrostatic force). Figures 3(e)–3(g), showing the lowest energy minimum, reveal that residues H59, K63, R68, R69, M130, F134, and H157 play an important role in the binding of ATRA to Pin1 through Pi-alkyl interactions; in addition, residue K63 forms a salt bridge and residue R69 forms a stable hydrogen bond with ATRA. These observations are similar to previous reports [22] that the carboxylic acid of ATRA forms salt bridges and hydrogen bonds with residues K63 and R68, while the hydrophobic skeleton of ATRA forms hydrophobic and van der Waals contacts with residues R68, C113, M130, Q131, F134, and H157. Overall, the results, in line with the experimental work, further suggest that the main binding forces are strong hydrophobic interactions, although electrostatic forces, weak van der Waals contacts, and hydrogen bonds cannot be ignored.
#### 3.3.2. MD Simulations
MD simulation is a popular tool for investigating dynamic conformational stability at the atomic level. Root mean square deviation (RMSD) values, an important criterion for assessing conformational stability, of Pin1 and the Pin1-ATRA complex were calculated relative to the initial conformations (Figure 4(a)). Both systems were relatively stable at approximately 1.75 Å over the 50 ns MD simulations, suggesting that the binding of ATRA to Pin1 is relatively stable. From Figure 4(b), the RMSD values of ATRA reach a relative equilibrium after 15 ns of MD simulation. As seen in Figure 4(c), the root mean square fluctuation (RMSF) values, another important indicator of structural stability, show that the two systems share similar fluctuations, except for the region around the binding residues H59, L61, K63, R68, C113, M130, Q131, F134, and H157.

As shown in Figure 4(d), the radius of gyration (Rg), a valuable indicator of protein compactness, was smaller for the Pin1-ATRA complex than for Pin1 alone over the 50 ns MD simulations, meaning that ATRA entered the active pocket of Pin1 and made its conformation more compact (a calculation sketch for Rg is given below). From Figure 4(e), hydrogen bonds between Pin1 and ATRA persisted throughout the 50 ns MD simulations, averaging about one bond, in line with the binding model (Figures 3(d) and 3(g)).

The binding free energy (ΔGbind) of the Pin1-ATRA system is predicted to be −167.19 kJ/mol, indicating that the binding of ATRA to Pin1 is spontaneous (Table S1). Energy decomposition shows that the van der Waals energy (ΔEvdw), electrostatic energy (ΔEele), polar solvation energy (ΔGpolar), and nonpolar solvation energy (ΔGnonpolar) are −120.70, −121.41, 88.04, and −13.12 kJ/mol, respectively (Table S1). Thus the van der Waals, electrostatic, and nonpolar solvation energies favor the binding of ATRA to Pin1, whereas the polar solvation energy opposes it. In addition, the nonpolar interaction energy (ΔEvdw + ΔGnonpolar) and polar interaction energy (ΔEele + ΔGpolar) are −133.38 and −33.37 kJ/mol, respectively, indicating that both contribute to the binding. Residue-level energy contributions were also predicted for the Pin1-ATRA system; from Figure 4(f), residues K63, R68, and R69 may play a significant role in the binding between Pin1 and ATRA.
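For orientation, the Rg curves in Figure 4(d) follow from the mass-weighted radius of gyration; the sketch below computes it for a single placeholder frame, standing in for what g_gyrate does over the whole trajectory.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration (same units as coords)."""
    com = np.average(coords, axis=0, weights=masses)   # center of mass
    sq_dist = np.sum((coords - com) ** 2, axis=1)      # squared distances
    return np.sqrt(np.average(sq_dist, weights=masses))

# Placeholder frame: C-alpha positions (nm) of 163 residues, unit masses.
coords = np.random.rand(163, 3)
masses = np.ones(163)
print(f"Rg = {radius_of_gyration(coords, masses):.3f} nm")
```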
### 3.4. Mutant Studies
#### 3.4.1. Fluorescence Titrations
Fluorescence titrations were used to verify the key residues in the binding of ATRA to Pin1. As shown in Figure 6(a) and Figure S1, ATRA quenched the endogenous fluorescence of the Pin1 mutants to different degrees, with K63A and R69A showing the weakest quenching. The binding constants revealed that mutation of these important residues reduced the binding affinity for ATRA to varying degrees, most strongly for residues K63 and R69 (Figure 6(b) and Figure S2). Previous research showed that the carboxylic acid of ATRA forms salt bridges with residues K63 and R68 [22], which is similar to our results and further indicates that these basic residues are essential for the binding of ATRA to Pin1. This observation could inform the subsequent design of retinoic acid-based drugs that inhibit the activity of Pin1.

Figure 6
ATRA binds to Pin1 through key residues. (a) Fluorescence titration of Pin1 and its mutants in the absence and presence of ATRA; λex = 295 nm; T = 293 K; cPin1 or mutants = 5 μM; cATRA = 0, 0.5, 1, 2, 3, 4, and 5 μM. (b) Relative binding constants of ATRA to Pin1 and its mutants. (c) DARTS assay of Pin1 and its mutants with ATRA. (d, e) RMSD of the K63A-ATRA and R69A-ATRA complexes during 50 ns MD simulations.
#### 3.4.2. DARTS Assay
The DARTS assay was used to assess the effect of the key residue mutations on the stability of ATRA binding to Pin1. As shown in Figure 6(c), mutants K63A and R69A were less protected from proteolysis by bound ATRA than the wild type. This result is consistent with the fluorescence titrations and further implies that residues K63 and R69 play an important role in the binding of ATRA to Pin1.
#### 3.4.3. MD Simulations of Mutants
The important residues of Pin1 were mutated to alanine in silico using the PyMOL Mutagenesis plug-in (a scripting sketch is given below). As shown in Figure S3, none of the ten single-residue mutations destroyed the secondary structure of Pin1, in line with previous work indicating that single-residue mutations have a limited impact on the overall secondary structure of a protein [51]. The CD spectra of mutants K63A, R68A, and R69A were also similar to that of the wild type, further confirming this observation (Figure S4).

Next, MD simulations were performed to explore the effects of the K63 and R69 mutations on the binding stability of Pin1 and ATRA. Over the 50 ns MD simulations, the RMSD values of the K63A-ATRA and R69A-ATRA systems were less stable than those of the wild-type system (Figures 6(d) and 6(e)). In addition, the binding free energies (ΔGbind) of the K63A-ATRA and R69A-ATRA systems are −70.81 and −51.23 kJ/mol, respectively, considerably weaker (less negative) than that of the Pin1-ATRA system (−167.19 kJ/mol, Table S1). These findings indicate that the K63A and R69A mutations weaken the binding of ATRA to Pin1.
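The in silico alanine scan can be scripted through PyMOL's mutagenesis wizard; the sketch below follows the commonly documented wizard-scripting pattern and is meant to run inside PyMOL's own Python interpreter. The object name, file names, and residue list are illustrative assumptions.

```python
# Minimal scripted alanine mutagenesis in PyMOL (run inside PyMOL);
# object/file names and residues are illustrative placeholders.
from pymol import cmd

cmd.load("pin1.pdb", "pin1")
for resi in ["63", "69"]:              # e.g. K63A and R69A
    cmd.wizard("mutagenesis")
    cmd.do("refresh_wizard")
    cmd.get_wizard().set_mode("ALA")   # target amino acid: alanine
    cmd.get_wizard().do_select(resi + "/")  # pick residue by number
    cmd.get_wizard().apply()           # apply the mutation
    cmd.set_wizard()                   # close the wizard
cmd.save("pin1_K63A_R69A.pdb", "pin1")
```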
## 3.1. Fluorescence Spectroscopic Studies
### 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is an extremely useful and accurate method to monitor the binding and conformational changes of ligands with protein. It is well known that the endogenous fluorescence of protein is contributed mainly by tryptophan (Trp) and tyrosine (Tyr) residues. Also, when the excitation wavelength (λex) was 295 nm, the fluorescence spectra showed the endogenous fluorescence of tryptophan residues [36, 37]. A previous study had revealed that the activity of Pin1 decreased at high temperature, so this experiment selected 293 K and 303 K to implement the fluorescence spectra [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensities of Pin1 were decreased gradually with increasing concentration of ATRA. When the concentration of ATRA was 5 μM, the fluorescence quenching rates were 50.60% and 59.46%, respectively, at 293 K and 303 K. The phenomenon revealed that ATRA interacted with Pin1 and quenched its intrinsic fluorescence. Meanwhile, the result further confirmed the previous scientific reports that ATRA was an effective Pin1 inhibitor [20]. In addition, it was obvious that the maximum wavelength (λmax) of fluorescence spectra showed a tiny red-shift from 338.2 nm to 342.4 nm and from 339.0 nm to 342.4 nm at 293 K and 303 K, respectively, with the increasing of ATRA. It indicated that the binding of ATRA induced a certain conformational change of Pin1, including a decrease in the hydrophobicity around the microenvironment of tryptophan residues.Figure 2
Fluorescence spectra of Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively.λex = 295 nm, cpin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
(a)(b)(c)(d)In general, there are three fluorescence quenching mechanisms: static quenching mode, dynamic quenching mode, and combination of both modes. The quenching mechanism between Pin1 and ATRA was discriminated using the followingStern–Volmer equation [39, 40]:(5)F0F=1+KsvQ,where F0 and F are the maximum fluorescence intensity of Pin1 in the absence and presence of ATRA, respectively. Q is the concentration of ATRA. Ksv is a quenching constant.Figure2(c) shows a well-linear relationship between Pin1 and ATRA, which suggests that the quenching mechanism of ATRA is either a static quenching or a dynamic quenching mechanism. Scientific literature illustrates that quenching constant (Ksv) decreasing with increasing temperature is static quenching, and the opposite is dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 and 1.32 × 10 5 mol/L at 293 K and 303 K, respectively, suggesting that the quenching mode between Pin1 and ATRA is a typical static quenching mechanism.Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
Parameters293 K303 KKsv (105 mol/L)1.83 ± 0.011.32 ± 0.01R2a0.9930.996n0.73 ± 0.020.73 ± 0.03Ka (105 mol/L)2.90 ± 0.022.06 ± 0.01R2b0.9980.996ΔG (kJ/mol)−30.64 ± 0.09−29.80 ± 0.34ΔH (kJ/mol)15.76 ± 0.8815.76 ± 0.88ΔS (J/mol/K)158.36 ± 0.34154.91 ± 0.42
### 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of the drug-receptor is one of the significant indicators for evaluating drug efficacy, and it is closely related to the binding constant (Ka). As a static quenching process, binding constant was calculated using the following equation [43, 44]:(6)logF0−FF=logKa+nlogQ,where Ka and n are the binding constant and the number of binding sites, respectively.As shown in Figure2(d) and Table 1, the n values are approximately equal to 1 at 293 K and 303 K. The observation was consistent with the reported crystal structure, indicating that Pin1 bound only one ATRA [22]. Also, the Ka values are 2.90 and 2.06 × 10 5 mol/L at 293 K and 303 K, respectively. Our result was similar to the previous study, suggesting that there was a high affinity between ATRA and Pin1 [22]. In addition, the Ka values display downtrend with the increasing temperatures, in agreement with the Ksv values, further illustrating that the quenching mechanism between Pin1 and ATRA is a static quenching process.Binding forces were involved in the binding process of Pin1 with ATRA. It is well known that the binding forces are mainly van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. These binding forces were distinguished through thermodynamic parameters obtained using the following equation [45]:(7)lnΔK2K1=ΔH1/T1−1/T2R,ΔG=−RTlnKaΔS=−ΔG−ΔHT,where ΔH, ΔG, and ΔS are enthalpy change, binding free energy, and entropy change, respectively. R and T are gas constant and experimental temperature, respectively.According to the viewpoint of Ross and Subramanian [46], if ΔH and ΔS > 0, hydrophobic interactions were binding forces; if ΔH < 0 and ΔS > 0, electrostatic forces were binding forces; if ΔH < 0 and ΔS < 0, hydrogen bonds and van der Waals forces were binding forces. From Table 1, the values of ΔH and ΔS are 15.76 kJ/mol and 158.36 J/mol·K in the binding process of ATRA to Pin1 at 293 K, respectively. The phenomenon suggested that hydrophobic interactions were the main binding force between ATRA and Pin1. In addition, the binding force between Pin1 and ATRA was analyzed in detail through the following computational simulations (Figures 3 and 4). It is observed that the values of ΔG are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process.Figure 3
Free energy landscape plot (a) and binding model (b)–(g) of Pin1-ATRA complex during MD simulation.Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
(a)(b)(c)(d)(e)(f)
## 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is an extremely useful and accurate method to monitor the binding and conformational changes of ligands with protein. It is well known that the endogenous fluorescence of protein is contributed mainly by tryptophan (Trp) and tyrosine (Tyr) residues. Also, when the excitation wavelength (λex) was 295 nm, the fluorescence spectra showed the endogenous fluorescence of tryptophan residues [36, 37]. A previous study had revealed that the activity of Pin1 decreased at high temperature, so this experiment selected 293 K and 303 K to implement the fluorescence spectra [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensities of Pin1 were decreased gradually with increasing concentration of ATRA. When the concentration of ATRA was 5 μM, the fluorescence quenching rates were 50.60% and 59.46%, respectively, at 293 K and 303 K. The phenomenon revealed that ATRA interacted with Pin1 and quenched its intrinsic fluorescence. Meanwhile, the result further confirmed the previous scientific reports that ATRA was an effective Pin1 inhibitor [20]. In addition, it was obvious that the maximum wavelength (λmax) of fluorescence spectra showed a tiny red-shift from 338.2 nm to 342.4 nm and from 339.0 nm to 342.4 nm at 293 K and 303 K, respectively, with the increasing of ATRA. It indicated that the binding of ATRA induced a certain conformational change of Pin1, including a decrease in the hydrophobicity around the microenvironment of tryptophan residues.Figure 2
Fluorescence spectra of Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively.λex = 295 nm, cpin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
(a)(b)(c)(d)In general, there are three fluorescence quenching mechanisms: static quenching mode, dynamic quenching mode, and combination of both modes. The quenching mechanism between Pin1 and ATRA was discriminated using the followingStern–Volmer equation [39, 40]:(5)F0F=1+KsvQ,where F0 and F are the maximum fluorescence intensity of Pin1 in the absence and presence of ATRA, respectively. Q is the concentration of ATRA. Ksv is a quenching constant.Figure2(c) shows a well-linear relationship between Pin1 and ATRA, which suggests that the quenching mechanism of ATRA is either a static quenching or a dynamic quenching mechanism. Scientific literature illustrates that quenching constant (Ksv) decreasing with increasing temperature is static quenching, and the opposite is dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 and 1.32 × 10 5 mol/L at 293 K and 303 K, respectively, suggesting that the quenching mode between Pin1 and ATRA is a typical static quenching mechanism.Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
Parameters293 K303 KKsv (105 mol/L)1.83 ± 0.011.32 ± 0.01R2a0.9930.996n0.73 ± 0.020.73 ± 0.03Ka (105 mol/L)2.90 ± 0.022.06 ± 0.01R2b0.9980.996ΔG (kJ/mol)−30.64 ± 0.09−29.80 ± 0.34ΔH (kJ/mol)15.76 ± 0.8815.76 ± 0.88ΔS (J/mol/K)158.36 ± 0.34154.91 ± 0.42
## 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of the drug-receptor is one of the significant indicators for evaluating drug efficacy, and it is closely related to the binding constant (Ka). As a static quenching process, binding constant was calculated using the following equation [43, 44]:(6)logF0−FF=logKa+nlogQ,where Ka and n are the binding constant and the number of binding sites, respectively.As shown in Figure2(d) and Table 1, the n values are approximately equal to 1 at 293 K and 303 K. The observation was consistent with the reported crystal structure, indicating that Pin1 bound only one ATRA [22]. Also, the Ka values are 2.90 and 2.06 × 10 5 mol/L at 293 K and 303 K, respectively. Our result was similar to the previous study, suggesting that there was a high affinity between ATRA and Pin1 [22]. In addition, the Ka values display downtrend with the increasing temperatures, in agreement with the Ksv values, further illustrating that the quenching mechanism between Pin1 and ATRA is a static quenching process.Binding forces were involved in the binding process of Pin1 with ATRA. It is well known that the binding forces are mainly van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. These binding forces were distinguished through thermodynamic parameters obtained using the following equation [45]:(7)lnΔK2K1=ΔH1/T1−1/T2R,ΔG=−RTlnKaΔS=−ΔG−ΔHT,where ΔH, ΔG, and ΔS are enthalpy change, binding free energy, and entropy change, respectively. R and T are gas constant and experimental temperature, respectively.According to the viewpoint of Ross and Subramanian [46], if ΔH and ΔS > 0, hydrophobic interactions were binding forces; if ΔH < 0 and ΔS > 0, electrostatic forces were binding forces; if ΔH < 0 and ΔS < 0, hydrogen bonds and van der Waals forces were binding forces. From Table 1, the values of ΔH and ΔS are 15.76 kJ/mol and 158.36 J/mol·K in the binding process of ATRA to Pin1 at 293 K, respectively. The phenomenon suggested that hydrophobic interactions were the main binding force between ATRA and Pin1. In addition, the binding force between Pin1 and ATRA was analyzed in detail through the following computational simulations (Figures 3 and 4). It is observed that the values of ΔG are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process.Figure 3
Free energy landscape plot (a) and binding model (b)–(g) of Pin1-ATRA complex during MD simulation.Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
(a)(b)(c)(d)(e)(f)
## 3.2. Conformational Studies
### 3.2.1. Circular Dichroism
The binding of ligands to receptors not only affected their thermal stability but also changed their binding conformation. Circular dichroism (CD) is one of the most widely used techniques to investigate the secondary structure changes of protein-ligand complex [47]. As shown in Figure 5(a), CD spectra of Pin1 have been decreased with the increasing amount of ATRA. It implied that ATRA changed the secondary structure of Pin1 and affected its stability. To explore the stability of the secondary structure, α-helix contents of Pin1 were calculated using the following equation [41]:(8)MRE208=observed CDmdegCpnl×10,α−helix%=−MRE208−400033000−4000,where MRE208 is mean residue ellipticity (MRE) at 208 nm and Cp, n, and l are the molar concentration of Pin1 (10 μmol/L), the number of amino acid residues (163), and the path length of the cell (0.1 cm), respectively.Figure 5
ATRA affects the conformation of Pin1. (a) CD spectra of Pin1 in the absence and presence of ATRA. cPin1 = 10 μM; cATRA = 0 (black line), 1 (red line), and 3 (blue line) μM. (b, c) Synchronous fluorescence spectra of Pin1 in the absence and presence of ATRA with Δλ = 15 nm and 60 nm, respectively. cPin1 = 5 μM; cATRA = 0 (black line), 0.5 (red line), 1 (blue line), 2 (magenta line), 3 (olive line), 4 (navy line), and 5 (violet line) μM. (d, e) Three-dimensional fluorescence spectra of Pin1 in the absence and presence of ATRA; cPin1 = 5 μM and cATRA = 0 and 5 μM.
(a)(b)(c)(d)(e)The calculated results showed that thea-helix content of Pin1 declined from 23.33% to 19.89% with the increasing amount of ATRA. The helical stability of Pin1 induced by ATRA might have altered hydrogen bonding networks, which resulted in unfolding of polypeptides and changing the secondary conformation of Pin1 [42].
### 3.2.2. Synchronous Fluorescence Spectroscopy
Synchronous fluorescence spectroscopy can detect the changes in the microenvironment of Pin1 after interacting with ATRA [48]. Synchronous fluorescence spectroscopy provides mainly the conformational characteristics of Tyr and Trp residues when Δλ = 15 and 60 nm, respectively. As shown in Figures 5(b) and 5(c), the fluorescence intensities of Tyr and Trp residues were declined regularly with the increasing amount of ATRA. When the concentration of ATRA is 5 μM, the fluorescence quenching rates of Tyr and Trp residues are 59.78% and 39.47%, respectively, at 293 K. The results demonstrated that ATRA not only quenched the intrinsic fluorescence of Tyr residues but also decreased the intrinsic fluorescence of Trp residues.It is noted thatλmax of fluorescence spectra of Tyr and Trp residues display a slight red-shift from 292.4 nm to 293.6 nm and from 280.8 nm to 282.0 nm, respectively. The results, which were in accordance with endogenous fluorescence (Figures 2(a) and 2(b)), illustrated that the ATRA declined the hydrophobicity and increased the polarity around the Tyr and Trp residues in Pin1.
### 3.2.3. Three-Dimensional Fluorescence Spectroscopy
Three-dimensional (3D) fluorescence spectroscopy is widely implemented to explore the conformational changes of protein-ligand complex. The 3D fluorescence spectra and characteristic peaks are displayed in Figures5(d) and 5(e). It can be seen that Pin1 has four characteristic peaks in the absence and presence of ATRA. Peak a and peak b represent the first-order Rayleigh scattering peak (λex = λem) and the second-order Rayleigh scattering peak (2λex = λem), respectively [49]. Peak 1 shows the characteristic fluorescence peak of Tyr and Trp residues. Also, peak 2 presents the characteristic fluorescence peak of polypeptide backbone structures, which are associated with the secondary structure. From Figures 5(d) and 5(e), fluorescence intensities of peak 1 and peak 2 reduced significantly after ATRA bound into Pin1, which further indicated that ATRA affected the conformation of Pin1.
## 3.2.1. Circular Dichroism
The binding of ligands to receptors not only affected their thermal stability but also changed their binding conformation. Circular dichroism (CD) is one of the most widely used techniques to investigate the secondary structure changes of protein-ligand complex [47]. As shown in Figure 5(a), CD spectra of Pin1 have been decreased with the increasing amount of ATRA. It implied that ATRA changed the secondary structure of Pin1 and affected its stability. To explore the stability of the secondary structure, α-helix contents of Pin1 were calculated using the following equation [41]:(8)MRE208=observed CDmdegCpnl×10,α−helix%=−MRE208−400033000−4000,where MRE208 is mean residue ellipticity (MRE) at 208 nm and Cp, n, and l are the molar concentration of Pin1 (10 μmol/L), the number of amino acid residues (163), and the path length of the cell (0.1 cm), respectively.Figure 5
ATRA affects the conformation of Pin1. (a) CD spectra of Pin1 in the absence and presence of ATRA. cPin1 = 10 μM; cATRA = 0 (black line), 1 (red line), and 3 (blue line) μM. (b, c) Synchronous fluorescence spectra of Pin1 in the absence and presence of ATRA with Δλ = 15 nm and 60 nm, respectively. cPin1 = 5 μM; cATRA = 0 (black line), 0.5 (red line), 1 (blue line), 2 (magenta line), 3 (olive line), 4 (navy line), and 5 (violet line) μM. (d, e) Three-dimensional fluorescence spectra of Pin1 in the absence and presence of ATRA; cPin1 = 5 μM and cATRA = 0 and 5 μM.
(a)(b)(c)(d)(e)The calculated results showed that thea-helix content of Pin1 declined from 23.33% to 19.89% with the increasing amount of ATRA. The helical stability of Pin1 induced by ATRA might have altered hydrogen bonding networks, which resulted in unfolding of polypeptides and changing the secondary conformation of Pin1 [42].
## 3.2.2. Synchronous Fluorescence Spectroscopy
Synchronous fluorescence spectroscopy can detect the changes in the microenvironment of Pin1 after interacting with ATRA [48]. Synchronous fluorescence spectroscopy provides mainly the conformational characteristics of Tyr and Trp residues when Δλ = 15 and 60 nm, respectively. As shown in Figures 5(b) and 5(c), the fluorescence intensities of Tyr and Trp residues were declined regularly with the increasing amount of ATRA. When the concentration of ATRA is 5 μM, the fluorescence quenching rates of Tyr and Trp residues are 59.78% and 39.47%, respectively, at 293 K. The results demonstrated that ATRA not only quenched the intrinsic fluorescence of Tyr residues but also decreased the intrinsic fluorescence of Trp residues.It is noted thatλmax of fluorescence spectra of Tyr and Trp residues display a slight red-shift from 292.4 nm to 293.6 nm and from 280.8 nm to 282.0 nm, respectively. The results, which were in accordance with endogenous fluorescence (Figures 2(a) and 2(b)), illustrated that the ATRA declined the hydrophobicity and increased the polarity around the Tyr and Trp residues in Pin1.
## 3.2.3. Three-Dimensional Fluorescence Spectroscopy
Three-dimensional (3D) fluorescence spectroscopy is widely implemented to explore the conformational changes of protein-ligand complex. The 3D fluorescence spectra and characteristic peaks are displayed in Figures5(d) and 5(e). It can be seen that Pin1 has four characteristic peaks in the absence and presence of ATRA. Peak a and peak b represent the first-order Rayleigh scattering peak (λex = λem) and the second-order Rayleigh scattering peak (2λex = λem), respectively [49]. Peak 1 shows the characteristic fluorescence peak of Tyr and Trp residues. Also, peak 2 presents the characteristic fluorescence peak of polypeptide backbone structures, which are associated with the secondary structure. From Figures 5(d) and 5(e), fluorescence intensities of peak 1 and peak 2 reduced significantly after ATRA bound into Pin1, which further indicated that ATRA affected the conformation of Pin1.
## 3.3. Computational Studies
### 3.3.1. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) is an effective technique to reduce the huge dimension of data set to principal components, showing the main changes of protein-ligand complex and providing some important information [50]. It revealed that the first 15 eigenvectors account for 70.08% of the total variance, which was equal to the previous report of about 70% [30]. In addition, two important eigenvectors, principal components 1 and 2 (PC 1 and PC 2), contributed about 35.98% to the total variance with the conformational space.Based on PCA results, the conformational change of Pin1-ATRA complex was explored through free energy landscape (FEL) of PC 1 and PC 2. As shown in Figure3(a), it is noteworthy that Pin1-ATRA complex has widely two deeper energy basins with energy barriers in the range of 1.0 kJ/mol to 4.0 kJ/mol, indicating that the binding of ATRA to Pin1 may form two stable conformations. In other words, the process of ATRA binding to Pin1 will cause its dynamics conformational transitions, which corresponds well to the spectroscopic phenomena (Figures 2(a), 2(b), and 5).According to FEL results, two representative conformational models are displayed in Figures3(b)–3(g). From Figures 3(b)–3(d), ATRA interacts with residues H59, L61, K63, R69, C113, M130, Q131, F134, and H157. It is observed that the carboxylic acid of ATRA forms a critical hydrogen bond with residue R69 and also forms seven Pi-Alkyl interactions (a type of hydrophobic interaction). In addition, residues K63, Q131, and R69 interact with ATRA via weak van der Waals and salt bridge (a type of electrostatic force). Figures 3(e)–3(g), showing the lowest energy minima, reveal that residues H59, K63, R68, R69, M130, F134, and H157 play an important role in the process of binding ATRA to Pin1 through Pi-Alkyl interaction. Meanwhile, residue K63 forms a salt bridge and residue R69 forms a stable hydrogen bond with ATRA. These observations were similar to previous reports [22], suggesting that the carboxylic acid of ATRA formed salt bridges and hydrogen bonds with residues K63 and R68, and the hydrophobic skeleton of ATRA formed hydrophobic interactions and van der Waals with residues R68, C113, M130, Q131, F134, and H157. All in all, the results, which were in line with experimental works, further suggested that the main binding forces were strong hydrophobic interactions, but electrostatic forces, weak van der Waals, and hydrogen bonds cannot be ignored.
### 3.3.2. MD Simulations
MD simulation is a popular tool to investigate dynamic conformational stability at the atomic level. Root mean square deviation (RMSD) values, an important criterion for assessing conformational stability, of Pin1 and Pin1-ATRA complex were calculated for initial conformations (Figure4(a)). Two systems are relatively stable with approximately 1.75 Å during 50 ns MD simulations, which suggest that the binding of ATRA to Pin1 is relatively stable. From Figure 4(b), it is noteworthy that RMSD values of ATRA reach a relative equilibrium state after 15 ns MD simulations. As seen in Figure 4(c), root mean square fluctuation (RMSF) values, another important indicator for evaluating structural stability, show that two systems share similar fluctuation, except for the region of certain binding residues H59, L61, K63, R68, C113, M130, Q131, F134, and H157.As shown in Figure4(d), radius of gyration (Rg) value, a valuable indicator for evaluating the compactness of a protein, of Pin1-ATRA complex was less than that of Pin1 during 50 ns MD simulations. The results meant that ATRA entered the active pocket of Pin1, causing its conformation to become loose. From Figure 4(e), hydrogen bonds of Pin1-ATRA complex had been present during 50 ns MD simulations, with an average value of about 1, which was in line with the binding model (Figures 3(d) and 3(g)).Binding free energy (ΔGbind) of Pin1-ATRA system is predicted to be −167.19 kJ/mol, suggesting that the binding process is spontaneous between Pin1 and ATRA (Table S1). In addition, energy decompositions show that van der Waals energy (ΔEvdw), electrostatic energy (ΔEele), polar solvation energy (ΔGpolar), and nonpolar solvation energy (ΔGnonpolar) are −120.70, −121.41, 88.04, and −13.12 kJ/mol, respectively (Table S1). The result revealed that van der Waals energy, electrostatic energy, and nonpolar solvation were conducive to the binding process of ATRA to Pin1, whereas the polar solvation energy was the opposite. In addition, nonpolar interaction energy (ΔEvdw + ΔGnonpolar) and polar interaction energy (ΔEele + ΔGpolar) are −133.38 and −33.37 kJ/mol, respectively, in Pin1-ATRA system, indicating that both of these energies contribute to the binding. Meanwhile, residue energy contributions also are predicted in Pin1-ATRA system. From Figure 4(f), it is observed that residues K63, R68, and R69 may play a significant role in the binding process between Pin1 and ATRA.
## 3.4. Mutant Studies
### 3.4.1. Fluorescence Titrations
Fluorescence titrations were used to verify the key residues in the binding of ATRA to Pin1. As shown in Figure 6(a) and Figure S1, ATRA quenched the endogenous fluorescence of the Pin1 mutants to different degrees, with K63A and R69A showing the weakest quenching. In addition, the binding constants revealed that mutation of these important residues reduced the binding affinity for ATRA to varying degrees, especially for residues K63 and R69 (Figure 6(b) and Figure S2). Previous research showed that the carboxylic acid of ATRA forms salt bridges with residues K63 and R68 [22], similar to our results, further indicating that these basic residues are essential for the binding of ATRA to Pin1. This observation could inform the subsequent development and design of retinoic acid drugs that inhibit the activity of Pin1.

Figure 6
ATRA binds to Pin1 through key residues. (a) Fluorescence titration of Pin1 and its mutants in the absence and presence of ATRA;λex = 295 nm; T = 293 K; cpin1 or mutants = 5 μM; cATRA = 0, 0.5, 1, 2, 3, 4, and 5 μM. (b) Relative binding constant of ATRA to Pin1 and its mutants. (c) DARTS assay of Pin1 and its mutants by ATRA. (d, e) RMSD of K63A-ATRA and R69A-ATRA complex during 50 ns MD simulation.
### 3.4.2. DARTS Assay
A DARTS assay was performed to detect the effect of the key residue mutations on the stability of ATRA binding to Pin1. As shown in Figure 6(c), mutants K63A and R69A were less resistant to proteolysis when bound to ATRA than the wild type. This result is consistent with the fluorescence titrations and further implies that residues K63 and R69 play an important role in the binding of ATRA to Pin1.
### 3.4.3. MD Simulations of Mutants
Using the PyMOL Mutagenesis plug-in, we modeled mutation of the important residues of Pin1 to alanine. As shown in Figure S3, these ten single-residue mutations did not disrupt the secondary structure of Pin1, in line with previous work indicating that single-residue mutations have a limited impact on the overall secondary structure of a protein [51]. In addition, the CD spectra of mutants K63A, R68A, and R69A were similar to those of the wild type, further confirming this observation (Figure S4). Next, we performed MD simulations to explore the effects of the K63 and R69 mutations on the binding stability of Pin1 and ATRA. During the 50 ns MD simulations, the RMSD values of the K63A-ATRA and R69A-ATRA systems were less stable than those of the wild type (Figures 6(d) and 6(e)). In addition, the binding free energies (ΔGbind) of the K63A-ATRA and R69A-ATRA systems are −70.81 and −51.23 kJ/mol, respectively, considerably less negative than that of the wild-type Pin1-ATRA system (−167.19 kJ/mol, Table S1). These results indicate that the K63A and R69A mutations weaken the binding of ATRA to Pin1.
## 4. Conclusion
The present work details the binding and conformational change of ATRA with Pin1 using fluorescence spectra, circular dichroism, MD simulations, binding free energy calculations, and free energy landscapes under physiological conditions. Fluorescence emission spectra showed that ATRA quenches Pin1 through a static quenching process with moderate binding affinity. Thermodynamic parameters and computational simulations indicated that hydrophobic interactions dominate the binding, with other forces also involved. Circular dichroism, synchronous fluorescence, and three-dimensional fluorescence spectra demonstrated that the binding of ATRA reduces the helical stability of the active center of Pin1. The free energy landscape and MD simulations showed that ATRA binding causes dynamic conformational transitions in Pin1. Computational simulations, fluorescence titrations, and DARTS assays demonstrated that residues K63 and R69 play an important role in the binding between Pin1 and ATRA. In summary, this work helps clarify the binding mechanism of ATRA to Pin1 and provides useful information for the application of ATRA as a therapeutic drug in cancer.
---
*Source: 1012078-2021-12-16.xml*

---

# Investigation on the Binding and Conformational Change of All-trans-Retinoic Acid with Peptidyl Prolyl cis/trans Isomerase Pin1 Using Spectroscopic and Computational Techniques

**Authors:** GuoFei Zhu; ShaoLi Lyu; Yang Liu; Chao Ma; Wang Wang

**Journal:** Journal of Spectroscopy
(2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1012078

---
## Abstract
Binding and conformational change of all-trans-retinoic acid (ATRA) with peptidyl prolyl cis/trans isomerase Pin1 were investigated systematically by spectroscopic and computational techniques under experimentally optimized physiological conditions. The intrinsic fluorescence of Pin1 was quenched through a static quenching mechanism in the presence of ATRA, with binding constants on the order of 10^5 L/mol. Thermodynamic parameters (ΔH = 15.76 kJ/mol and ΔS = 158.36 J/mol·K at 293 K) and computational results illustrated that hydrophobic interactions played a significant role in the binding of ATRA to Pin1, although electrostatic forces, weak van der Waals contacts, and hydrogen bonds cannot be ignored. Circular dichroism, fluorescence spectra, and computational simulations revealed that ATRA interacts with residues Lys63 and Arg69 of Pin1 and thereby affects its conformation. Molecular dynamics simulation, principal component analysis, and free energy landscape analysis monitored the dynamic conformational characteristics of ATRA binding to Pin1. Overall, the present research may provide a reference for the development and design of retinoic acid drugs that inhibit the activity of Pin1.
---
## Body
## 1. Introduction
Peptidyl prolyl cis-trans isomerase Pin1 is a unique enzyme that catalyzes cis-trans isomerization of phosphorylated serine/threonine-proline (pSer/Thr-Pro) motifs and thereby posttranslationally modulates the structure and function of Pin1 substrates [1]. For example, Pin1 binds to the pThr286-Pro motif of Cyclin D1 and increases its stability in the nucleus [2]. Moreover, Pin1 binds to the pSer246-Pro motif of β-catenin, inhibits its interaction with adenomatous polyposis coli (APC), and improves its stability and transport to the nucleus [2]. Therefore, Pin1 can regulate multiple cancer-driving signaling pathways, such as the Wnt/β-catenin, PI3K/AKT, and RTK/Ras/ERK pathways, as well as physiological processes such as the cell cycle, apoptosis, and aging [3–5]. Pin1 contains 163 amino acid residues organized into a WW domain and a PPIase domain (Figure 1(a)) [6]. The WW domain recognizes the pSer/Thr-Pro motif of the substrate and passes it to the PPIase domain for catalysis [7, 8]. As a regulatory factor, Pin1 plays an important role in malignant tumors and neurodegenerative diseases, making it an attractive and valuable drug target [9–11].

Figure 1
The crystal structure of Pin1 (a) and chemical structure of ATRA (b). W11, W34, and W73 are three tryptophan residues.
(a)(b)

ATRA, one of the most active metabolites of vitamin A, has broad application prospects in cancer therapy and prevention [12]. ATRA belongs to the retinoid family and is composed of a β-ionone ring and a polyunsaturated side chain bearing a carboxylic acid group (Figure 1(b)) [13]. The chemotherapeutic and chemopreventive effects of ATRA have been widely reported in hepatocellular carcinoma [14], breast cancer [15], gastric cancer [16], colon cancer [17], and prostate cancer [18]. Indeed, ATRA has become the standard front-line drug for the treatment of acute promyelocytic leukemia (APL) in adults and of neuroblastoma in children [13]. Although ATRA exhibits a wide range of activities, its application in other tumors is severely limited by side effects, a short half-life, and poor water solubility [12, 19]. Several groups have shown that ATRA is one of the most potent Pin1 inhibitors for cancer treatment [12–14]. Notably, ATRA binds to the active pocket of Pin1 and inhibits its biological function, which induces Pin1 degradation in APL cells [20]. The combination of arsenic trioxide (ATO) and ATRA has also been reported to safely treat APL by targeting Pin1 [21]. The crystal structure of the Pin1-ATRA complex reveals that ATRA binds to the catalytic domain of Pin1, but the specific inhibitory mechanism, thermodynamic parameters, binding affinity, energy transfer, and conformational changes remain unclear [22]. In the present work, we utilized multiple spectroscopic and computational techniques to explore the dynamic conformational characteristics of ATRA binding to Pin1 in aqueous solution under physiological conditions. The quenching constants (Ksv), binding constants (Ka), number of binding sites (n), and thermodynamic parameters (ΔH, ΔS, and ΔG) for the interaction of Pin1 with ATRA were calculated from fluorescence spectra at two temperatures (293 K and 303 K). The conformational changes upon ATRA binding were characterized by synchronous fluorescence, three-dimensional (3D) fluorescence, and circular dichroism (CD). The dynamic characteristics of ATRA binding to Pin1 were monitored at the atomic level by molecular dynamics simulations, principal component analysis, and free energy landscape analysis. This study should help in understanding the binding model and inhibition mechanism of the Pin1-ATRA complex.
## 2. Materials and Methods
### 2.1. Materials
The ATRA standard was purchased from Macklin Biochemical Co., Ltd. (Shanghai, China). Yeast extract and peptone were products of AoBox Biotechnology Co., Ltd. (Beijing, China). His-tag purification resin and ultrafiltration spin columns were obtained from Beyotime Biotechnology Co., Ltd. (Shanghai, China). Thermolysin was obtained from Yuanye Biotechnology Co., Ltd. (Shanghai, China). Isopropyl-beta-D-thiogalactopyranoside (IPTG), penicillin, and other reagents were obtained from Solarbio Technology Co., Ltd. (Beijing, China).
### 2.2. Preparation of Pin1
Pin1 was expressed and purified as described previously [23, 24]. Briefly, wild-type Pin1 (WT-Pin1) and ten alanine mutants (H59A, L61A, K63A, R68A, R69A, C113A, M130A, Q131A, F134A, and H157A) were expressed in E. coli BL21 (DE3) harboring the recombinant plasmid pET-19b-Pin1. The proteins of interest were purified using His-tag purification resin and ultrafiltration spin columns and exchanged into Buffer C (25 mM Tris, 200 mM NaCl, pH 7.4). The purities of Pin1 and the mutants were verified by SDS-PAGE (purity >90%), and their concentrations were measured by Bradford assay.
### 2.3. Spectral Measures
Circular dichroism (CD) measurements were carried out on a Jasco J-815 spectropolarimeter (JASCO, Japan) with a quartz cell of 0.1 cm path length at 298 K. The spectral range, scan speed, and bandwidth were set to 200–250 nm, 20 nm/min, and 1.0 nm, respectively. The concentration of Pin1 (pH 7.4) was kept at 10 μM while ATRA was gradually added. Every spectrum was the mean of three scans. The fluorescence measurements were collected on an F-4500 fluorescence spectrophotometer (Hitachi, Japan) with a 1.0 cm quartz cell and a thermostat bath. The concentration of Pin1 (pH 7.4) was kept at 5 μM while ATRA was gradually added. Fluorescence emission spectra were measured in the wavelength range of 310–400 nm at an excitation wavelength (λex) of 295 nm at 293 K and 303 K. Synchronous fluorescence spectra were recorded in the wavelength ranges of 270–310 nm and 250–310 nm for Δλ (Δλ = λem − λex) of 15 and 60 nm, respectively, at 293 K. Three-dimensional (3D) fluorescence spectra were recorded over emission wavelengths of 200–450 nm and excitation wavelengths of 200–350 nm at 293 K. The excitation slit, emission slit, scanning speed, and voltage were set to 5 nm, 5 nm, 1200 nm/min, and 700 V, respectively. Every spectrum was the average of three scans.
### 2.4. Computational Simulations
Classical molecular dynamics (MD) simulations were performed to study the binding and conformational changes of ATRA with peptidyl prolyl cis/trans isomerase Pin1. The crystal structure of the Pin1-ATRA complex was downloaded from the Protein Data Bank (PDB ID: 4TNS) [22]. PyMOL was used to remove water molecules and ions [25]. To reconstruct the wild-type structure, residues Q77 and Q82 were replaced by K77 and K82 using the Swiss-Model server [26]. MD simulations of Pin1 and the Pin1-ATRA complex were performed in GROMACS v4.6.5 with the AMBER99SB all-atom force field [27]. The pdb2gmx and antechamber programs were used to produce the topology files of Pin1 and ATRA, respectively. Pin1 and the Pin1-ATRA complex were solvated in a dodecahedral periodic box of TIP3P water molecules, and Cl− ions were added to keep the systems charge-neutral. Energy minimization with the steepest descent algorithm was performed for 5000 steps to correct improper geometries and avoid steric clashes. Then, 100 ps NVT and NPT ensembles were used to equilibrate the systems at 300 K and 1 atm. Finally, 50 ns production MD simulations of Pin1 and the Pin1-ATRA complex were run with a 2 fs timestep at 310 K. The g_rms, g_rmsf, g_gyrate, and g_hbond programs were used to analyze the backbone root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rg), and hydrogen bonds, respectively.
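As a practical note, the GROMACS analysis tools named above (g_rms, g_gyrate, and so on) write their results as .xvg files. The following is a minimal sketch, assuming placeholder file names, of how such output could be parsed and summarized in Python; it is an illustration of the file format, not part of the original workflow.

```python
# Minimal sketch: parse and summarize a GROMACS .xvg file (e.g., from g_rms).
# "rmsd.xvg" is a placeholder name; .xvg files interleave '#'/'@' header lines
# with whitespace-separated numeric columns (here: time in ps, RMSD in nm).
import numpy as np

def load_xvg(path):
    rows = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(("#", "@")) or not line.strip():
                continue  # skip comments, plot directives, and blank lines
            rows.append([float(x) for x in line.split()])
    return np.asarray(rows)

data = load_xvg("rmsd.xvg")
time_ns = data[:, 0] / 1000.0   # ps -> ns
rmsd_A = data[:, 1] * 10.0      # nm -> Å
print(f"mean backbone RMSD over {time_ns[-1]:.0f} ns: {rmsd_A.mean():.2f} Å")
```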
### 2.5. Binding Energy Calculations
The MM/PBSA (Molecular Mechanics/Poisson-Boltzmann Surface Area) approach was used to predict the binding free energy between Pin1 and ATRA. One hundred snapshots were extracted from the last 5 ns of the MD trajectory. The binding free energy (ΔGbind) was estimated as [28]:

$$\Delta G_{bind} = \Delta G_{complex} - (\Delta G_{protein} + \Delta G_{ligand}) = \Delta E_{MM} + \Delta G_{sol} - T\Delta S, \tag{1}$$

$$\Delta E_{MM} = \Delta E_{vdw} + \Delta E_{ele}, \qquad \Delta G_{sol} = \Delta G_{polar} + \Delta G_{nonpolar},$$

where ΔGcomplex, ΔGprotein, and ΔGligand are the total free energies of the Pin1-ATRA complex, Pin1, and ATRA, respectively. ΔEMM consists of the van der Waals (ΔEvdw) and electrostatic interaction energy (ΔEele). ΔGsol consists of the polar solvation free energy (ΔGpolar) and nonpolar solvation free energy (ΔGnonpolar). TΔS is the entropic contribution, which is neglected because of its high computational cost and low prediction accuracy in the MM/PBSA approach. The binding free energy (ΔGbind) therefore reduces to:

$$\Delta G_{bind} = \Delta E_{vdw} + \Delta E_{ele} + \Delta G_{polar} + \Delta G_{nonpolar}. \tag{2}$$

The g_mmpbsa program and the Python script MmPbSaStat.py were used to predict the binding energy between Pin1 and ATRA [29].
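To make equation (2) concrete, here is a minimal sketch of how the per-snapshot MM/PBSA components could be combined. The arrays are synthetic stand-ins (centered on the mean values later reported in Table S1), not actual g_mmpbsa output.

```python
# Minimal sketch of equation (2): sum MM/PBSA components per snapshot
# (TΔS omitted, as in the text). Synthetic data for 100 snapshots from the
# last 5 ns; the means follow the values reported in Table S1.
import numpy as np

rng = np.random.default_rng(0)
dE_vdw      = rng.normal(-120.70, 5.0, 100)   # van der Waals (kJ/mol)
dE_ele      = rng.normal(-121.41, 8.0, 100)   # electrostatic (kJ/mol)
dG_polar    = rng.normal(  88.04, 6.0, 100)   # polar solvation (kJ/mol)
dG_nonpolar = rng.normal( -13.12, 1.0, 100)   # nonpolar solvation (kJ/mol)

dG_bind = dE_vdw + dE_ele + dG_polar + dG_nonpolar
print(f"ΔG_bind  = {dG_bind.mean():7.2f} ± {dG_bind.std():.2f} kJ/mol")
print(f"nonpolar = {(dE_vdw + dG_nonpolar).mean():7.2f} kJ/mol")  # ΔE_vdw + ΔG_nonpolar
print(f"polar    = {(dE_ele + dG_polar).mean():7.2f} kJ/mol")     # ΔE_ele + ΔG_polar
```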
### 2.6. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) was used to study the essential motions of biomacromolecules during the MD simulations [30, 31]. The covariance matrix C was constructed from the coordinates of the Cα atoms and diagonalized to obtain the eigenvalues and corresponding eigenvectors from the 50 ns MD trajectories. The covariance matrix element for Cα atoms i and j was calculated as [32]:

$$C_{ij} = \left\langle M_{ii}^{1/2}\left(x_i - \langle x_i\rangle\right)\, M_{jj}^{1/2}\left(x_j - \langle x_j\rangle\right) \right\rangle, \tag{3}$$

where C is a symmetric 3n × 3n matrix, n is the number of residues, and M is a diagonal mass matrix. The g_covar package was used to construct and diagonalize the covariance matrix of the Cα atoms, and the g_anaeig package was used to project the trajectory onto the eigenvectors. The first and second eigenvectors were taken as principal component 1 (PC 1) and principal component 2 (PC 2), respectively. The free energy landscape (FEL) is an effective technique to study conformational changes associated with different energy states [32, 33]. PC 1 and PC 2 were used to construct a two-dimensional representation of the FEL. A free energy minimum indicates a stable conformation, and the energy barriers connecting minima correspond to metastable states. The Gibbs free energy (Gα) was obtained as [34]:

$$G_\alpha = -kT \ln\!\left(\frac{P(q_\alpha)}{P_{\max}(q)}\right), \tag{4}$$

where k, T, P(qα), and Pmax(q) are the Boltzmann constant, the absolute temperature, the probability density function, and the probability of the most probable state, respectively. The Gibbs free energy was generated using the g_sham package.
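The covariance and projection steps of equation (3) can be expressed directly in NumPy. The sketch below uses random coordinates as a stand-in for the real Cα trajectory and drops the mass weighting (for Cα-only PCA all masses are equal, so M merely rescales the eigenvalues):

```python
# Minimal PCA sketch for equation (3): covariance of centered Cα coordinates,
# eigendecomposition, and projection onto the first two eigenvectors.
import numpy as np

n_frames, n_res = 5000, 163
coords = np.random.rand(n_frames, 3 * n_res)   # placeholder for real Cα coordinates

X = coords - coords.mean(axis=0)               # remove the average structure
C = (X.T @ X) / n_frames                       # symmetric 3n x 3n covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)           # eigh: for symmetric matrices
idx = np.argsort(eigvals)[::-1]                # sort by decreasing variance
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

print(f"variance in first 15 eigenvectors: {eigvals[:15].sum() / eigvals.sum():.1%}")
pc1 = X @ eigvecs[:, 0]                        # PC 1 projection, one value per frame
pc2 = X @ eigvecs[:, 1]                        # PC 2 projection, one value per frame
```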
### 2.7. Drug Affinity Responsive Target Stability (DARTS) Assay
A DARTS assay can effectively probe the stability of protein-ligand binding because a protein is less susceptible to proteolysis when drug-bound than when drug-free [35]. ATRA (0.2 mg/mL) was added to Pin1 or its mutants (0.2 mg/mL) and incubated at room temperature for 30 min. The mixed solutions were then proteolyzed with thermolysin (1:1000) at room temperature for 30 min and analyzed by SDS-PAGE.
## 3. Results and Discussion
### 3.1. Fluorescence Spectroscopic Studies
#### 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is an extremely useful and accurate method for monitoring the binding and conformational changes of ligands with proteins. It is well known that the endogenous fluorescence of a protein arises mainly from tryptophan (Trp) and tyrosine (Tyr) residues, and at an excitation wavelength (λex) of 295 nm the spectra reflect the endogenous fluorescence of the tryptophan residues [36, 37]. A previous study revealed that the activity of Pin1 decreases at high temperature, so 293 K and 303 K were selected for the fluorescence measurements [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensity of Pin1 decreased gradually with increasing ATRA concentration. At 5 μM ATRA, the quenching rates were 50.60% and 59.46% at 293 K and 303 K, respectively. This shows that ATRA interacts with Pin1 and quenches its intrinsic fluorescence, further supporting earlier reports that ATRA is an effective Pin1 inhibitor [20]. In addition, the maximum emission wavelength (λmax) showed a slight red shift with increasing ATRA, from 338.2 nm to 342.4 nm at 293 K and from 339.0 nm to 342.4 nm at 303 K. This indicates that ATRA binding induces a conformational change in Pin1, including a decrease in the hydrophobicity of the microenvironment around the tryptophan residues.

Figure 2
Fluorescence spectra of Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively.λex = 295 nm, cpin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
(a)(b)(c)(d)

In general, there are three fluorescence quenching mechanisms: static quenching, dynamic quenching, and a combination of both. The quenching mechanism between Pin1 and ATRA was discriminated using the Stern–Volmer equation [39, 40]:

$$\frac{F_0}{F} = 1 + K_{sv}[Q], \tag{5}$$

where F0 and F are the maximum fluorescence intensities of Pin1 in the absence and presence of ATRA, respectively, [Q] is the concentration of ATRA, and Ksv is the quenching constant. Figure 2(c) shows a good linear relationship, consistent with either a purely static or a purely dynamic quenching mechanism. The literature indicates that a quenching constant (Ksv) that decreases with increasing temperature signifies static quenching, while the opposite trend signifies dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 × 10^5 and 1.32 × 10^5 L/mol at 293 K and 303 K, respectively, indicating that quenching between Pin1 and ATRA follows a typical static mechanism.

Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
| Parameter | 293 K | 303 K |
| --- | --- | --- |
| Ksv (10^5 L/mol) | 1.83 ± 0.01 | 1.32 ± 0.01 |
| R² (Stern–Volmer fit) | 0.993 | 0.996 |
| n | 0.73 ± 0.02 | 0.73 ± 0.03 |
| Ka (10^5 L/mol) | 2.90 ± 0.02 | 2.06 ± 0.01 |
| R² (double-logarithm fit) | 0.998 | 0.996 |
| ΔG (kJ/mol) | −30.64 ± 0.09 | −29.80 ± 0.34 |
| ΔH (kJ/mol) | 15.76 ± 0.88 | 15.76 ± 0.88 |
| ΔS (J/mol·K) | 158.36 ± 0.34 | 154.91 ± 0.42 |
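As an illustration of how the Ksv values in Table 1 follow from equation (5), the sketch below fits F0/F against [Q]. The intensities are synthetic, chosen to be consistent with roughly 50% quenching at 5 μM; the real fit would of course use the measured spectra.

```python
# Minimal sketch of equation (5): linear Stern-Volmer fit, F0/F = 1 + Ksv[Q].
# Intensities are synthetic, consistent with Ksv on the order of 1.8e5 L/mol.
import numpy as np

Q  = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-6   # [ATRA], mol/L
F0 = 1000.0                                            # intensity without ATRA (a.u.)
F  = np.array([916.0, 845.0, 732.0, 645.0, 577.0, 522.0])

Ksv, intercept = np.polyfit(Q, F0 / F, 1)              # slope = Ksv
print(f"Ksv ≈ {Ksv:.2e} L/mol, intercept ≈ {intercept:.2f} (expected ~1)")
```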
#### 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of a drug for its receptor is one of the important indicators of drug efficacy, and it is closely related to the binding constant (Ka). For a static quenching process, the binding constant was calculated using the double-logarithm equation [43, 44]:

$$\log\frac{F_0 - F}{F} = \log K_a + n \log [Q], \tag{6}$$

where Ka and n are the binding constant and the number of binding sites, respectively. As shown in Figure 2(d) and Table 1, the n values are approximately equal to 1 at both 293 K and 303 K. This observation is consistent with the reported crystal structure, indicating that Pin1 binds only one ATRA molecule [22]. The Ka values are 2.90 × 10^5 and 2.06 × 10^5 L/mol at 293 K and 303 K, respectively, similar to a previous study and indicative of a high affinity between ATRA and Pin1 [22]. In addition, the Ka values decrease with increasing temperature, in agreement with the Ksv values, further confirming that the quenching mechanism between Pin1 and ATRA is a static process. Several binding forces can be involved in the association of Pin1 with ATRA, mainly van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. These were distinguished through thermodynamic parameters obtained from the following equations [45]:

$$\ln\frac{K_2}{K_1} = \frac{\Delta H}{R}\left(\frac{1}{T_1} - \frac{1}{T_2}\right), \qquad \Delta G = -RT \ln K_a, \qquad \Delta S = \frac{\Delta H - \Delta G}{T}, \tag{7}$$

where ΔH, ΔG, and ΔS are the enthalpy change, binding free energy, and entropy change, respectively, and R and T are the gas constant and the experimental temperature. According to the viewpoint of Ross and Subramanian [46], ΔH > 0 and ΔS > 0 indicate hydrophobic interactions; ΔH < 0 and ΔS > 0 indicate electrostatic forces; and ΔH < 0 and ΔS < 0 indicate hydrogen bonds and van der Waals forces. From Table 1, ΔH and ΔS are 15.76 kJ/mol and 158.36 J/mol·K for the binding of ATRA to Pin1 at 293 K, suggesting that hydrophobic interactions are the main binding force between ATRA and Pin1. The binding forces are analyzed in more detail in the computational simulations below (Figures 3 and 4). The ΔG values are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process.

Figure 3
Free energy landscape plot (a) and binding model (b)–(g) of Pin1-ATRA complex during MD simulation.Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
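For completeness, here is a minimal sketch of equations (6) and (7). The double-logarithm fit for Ka and n reuses the synthetic titration from the Stern–Volmer sketch above (so its fitted values are illustrative only), while the thermodynamic step takes Ka and ΔH as reported in Table 1 and reproduces the tabulated ΔG and ΔS at 293 K.

```python
# Minimal sketch of equations (6) and (7). Titration data are synthetic;
# Ka and ΔH in the second part are the values reported in Table 1.
import numpy as np

# Equation (6): log((F0 - F)/F) = log Ka + n log[Q]
Q  = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-6
F0, F = 1000.0, np.array([916.0, 845.0, 732.0, 645.0, 577.0, 522.0])
n, logKa = np.polyfit(np.log10(Q), np.log10((F0 - F) / F), 1)
print(f"n ≈ {n:.2f}, Ka ≈ {10**logKa:.2e} L/mol")

# Equation (7) at 293 K, using Table 1: Ka = 2.90e5 L/mol, ΔH = 15.76 kJ/mol
R, T = 8.314, 293.0
Ka, dH = 2.90e5, 15.76e3
dG = -R * T * np.log(Ka)     # ΔG = -RT ln Ka  -> ≈ -30.64 kJ/mol
dS = (dH - dG) / T           # ΔS = (ΔH - ΔG)/T -> ≈ 158.4 J/(mol·K)
print(f"ΔG = {dG / 1000:.2f} kJ/mol, ΔS = {dS:.1f} J/(mol·K)")
```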
### 3.2. Conformational Studies
#### 3.2.1. Circular Dichroism
The binding of a ligand to a receptor affects not only its thermal stability but also its conformation. Circular dichroism (CD) is one of the most widely used techniques for investigating secondary structure changes in protein-ligand complexes [47]. As shown in Figure 5(a), the CD signal of Pin1 decreased with increasing amounts of ATRA, implying that ATRA changed the secondary structure of Pin1 and affected its stability. To quantify this change, the α-helix content of Pin1 was calculated using the following equations [41]:

$$MRE_{208} = \frac{\text{observed CD (mdeg)}}{10\, C_p\, n\, l}, \qquad \alpha\text{-helix}\,(\%) = \frac{-MRE_{208} - 4000}{33000 - 4000} \times 100, \tag{8}$$

where MRE208 is the mean residue ellipticity (MRE) at 208 nm, and Cp, n, and l are the molar concentration of Pin1 (10 μmol/L), the number of amino acid residues (163), and the path length of the cell (0.1 cm), respectively.

Figure 5
ATRA affects the conformation of Pin1. (a) CD spectra of Pin1 in the absence and presence of ATRA. cPin1 = 10 μM; cATRA = 0 (black line), 1 (red line), and 3 (blue line) μM. (b, c) Synchronous fluorescence spectra of Pin1 in the absence and presence of ATRA with Δλ = 15 nm and 60 nm, respectively. cPin1 = 5 μM; cATRA = 0 (black line), 0.5 (red line), 1 (blue line), 2 (magenta line), 3 (olive line), 4 (navy line), and 5 (violet line) μM. (d, e) Three-dimensional fluorescence spectra of Pin1 in the absence and presence of ATRA; cPin1 = 5 μM and cATRA = 0 and 5 μM.
(a)(b)(c)(d)(e)

The calculated results showed that the α-helix content of Pin1 declined from 23.33% to 19.89% with increasing amounts of ATRA. The loss of helicity induced by ATRA may reflect altered hydrogen-bonding networks, resulting in partial unfolding of the polypeptide and a change in the secondary conformation of Pin1 [42].
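Equation (8) is simple enough to spell out as a helper function. The observed ellipticity below is a synthetic value chosen to reproduce roughly the 23.3% helix reported for free Pin1; the actual input would come from the measured spectrum.

```python
# Minimal sketch of equation (8): α-helix content from the ellipticity at 208 nm.
def alpha_helix_percent(cd_mdeg, conc_M=10e-6, n_residues=163, path_cm=0.1):
    mre208 = cd_mdeg / (10 * conc_M * n_residues * path_cm)  # mean residue ellipticity
    return (-mre208 - 4000) / (33000 - 4000) * 100           # percent α-helix

# -17.55 mdeg is a synthetic observed CD value giving ~23.3% helix for free Pin1.
print(f"α-helix ≈ {alpha_helix_percent(-17.55):.1f}%")
```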
#### 3.2.2. Synchronous Fluorescence Spectroscopy
Synchronous fluorescence spectroscopy can detect changes in the microenvironment of Pin1 upon interaction with ATRA [48]. It mainly reflects the conformational characteristics of the Tyr and Trp residues at Δλ = 15 and 60 nm, respectively. As shown in Figures 5(b) and 5(c), the fluorescence intensities of the Tyr and Trp residues declined steadily with increasing amounts of ATRA. At 5 μM ATRA, the quenching rates of the Tyr and Trp residues were 59.78% and 39.47%, respectively, at 293 K. These results demonstrate that ATRA quenched the intrinsic fluorescence of both the Tyr and the Trp residues. It is also noted that the λmax values of the Tyr and Trp spectra display slight red shifts, from 292.4 nm to 293.6 nm and from 280.8 nm to 282.0 nm, respectively. These results, in accordance with the endogenous fluorescence (Figures 2(a) and 2(b)), illustrate that ATRA decreased the hydrophobicity and increased the polarity around the Tyr and Trp residues of Pin1.
#### 3.2.3. Three-Dimensional Fluorescence Spectroscopy
Three-dimensional (3D) fluorescence spectroscopy is widely used to explore the conformational changes of protein-ligand complexes. The 3D fluorescence spectra and characteristic peaks are displayed in Figures 5(d) and 5(e). Pin1 shows four characteristic peaks in both the absence and presence of ATRA. Peak a and peak b represent the first-order Rayleigh scattering peak (λex = λem) and the second-order Rayleigh scattering peak (2λex = λem), respectively [49]. Peak 1 is the characteristic fluorescence peak of the Tyr and Trp residues, and peak 2 is the characteristic peak of the polypeptide backbone, which is associated with the secondary structure. From Figures 5(d) and 5(e), the fluorescence intensities of peak 1 and peak 2 decreased significantly after ATRA bound to Pin1, further indicating that ATRA affects the conformation of Pin1.
### 3.3. Computational Studies
#### 3.3.1. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) is an effective technique for reducing the high dimensionality of a data set to a few principal components, revealing the main motions of a protein-ligand complex and providing important information [50]. The analysis revealed that the first 15 eigenvectors account for 70.08% of the total variance, consistent with a previous report of about 70% [30]. The two most important eigenvectors, principal components 1 and 2 (PC 1 and PC 2), together account for about 35.98% of the total variance of the conformational space. Based on the PCA results, the conformational change of the Pin1-ATRA complex was explored through the free energy landscape (FEL) of PC 1 and PC 2. As shown in Figure 3(a), the Pin1-ATRA complex has two broad, deep energy basins separated by energy barriers in the range of 1.0 kJ/mol to 4.0 kJ/mol, indicating that the binding of ATRA to Pin1 may form two stable conformations. In other words, the binding of ATRA to Pin1 causes dynamic conformational transitions, which corresponds well to the spectroscopic observations (Figures 2(a), 2(b), and 5). According to the FEL results, two representative conformational models are displayed in Figures 3(b)–3(g). From Figures 3(b)–3(d), ATRA interacts with residues H59, L61, K63, R69, C113, M130, Q131, F134, and H157. The carboxylic acid of ATRA forms a critical hydrogen bond with residue R69, and the ligand also forms seven Pi-Alkyl interactions (a type of hydrophobic interaction). In addition, residues K63, Q131, and R69 interact with ATRA via weak van der Waals contacts and a salt bridge (a type of electrostatic force). Figures 3(e)–3(g), showing the lowest energy minimum, reveal that residues H59, K63, R68, R69, M130, F134, and H157 play an important role in binding ATRA through Pi-Alkyl interactions, while residue K63 forms a salt bridge and residue R69 a stable hydrogen bond with ATRA. These observations are similar to previous reports [22] that the carboxylic acid of ATRA forms salt bridges and hydrogen bonds with residues K63 and R68, while the hydrophobic skeleton of ATRA forms hydrophobic and van der Waals interactions with residues R68, C113, M130, Q131, F134, and H157. Altogether, these results, in line with the experimental work, further suggest that the main binding forces are strong hydrophobic interactions, although electrostatic forces, weak van der Waals contacts, and hydrogen bonds cannot be ignored.
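Given the PC 1/PC 2 projections, an FEL such as the one in Figure 3(a) is essentially a two-dimensional histogram passed through equation (4). A minimal, self-contained sketch follows, with random projections standing in for the real ones:

```python
# Minimal FEL sketch for equation (4): G = -kT ln(P / P_max) over a PC1/PC2 grid.
import numpy as np

kT = 0.0083145 * 310                  # k_B in kJ/(mol·K) times T = 310 K
pc1 = np.random.normal(size=5000)     # placeholders for the real PC 1 projections
pc2 = np.random.normal(size=5000)     # placeholders for the real PC 2 projections

P, xedges, yedges = np.histogram2d(pc1, pc2, bins=40, density=True)
P = np.where(P > 0, P, np.nan)        # mask empty bins to avoid log(0)
G = -kT * np.log(P / np.nanmax(P))    # free energy per bin, kJ/mol (0 at the minimum)
print(f"largest free energy difference on the sampled landscape: {np.nanmax(G):.1f} kJ/mol")
```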
#### 3.3.2. MD Simulations
MD simulation is a popular tool for investigating dynamic conformational stability at the atomic level. Root mean square deviation (RMSD) values, an important criterion for assessing conformational stability, were calculated for Pin1 and the Pin1-ATRA complex relative to their initial conformations (Figure 4(a)). Both systems remained relatively stable at approximately 1.75 Å during the 50 ns MD simulations, suggesting that the binding of ATRA to Pin1 is relatively stable. From Figure 4(b), the RMSD values of ATRA reach a relative equilibrium after 15 ns. As seen in Figure 4(c), the root mean square fluctuation (RMSF) values, another important indicator of structural stability, show that the two systems share a similar fluctuation profile, except for the region around the binding residues H59, L61, K63, R68, C113, M130, Q131, F134, and H157. As shown in Figure 4(d), the radius of gyration (Rg), a valuable indicator of protein compactness, was smaller for the Pin1-ATRA complex than for Pin1 during the 50 ns MD simulations, consistent with ATRA entering the active pocket of Pin1 and perturbing its conformation. From Figure 4(e), the Pin1-ATRA complex maintained on average about one hydrogen bond throughout the 50 ns simulations, in line with the binding model (Figures 3(d) and 3(g)). The binding free energy (ΔGbind) of the Pin1-ATRA system is predicted to be −167.19 kJ/mol, indicating that binding between Pin1 and ATRA is spontaneous (Table S1). Energy decomposition shows that the van der Waals energy (ΔEvdw), electrostatic energy (ΔEele), polar solvation energy (ΔGpolar), and nonpolar solvation energy (ΔGnonpolar) are −120.70, −121.41, 88.04, and −13.12 kJ/mol, respectively (Table S1). Thus the van der Waals, electrostatic, and nonpolar solvation energies favor the binding of ATRA to Pin1, whereas the polar solvation energy opposes it. The nonpolar interaction energy (ΔEvdw + ΔGnonpolar) and polar interaction energy (ΔEele + ΔGpolar) are −133.38 and −33.37 kJ/mol, respectively, indicating that both contribute to binding. Per-residue energy contributions were also computed; from Figure 4(f), residues K63, R68, and R69 may play a significant role in the binding between Pin1 and ATRA.
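The RMSD and Rg traces in Figures 4(a) and 4(d) can also be recomputed directly from the raw trajectory. Below is a minimal sketch with MDAnalysis; the file names are placeholders, and MDAnalysis ≥ 2.0 is assumed for the results attribute.

```python
# Minimal sketch: backbone RMSD and radius of gyration from an MD trajectory,
# mirroring the g_rms/g_gyrate analyses behind Figures 4(a) and 4(d).
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("pin1_atra.gro", "pin1_atra.xtc")   # placeholder file names
R = rms.RMSD(u, select="backbone").run()             # RMSD vs. the first frame
print(f"mean backbone RMSD: {R.results.rmsd[:, 2].mean():.2f} Å")

protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for ts in u.trajectory]  # Å, one value per frame
print(f"mean Rg: {sum(rg) / len(rg):.2f} Å")
```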
### 3.4. Mutant Studies
#### 3.4.1. Fluorescence Titrations
Fluorescence titrations were used to verify the key residues in the binding of ATRA to Pin1. As shown in Figure 6(a) and Figure S1, ATRA quenched the endogenous fluorescence of the Pin1 mutants to different degrees, with K63A and R69A showing the weakest quenching. In addition, the binding constants revealed that mutation of these important residues reduced the binding affinity for ATRA to varying degrees, especially for residues K63 and R69 (Figure 6(b) and Figure S2). Previous research showed that the carboxylic acid of ATRA forms salt bridges with residues K63 and R68 [22], similar to our results, further indicating that these basic residues are essential for the binding of ATRA to Pin1. This observation could inform the subsequent development and design of retinoic acid drugs that inhibit the activity of Pin1.

Figure 6
ATRA binds to Pin1 through key residues. (a) Fluorescence titration of Pin1 and its mutants in the absence and presence of ATRA;λex = 295 nm; T = 293 K; cpin1 or mutants = 5 μM; cATRA = 0, 0.5, 1, 2, 3, 4, and 5 μM. (b) Relative binding constant of ATRA to Pin1 and its mutants. (c) DARTS assay of Pin1 and its mutants by ATRA. (d, e) RMSD of K63A-ATRA and R69A-ATRA complex during 50 ns MD simulation.
#### 3.4.2. DARTS Assay
A DARTS assay was performed to detect the effect of the key residue mutations on the stability of ATRA binding to Pin1. As shown in Figure 6(c), mutants K63A and R69A were less resistant to proteolysis when bound to ATRA than the wild type. This result is consistent with the fluorescence titrations and further implies that residues K63 and R69 play an important role in the binding of ATRA to Pin1.
#### 3.4.3. MD Simulations of Mutants
Using the PyMOL Mutagenesis plug-in, we modeled mutation of the important residues of Pin1 to alanine. As shown in Figure S3, these ten single-residue mutations did not disrupt the secondary structure of Pin1, in line with previous work indicating that single-residue mutations have a limited impact on the overall secondary structure of a protein [51]. In addition, the CD spectra of mutants K63A, R68A, and R69A were similar to those of the wild type, further confirming this observation (Figure S4). Next, we performed MD simulations to explore the effects of the K63 and R69 mutations on the binding stability of Pin1 and ATRA. During the 50 ns MD simulations, the RMSD values of the K63A-ATRA and R69A-ATRA systems were less stable than those of the wild type (Figures 6(d) and 6(e)). In addition, the binding free energies (ΔGbind) of the K63A-ATRA and R69A-ATRA systems are −70.81 and −51.23 kJ/mol, respectively, considerably less negative than that of the wild-type Pin1-ATRA system (−167.19 kJ/mol, Table S1). These results indicate that the K63A and R69A mutations weaken the binding of ATRA to Pin1.
## 3.1. Fluorescence Spectroscopic Studies
### 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is an extremely useful and accurate method to monitor the binding and conformational changes of ligands with protein. It is well known that the endogenous fluorescence of protein is contributed mainly by tryptophan (Trp) and tyrosine (Tyr) residues. Also, when the excitation wavelength (λex) was 295 nm, the fluorescence spectra showed the endogenous fluorescence of tryptophan residues [36, 37]. A previous study had revealed that the activity of Pin1 decreased at high temperature, so this experiment selected 293 K and 303 K to implement the fluorescence spectra [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensities of Pin1 were decreased gradually with increasing concentration of ATRA. When the concentration of ATRA was 5 μM, the fluorescence quenching rates were 50.60% and 59.46%, respectively, at 293 K and 303 K. The phenomenon revealed that ATRA interacted with Pin1 and quenched its intrinsic fluorescence. Meanwhile, the result further confirmed the previous scientific reports that ATRA was an effective Pin1 inhibitor [20]. In addition, it was obvious that the maximum wavelength (λmax) of fluorescence spectra showed a tiny red-shift from 338.2 nm to 342.4 nm and from 339.0 nm to 342.4 nm at 293 K and 303 K, respectively, with the increasing of ATRA. It indicated that the binding of ATRA induced a certain conformational change of Pin1, including a decrease in the hydrophobicity around the microenvironment of tryptophan residues.Figure 2
Fluorescence spectra of Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively.λex = 295 nm, cpin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
(a)(b)(c)(d)In general, there are three fluorescence quenching mechanisms: static quenching mode, dynamic quenching mode, and combination of both modes. The quenching mechanism between Pin1 and ATRA was discriminated using the followingStern–Volmer equation [39, 40]:(5)F0F=1+KsvQ,where F0 and F are the maximum fluorescence intensity of Pin1 in the absence and presence of ATRA, respectively. Q is the concentration of ATRA. Ksv is a quenching constant.Figure2(c) shows a well-linear relationship between Pin1 and ATRA, which suggests that the quenching mechanism of ATRA is either a static quenching or a dynamic quenching mechanism. Scientific literature illustrates that quenching constant (Ksv) decreasing with increasing temperature is static quenching, and the opposite is dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 and 1.32 × 10 5 mol/L at 293 K and 303 K, respectively, suggesting that the quenching mode between Pin1 and ATRA is a typical static quenching mechanism.Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
Parameters293 K303 KKsv (105 mol/L)1.83 ± 0.011.32 ± 0.01R2a0.9930.996n0.73 ± 0.020.73 ± 0.03Ka (105 mol/L)2.90 ± 0.022.06 ± 0.01R2b0.9980.996ΔG (kJ/mol)−30.64 ± 0.09−29.80 ± 0.34ΔH (kJ/mol)15.76 ± 0.8815.76 ± 0.88ΔS (J/mol/K)158.36 ± 0.34154.91 ± 0.42
### 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of the drug-receptor is one of the significant indicators for evaluating drug efficacy, and it is closely related to the binding constant (Ka). As a static quenching process, binding constant was calculated using the following equation [43, 44]:(6)logF0−FF=logKa+nlogQ,where Ka and n are the binding constant and the number of binding sites, respectively.As shown in Figure2(d) and Table 1, the n values are approximately equal to 1 at 293 K and 303 K. The observation was consistent with the reported crystal structure, indicating that Pin1 bound only one ATRA [22]. Also, the Ka values are 2.90 and 2.06 × 10 5 mol/L at 293 K and 303 K, respectively. Our result was similar to the previous study, suggesting that there was a high affinity between ATRA and Pin1 [22]. In addition, the Ka values display downtrend with the increasing temperatures, in agreement with the Ksv values, further illustrating that the quenching mechanism between Pin1 and ATRA is a static quenching process.Binding forces were involved in the binding process of Pin1 with ATRA. It is well known that the binding forces are mainly van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. These binding forces were distinguished through thermodynamic parameters obtained using the following equation [45]:(7)lnΔK2K1=ΔH1/T1−1/T2R,ΔG=−RTlnKaΔS=−ΔG−ΔHT,where ΔH, ΔG, and ΔS are enthalpy change, binding free energy, and entropy change, respectively. R and T are gas constant and experimental temperature, respectively.According to the viewpoint of Ross and Subramanian [46], if ΔH and ΔS > 0, hydrophobic interactions were binding forces; if ΔH < 0 and ΔS > 0, electrostatic forces were binding forces; if ΔH < 0 and ΔS < 0, hydrogen bonds and van der Waals forces were binding forces. From Table 1, the values of ΔH and ΔS are 15.76 kJ/mol and 158.36 J/mol·K in the binding process of ATRA to Pin1 at 293 K, respectively. The phenomenon suggested that hydrophobic interactions were the main binding force between ATRA and Pin1. In addition, the binding force between Pin1 and ATRA was analyzed in detail through the following computational simulations (Figures 3 and 4). It is observed that the values of ΔG are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process.Figure 3
Free energy landscape plot (a) and binding model (b)–(g) of Pin1-ATRA complex during MD simulation.Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
(a)(b)(c)(d)(e)(f)
## 3.1.1. Fluorescence Quenching Mechanism
Fluorescence spectroscopy is an extremely useful and accurate method to monitor the binding and conformational changes of ligands with protein. It is well known that the endogenous fluorescence of protein is contributed mainly by tryptophan (Trp) and tyrosine (Tyr) residues. Also, when the excitation wavelength (λex) was 295 nm, the fluorescence spectra showed the endogenous fluorescence of tryptophan residues [36, 37]. A previous study had revealed that the activity of Pin1 decreased at high temperature, so this experiment selected 293 K and 303 K to implement the fluorescence spectra [38]. As shown in Figures 2(a) and 2(b), the fluorescence intensities of Pin1 were decreased gradually with increasing concentration of ATRA. When the concentration of ATRA was 5 μM, the fluorescence quenching rates were 50.60% and 59.46%, respectively, at 293 K and 303 K. The phenomenon revealed that ATRA interacted with Pin1 and quenched its intrinsic fluorescence. Meanwhile, the result further confirmed the previous scientific reports that ATRA was an effective Pin1 inhibitor [20]. In addition, it was obvious that the maximum wavelength (λmax) of fluorescence spectra showed a tiny red-shift from 338.2 nm to 342.4 nm and from 339.0 nm to 342.4 nm at 293 K and 303 K, respectively, with the increasing of ATRA. It indicated that the binding of ATRA induced a certain conformational change of Pin1, including a decrease in the hydrophobicity around the microenvironment of tryptophan residues.Figure 2
Fluorescence spectra of Pin1-ATRA complex. (a, b) Fluorescence emission spectra of Pin1 in the absence and presence of ATRA at 293 K and 303 K, respectively.λex = 295 nm, cpin1 = 5 μM, and cATRA = 0 (red line), 0.5 (blue line), 1 (magenta line), 2 (olive line), 3 (navy line), 4 (violet line), and 5 (purple line) μM. (c) Stern–Volmer and (d) double logarithmic plot for the interaction of Pin1 and ATRA.
(a)(b)(c)(d)In general, there are three fluorescence quenching mechanisms: static quenching mode, dynamic quenching mode, and combination of both modes. The quenching mechanism between Pin1 and ATRA was discriminated using the followingStern–Volmer equation [39, 40]:(5)F0F=1+KsvQ,where F0 and F are the maximum fluorescence intensity of Pin1 in the absence and presence of ATRA, respectively. Q is the concentration of ATRA. Ksv is a quenching constant.Figure2(c) shows a well-linear relationship between Pin1 and ATRA, which suggests that the quenching mechanism of ATRA is either a static quenching or a dynamic quenching mechanism. Scientific literature illustrates that quenching constant (Ksv) decreasing with increasing temperature is static quenching, and the opposite is dynamic quenching [41, 42]. From Table 1, the Ksv values are 1.83 and 1.32 × 10 5 mol/L at 293 K and 303 K, respectively, suggesting that the quenching mode between Pin1 and ATRA is a typical static quenching mechanism.Table 1
Thermodynamic parameters of the Pin1-ATRA complex at different temperatures.
| Parameter | 293 K | 303 K |
| --- | --- | --- |
| Ksv (10⁵ L/mol) | 1.83 ± 0.01 | 1.32 ± 0.01 |
| R²ᵃ | 0.993 | 0.996 |
| n | 0.73 ± 0.02 | 0.73 ± 0.03 |
| Ka (10⁵ L/mol) | 2.90 ± 0.02 | 2.06 ± 0.01 |
| R²ᵇ | 0.998 | 0.996 |
| ΔG (kJ/mol) | −30.64 ± 0.09 | −29.80 ± 0.34 |
| ΔH (kJ/mol) | 15.76 ± 0.88 | 15.76 ± 0.88 |
| ΔS (J/(mol·K)) | 158.36 ± 0.34 | 154.91 ± 0.42 |

ᵃ R² of the Stern–Volmer fit; ᵇ R² of the double-logarithmic fit.
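For readers who wish to reproduce this kind of analysis, the Stern–Volmer fit of equation (5) reduces to a linear regression of F0/F against [Q]. The following is a minimal Python sketch, assuming the intensities are available as NumPy arrays; the data below are synthetic, generated from the reported Ksv, not measured values.

```python
import numpy as np

def stern_volmer_ksv(q, f0, f):
    """Fit F0/F = 1 + Ksv*[Q] by linear least squares.

    q  : quencher (ATRA) concentrations, mol/L
    f0 : fluorescence intensity without quencher (scalar)
    f  : fluorescence intensities at each quencher concentration
    Returns (Ksv, intercept); for pure Stern-Volmer behavior the
    intercept should be close to 1.
    """
    y = f0 / np.asarray(f, dtype=float)
    slope, intercept = np.polyfit(np.asarray(q, dtype=float), y, 1)
    return slope, intercept

# Illustrative (synthetic) data at 293 K; for static quenching, Ksv
# obtained this way should decrease at the higher temperature.
q = np.array([0.5, 1, 2, 3, 4, 5]) * 1e-6      # mol/L
f_293 = 1000.0 / (1 + 1.83e5 * q)              # synthetic intensities, F0 = 1000
ksv, b = stern_volmer_ksv(q, 1000.0, f_293)
print(f"Ksv = {ksv:.3e} L/mol (intercept {b:.3f})")
```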
### 3.1.2. Binding Constants and Thermodynamic Parameters
The affinity of a drug for its receptor is one of the key indicators of drug efficacy, and it is closely related to the binding constant (Ka). For a static quenching process, the binding constant was calculated using the double-logarithmic equation [43, 44]:

$$\log\frac{F_0 - F}{F} = \log K_a + n \log [Q], \tag{6}$$

where $K_a$ and $n$ are the binding constant and the number of binding sites, respectively. As shown in Figure 2(d) and Table 1, the n values are approximately equal to 1 at 293 K and 303 K. This observation is consistent with the reported crystal structure, which shows that Pin1 binds only one ATRA molecule [22]. The Ka values are 2.90 × 10⁵ and 2.06 × 10⁵ L/mol at 293 K and 303 K, respectively, similar to a previous study and indicative of a high affinity between ATRA and Pin1 [22]. In addition, the Ka values decrease with increasing temperature, in agreement with the Ksv values, further confirming that the quenching between Pin1 and ATRA is a static process.

Several types of force can contribute to the binding of ATRA to Pin1; the main candidates are van der Waals forces, hydrogen bonds, hydrophobic interactions, and electrostatic forces. These were distinguished through thermodynamic parameters obtained from the following relations [45]:

$$\ln\frac{K_2}{K_1} = \frac{\Delta H\,(1/T_1 - 1/T_2)}{R}, \qquad \Delta G = -RT\ln K_a, \qquad \Delta S = \frac{\Delta H - \Delta G}{T}, \tag{7}$$

where $\Delta H$, $\Delta G$, and $\Delta S$ are the enthalpy change, binding free energy, and entropy change, respectively, and $R$ and $T$ are the gas constant and the experimental temperature. According to the viewpoint of Ross and Subramanian [46], ΔH > 0 and ΔS > 0 indicate hydrophobic interactions; ΔH < 0 and ΔS > 0 indicate electrostatic forces; and ΔH < 0 and ΔS < 0 indicate hydrogen bonds and van der Waals forces. From Table 1, the values of ΔH and ΔS for the binding of ATRA to Pin1 at 293 K are 15.76 kJ/mol and 158.36 J/(mol·K), respectively, suggesting that hydrophobic interactions are the main binding force between ATRA and Pin1. The binding forces were analyzed in further detail through the computational simulations described below (Figures 3 and 4). The ΔG values are −30.64 and −29.80 kJ/mol at 293 K and 303 K, respectively, indicating that the binding of ATRA to Pin1 is a spontaneous process.Figure 3
Free energy landscape plot (a) and binding model (b)–(g) of Pin1-ATRA complex during MD simulation.Figure 4
MD simulations of Pin1-ATRA complex. (a) RMSD of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (b) RMSD of ATRA during 50 ns MD simulation. (c) RMSF of Pin1 and Pin1-ATRA complex during last 5 ns MD simulation. (d) Rg of Pin1 and Pin1-ATRA complex during 50 ns MD simulation. (e) Hydrogen bonds of Pin1-ATRA complex during 50 ns MD simulation. (f) Residues energy decomposition of Pin1-ATRA complex during 50 ns MD simulation.
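Equations (6) and (7) can likewise be evaluated with a few lines of code. The sketch below, in Python, fits the double-logarithmic plot for Ka and n and implements the van 't Hoff and Gibbs relations; only the Ka values from Table 1 are reused here, and the numerical output depends entirely on the measured constants supplied.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def double_log_fit(q, f0, f):
    """Fit log10((F0 - F)/F) = log10(Ka) + n*log10([Q]), equation (6).
    Returns (Ka, n)."""
    x = np.log10(np.asarray(q, float))
    y = np.log10((f0 - np.asarray(f, float)) / np.asarray(f, float))
    n, log_ka = np.polyfit(x, y, 1)
    return 10.0 ** log_ka, n

def vant_hoff_dh(ka1, t1, ka2, t2):
    """Enthalpy change from binding constants at two temperatures,
    following equation (7); very sensitive to the uncertainty in Ka."""
    return R * np.log(ka2 / ka1) / (1.0 / t1 - 1.0 / t2)

def gibbs_entropy(ka, t, dh):
    """dG = -R*T*ln(Ka) and dS = (dH - dG)/T, as in equation (7)."""
    dg = -R * t * np.log(ka)
    return dg, (dh - dg) / t

# The Ka value of Table 1 reproduces the reported dG at 293 K:
dg, _ = gibbs_entropy(2.90e5, 293.0, dh=15760.0)
print(f"dG(293 K) = {dg / 1000:.2f} kJ/mol")  # approx. -30.64 kJ/mol
```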
## 3.2. Conformational Studies
### 3.2.1. Circular Dichroism
The binding of a ligand to a receptor affects not only its thermal stability but also its conformation. Circular dichroism (CD) is one of the most widely used techniques for investigating the secondary structure changes of a protein-ligand complex [47]. As shown in Figure 5(a), the CD signal of Pin1 decreased with increasing amounts of ATRA, implying that ATRA changes the secondary structure of Pin1 and affects its stability. To quantify the change in secondary structure, the α-helix content of Pin1 was calculated using the following equations [41]:

$$\mathrm{MRE}_{208} = \frac{\text{observed CD (mdeg)}}{C_p \, n \, l \times 10}, \qquad \alpha\text{-helix}\,(\%) = \frac{-\mathrm{MRE}_{208} - 4000}{33000 - 4000} \times 100, \tag{8}$$

where $\mathrm{MRE}_{208}$ is the mean residue ellipticity (MRE) at 208 nm, and $C_p$, $n$, and $l$ are the molar concentration of Pin1 (10 μmol/L), the number of amino acid residues (163), and the path length of the cell (0.1 cm), respectively.Figure 5
ATRA affects the conformation of Pin1. (a) CD spectra of Pin1 in the absence and presence of ATRA. cPin1 = 10 μM; cATRA = 0 (black line), 1 (red line), and 3 (blue line) μM. (b, c) Synchronous fluorescence spectra of Pin1 in the absence and presence of ATRA with Δλ = 15 nm and 60 nm, respectively. cPin1 = 5 μM; cATRA = 0 (black line), 0.5 (red line), 1 (blue line), 2 (magenta line), 3 (olive line), 4 (navy line), and 5 (violet line) μM. (d, e) Three-dimensional fluorescence spectra of Pin1 in the absence and presence of ATRA; cPin1 = 5 μM and cATRA = 0 and 5 μM.
The calculated α-helix content of Pin1 declined from 23.33% to 19.89% with increasing amounts of ATRA. The loss of helicity induced by ATRA may reflect altered hydrogen-bonding networks, which result in partial unfolding of the polypeptide and a change in the secondary conformation of Pin1 [42].
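Equation (8) is a simple arithmetic conversion from the raw CD signal to an α-helix percentage. A minimal Python implementation, assuming the percent form of the standard MRE relation and an illustrative (not measured) ellipticity value, is:

```python
def alpha_helix_percent(cd_mdeg_208, cp_molar, n_residues=163, path_cm=0.1):
    """Estimate alpha-helix content from the CD signal at 208 nm
    following equation (8).

    cd_mdeg_208 : observed ellipticity at 208 nm, in mdeg
    cp_molar    : protein concentration in mol/L (10e-6 for 10 uM Pin1)
    n_residues  : number of amino acid residues (163 for this Pin1 construct)
    path_cm     : cuvette path length, in cm
    """
    mre_208 = cd_mdeg_208 / (cp_molar * n_residues * path_cm * 10)
    return (-mre_208 - 4000) / (33000 - 4000) * 100

# Illustrative value only: -15 mdeg at 208 nm for 10 uM Pin1, 0.1 cm cell
print(f"{alpha_helix_percent(-15.0, 10e-6):.1f} % alpha-helix")
```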
### 3.2.2. Synchronous Fluorescence Spectroscopy
Synchronous fluorescence spectroscopy can detect changes in the microenvironment of Pin1 after it interacts with ATRA [48]. It mainly reports the conformational characteristics of Tyr and Trp residues at Δλ = 15 and 60 nm, respectively. As shown in Figures 5(b) and 5(c), the fluorescence intensities of the Tyr and Trp residues declined steadily with increasing amounts of ATRA. At 5 μM ATRA, the quenching rates of the Tyr and Trp residues were 59.78% and 39.47%, respectively, at 293 K. These results demonstrate that ATRA quenches the intrinsic fluorescence of both the Tyr and the Trp residues.Notably, the λmax of the Tyr and Trp spectra displays a slight red-shift, from 292.4 nm to 293.6 nm and from 280.8 nm to 282.0 nm, respectively. These results, in accordance with the endogenous fluorescence data (Figures 2(a) and 2(b)), illustrate that ATRA decreases the hydrophobicity and increases the polarity around the Tyr and Trp residues of Pin1.
### 3.2.3. Three-Dimensional Fluorescence Spectroscopy
Three-dimensional (3D) fluorescence spectroscopy is widely used to explore the conformational changes of a protein-ligand complex. The 3D fluorescence spectra and characteristic peaks are displayed in Figures 5(d) and 5(e). Pin1 shows four characteristic peaks in both the absence and presence of ATRA. Peak a and peak b represent the first-order Rayleigh scattering peak (λex = λem) and the second-order Rayleigh scattering peak (2λex = λem), respectively [49]. Peak 1 is the characteristic fluorescence peak of the Tyr and Trp residues, while peak 2 is the characteristic fluorescence peak of the polypeptide backbone, which is associated with the secondary structure. From Figures 5(d) and 5(e), the fluorescence intensities of peak 1 and peak 2 decreased significantly after ATRA bound to Pin1, which further indicates that ATRA affects the conformation of Pin1.
## 3.3. Computational Studies
### 3.3.1. Principal Component Analysis and Free Energy Landscape
Principal component analysis (PCA) is an effective technique for reducing a large data set to a few principal components that capture the main motions of a protein-ligand complex [50]. The analysis revealed that the first 15 eigenvectors account for 70.08% of the total variance, comparable to the value of about 70% in a previous report [30]. The two most important eigenvectors, principal components 1 and 2 (PC 1 and PC 2), contributed about 35.98% of the total variance of the conformational space.Based on the PCA results, the conformational change of the Pin1-ATRA complex was explored through the free energy landscape (FEL) over PC 1 and PC 2. As shown in Figure 3(a), the Pin1-ATRA complex has two broad, deep energy basins separated by barriers in the range of 1.0 kJ/mol to 4.0 kJ/mol, indicating that the binding of ATRA to Pin1 may form two stable conformations. In other words, the binding of ATRA to Pin1 causes dynamic conformational transitions, which corresponds well to the spectroscopic observations (Figures 2(a), 2(b), and 5).According to the FEL results, two representative conformational models are displayed in Figures 3(b)–3(g). From Figures 3(b)–3(d), ATRA interacts with residues H59, L61, K63, R69, C113, M130, Q131, F134, and H157. The carboxylic acid of ATRA forms a critical hydrogen bond with residue R69 and also forms seven Pi-alkyl interactions (a type of hydrophobic interaction). In addition, residues K63, Q131, and R69 interact with ATRA via weak van der Waals interactions and a salt bridge (a type of electrostatic force). Figures 3(e)–3(g), showing the lowest energy minima, reveal that residues H59, K63, R68, R69, M130, F134, and H157 play an important role in the binding of ATRA to Pin1 through Pi-alkyl interactions; residue K63 additionally forms a salt bridge and residue R69 a stable hydrogen bond with ATRA. These observations are similar to previous reports [22], in which the carboxylic acid of ATRA formed salt bridges and hydrogen bonds with residues K63 and R68, and the hydrophobic skeleton of ATRA formed hydrophobic and van der Waals interactions with residues R68, C113, M130, Q131, F134, and H157. Overall, these results, in line with the experimental work, suggest that the main binding forces are strong hydrophobic interactions, although electrostatic forces, weak van der Waals interactions, and hydrogen bonds cannot be ignored.
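For illustration, the PCA/FEL workflow described above can be sketched in a few lines of Python: diagonalize the covariance of the aligned trajectory coordinates, project each frame onto PC 1 and PC 2, and convert the 2D population histogram into a free energy surface. This is a generic sketch, not the authors' exact pipeline; the array shapes and the temperature are assumptions.

```python
import numpy as np

kB = 0.008314  # Boltzmann constant, kJ/(mol K)

def pca(coords):
    """PCA on aligned Cartesian coordinates of an MD trajectory.

    coords : array (n_frames, n_atoms * 3)
    Returns eigenvalues (variance per PC, descending) and the
    per-frame projections onto PC 1 and PC 2.
    """
    x = coords - coords.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]      # sort by descending variance
    evals, evecs = evals[order], evecs[:, order]
    proj = x @ evecs[:, :2]              # (n_frames, 2): PC 1, PC 2
    return evals, proj

def free_energy_landscape(proj, temperature=300.0, bins=40):
    """FEL from PC 1/PC 2 populations: G = -kB*T*ln(P / P_max)."""
    hist, xedges, yedges = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
    p = hist / hist.sum()
    with np.errstate(divide="ignore"):
        g = -kB * temperature * np.log(p / p.max())
    return g, xedges, yedges  # minima (G = 0) are the most populated states
```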
### 3.3.2. MD Simulations
MD simulation is a popular tool for investigating dynamic conformational stability at the atomic level. Root mean square deviation (RMSD) values, an important criterion for assessing conformational stability, of Pin1 and the Pin1-ATRA complex were calculated relative to the initial conformations (Figure 4(a)). Both systems remained relatively stable at approximately 1.75 Å during the 50 ns MD simulations, suggesting that the binding of ATRA to Pin1 is stable. From Figure 4(b), the RMSD values of ATRA reach a relative equilibrium state after 15 ns. As seen in Figure 4(c), root mean square fluctuation (RMSF) values, another important indicator of structural stability, show that the two systems share similar fluctuations, except in the region of the binding residues H59, L61, K63, R68, C113, M130, Q131, F134, and H157.As shown in Figure 4(d), the radius of gyration (Rg), a valuable indicator of protein compactness, of the Pin1-ATRA complex was smaller than that of free Pin1 throughout the 50 ns simulations, indicating that ATRA entered the active pocket of Pin1 and made its conformation more compact. From Figure 4(e), hydrogen bonds between Pin1 and ATRA persisted throughout the 50 ns simulations, with an average of about 1, in line with the binding model (Figures 3(d) and 3(g)).The binding free energy (ΔGbind) of the Pin1-ATRA system is predicted to be −167.19 kJ/mol, confirming that the binding process between Pin1 and ATRA is spontaneous (Table S1). Energy decomposition shows that the van der Waals energy (ΔEvdw), electrostatic energy (ΔEele), polar solvation energy (ΔGpolar), and nonpolar solvation energy (ΔGnonpolar) are −120.70, −121.41, 88.04, and −13.12 kJ/mol, respectively (Table S1). Thus, the van der Waals, electrostatic, and nonpolar solvation energies favor the binding of ATRA to Pin1, whereas the polar solvation energy opposes it. The nonpolar interaction energy (ΔEvdw + ΔGnonpolar) and polar interaction energy (ΔEele + ΔGpolar) are −133.38 and −33.37 kJ/mol, respectively, indicating that both contribute to the binding. Per-residue energy contributions were also predicted for the Pin1-ATRA system; from Figure 4(f), residues K63, R68, and R69 may play a significant role in the binding between Pin1 and ATRA.
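Analyses such as those in Figure 4 are commonly produced with trajectory analysis libraries. The following Python sketch uses MDAnalysis to compute backbone RMSD and the per-frame radius of gyration; the file names are placeholders, and this is not the authors' actual analysis script.

```python
# A minimal trajectory-analysis sketch; file names are hypothetical.
import MDAnalysis as mda
from MDAnalysis.analysis import rms

u = mda.Universe("pin1_atra.pdb", "pin1_atra_50ns.xtc")

# Backbone RMSD relative to the first frame (cf. Figure 4(a))
rmsd = rms.RMSD(u, select="backbone").run()
print(rmsd.results.rmsd[:5])  # columns: frame index, time (ps), RMSD (Angstrom)

# Radius of gyration of the protein in every frame (cf. Figure 4(d))
protein = u.select_atoms("protein")
rg = [protein.radius_of_gyration() for _ in u.trajectory]
print(f"mean Rg = {sum(rg) / len(rg):.2f} Angstrom")
```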
## 3.4. Mutant Studies
### 3.4.1. Fluorescence Titrations
Fluorescence titrations were used to verify the key residues in the binding of ATRA to Pin1. As shown in Figure 6(a) and Figure S1, ATRA quenched the endogenous fluorescence of the Pin1 mutants to different degrees, with K63A and R69A showing the weakest quenching. In addition, the binding constants revealed that mutation of these important residues reduced the binding affinity for ATRA to varying degrees, most markedly for residues K63 and R69 (Figure 6(b) and Figure S2). Previous research showed that the carboxylic acid of ATRA forms salt bridges with residues K63 and R68 [22], similar to our results, further indicating that these basic residues are essential for the binding of ATRA to Pin1. This observation could inform the subsequent design of retinoic acid derived drugs that inhibit the activity of Pin1.Figure 6
ATRA binds to Pin1 through key residues. (a) Fluorescence titration of Pin1 and its mutants in the absence and presence of ATRA; λex = 295 nm; T = 293 K; cPin1 or mutants = 5 μM; cATRA = 0, 0.5, 1, 2, 3, 4, and 5 μM. (b) Relative binding constant of ATRA to Pin1 and its mutants. (c) DARTS assay of Pin1 and its mutants by ATRA. (d, e) RMSD of K63A-ATRA and R69A-ATRA complex during 50 ns MD simulation.
### 3.4.2. DARTS Assay
A DARTS assay was performed to assess the effect of the key residue mutations on the stability of ATRA binding to Pin1. As shown in Figure 6(c), the mutants K63A and R69A were less resistant to proteolysis in the presence of ATRA than the wild type. This result is consistent with the fluorescence titrations and further implies that residues K63 and R69 play an important role in the binding of ATRA to Pin1.
### 3.4.3. MD Simulations of Mutants
Using the PyMOL Mutagenesis plug-in, we modeled alanine substitutions of the important Pin1 residues. As shown in Figure S3, these ten alanine substitutions did not disrupt the secondary structure of Pin1, in line with previous work indicating that single-residue mutations have a limited impact on the overall secondary structure of a protein [51]. In addition, the CD spectra of the mutants K63A, R68A, and R69A were similar to those of the wild type, further confirming this observation (Figure S4).Next, we performed MD simulations to explore the effects of the K63 and R69 mutations on the binding stability of Pin1 and ATRA. During the 50 ns simulations, the RMSD values of the K63A-ATRA and R69A-ATRA systems were unstable compared with the wild type (Figures 6(d) and 6(e)). In addition, the binding free energies (ΔGbind) of the K63A-ATRA and R69A-ATRA systems are −70.81 and −51.23 kJ/mol, respectively, much less negative (i.e., weaker) than that of the Pin1-ATRA system (−167.19 kJ/mol, Table S1). These results indicate that the K63A and R69A mutations weaken the binding of ATRA to Pin1.
## 4. Conclusion
The present work details the binding and conformational changes of ATRA with Pin1 using fluorescence spectra, circular dichroism, MD simulations, binding free energy calculations, and free energy landscapes under physiological conditions. Fluorescence emission spectra showed that the quenching of Pin1 by ATRA is a static process with a moderate binding affinity. Thermodynamic parameters and computational simulations indicated that the binding force is mainly hydrophobic interactions, although other forces are also involved in the binding of ATRA to Pin1. Circular dichroism, synchronous fluorescence, and three-dimensional fluorescence spectra demonstrated that the binding of ATRA reduces the helical stability of the Pin1 active center. The free energy landscape and MD simulations showed that ATRA binding causes dynamic conformational transitions of Pin1. Computational simulations, fluorescence titrations, and DARTS assays demonstrated that residues K63 and R69 play an important role in the binding between Pin1 and ATRA. In summary, this work helps clarify the binding mechanism of ATRA to Pin1 and provides useful information for the application of ATRA as a therapeutic drug in cancer.
---
*Source: 1012078-2021-12-16.xml* | 2021 |
# A Comprehensive Review on Traffic Control Modeling for Obtaining Sustainable Objectives in a Freeway Traffic Environment
**Authors:** Muhammad Sameer Sheikh; Yinqiao Peng
**Journal:** Journal of Advanced Transportation
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012206
---
## Abstract
Traffic control strategies play a significant role in obtaining sustainable objectives because they not only improve traffic mobility but also enhance traffic management systems. Such strategies have been developed and applied by the research community in recent years, yet they still present challenges and open issues that require the attention of researchers and engineers. Recent technological developments toward connected and automated vehicles are beneficial for improving traffic safety and achieving sustainable goals. A survey of traffic control techniques is therefore needed that captures recent developments in traffic control strategy and their usefulness for obtaining sustainable goals. This survey presents a comprehensive investigation of traffic control techniques, carefully reviewing existing methods from a new perspective, and covers the traffic control strategies that play an important role in achieving sustainable objectives. First, we present traffic control modeling techniques that provide robust solutions for obtaining reasonable traffic flow and sustainable mobility; these techniques can help enhance traffic flow in a freeway environment. Next, we discuss traffic control strategies that can help researchers and practitioners design robust freeway traffic controllers. We then present a comprehensive review of recent state-of-the-art methods on the vehicle control design strategy, followed by the traffic control design strategy; both aim to reduce traffic emissions and vehicle energy consumption. Finally, we present open research challenges and outline recommendations that could be beneficial for obtaining sustainable goals in traffic systems and that help researchers understand various technical aspects of deploying traffic control systems.
---
## Body
## 1. Introduction
Nowadays, environmental pollution caused by transportation systems has received significant attention from the research community [1, 2]. Population growth and the economic expansion of developing economies are considered the main drivers of air pollution and increasing energy demand [3]. A report of the Ministry of Ecology and Environment of China revealed that traffic emissions have become a major source of air pollution and can cause disease [4]. They are considered one of the main causes of premature death [5], mostly through prolonged exposure to substances such as carbon dioxide (CO2) and nitrogen oxides (NOx). According to the US Energy Information Administration, the automobile industry consumes 55% of the world's fuel [6]. This share could increase over the next couple of decades because of the growing number of vehicles on the road. The issues of traffic emissions and sustainability have therefore drawn the attention of the ecological and environmental committees of many countries worldwide. Recently, sustainability has been addressed in various aspects of human activity [7, 8]. Achieving sustainability objectives is a complex task with many demanding requirements. The development of sustainable cities is one of the objectives identified at the United Nations meeting held in 2017, to be accomplished as part of the sustainable goals of the 2030 Agenda [9]. To meet the 2030 Agenda, road transportation systems need to adopt social equity and safer traffic mobility by reducing air pollution and providing environmentally friendly vehicle movement. Sustainable transportation systems have changed people's lives through improved technologies. Traffic control that covers sustainable mobility in all its aspects aims to protect the environment and improve economic and social development [10]. The aim of sustainable transport is to improve the transportation system and enhance people's lives by providing better access to every facility. Issues related to sustainability in transportation have been investigated by the research community for the last couple of decades [1, 2]. Cleaner and more sustainable transportation could significantly reduce traffic accidents and congestion. Traffic accidents in particular are the main cause of nonrecurrent traffic congestion and also cause serious and fatal injuries. A road safety report by the World Health Organization indicates that 1.35 million people die in road and traffic accidents every year [11] and that road accidents are a leading cause of death among younger people aged 15 to 29 years. Traffic accidents are therefore a critical issue that can cause serious health problems and also affect a country's economy. The fatality rate can be reduced by taking precautionary measures on both vehicles and roads. A rapid increase in traffic flow leads to increased congestion, longer travel times, and less reliable traffic conditions for drivers. However, it is not possible to simply modify existing algorithms to increase traffic flow.
In this regard, a robust traffic control and management system is needed, one that can make effective use of existing road conditions without requiring substantial new traffic infrastructure and that supports comprehensive analysis of system challenges such as traffic management and security [12]. Traffic control methods receive continuous attention from transport researchers and practitioners. They aim to improve road safety by significantly reducing the traffic congestion and accidents that cause severe injuries, and to provide cleaner, more sustainable transportation by reducing traffic emissions. In recent years, various traffic control strategies addressing sustainability issues have been studied [10, 13, 14]. These studies help mature traffic control methods and strategies for the freeway environment and can be applied to improve traffic safety and reduce environmental impacts. Traffic control strategies aim to reduce the congestion caused by various incidents in the freeway traffic environment. The development of traffic mobility for passengers and freight has contributed significantly to economic prosperity; however, it can also worsen traffic conditions, causing frequent congestion, long queues, increased travel times, and road rage incidents. Frequent congestion frustrates drivers, who spend considerable time reaching their destinations, time that could otherwise be used for more productive activities [15, 16]. Traffic congestion is classified as recurrent or nonrecurrent. Nonrecurrent congestion events are usually caused by traffic accidents, signal malfunctions, and other events that disrupt normal traffic flow and reduce road capacity [17]. Both types of congestion produce severe rises in traffic volume, and the intensive use of fossil fuels together with the large number of vehicles on the road is the main source of harmful emissions [10]. It is widely evident that vehicular traffic contributes significantly to emissions and fuel consumption, with substances such as carbon dioxide, carbon monoxide, volatile organic compounds, nitrogen oxides, and particulate matter as the main pollutants. Some of these substances dissipate into the environment, leading to air pollution and smog that harm sustainability, and such pollutants can also cause severe health issues, including respiratory and cardiovascular diseases [18]. The effect of these factors therefore needs to be reduced to achieve cleaner, healthier, and more sustainable transport [19]. Despite rapid technological development in recent years, traffic emissions from fossil fuel use are still increasing owing to the large number of vehicles on the road [20]. Limiting vehicle emissions is therefore necessary for a sustainable smart city.
Although significant technological achievements have reduced vehicle emissions and fuel consumption in recent years [21], emissions must still be brought within standard ranges, which can be achieved with cleaner technologies for reducing traffic pollutants. A major part of the freeway traffic environment cannot meet current mobility demands, resulting in more road users, long queues, rising emissions, severe bottlenecks, and security concerns. The design and development of safety models remain a particular focus for researchers, since traffic accidents cause nonrecurrent congestion: blocking one or more lanes reduces capacity, and additional deceleration is caused by drivers observing accidents or participating in rescue operations [22]. There is therefore a need to improve existing traffic control models and recast them to achieve sustainable goals. Incorporating an effective road network could also reduce traffic accidents and congestion, thereby reducing fuel consumption. A couple of surveys addressing traffic control and strategies with sustainability issues were presented around 2019 [10, 13], and there have been significant developments in traffic control strategies since then. To the best of our knowledge, there is no comprehensive up-to-date survey on traffic control and modeling for obtaining sustainable transportation. Pasquale et al. [10] presented a survey on traffic control strategies covering various sustainability and traffic control issues for the freeway environment; they discussed control strategies in terms of sustainability objectives, comprehensively reviewed traffic emission and safety models, and highlighted various research challenges. Othman et al. [13] presented a survey on traffic modeling and control strategies for a sustainable environment; they reviewed existing traffic models for estimating emissions and energy consumption, examined transportation issues and traffic control strategies for deployment in urban environments, and outlined the challenges and future directions of eco-traffic management systems. A rapid increase in the number of vehicles on the road often leads to traffic incidents and congestion, which in turn significantly increase emissions and fuel consumption [23]. In recent years, several traffic control algorithms have been proposed for the freeway environment; however, the large number of vehicles on the road each day demands higher mobility, improved road structure, and enhanced traffic management. Thus, a new set of traffic control strategies should be introduced to achieve these objectives and to minimize traffic emissions, fuel consumption, and related costs. Traffic control strategies play an important role in obtaining sustainable goals because they not only improve traffic mobility but also enhance traffic management systems. In this paper, we carry out a comprehensive review of published works that provide different solutions for the traffic control system.
The purpose of this survey is to provide a roadmap for those who want to do research on traffic control and strategy in a freeway environment. It comprehensively discusses both traffic control modeling techniques and traffic control strategies. We classify traffic control modeling techniques into three categories: traffic flow models, traffic emission and fuel consumption models, and safety models. These techniques can help enhance traffic flow in a freeway environment. We discuss traffic control strategies in the freeway environment, which provide useful information for enhancing urban traffic and safety management systems. We then present a comprehensive review of recent state-of-the-art methods on the vehicle control design strategy and the traffic control design strategy. Finally, we outline open research challenges and recommend traffic control strategies for achieving sustainable goals. Compared with previous surveys, the contributions of this paper are the following:(i)
We present a comprehensive review of different traffic control modeling techniques that help provide reliable solutions for obtaining reasonable traffic flow and sustainable mobility. Moreover, these techniques could be useful for improving traffic flow in the freeway traffic environment.(ii)
We discuss various traffic control strategies that help researchers and practitioners design a robust traffic controller. Moreover, these strategies provide useful traffic information that can improve traffic flow and enhance the overall performance of the traffic management system.(iii)
We comprehensively discuss the recent state-of-the-art techniques on the vehicle design control strategy and traffic control design strategy. Adoption of these strategies could be helpful in reducing the amount of energy consumption required by a vehicle.(iv)
We discuss open research challenges that help researchers to tackle issues while designing a traffic control system. Then, we recommend some control strategies for obtaining sustainable objectives in traffic systems.(v)
In sum, the proposed survey fills the gap in existing surveys by presenting a comprehensive discussion of traffic control modeling techniques and traffic control strategies, which can help researchers and practitioners choose the best research directions for their future work.The rest of this survey is organized as follows. Section 2 presents traffic control modeling, comprising traffic flow models, traffic emission and fuel consumption models, and safety models; these models can perform well in real-time applications and provide accurate estimates of traffic flow and dynamics. Section 3 presents different traffic control strategies in the freeway traffic environment. Section 4 discusses the vehicle control design strategy for reducing traffic emissions and energy consumption, whereas Section 5 discusses traffic control design strategies. Section 6 presents various research challenges and recommendations for the traffic control system. Finally, Section 7 concludes the survey.
## 2. Traffic Control Modeling
Identifying appropriate traffic control measures is key to obtaining reasonable traffic flow and sustainable mobility. Various types of control actions have been used to regulate traffic flow in different environments [24], including ramp management (traffic lights at on- and off-ramps [25]), mainstream control, lane-changing warnings, incident notifications, and route guidance at intersections. Traffic control modeling techniques are further classified into traffic flow models, traffic emission and fuel consumption models, and safety models, as shown in Figure 1.Figure 1
Methods of traffic control modeling.A traffic control modeling framework is useful for developing various control measures and needs to be defined in terms of both the traffic flow description and urban sustainability issues. Figure 1 shows the block diagram of freeway traffic control methods, including traffic flow models and traffic safety models. The traffic control mechanism should perform well in real-time applications and provide accurate estimates of traffic flow and dynamics. Note that traffic safety depends on the characteristics and features of traffic flow; these can be obtained from various traffic models, and considerable input information is required to design a robust traffic safety system. Most safety models analyze crash risk based on road features, weather conditions, and similar factors. The validation and calibration of safety models remain a critical issue because they require the collection of a large amount of traffic data over a long period in the freeway environment, owing to the rarity of the events that lead to traffic incidents. Researchers should therefore focus on choosing the optimal traffic model, one that provides accurate estimation and detection of events while remaining computationally efficient.
### 2.1. Traffic Flow Modeling Technique
In this section, we discuss traffic flow modeling schemes. Traffic flow (TF) models capture the dynamic behavior of real traffic systems through mathematical relationships. In an intelligent traffic management system, traffic flow prediction can be used for traffic planning, improving traffic and road safety, and simulating specific control measures [10]. Lighthill and Whitham [26] proposed one of the earliest traffic flow models, which has since been applied across a wide range of fields. Traffic models can be classified based on different criteria [27, 28]. Figure 2 shows the classification of traffic flow models into microscopic, macroscopic, and mesoscopic models.Figure 2
Traffic flow models.Traffic flow models are commonly classified as microscopic, macroscopic, or mesoscopic [28, 29]. These models are distinguished from each other by their level of detail.
#### 2.1.1. Microscopic Traffic Models
Microscopic models are computer-based models that represent the behavior of each vehicle and its driver in a road network [30, 31]. The outcome depends on the number of generated vehicles, the defined network routing, and the evaluated vehicle behavior; because of these variations, the model must be run several times to obtain reliable results. Microscopic models are very accurate and usually run on simulation platforms, but they can be computationally expensive when used for control operations [10].
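As an illustration of what a microscopic model computes per vehicle, the sketch below implements the acceleration rule of the well-known Intelligent Driver Model (IDM) of Treiber and colleagues in Python. The parameter values are typical textbook choices, not taken from any study cited here.

```python
import numpy as np

def idm_acceleration(v, v_lead, gap, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model: a classic microscopic car-following rule.

    v, v_lead : speeds of the subject and the leading vehicle (m/s)
    gap       : bumper-to-bumper distance to the leader (m)
    v0 desired speed, T safe time headway, a maximum acceleration,
    b comfortable deceleration, s0 minimum gap (illustrative values).
    """
    s_star = s0 + v * T + v * (v - v_lead) / (2 * np.sqrt(a * b))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# A vehicle at 25 m/s closing on a leader at 20 m/s, 30 m ahead:
print(idm_acceleration(25.0, 20.0, 30.0))  # negative value -> braking
```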
#### 2.1.2. Macroscopic Traffic Models
Macroscopic traffic flow models are mathematical models that represent aggregate traffic dynamics such as density, flow, and the traffic stream. They are obtained by aggregating microscopic traffic flow models, converting individual-vehicle characteristics into system-level characteristics [30]. Macroscopic models offer flexible calibration and are computationally cheaper than microscopic models [10]. Macroscopic models are further categorized into continuous and discrete traffic models; discrete models are commonly used for traffic networks and can be further divided according to the number of state variables [10]. First-order macroscopic traffic flow models are the simplest, describing the dynamics of aggregate vehicles in terms of traffic volume [27]. The most commonly used first-order discrete model is the cell transmission model (CTM), which has been widely used by the research community over the last decades [32, 33]. The CTM is a nonlinear model commonly used for control applications [34, 35]. Second-order macroscopic traffic flow models consist of two dynamic equations, one for the density and one for the mean vehicle speed [36]. METANET is one of the most reliable discrete second-order models [37]; it is a nonlinear model used for control applications, but it is more complex and computationally expensive than the CTM. First-order and second-order models have been extended to represent the heterogeneous features of traffic flow, leading to multiclass traffic models [10]. These discriminate user categories according to vehicle type, such as car, truck, and bus, and describe relevant features that cannot be captured by single-class models. Recently, various multiclass discrete first-order models have been proposed. Roncoli et al. [38] introduced a first-order multilane macroscopic traffic flow model for motorways; they extended the CTM dynamics and considered lane changes in computing lateral and longitudinal flows, obtaining good accuracy on real traffic data. Liu et al. [39] integrated bus-class vehicles into the CTM, applying the BUS-CTM to road links to determine comprehensive network information; numerical simulations show that it performs reliably compared with traditional CTM models. Qian et al. [40] proposed a macroscopic heterogeneous traffic flow model for controlling traffic mobility, considering various vehicle classes that follow homogeneous car-following behaviors and vehicle attributes. Boyles and Boyles [41] modeled arbitrary shared-road situations using the CTM; the model relies on variations in traffic capacity and backward wave speed as a function of the class proportions within each cell, and it performs better when the proportion of autonomous vehicles is higher. Several discrete multiclass second-order models have also been proposed. Deo et al. [42] extended METANET to heterogeneous traffic flow by defining the features and class of each vehicle. Liu et al. [43] proposed a multiclass METANET model.
It extends the single-class macroscopic METANET model and uses a predictive control technique for online traffic control; simulation results show that it outperforms the single-class METANET model. Pasquale et al. [44] proposed a multiclass control technique for freeway traffic networks, combining ramp metering and route guidance to substantially reduce emissions; simulation results show that the proposed method provides a better control framework for different vehicle types.
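To make the structure of first-order models concrete, the following Python sketch implements one update step of a minimal single-lane CTM with a triangular fundamental diagram. The cell length, time step, and parameter values are illustrative and chosen to satisfy the CFL condition; the boundaries are treated as closed for simplicity.

```python
import numpy as np

def ctm_step(rho, dt=10.0, dx=500.0, vf=30.0, w=6.0, rho_jam=0.15, q_max=0.6):
    """One update of a minimal single-lane cell transmission model (CTM).

    rho     : vehicle density per cell (veh/m)
    vf, w   : free-flow speed and backward wave speed (m/s)
    rho_jam : jam density (veh/m); q_max : capacity (veh/s)
    Inter-cell flow = min(demand of sender, supply of receiver).
    """
    demand = np.minimum(vf * rho, q_max)              # what each cell can send
    supply = np.minimum(w * (rho_jam - rho), q_max)   # what each cell can take
    flow = np.minimum(demand[:-1], supply[1:])        # flows between cells
    rho = rho.copy()
    rho[:-1] -= flow * dt / dx   # outflow from upstream cells
    rho[1:] += flow * dt / dx    # inflow to downstream cells
    return rho

# A congested cell in the middle of an otherwise lightly loaded stretch:
rho = np.array([0.02, 0.02, 0.12, 0.02, 0.02])
for _ in range(5):
    rho = ctm_step(rho)
print(rho)  # congestion discharges forward while a wave moves backward
```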
#### 2.1.3. Mesoscopic Traffic Models
Mesoscopic traffic models provide an intermediate level of detail, describing vehicle flow in terms of probability distributions; they include cluster models and gas-kinetic models. Traffic models can also be continuous or discrete. Continuous models represent space and time continuously, with system dynamics described by differential equations; in discrete models, space and time are discretized and the dynamics are described by difference equations. Discrete models are usually used for real-time control schemes in freeway traffic networks, while in recent years much research has also focused on continuous microscopic models for controlling the traffic flow system.
### 2.2. Traffic Safety Modeling Technique
In recent years, several safety models have been proposed with the aim of designing systems that improve traffic and road safety. The design and development of safety models remain a focus for researchers, since traffic accidents cause nonrecurrent congestion: blocking one or more lanes reduces capacity, and additional deceleration is caused by drivers observing accidents or participating in rescue operations [22]. Various studies have focused on the statistical analysis of historical crash data to determine the traffic conditions and other factors that lead to incidents, such as road structure, driver behavior, and environmental factors [45]. Lord et al. [46] examined the correlation between traffic safety levels and traffic conditions in a freeway environment, discussing the relationship between crashes and traffic data (flow and density) for a Canadian site. Potts et al. [47] first studied the relationship between traffic safety and density. Pasquale et al. [48] introduced a risk indicator that estimates the number of crashes in a freeway environment within a specific time window; as shown in [48], the index can be added as an objective in the cost function of the control problem. The number of crashes is obtained by combining two terms related to the on-ramps and the mainstream; ramp control may create long queues, which can increase crashes at on-ramp sites [10]. Yeo et al. [49] introduced a method to examine the relationship between traffic states and crashes in the freeway environment: they first classified traffic states according to their characteristics and patterns for each freeway section and then linked the crash data to the traffic states based on upstream and downstream traffic. Tested on a 32-mile section of California's I-880, the method characterized crash involvement well across traffic states. Chang and Xiang [50] analyzed crash likelihood as a function of traffic flow. Golob et al. [51] examined different safety levels in the freeway environment, using data from single loop detectors to monitor traffic conditions and analyzing over 1700 accidents on the freeways of Orange County, California. Lee et al. [52] studied the traffic flow characteristics that precede crashes (crash precursors) in the freeway environment, using data from 38 loop detectors on the Toronto Expressway; the results show that crash potential can be assessed from precursors extracted from real-time data. Pasquale et al. [48] derived a risk indicator intended for traffic control applications, defining a nonlinear optimal control problem that aims to estimate the number of incidents and crashes. They developed a global safety index that identifies the number of incidents and crashes from the current traffic state in the freeway environment, together with performance indicators for evaluating traffic delay and queue length.
The resulting traffic control strategy is a nonlinear optimal control problem in the control variables, which can be solved with a gradient-based algorithm. The proposed model can, however, produce long queues at both on-ramps and off-ramps, which increases the risk of crashes.
### 2.3. Traffic Emission and Fuel Consumption Models
Traffic emissions and the pollution caused by dispersing fossil fuel combustion products into the environment grow with increasing vehicular traffic. Algorithms are therefore needed to determine the emissions generated by traffic flow. Traffic emission and fuel consumption models are central to developing a sustainable smart city: they help reduce emissions by quantifying the pollutants released into the air and the consumption rate under different traffic conditions such as traffic flow, vehicle speed, and acceleration. These parameters can be obtained from loop detectors placed on the road network or from data generated by traffic flow models [36, 53]. In general, traffic emissions and fuel consumption depend on the operating conditions of the vehicle configuration and on the driver's attitude, including decisions about passing through signalized intersections [54], as well as on acceleration, deceleration, and vehicle speed. Emissions depend not only on vehicle dynamics but also on the fuel used and on the mechanical features and characteristics of the vehicle; environmental factors such as temperature and humidity also affect sustainability. Recently, several methods have been proposed for estimating vehicle emissions and fuel consumption in pursuit of a sustainable smart city. As indicated by Treiber and Kesting [53], a traffic emission model quantifies local emissions in kilograms. In choosing an emission model, researchers rely on the descriptive power required by their application: microscopic models are commonly used for offline evaluation, while macroscopic models are generally used for traffic control applications because they analyze the traffic management system within an efficient computational framework. COPERT is the most common macroscopic emission model used for traffic control in the freeway environment [55, 56]. It computes local emission factors for a range of pollutants and vehicle types and belongs to the family of average-speed emission models; it is distinct from emission models based on embedded on-board vehicle technology. COPERT provides good estimates under different traffic conditions at low computational cost, making it a robust and suitable modeling approach for online control schemes. Recently, various approaches have been employed to overcome the limitations of COPERT, such as macroscopic forms of microscopic emission models: the VERSIT+ and VT-micro models have been extended to the macroscopic case as macroscopic VERSIT+ [58] and VT-macro [57], respectively. These regression-based models capture the relationship between speed and acceleration through linear regression [10]. They differ from COPERT by considering acceleration effects to obtain more accurate emission estimates. Both VT-macro and macroscopic VERSIT+ can be used in single-class and multiclass forms, depending on the traffic control system and traffic model.
They integrated the macroscopic and microscopic emission models with each other. Then, they demonstrated the proposed framework by considering METANET and VT-macro models. Second, they identified the error produced by the VT-macro model in comparison with the original VT-macro model. Finally, they assessed the performance of the proposed method by analyzing the error introduced by the VT-macro model and determining the computational time of the Dutch A12 highway. The aim of the VERSIT+ macroscopic model is identified by limited parameters with the simple computational method. Therefore, it could be implemented in online traffic control schemes. The VERSIT+ macroscopic model in the multiclass domain computes the traffic emission factors in terms of mainstream traffic flow and assesses them from entering on-ramps and off-ramps based on the average vehicle speed and acceleration. These parameters are aggregated based on the vehicle class. Pasquale et al. [59] introduced a two-class macroscopic emission model to overcome the traffic pollution generated on the freeway. They employed a two-class embedded local traffic controller that relies on a ramp metering model to minimize traffic emission and congestion. The simulation result shows that the proposed model obtained a better reduction in traffic emission.Recently, a few dispersion models have been proposed to overcome traffic emissions, which aim to enhance the sustainable smart city. In this regard, Buckland and Middleton [60] introduced a dispersion model which could identify high-level complexity by considering different environments, such as atmospheric obstacles. To develop robust traffic control strategies for obtaining sustainable objectives, the traffic dispersion model could be formulated as highlighted in Ref. [61]. In Ref. [61], Csikós et al. proposed a dynamic model for dispersing highway traffic emissions. They developed an integrated model with a Gaussian plume model which is transformed into a discrete time and space. This discrete model is computationally efficient and produces a better output when applied to traffic control systems and leads to transformation into a sustainable smart city. Zegeye et al. [62] introduced a model-based traffic control system for controlling vehicle speed limits and reducing road traffic emission at freeway. They aimed to reduce emission dispersion levels by considering a nearby public area on the freeway, travel times, and the wind speed direction. The simulation result reveals that the proposed system obtained a better dispersion of traffic emissions.
### 2.1. Traffic Flow Modeling Technique
In this section, we discuss traffic flow modeling schemes. Traffic flow (TF) models capture the dynamic behavior of real traffic systems through mathematical relationships. In an intelligent traffic management system, traffic flow prediction can be used for traffic planning, for improving traffic and road safety, and for simulating specific control measures [10]. Lighthill and Whitham [26] proposed one of the earliest traffic flow theories, which has since been applied in many fields. Traffic models can be classified according to different criteria [27, 28]. Figure 2 shows a block diagram of this classification, which distinguishes microscopic, macroscopic, and mesoscopic traffic models.

Figure 2: Traffic flow models.

Traffic flow models are thus commonly divided into microscopic, macroscopic, and mesoscopic models [28, 29], which differ from each other in their level of detail.
#### 2.1.1. Microscopic Traffic Models
Microscopic models are computer-based models that represent the behavior of each individual vehicle and its driver in a road network [30, 31]. Their output depends on the number of generated vehicles, the defined network routing, and the modeled vehicle behavior; because of this variation, the model must be run several times to obtain reliable results. Microscopic models are very accurate and usually run on simulation platforms. However, they can be computationally expensive when applied to control operations [10].
#### 2.1.2. Macroscopic Traffic Models
Macroscopic traffic flow models are mathematical models that represent aggregate traffic dynamics such as density, flow, and the traffic stream. They are obtained from microscopic traffic flow models by aggregating the characteristics of individual vehicles into system-level characteristics [30]. Macroscopic models offer flexible calibration and are computationally cheaper than microscopic models [10].

Macroscopic models are further categorized into continuous and discrete traffic models, with discrete models being the most commonly used in traffic networks. Discrete macroscopic models can be divided further according to the number of state variables they accommodate [10]. First-order macroscopic traffic flow models are the simplest: they describe the dynamics of the aggregate vehicle density, which represents the traffic volume [27]. The most common first-order discrete model is the cell transmission model (CTM), which has been used widely by the research community over the last decades [32, 33]. The CTM is a nonlinear model commonly employed in control applications [34, 35].

Second-order macroscopic traffic flow models consist of two dynamic equations, one for the density and one for the mean vehicle speed [36]. METANET is one of the most widely used discrete second-order models [37]. It is a nonlinear model suited to control applications, but it is more complex and computationally expensive than the CTM. Both first-order and second-order models have been extended to represent the heterogeneous features of traffic flow, leading to multiclass traffic models [10]. These models distinguish user categories by vehicle type, such as car, truck, and bus, and describe relevant features that single-class models cannot capture.

Various multiclass discrete first-order models have been proposed. Roncoli et al. [38] introduced a first-order multilane macroscopic traffic flow model for motorway traffic. They extended the CTM dynamics and considered scenarios such as lane changing to compute lateral and longitudinal traffic flows; the model achieved good accuracy on real-time traffic data. Liu et al. [39] integrated a bus vehicle class into the CTM and applied the resulting BUS-CTM to road links to obtain comprehensive network information; numerical simulations showed reliable performance compared with traditional CTM models. Qian et al. [40] proposed a macroscopic heterogeneous traffic flow model for controlling traffic mobility, considering several vehicle classes that follow homogeneous car-following behaviors and vehicle attributes. Boyles and Boyles [41] modeled arbitrary shared-road situations using the CTM; their model captures the variation in capacity and backward wave speed as a function of the class proportions within each cell and performs better when the proportion of autonomous vehicles is higher.

Several discrete multiclass second-order models have also been proposed. Deo et al. [42] extended METANET to heterogeneous traffic flow by defining the features and class of each vehicle. Liu et al. [43] proposed a multiclass METANET model.
It extends the single-class macroscopic METANET traffic flow model, and the authors combined it with a predictive control technique for online traffic control. Simulations showed better performance than the single-class METANET model. Pasquale et al. [44] proposed a multiclass control technique for freeway traffic networks that combines ramp metering and route guidance to reduce emissions and travel times; simulation results showed that it provides an effective control framework for different vehicle types.
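Returning to the first-order CTM described above, the sketch below implements one update step of a minimal single-class CTM under the usual triangular fundamental diagram. All parameter values and the closed-stretch boundary handling are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def ctm_step(density, v_free=100.0, w=25.0, rho_jam=180.0,
             q_max=2200.0, cell_len=0.5, dt=0.005):
    """One update step of a minimal single-class cell transmission model.

    density  : per-cell densities [veh/km]; speeds in km/h, flows in veh/h,
               dt in hours (must satisfy the CFL condition dt <= cell_len/v_free).
    """
    # Demand: what each cell can send downstream (free-flow branch, capped).
    demand = np.minimum(v_free * density, q_max)
    # Supply: what each cell can receive from upstream (congested branch, capped).
    supply = np.minimum(w * (rho_jam - density), q_max)
    # Flow over each internal cell boundary: min of upstream demand, downstream supply.
    flow = np.minimum(demand[:-1], supply[1:])
    # Vehicle conservation (no inflow/outflow at the stretch ends in this sketch).
    new_density = density.copy()
    new_density[:-1] -= dt / cell_len * flow
    new_density[1:] += dt / cell_len * flow
    return new_density

# Example: a congested cell in the middle of an otherwise light stretch.
rho = np.array([20.0, 20.0, 150.0, 20.0, 20.0])
print(ctm_step(rho))
```

Repeating the step propagates the congestion backward as a wave, which is exactly the behavior the control schemes discussed here try to suppress.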
#### 2.1.3. Mesoscopic Traffic Models
Mesoscopic traffic models provide an intermediate level of detail and describe vehicle flow in terms of probability distributions; they include cluster models and gas-kinetic models.

Traffic models are also commonly divided into continuous and discrete models. Continuous models treat space and time as continuous, and the system dynamics are described by differential equations. In discrete traffic models, space and time are discretized and the dynamics are described by difference equations. Discrete models are usually preferred for real-time control schemes in freeway traffic networks, while researchers from the communication and technology community have recently focused on continuous microscopic models for controlling the traffic flow system.
### 2.2. Traffic Safety Modeling Technique
In recent years, several safety models have been proposed with the aim of designing systems that provide traffic and road safety. The design and development of safety models remains a central issue for researchers because traffic accidents cause nonrecurrent congestion: one or more lanes may be blocked, reducing capacity, and drivers decelerate to observe accidents or to take part in rescue operations [22].

Various studies have focused on the statistical analysis of historical crash data in order to determine the traffic conditions and other factors that lead to incidents, such as road structure, driver behavior, and environmental factors [45]. Lord et al. [46] examined the correlation between traffic safety levels and traffic conditions in a freeway environment, discussing the relationship between crashes and traffic data (flow and density) from a Canadian site. Potts et al. [47] were among the first to study the relationship between traffic safety and traffic density. In Ref. [48], Pasquale et al. introduced a risk indicator that estimates the number of crashes in a freeway environment within a specific time window. As shown in Ref. [48], this index can be added as an objective in the cost function of the control problem. The number of crashes is obtained by combining two terms, one related to the on-ramps and one to the mainstream; ramp control may create long queues, which increases the risk of crashes at on-ramp sites [10].

Yeo et al. [49] introduced a method to examine the relationship between traffic states and crashes in the freeway environment. They first characterized different traffic states according to their patterns on each freeway section, and then linked the crash data to the traffic states based on upstream and downstream traffic. The method was tested on a 32-mile section of the California I-880 dataset and characterized crash involvement across the different traffic states. Chang and Xiang [50] analyzed crash probability as a function of traffic flow. Golob et al. [51] examined different safety levels in the freeway environment, using data from single loop detectors to monitor traffic conditions; their study covered over 1700 accidents on freeways in Orange County, California. Lee et al. [52] studied the traffic flow characteristics that precede crashes (crash precursors) in the freeway environment, using data from 38 loop detectors on the Toronto Expressway; the results show that crash potential can be assessed from precursors extracted from real-time data. Pasquale et al. [48] derived a risk indicator intended specifically for traffic control applications. The authors defined a nonlinear optimal control problem that estimates the number of incidents and crashes and developed a global safety index that quantifies them as a function of the current traffic state in the freeway environment, together with performance indicators for traffic delay and queue length.
The proposed traffic control strategy is formulated as a nonlinear optimal control problem, which can be solved with gradient-based algorithms. A drawback is that the control action can create long queues on both on-ramps and off-ramps, which in turn increases the risk of crashes.
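As a purely illustrative sketch of the idea behind such indicators (not the index of Ref. [48], whose formulation is more elaborate), a crash-risk score might combine a mainstream term with an on-ramp queue term as follows; all thresholds and weights are hypothetical.

```python
def risk_index(density, queue_len, rho_crit=33.5, queue_cap=50.0,
               alpha=0.6, beta=0.4):
    """Toy safety indicator for one freeway section.

    density  : mainstream density [veh/km/lane]
    queue_len: current on-ramp queue [veh]
    rho_crit : critical density [veh/km/lane] (hypothetical value)
    queue_cap: ramp storage capacity [veh] (hypothetical value)

    The mainstream term grows as the section exceeds its critical density
    (the unstable, crash-prone regime); the ramp term grows as the queue
    fills the ramp. alpha and beta are arbitrary weights.
    """
    mainstream_term = max(0.0, density / rho_crit - 1.0)
    ramp_term = min(queue_len / queue_cap, 1.0)
    return alpha * mainstream_term + beta * ramp_term

# Example: a section slightly above critical density with a half-full ramp.
print(risk_index(density=40.0, queue_len=25.0))
```

A score of this kind can be added as a penalty term to a control cost function, which is how the safety index of Ref. [48] is used.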
### 2.3. Traffic Emission and Fuel Consumption Models
Increasing vehicular traffic is a main cause of traffic emissions and of the pollution released into the environment by fossil fuel combustion, so algorithms are needed to determine the emissions generated by traffic flow. Traffic emission and fuel consumption models are therefore a key ingredient for developing a sustainable smart city. These models help reduce traffic emissions by quantifying the pollution released into the air and the rate of fuel consumption under different traffic situations, characterized by traffic flow, vehicle speed, and acceleration. These parameters can be obtained from loop detectors placed on the road network or from simulated data generated by traffic flow models [36, 53].

Generally, traffic emissions and fuel consumption depend on the operating conditions of the vehicle, the driver's attitude toward driving, and decisions such as whether to pass through a signalized intersection [54]. They also depend on acceleration, deceleration, and vehicle speed. Beyond vehicle dynamics, emissions depend on the adopted fuel and on the mechanical features and characteristics of the vehicle, while environmental factors such as temperature and humidity also affect sustainability. Several methods have recently been proposed that support a sustainable smart city by estimating the traffic emissions caused by vehicles and the amount of fuel consumed. As indicated by Treiber and Kesting [53], a traffic emission model quantifies the locally generated emissions in kilograms. When building an emission model, researchers choose the descriptive power that meets their application requirements: microscopic models are commonly used for offline evaluation, while macroscopic models are generally used in traffic control applications because they analyze the traffic system comprehensively within an efficient computational framework.

COPERT is the most common macroscopic emission model used for traffic control in the freeway environment [55, 56]. The COPERT model computes local emission factors for different pollutants and vehicle types and belongs to the family of average-speed emission models. It differs from emission models based on on-board vehicle technology, and it provides good estimates under different traffic conditions with little computational time, which makes it a robust and suitable modeling approach for online control schemes.

To overcome the limitations of COPERT, macroscopic forms of microscopic emission models have been developed: the VT-micro and VERSIT+ models were extended to the macroscopic case as VT-macro [57] and macroscopic VERSIT+ [58], respectively. These regression-based models capture the relationship between speed and acceleration using linear regression [10]. They differ from COPERT in that they take acceleration effects into account, yielding more accurate emission estimates. Both VT-macro and macroscopic VERSIT+ can be used in single-class or multiclass form, depending on the traffic control system and the traffic model. In Ref. [57], Zegeye et al. introduced a macroscopic framework for solving traffic control problems.
They integrated the macroscopic traffic model and the microscopic emission model with each other, demonstrated the framework by combining METANET with the VT-macro model, quantified the approximation error of the VT-macro model relative to the original VT-micro model, and assessed the performance of the approach in terms of this error and of computational time on the Dutch A12 highway. The macroscopic VERSIT+ model is characterized by few parameters and a simple computational method, so it can be implemented in online traffic control schemes. In its multiclass form, the macroscopic VERSIT+ model computes traffic emission factors for the mainstream flow and for vehicles entering from on-ramps and leaving at off-ramps, based on average vehicle speed and acceleration, with the parameters aggregated per vehicle class. Pasquale et al. [59] introduced a two-class macroscopic emission model to reduce the traffic pollution generated on freeways. They employed a two-class local traffic controller based on ramp metering to minimize traffic emissions and congestion, and their simulations show a clear reduction in traffic emissions.

A few dispersion models have also been proposed to handle how traffic emissions spread, again with the aim of enhancing the sustainable smart city. Buckland and Middleton [60] introduced a dispersion model that can handle high-complexity settings with different environments, such as atmospheric obstacles. To develop robust traffic control strategies with sustainability objectives, the traffic dispersion model can be formulated as highlighted in Ref. [61], where Csikós et al. proposed a dynamic model for the dispersion of highway traffic emissions. They integrated the traffic model with a Gaussian plume model transformed into discrete time and space; the discrete model is computationally efficient and performs well when applied to traffic control systems, supporting the transformation toward a sustainable smart city. Zegeye et al. [62] introduced a model-based traffic control system that adjusts variable speed limits to reduce freeway traffic emissions. They aimed to reduce the dispersion of emissions toward a public area near the freeway while accounting for travel times and wind speed and direction, and their simulations show improved dispersion of traffic emissions.
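To illustrate the average-speed idea behind COPERT-type models discussed above, the sketch below computes an emission rate for one segment from mean speed and flow. The polynomial coefficients are placeholders chosen for the example, not calibrated COPERT values.

```python
def average_speed_emissions(mean_speed_kmh, flow_veh_h, segment_km,
                            coeffs=(1.1, -0.015, 1.2e-4)):
    """Average-speed emission estimate for one freeway segment.

    The emission factor [g/veh/km] is modeled as a polynomial in the mean
    speed (the general shape used by average-speed models such as COPERT);
    multiplying by vehicle-kilometres travelled gives grams per hour.
    The coefficients here are illustrative, not official COPERT parameters.
    """
    a, b, c = coeffs
    ef = a + b * mean_speed_kmh + c * mean_speed_kmh ** 2  # g/veh/km
    vkt_per_hour = flow_veh_h * segment_km                 # veh-km per hour
    return ef * vkt_per_hour                               # g per hour

# Example: 1800 veh/h travelling at a mean 90 km/h over a 2 km segment.
print(average_speed_emissions(90.0, 1800.0, 2.0))
```

Acceleration-aware models such as VT-macro refine this estimate by adding acceleration-dependent terms, which is why they outperform purely speed-based factors near congestion.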
## 3. Traffic Control Strategies on Freeways
In recent years, traffic control for the freeway environment has attracted a great deal of attention from researchers with communication and technology backgrounds. The related existing schemes can be categorized by traffic modeling framework, control mechanism, and type of sustainable control strategy. These techniques play an important role in designing freeway traffic controllers and provide the information needed to improve sustainability in the urban traffic system. Table 1 summarizes the research works on traffic control strategies.
Table 1. Summary of the research works on traffic control strategies. The last column marks, in the order given in the source, which of the following the work addresses: control strategy, control method, emission, and sustainability issue.

| Reference | Year | Features/objectives | Strategy / method / emission / sustainability |
|---|---|---|---|
| Pasquale et al. [44] | 2017 | Multiclass traffic control method combining two control strategies | ✓✓✓✓ |
| Ferrara et al. [58] | 2017 | Control system to regulate traffic flow in the freeway network | ✓✓✓✓ |
| Zegeye et al. [63] | 2012 | Predictive traffic controller using parameterized control policies | ✓✓✓ |
| Groot et al. [64] | 2013 | Integrated METANET freeway and VT-macro emission models | ✓✓✓ |
| Csikós et al. [65] | 2018 | Methods for reducing jam waves | ✓✓✓ |
| Liu et al. [66] | 2017 | Endpoint added to the multiclass traffic flow model to identify traffic pattern behavior | ✓✓✓ |
| Wang et al. [67] | 2016 | Estimation of different traffic conditions | ✓✓ |
| Ahn and Rakha [68] | 2013 | Impacts of using eco-routing strategies | ✓✓✓✓ |
| Abdel-Aty et al. [69] | 2006 | Variable speed limit strategies for improving safety | ✓✓✓ |
| Sheikh et al. [70] | 2020 | Incident detection technique using the V2I model | ✓✓✓ |
| Yu and Abdel-Aty [71] | 2014 | Feasibility of using VSL | ✓✓✓ |
| Pasquale et al. [72] | 2014 | Two-class traffic control strategy | ✓✓✓✓ |
| Li et al. [73] | 2014 | Generic model to solve the optimization problem | ✓✓✓ |
| Groot et al. [74] | 2015 | Stackelberg game to reduce traffic congestion | ✓✓✓ |
| Pasquale et al. [75] | 2015 | Multiclass ramp metering technique to reduce traffic emission | ✓✓✓✓ |
### 3.1. Modeling Framework Classification
In freeway traffic control, different types of models can be used to investigate control strategies. These models form the core of model-based control techniques and can be used effectively to simulate and validate different traffic scenarios.

Several studies have combined METANET as the traffic flow model with the VT-macro emission model [63]. In Ref. [63], Zegeye et al. introduced a predictive traffic controller based on parameterized control policies, adopting different control measures to handle different traffic conditions; the approach significantly reduces computational time. Groot et al. [64] investigated the integrated METANET and VT-macro emission models under model-based predictive control (MPC), proposing a piecewise-affine (PWA) approximation of the nonlinear METANET model for real-time control; the method achieved better computational speed at comparable cost function values. Csikós et al. [65] proposed a control system based on the second-order METANET model to reduce jam waves on the motorway, designing different controllers for predefined control modes. Ferrara et al. [58] introduced a control scheme to regulate traffic flow in freeway networks, using ramp metering to identify and reduce traffic congestion and combining METANET with the macroscopic VERSIT+ model to improve traffic regulation.

The multiclass METANET model has been combined with COPERT to evaluate traffic emissions and with the macroscopic multiclass VERSIT+ model [44, 48]. Liu et al. [66] compared extended versions of multiclass METANET, FASTLANE, and multiclass VERSIT+, adding endpoints to these multiclass traffic flow models to identify traffic pattern behavior. Ahn and Rakha [76] estimated traffic emissions and fuel consumption using data obtained from probe vehicles. Wang et al. [67] introduced an efficient multiple-model particle filter (EMMPF) that uses GPS-equipped probe vehicles to estimate different traffic conditions. Ahn and Rakha [68] used the VT-micro emission model to determine emissions and applied a microscopic model to simulate various traffic dynamics.

Microscopic traffic simulation has also been used to evaluate variable speed limits. Lee et al. [77] proposed automatic control strategies that aim to reduce the likelihood of crashes on freeways, using a microscopic simulation model with variable speed limits and an integrated crash prediction model; the simulations indicate that the method can reduce crash risk by 5–17% by defusing risky traffic situations. Abdel-Aty et al. [69] proposed variable speed limit strategies for improving safety in the freeway environment; the system was most effective in medium- to high-speed situations. Sheikh et al. [70] proposed an improved incident detection method based on vehicle-to-infrastructure (V2I) communication. First, they established a connection between the vehicle and the roadside unit (RSU). Second, they used a probabilistic approach to obtain traffic information over V2I communication. Third, a hybrid observer was employed to estimate the possible occurrence of traffic incidents [78, 79].
Finally, a V2I communication-based lane-changing speed mechanism detects traffic incidents, which significantly reduces traffic congestion and improves traffic flow. The simulation results show that the method detects traffic incidents well, and therefore reduces crash risk and helps dissipate congestion. Yu and Abdel-Aty [71] examined the feasibility of variable speed limits (VSL) within an active traffic management system to enhance traffic flow in freeway scenarios. First, they used an extended METANET model to evaluate the effects of VSL on traffic flow; second, a real-time crash risk model quantified the associated risk; finally, an optimization technique determined the VSL strategies. The simulations show that the system can reduce crash risk and thereby improve traffic flow.
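Several of the works above follow the same receding-horizon MPC pattern: predict the traffic state over a horizon with a macroscopic model, score the prediction with a cost mixing travel time and emissions, and apply only the first optimized control. The sketch below shows that generic loop; the `predict` and `cost` callables stand in for a model such as METANET plus VT-macro and are assumptions, not the cited controllers.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(state, predict, cost, horizon=10, u_bounds=(0.0, 1.0)):
    """One receding-horizon MPC step for a traffic controller.

    state   : current traffic state (e.g., densities and speeds per segment)
    predict : callable(state, u_seq) -> predicted trajectory over the horizon
    cost    : callable(trajectory) -> scalar (e.g., travel time + emissions)

    Optimizes the whole control sequence but applies only its first element;
    the optimization is repeated at the next sample with fresh measurements.
    """
    u0 = np.full(horizon, 0.5 * (u_bounds[0] + u_bounds[1]))  # warm start
    res = minimize(lambda u: cost(predict(state, u)),
                   u0, bounds=[u_bounds] * horizon, method="L-BFGS-B")
    return res.x[0]
```

The computational burden noted in the literature comes from re-solving this optimization at every sample, which is why parameterized policies [63] and PWA approximations [64] are attractive.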
### 3.2. Classification Based on Control Theory
Freeway traffic control approaches can also be classified from a control-theoretic perspective, according to the control system and control strategies they employ and their impact on developing sustainable traffic systems. Several works rely on simple control rules to design robust control algorithms. Pasquale et al. [72] proposed a two-class traffic control strategy in which different vehicle types are represented in the dynamic model and controlled separately, adopting the PI-ALINEA control strategy to reduce traffic emissions and alleviate congestion. A related feedback controller was proposed by Pasquale et al. [80], in which the control mechanism predicts and controls the multiclass traffic model.

Other research works are based on optimization-based control techniques [48, 71]; for example, Li et al. [73] proposed a generic model for solving the optimization problem. Applying optimization-based techniques under real-time conditions leads to model predictive control (MPC) [62, 64, 65]. MPC techniques are generally computationally expensive for real-time applications [10]. In Ref. [63], Zegeye et al. proposed a predictive traffic controller based on parameterized control, employing the MPC technique to control freeway traffic and achieving a significant reduction in the controller's computational load. Groot et al. [74] proposed different techniques that extend the Stackelberg game to reduce traffic congestion; in their system, the traffic authorities can induce drivers to follow a desired traffic pattern, and the mechanism achieves near-optimal behavior for a heterogeneous driver class. Some earlier papers do not use an explicit control-theoretic mechanism but instead examine traffic control through simulation tools applied to urban problems [69, 76], or investigate the effects of speed limits and ramp metering [69, 81].
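As a concrete example of such a simple feedback rule, the sketch below implements a PI-ALINEA-style ramp metering update, which corrects the metered ramp flow from downstream occupancy measurements. The gains, set point, and bounds are illustrative values, not those used in [72, 80].

```python
def pi_alinea(r_prev, occ, occ_prev, occ_target=0.18,
              k_p=70.0, k_i=4.0, r_min=200.0, r_max=1800.0):
    """PI-ALINEA-style ramp metering update.

    r_prev   : metering rate applied at the previous step [veh/h]
    occ      : current downstream occupancy (0..1)
    occ_prev : occupancy at the previous step (0..1)

    The integral term steers occupancy toward its set point; the
    proportional term damps fast occupancy changes. Gains are illustrative.
    """
    r = r_prev - k_p * (occ - occ_prev) + k_i * (occ_target - occ)
    return min(max(r, r_min), r_max)  # clip to feasible metering rates

# Example: occupancy rising above the set point -> metering rate decreases.
print(pi_alinea(r_prev=1200.0, occ=0.22, occ_prev=0.20))
```

Because the rule needs only local occupancy measurements, it runs at negligible cost compared with MPC, which explains its popularity as a building block in the schemes above.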
### 3.3. Classification Based on the Control Strategy Type
Selecting and implementing control strategies requires thorough study and investigation if a sustainable traffic control system is to reach its goal, and researchers and practitioners should consider these aspects when designing traffic control models. The literature shows that some control strategies are very effective when combined: ramp metering together with other control methods, for instance, can help achieve sustainability goals. Note, however, that ramp metering can cause long vehicle queues on the ramp, which produce emissions and increase the likelihood of traffic incidents and crashes. Several methods have therefore been proposed to reduce the pollutant emissions and the risk of incidents and crashes at the ramps [48, 58, 66].

Ferrara et al. [58] proposed a congestion and emission reduction scheme for freeway networks based on ramp metering. They employed a supervisory traffic control technique that receives measurements from the entire network to predict system performance; the supervisor decides, through event-triggered logic, when the local controller needs to change. Liu et al. [66] employed a macroscopic traffic flow and emission model to predict traffic network behavior; the results show that the emission model improves control performance in terms of total emissions and reduces queue lengths compared with other approaches. Pasquale et al. [48] introduced a control system for reducing traffic congestion and enhancing traffic safety, developing a safety index that estimates the likely number of crashes as a function of the current traffic state; simulations show that the index helps mitigate congestion and improve the traffic management system. These schemes use ramp control strategies, analyze the risks associated with on-ramp merging areas, and have been applied successfully to emission and incident problems.

Several traffic and emission models reduce traffic emissions more effectively than traditional methods [58, 76, 82]. Various studies combine variable speed limits with ramp metering to manage traffic flow and emissions [62, 63, 65]. These approaches produce robust results, especially when employed to improve traffic safety and the management system, and the control strategies they implement significantly reduce the number of traffic incidents and crashes. Note that the effectiveness of variable speed limits in reducing incidents and crashes depends on the recommended speed level [71, 77]. Overall, the literature shows that traffic control techniques are generally used to reduce traffic emissions and their environmental impact while also reducing the number of traffic incidents in the freeway environment, producing good results once a sufficient safety level is reached.

The aforementioned methods can be extended to a multiclass framework, assessed against traditional traffic control methods, and used to perform specific control tasks. Pasquale et al. [75] employed a multiclass ramp metering technique to reduce traffic congestion and emissions.
Their method allows heavy vehicles to enter the highway freely without waiting at the on-ramps. It significantly reduces congestion and emissions by limiting heavy traffic on the ramps, which would otherwise be a source of high emissions. Pasquale et al. [44] introduced a multiclass traffic control method that combines two control strategies to reduce congestion and emissions, evaluating the control system by predicting traffic scenarios and measuring the system state. Multiclass control schemes require more comprehensive strategies and more accurate system modeling than single-class methods, and the traffic safety and management system needs more robust safety models capable of identifying the impact of traffic incidents and crashes on each vehicle class.

Route guidance has become one of the most successful techniques for reducing traffic emissions and crashes in freeway environments and is considered an eco-routing strategy. For example, the environmental and energy effects of the generated routes and of the routes actually chosen by drivers are analyzed in depth in [76]. Ahn and Rakha [68] examined the impacts of eco-routing strategies, investigating various congestion and penetration levels on real traffic conditions in Cleveland and Columbus, Ohio, USA. Their eco-routing system minimizes traffic emissions and fuel consumption, largely by reducing travel distance.
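Eco-routing of this kind can be sketched as a shortest-path search in which edge weights are estimated emissions or energy rather than travel time. The snippet below shows such a search with Dijkstra's algorithm; the graph structure and the per-edge gram costs are hypothetical.

```python
import heapq

def eco_route(graph, source, target):
    """Minimum-emission route search (Dijkstra with emission edge weights).

    graph : dict mapping node -> list of (neighbor, grams_co2) pairs
    Returns (total grams, node path), or (inf, []) if target is unreachable.
    """
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, grams in graph.get(u, ()):
            nd = d + grams
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float("inf"), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Example: the slightly longer route via C emits less than the route via B.
g = {"A": [("B", 120.0), ("C", 90.0)], "B": [("D", 60.0)],
     "C": [("D", 70.0)], "D": []}
print(eco_route(g, "A", "D"))  # -> (160.0, ['A', 'C', 'D'])
```

This is the same mechanism reported for time-dependent eco-routing in Table 2 (Kluge et al. [100]), where the edge weights additionally vary with the departure time.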
## 4. Vehicle Control Design Strategy
This section discusses vehicle control strategies for reducing traffic emissions and energy consumption. This goal can be achieved with an eco-driving system, which analyzes and computes a vehicle trajectory that reduces the emissions and energy consumption along a given route, and with an eco-routing system, which plans the route requiring minimum energy and emissions. A few recent works discuss these vehicle control strategies [13, 83]; the overall design is shown in Figure 3, and the corresponding research works are summarized in Table 2.
Figure 3: Vehicle control design strategy.

Table 2. Summary of the vehicle control design strategy.

| Reference | Year | Technique | Features/objectives | Performance | Application |
|---|---|---|---|---|---|
| Sciarreta et al. [84] | 2015 | Eco-driving | Considers different road conditions (online and offline) for real-time analysis and estimation | Reduces traffic emission and fuel consumption | Sustainable smart city |
| Ozatay et al. [85] | 2014 | Eco-driving | Reduces energy consumption based on a velocity optimization problem | Significantly reduces energy consumption | Sustainable smart city |
| Dib et al. [86] | 2012 | Eco-driving | Employs performance metrics to determine the energy efficiency of intelligent eco-driving methods | Helps obtain better energy efficiency | Sustainable smart city |
| Hellström et al. [87] | 2009 | Eco-driving | Minimizes trip time and fuel consumption using an on-board optimization controller | Reduces the amount of fuel consumed | Sustainable smart city |
| Dimitrakopoulos and Demestichas [88] | 2010 | Eco-driving | Notifies the driver about traffic light cycles before arrival at the signalized intersection | Better traffic light cycle notification at intersections | ITS |
| Ozatay et al. [89] | 2014 | Eco-driving | Treats traffic lights as stop signs to optimize the speed trajectory | Better optimization of the vehicle speed trajectory | ITS |
| Maher and Vahidi [90] | 2012 | Eco-driving | Uses signal phase and timing information to estimate vehicle energy consumption | Better energy efficiency with less computational time | Sustainable smart city |
| Sun et al. [91] | 2018 | Eco-driving | Investigates speed planning when CVs communicate with traffic lights | Improves traffic flow and significantly reduces congestion | ITS |
| Miyatake et al. [92] | 2011 | Eco-driving | Dynamic-programming-based eco-driving that accounts for traffic signals on the road | Reduces congestion and enhances traffic flow | Sustainable smart city / ITS |
| HomChaudhuri et al. [93] | 2017 | Eco-driving | Decentralized control in which each vehicle forms its own strategy from neighboring vehicles | Improves the traffic management system (less congestion, lane-changing warnings) | ITS |
| De Nunzio et al. [94] | 2016 | Eco-driving | Solves the nonconvex control problem using a suboptimal strategy | Enhances traffic flow with less computational time | Sustainable smart city / ITS |
| Zhang and Cassandras [95] | 2018 | Eco-driving | Control strategy to reduce energy consumption under maximum-throughput criteria | Significantly reduces energy consumption | Sustainable smart city |
| Boriboonsomsin et al. [96] | 2012 | Eco-routing | Eco-routing navigation system to determine routes between trip origins and destinations | Improves the vehicle navigation system | ITS |
| Ericsson et al. [97] | 2006 | Eco-routing | Classifies road networks into groups using GPS data | Reduces a substantial amount of fuel consumption | Sustainable smart city / ITS |
| Liu [98] | 2015 | Eco-routing | Integrates a microscopic vehicle emission model into a Markov decision process for signalized traffic | Improves traffic flow at signalized intersections | Sustainable smart city |
| De Nunzio et al. [99] | 2017 | Eco-routing | Real-time search algorithm providing drivers with different sets of solutions | Reduces congestion and improves travel time | Sustainable smart city |
| Kluge et al. [100] | 2013 | Eco-routing | Solves time-dependent eco-routing with Dijkstra's algorithm | Energy-efficient routing across road networks | Sustainable smart city / ITS |
| Nannicini et al. [101] | 2012 | Eco-routing | Addresses vehicle travel time and distance | Significantly reduces route-planning complexity | ITS |
### 4.1. Vehicle Eco-Driving
Eco-driving is a modern, efficient style of driving that reduces fuel consumption and improves traffic safety. Embedded algorithms compute and analyze candidate vehicle trajectories, taking into account the road structure, forecast traffic flow and congestion, and constraints such as the vehicle trip time and the maximum vehicle speed. Some constraints depend on the driver's attitude toward driving, for instance how the vehicle is driven while traffic signal lights are flashing [83].

In eco-driving, the ego connected vehicle can also cooperate with other vehicles on the road, for instance in a platoon of vehicles that travel closely and safely together at high speed. Platooning reduces fuel consumption and aerodynamic drag, but in multivehicle scenarios the information processing is more complex than for a single vehicle.

Let the vehicle state vector at time step $t$ be $q(t) = [m(t), v(t)]^T$, where $m(t)$ is the vehicle position along the route and $v(t)$ its speed. The aim of eco-driving is to compute at each time step $t$ an input vector $z(t) = [H_{em}(t), H_{en}(t)]^T$, where $H_{em}$ and $H_{en}$ denote the traction force and the mechanical brake force, respectively. A suitable input sequence can significantly reduce the vehicle's emissions or energy consumption [13]. The eco-driving optimization problem over $m$ time steps is stated as follows [84]:

$$\min_{z(0),\ldots,z(m-1)} \; \sum_{t=0}^{m-1} g\big(q(t), z(t)\big). \qquad (1)$$

Following [13], the state at time step $t+1$ is $q(t+1) = f(q(t), z(t))$ with

$$f\big(q(t), z(t)\big) = \begin{bmatrix} m(t) + \vartheta\, v(t) \\[4pt] v(t) + \vartheta\, \dfrac{H_{em}(t) - H_{en}(t) - H_{re}(t)}{M} \end{bmatrix}, \qquad (2)$$

where $H_{re}(t)$ denotes the resistance force acting on the moving vehicle, $M$ the vehicle mass, and $\vartheta$ the sampling time. (A minimal numerical sketch of these dynamics is given at the end of this section.)

A practical difficulty of eco-driving lies in the traction force $H_{em}$ and the mechanical brake force $H_{en}$: these inputs can be applied directly in autonomous and connected vehicles, for both longitudinal and lateral control, but for human drivers the optimization must return a speed profile that the driver can actually follow [13].

Sciarreta et al. [84] surveyed several methods for the eco-driving control problem, aiming to reduce the emissions caused by transportation energy use; they considered both online and offline road conditions for real-time analysis and estimation. Various methods have also been proposed for energy efficiency through offline optimization. Ozatay et al. [85] provided a solution for reducing energy consumption based on a velocity optimization problem: they incorporated road conditions (road structure and grade) into the optimization and generated a vehicle speed trajectory for a given route. Compared against a dynamic programming solution, the method improved the generated trajectory by about 10% relative to cruise speed control. Dib et al. [86] introduced an approach for evaluating the energy use of electric vehicles, using performance metrics to determine the energy efficiency of intelligent eco-driving methods.

A few online solutions have also been presented in recent years [87]. Hellström et al. [87] introduced a method for minimizing trip time and fuel consumption.
They used an on-board optimization controller that accounts for the road slope, with a GPS device providing the road geometry and conditions. Experiments with a heavy truck in a freeway traffic environment show that the method can significantly reduce fuel consumption for an eco-driving vehicle.

In an urban traffic environment, eco-driving is complex and challenging because traffic flow is nonlinear. At a signalized intersection it is difficult to know the traffic light state before arrival, since the phase duration depends on the amount of traffic on the street. As stated by Dimitrakopoulos and Demestichas [88], intelligent transportation systems and urban traffic management systems can reduce these issues by notifying the driver about traffic light cycles before the vehicle arrives at the intersection. Ozatay et al. [89] proposed a method for optimizing the vehicle speed trajectory that treats traffic lights as stop signs. The driver sends traffic information to the cloud; the cloud server generates the routes and collects the corresponding traffic information (i.e., the number of vehicles at each signalized intersection); and the optimization problem is then solved with dynamic programming. The system acts as a speed advisory, so the driver may choose not to follow the generated velocity when the traffic light is green.

The irregularity and uncertainty of traffic light cycles at signalized intersections remain a challenging issue. In this regard, Maher and Vahidi [90] presented a planning algorithm for predicting the optimal velocity. The method uses signal phase and timing information to estimate vehicle energy consumption; the case with no prior phase or timing knowledge represents an unaware driver and yields the minimum energy baseline for a vehicle. The prediction model was evaluated on both averaged and real-time data, and numerical simulations show that it achieves efficient energy use. Sun et al. [91] examined speed planning when connected vehicles (CVs) communicate with traffic lights. They cast eco-driving as a data-driven optimization problem, modeled the red-light duration as a random variable, and analyzed the time a vehicle needs to pass through the signalized intersection.

Several further methods have been proposed to overcome eco-driving issues [92–94]. Miyatake et al. [92] presented an eco-driving method based on dynamic programming and evaluated it on a simulated road with a traffic signal, obtaining good performance. HomChaudhuri et al. [93] developed a model predictive control method for connected vehicles in urban traffic; the control system is decentralized, with each vehicle forming its own strategy based on its neighboring vehicles. Experimental results show that the control method is computationally efficient. De Nunzio et al. [94] proposed a method for consuming less energy while a vehicle travels through signalized intersections, solving the nonconvex control problem with a suboptimal strategy.
After recovering convexity, they solved the optimization problem along a given route to determine the vehicle's crossing time at each signalized intersection. The method produces good results with low computational processing time and is therefore suitable for online use.

To improve traffic safety and avoid incidents and crashes, Zhang and Cassandras [95] introduced a control strategy that reduces energy consumption based on a maximum-throughput criterion. They first formulated the problem of jointly controlling connected vehicles (CVs) and non-CVs traveling on the road so as to reduce energy consumption. Simulation results demonstrate that the method significantly reduces energy consumption as the penetration rate of CVs on the road increases.

A general problem with eco-driving algorithms is that they need accurate traffic conditions, such as traffic flow volume, road structure, and safety conditions. These can be obtained from roadside equipment such as electronic sensors and loop detectors, or from a macroscopic traffic model. Obtaining reliable parameter values from such equipment remains challenging, because drivers' route choices are uncertain and hard to predict, and the safety margin for pedestrians is difficult to analyze.

Autonomous and connected vehicles enable a significant reduction in energy consumption, since they can accurately receive information and guidance from eco-driving algorithms [102]. When connected vehicles form a platoon and communicate with each other, they can reduce energy consumption along the traffic route on which the platoon was formed, even if the vehicles have different destinations [103].
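To make the optimization in (1)–(2) concrete, the sketch below solves a simplified instance by backward dynamic programming over a discretized speed grid. It is a minimal illustration under assumed parameters, not the formulation of any cited paper: the mass, time step, force grid, resistance model, and stage cost are all invented, and the position state $m_t$ is dropped to keep the example short.

```python
import numpy as np

# Minimal eco-driving dynamic program over equations (1)-(2).
# All constants below are illustrative assumptions, not values from the survey.
M = 1500.0          # vehicle mass [kg] (assumed)
theta = 1.0         # time step [s] (assumed)
T = 30              # horizon length (number of steps)
speeds = np.linspace(0.0, 20.0, 41)        # discretized speed grid [m/s]
forces = np.linspace(-3000.0, 3000.0, 25)  # net force H_em - H_en [N]

def resistance(v):
    """Assumed rolling + aerodynamic resistance force H_re."""
    return 100.0 + 0.5 * 1.2 * 0.3 * 2.2 * v**2

def stage_cost(v, u):
    """Assumed running cost g(q_t, z_t): a crude fuel proxy
    (positive traction power) plus a small comfort penalty."""
    return max(u * v, 0.0) * theta + 1e-4 * u**2

# cost_to_go[k, i]: minimal cost from step k when speed == speeds[i]
cost_to_go = np.zeros((T + 1, len(speeds)))
policy = np.zeros((T, len(speeds)), dtype=int)

for k in range(T - 1, -1, -1):
    for i, v in enumerate(speeds):
        best, best_j = np.inf, 0
        for j, u in enumerate(forces):
            v_next = v + theta * (u - resistance(v)) / M  # speed update of (2)
            if v_next < speeds[0] or v_next > speeds[-1]:
                continue
            # interpolate the cost-to-go at the (continuous) next speed
            future = np.interp(v_next, speeds, cost_to_go[k + 1])
            c = stage_cost(v, u) + future
            if c < best:
                best, best_j = c, j
        cost_to_go[k, i] = best
        policy[k, i] = best_j

# Roll the optimal policy forward from an initial speed of 10 m/s.
v = 10.0
for k in range(T):
    i = int(np.argmin(np.abs(speeds - v)))
    u = forces[policy[k, i]]
    v = v + theta * (u - resistance(v)) / M
print(f"terminal speed after {T} steps: {v:.2f} m/s")
```

Exhaustive tabulation like this grows quickly with the state dimension, which is one reason the online approaches cited above resort to parameterized or suboptimal strategies.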
### 4.2. Vehicle Eco-Routing
Eco-routing plays a significant role in planning and determining energy-efficient routes. It determines an optimum route based on the user's requirements, road maps, and road-network attributes such as traffic flow, traffic speed, and fuel consumption [83]. A cost function $g$ is attached to each link of the network and represents the traffic emission of a vehicle traveling on that link of the route. In general, $g$ depends on the time $t$, since traffic network conditions change rapidly; in static eco-routing algorithms, $g$ depends only on the link [13].

Boriboonsomsin et al. [96] introduced an eco-routing navigation system using real-time traffic information. It determines the route between trip origin and destination with an eco-routing algorithm and maintains a dynamic roadway database through a fusion algorithm; real-time vehicle trajectories are then evaluated to determine the energy consumption of each link. Ericsson et al. [97] introduced a method for estimating reductions in fuel consumption. Their eco-routing algorithm identifies and classifies the road network into different groups based on GPS data. They analyzed a large database of real traffic patterns from the road network, extracted different routes from it to evaluate the fuel-saving navigation system, and assessed model performance during peak and off-peak traveling hours across the entire day.

In general, eco-routing algorithms consider only the cost of the links along a vehicle's route and ignore vehicle behavior at signalized intersections. Since this behavior plays an important role in traffic emissions and fuel consumption, several methods focus specifically on energy consumption at road intersections. Liu [98] proposed an eco-routing algorithm for signalized traffic that integrates a microscopic vehicle emission model into a Markov decision process; high-resolution traffic data consisting of vehicle entry and exit records are used to evaluate its performance. De Nunzio et al. [99] proposed a biobjective eco-routing method for urban traffic environments. They formulated the routing problem as a weighted-sum optimization and presented a real-time search algorithm that provides drivers with different sets of solutions; simulations show that these strategies can reduce both energy consumption and traveling time.

Kluge et al. [100] studied energy-efficient routing in an urban road network. They first analyzed the energy consumption of the road network, derived traffic measurements using a mesoscopic traffic model, and then solved a time-dependent eco-routing problem with Dijkstra's algorithm. Heuristic searches can also determine energy-efficient routes, as indicated by Ref. [101], which overcame the route-planning complexity caused by uncertain arrival times using time-dependent graphs.

Eco-routing algorithms require a large amount of computational time to plan an energy-efficient route. To reduce this computational time, one can consider reducing vehicle traveling time or minimizing route distance.
Moreover, the computational time can also be reduced by employing a multiobjective eco-routing algorithm that not only minimizes traffic emissions but also reduces vehicle traveling time and distance. A few works have been presented along these lines, jointly addressing traveling time and distance [99, 104].
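As a concrete illustration of the static case, where the link cost $g$ is fixed, the sketch below runs Dijkstra's algorithm over a tiny road graph whose per-link energy costs are invented for the example; it is not the time-dependent variant of Kluge et al. [100], which would replace the constant costs with functions of the arrival time at each link.

```python
import heapq

# Static eco-routing: Dijkstra's algorithm over assumed per-link energy costs.
# graph[node] = list of (neighbor, energy_cost_in_kJ); values are illustrative.
graph = {
    "A": [("B", 120.0), ("C", 90.0)],
    "B": [("D", 60.0)],
    "C": [("B", 20.0), ("D", 150.0)],
    "D": [],
}

def eco_route(graph, origin, destination):
    """Return (total_energy, path) minimizing the summed link costs g."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == destination:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # reconstruct the path by walking the predecessor links backward
    path, node = [destination], destination
    while node != origin:
        node = prev[node]
        path.append(node)
    return dist[destination], path[::-1]

energy, path = eco_route(graph, "A", "D")
print(f"most energy-efficient route: {' -> '.join(path)} ({energy:.0f} kJ)")
```

On this toy graph the minimum-energy route is A → C → B → D (170 kJ), even though A → B → D has fewer links, which is exactly the kind of trade-off eco-routing is meant to expose.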
## 5. Traffic Control Design Strategy
This section discusses various traffic control design strategies that aim to reduce traffic emissions and the energy consumed by vehicles. We review different traffic control strategies that play a significant role in reducing traffic emissions or energy use; such strategies can improve traffic flow and traffic management in terms of controlling vehicle speed limits, controlling traffic lights, splitting traffic flow at signalized intersections, and using different actuators. Traffic control strategies rely on various actuators, as illustrated in Figure 4, and we discuss the actuators that can be used to implement these strategies in different traffic environments.

Figure 4: Traffic control design strategy.

Traffic control strategies aim to minimize traffic emissions and energy by applying different optimization methods. In the past, several algorithms were proposed to alleviate traffic incidents and congestion and to eliminate shock waves through approaches such as vehicle equalization and homogenization, rather than by minimizing traffic emissions or energy explicitly [13]. Most of these approaches are designed to reduce vehicle acceleration, which subsequently reduces traffic emissions and fuel consumption [105]; before implementing them, one should comprehensively analyze how much traffic emission or energy they can actually save. They can also reduce vehicle speed [106]. Table 3 summarizes research on traffic control design and strategy.

Table 3: Summary of work for traffic control design and strategy.
| Reference | Authors | Year | Strategy | Features/objectives | Application |
|---|---|---|---|---|---|
| [107] | Walraven et al. | 2016 | Speed limit control | Traffic flow optimization based on reinforcement learning. | ITS |
| [108] | Hegyi et al. | 2008 | Speed limit control | A framework for setting speed limits using shock wave theory. | Sustainable smart city/ITS |
| [109] | Zu et al. | 2018 | Speed limit control | Reducing vehicle fuel consumption using the COPERT model on a freeway traffic network. | Sustainable smart city |
| [63] | Zegeye et al. | 2012 | Speed limit control | A predictive traffic controller that optimizes the control-law parameters and determines control inputs. | Sustainable smart city/ITS |
| [110] | Van den Berg et al. | 2007 | Speed limit control | An MPC method based on optimal control inputs for urban and freeway traffic networks. | ITS |
| [111] | Tajali and Hajbabaie | 2018 | Speed limit control | An MPC approach for examining variations in traffic demand. | ITS |
| [112] | De Nunzio et al. | 2014 | Speed limit control | Reducing energy consumption using a macroscopic steady-state analysis. | Sustainable smart city/ITS |
| [113] | Liu and Tate | 2004 | Speed limit control | Determining network effects based on an intelligent speed adaptation (ISA) system. | ITS |
| [114] | Panis et al. | 2006 | Speed limit control | Emission model using empirical measurements and the vehicle emission type. | Sustainable smart city |
| [115] | Zhu and Ukkusuri | 2014 | Speed limit control | Tackling traffic demand uncertainty using a speed limit control model. | Sustainable smart city/ITS |
| [116] | Khondaker and Kattan | 2015 | Speed limit control | An overview of mechanisms for controlling speed limits. | ITS |
| [117] | Stren et al. | 2019 | Mobile actuators | Improving air quality using autonomous vehicles. | Sustainable smart city/ITS |
| [118] | Yang and Jin | 2014 | Mobile actuators | A control-theoretic formulation based on intervehicle communication. | ITS |
| [119] | Wu et al. | 2018 | Mobile actuators | Stabilizing traffic flow with an autonomous vehicle, using string stability and optimal traffic conditions via frequency-domain analysis. | ITS |
| [120] | Liu et al. | 2019 | Mobile actuators | A country-level evaluation investigating greenhouse gas emissions. | Sustainable smart city |
| [121] | Xu et al. | 2011 | Dynamic routing | Integrated traffic control based on MPC. | Sustainable smart city/ITS |
| [82] | Luo et al. | 2016 | Dynamic routing | A route diversion method based on MPC with multiple objectives. | ITS |
| [122] | Wang et al. | 2018 | Dynamic routing | Review of techniques for sustainable transportation systems and smart city applications. | Sustainable smart city |
### 5.1. Speed Limit Control
Speed limit control is used to regulate traffic flow. It aims to minimize traffic emissions and energy consumption by controlling speed limits, which can differ across vehicle locations throughout the road network.

Various methods have recently been proposed for controlling vehicle speed limits in freeway traffic environments. A few works focused on eliminating shock waves rather than on reducing emissions and energy. Walraven et al. [107] proposed a traffic flow optimization method based on reinforcement learning: they formulated the traffic flow problem as a Markov decision process and employed Q-learning to determine the maximum driving speed allowed on the highway. Simulation results show that the method reduces traffic congestion in heavy traffic. Hegyi et al. [108] introduced a method for setting speed limits using shock wave theory: they first employed a shock-wave-based traffic control algorithm and then applied speed limits whenever the shock wave was considered resolvable.

Several other methods aim directly at reducing traffic emissions and energy, employing optimization techniques that can significantly reduce vehicle traveling time while controlling the speed limits. Zu et al. [109] used macroscopic traffic control to reduce vehicle fuel consumption based on the COPERT model on a freeway traffic network, formulating a convex optimization problem that produces an energy-efficient scenario in a real-time traffic environment. Zegeye et al. [63] introduced a predictive traffic controller using parameterized control policies. The controller relies on MPC and a state feedback mechanism; it optimizes the control-law parameters that determine the control inputs, which significantly reduces computational complexity. The model was validated in a freeway traffic environment.

Model predictive control offers numerous opportunities for controlling traffic lights and limiting vehicle speed. It is compatible with different traffic conditions and models and can handle nonconvex optimization problems. Nevertheless, its computational complexity must be reduced for real-time traffic scenarios; a parameterized MPC can therefore enable MPC-based macroscopic traffic flow control without compromising computational performance [63].

Van den Berg et al. [110] introduced an integrated approach for urban and freeway traffic networks, employing an MPC method whose optimal control inputs are obtained from numerical optimization; simulations show a good reduction in traffic congestion. Tajali and Hajbabaie [111] proposed an MPC approach for handling variations in traffic demand, designing a mathematical model for dynamic speed harmonization in urban traffic networks to enhance traffic flow. A few works also focus on reducing traffic emissions and fuel consumption: De Nunzio et al. [112] presented a method for reducing traffic energy consumption using a macroscopic steady-state analysis. The authors assess the system behavior using boundary conditions based on the timing of traffic lights, together with a traffic control policy that relies on varying the vehicle speed limits.
The effectiveness of the proposed model was demonstrated on a microscopic simulation network.

Liu and Tate [113] presented a method for determining network effects based on an intelligent speed adaptation (ISA) system. The ISA extends a traffic microsimulation model so that ISA behavior is represented throughout the road network. Its effectiveness was evaluated on a real-world traffic network, assessing the impact on traffic congestion and speed distribution; the results show that the ISA model performs well across different traffic conditions. Its main limitation is that simulating ISA on microscopic traffic models requires a large amount of traffic data, and hence substantial computational time. Panis et al. [114] introduced a model for traffic emissions under speed limits. They developed an emission model from empirical measurements for each vehicle emission type; the traffic control model then obtains the instantaneous speed and acceleration of every vehicle traveling on the road network. The model was tested at Ghentbrugge, Belgium.

Learning-based methods have also been used to control speed limits. Zhu and Ukkusuri [115] proposed a speed limit control model for tackling traffic demand uncertainty. They first developed a link dynamics model that simulates traffic flow propagation under speed limit control; they then cast the speed limit problem as a Markov decision process (MDP) and solved it as a real-time traffic control method. A case study on the Sioux Falls network demonstrates the model's effectiveness. In addition, Khondaker and Kattan [116] presented a detailed overview of mechanisms for controlling speed limits.
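To illustrate the learning-based control mentioned above, here is a minimal tabular Q-learning sketch in the spirit of, but far simpler than, Walraven et al. [107]: states are coarse congestion levels, actions are posted speed limits, and the toy dynamics, reward, and constants are all assumptions made for the example.

```python
import random

# Toy speed-limit MDP: states are congestion levels 0 (free flow) .. 4 (jammed);
# actions are posted speed limits. Dynamics and reward are illustrative assumptions.
states = range(5)
actions = [60, 80, 100, 120]  # km/h

def step(state, limit):
    """Assumed dynamics: high limits in heavy congestion worsen it; low limits ease it."""
    drift = 1 if (limit >= 100 and state >= 2) else -1
    next_state = min(4, max(0, state + drift + random.choice([-1, 0, 1])))
    reward = -next_state - 0.01 * abs(120 - limit)  # penalize congestion and slow limits
    return next_state, reward

q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(50000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: q[(state, a)])
    nxt, r = step(state, action)
    # one-step Q-learning update toward the bootstrapped target
    best_next = max(q[(nxt, a)] for a in actions)
    q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
    state = nxt

for s in states:
    print(f"congestion {s}: learned speed limit {max(actions, key=lambda a: q[(s, a)])} km/h")
```

Under these assumed dynamics the learned policy lowers the posted limit as congestion builds, which is the qualitative behavior such controllers are designed to discover.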
### 5.2. Mobile Actuators
This section discusses mobile actuators, which rely on the movement of vehicles themselves to control the traffic around them in the network. The vehicles controlled as mobile actuators aim to reduce traffic emissions and fuel consumption.

Stren et al. [117] proposed a method for improving air quality using autonomous vehicles, examining the potential for emission reductions that propagate through the whole traffic network. They collected velocity and acceleration data in experiments where a single autonomous-capable vehicle dampened traffic waves among roughly 21 human-piloted vehicles. Yang and Jin [118] proposed a control-theoretic formulation based on intervehicle communication: they designed a control variable that follows the subsequent vehicle's speed without changing its average speed, and analyzed one independent and three cooperative green driving strategies. Wu et al. [119] proposed a method for stabilizing traffic flow with an autonomous vehicle. They formulated the problem in terms of string stability and optimal traffic conditions using frequency-domain analysis, and determined traffic stability while enforcing safety limitations on the autonomous vehicle.

Liu et al. [120] presented a country-level evaluation of greenhouse gas emissions, examining the effects of deploying autonomous vehicles, including projected vehicle penetration rates by 2050 and the resulting changes in fuel consumption.
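The single-vehicle wave-dampening experiment above can be caricatured in simulation. The sketch below is a minimal toy, not the controller of Stren et al. [117]: human drivers follow an assumed optimal-velocity car-following model on a ring road, while one controlled vehicle instead tracks the platoon's average speed; every parameter and the control law itself are illustrative assumptions.

```python
import numpy as np

# Toy single-lane ring road: N human-driven cars (optimal velocity model) plus one
# controlled vehicle that tracks a smoothed average speed to damp stop-and-go waves.
N, L = 20, 200.0          # number of vehicles, ring-road length [m]
dt, steps = 0.1, 5000
pos = np.linspace(0, L, N, endpoint=False)
vel = np.full(N, 8.0)
vel[0] += 2.0             # perturbation that seeds a traffic wave

def v_desired(gap):
    """Assumed optimal-velocity function of the headway gap."""
    return 10.0 * (np.tanh(gap / 10.0 - 1.0) + np.tanh(1.0)) / (1.0 + np.tanh(1.0))

controlled = 5            # index of the single controlled ("autonomous") vehicle
for _ in range(steps):
    gap = (np.roll(pos, -1) - pos) % L        # headway to the leader
    acc = 2.0 * (v_desired(gap) - vel)        # human drivers relax toward v_desired
    # the controlled vehicle tracks the platoon's average speed instead
    acc[controlled] = 1.0 * (vel.mean() - vel[controlled])
    # hard safety brake if the controlled vehicle closes in too fast (assumed rule)
    if gap[controlled] < 5.0:
        acc[controlled] = min(acc[controlled], -2.0)
    vel = np.maximum(vel + dt * acc, 0.0)
    pos = (pos + dt * vel) % L

print(f"speed standard deviation after {steps} steps: {vel.std():.3f} m/s")
```

The printed speed dispersion is a crude proxy for wave amplitude; in the cited field experiment, damping such oscillations is what reduced braking events and, with them, fuel use and emissions.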
### 5.3. Dynamic Routing
This section discusses another approach, dynamic routing, which is used to reduce traffic emissions and fuel consumption. It consists of efficiently redistributing traffic flow over the road network by controlling split ratios [13]. The controller first analyzes and predicts the optimal routes for the different traffic flow directions and then communicates them to vehicle users through radio communication devices, variable message signs, and similar channels [53].

The dynamic routing problem is typically posed as a system-optimization problem. Xu et al. [121] proposed a model for integrated traffic control based on MPC. The model minimizes traffic congestion, with the user equilibrium characterized by a density distribution over all used routes; driver information is modeled using adaptive Kalman filtering theory. A case study shows that the model can improve traffic efficiency and reduce the cost of the traffic management system. Luo et al. [82] introduced a route diversion method based on MPC with multiple objectives. They used routes provided by the traffic authority, treated the recommended routes as the control variable, and determined the split ratio from the route recommendations via the driver compliance rate. The diversion route control uses an MPC model based on a parallel tabu search algorithm.

Traffic emissions and energy costs can also influence the selection of dynamic routes: the aim is then to favor routes that significantly reduce traffic emissions and energy consumption. Wang et al. [122] discussed various dynamic road pricing schemes and reviewed techniques used for sustainable transportation systems and smart city applications.
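As a minimal illustration of split-ratio control (far simpler than the MPC formulations of [121] or [82]), the sketch below adjusts the fraction of demand sent down each of two alternative routes with a simple proportional feedback rule until their predicted travel times roughly equalize; the BPR-style delay curves and all constants are assumptions.

```python
# Minimal split-ratio control at a single diverge with two alternative routes.
# Travel-time functions are assumed BPR-style curves; all constants are illustrative.
demand = 3000.0  # veh/h arriving at the diverge

def travel_time(flow, free_time, capacity):
    """Assumed BPR-style link delay: grows steeply as flow nears capacity."""
    return free_time * (1.0 + 0.15 * (flow / capacity) ** 4)

def route_times(beta):
    """beta = split ratio sent to route 1; the remainder goes to route 2."""
    t1 = travel_time(beta * demand, free_time=10.0, capacity=2000.0)
    t2 = travel_time((1.0 - beta) * demand, free_time=12.0, capacity=2500.0)
    return t1, t2

beta = 0.5  # start with an even split
for _ in range(100):
    t1, t2 = route_times(beta)
    # proportional feedback: shift flow toward whichever route is currently faster
    beta = min(0.95, max(0.05, beta - 0.01 * (t1 - t2)))

t1, t2 = route_times(beta)
print(f"split ratio to route 1: {beta:.2f} (times: {t1:.1f} vs {t2:.1f} min)")
```

An MPC-based diversion controller would replace this one-step feedback with an optimization over a prediction horizon and would fold in compliance rates and emission costs, but the control variable, the split ratio, is the same.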
## 6. Open Research Challenges and Recommendations
We wrap up our survey by discussing open research challenges and recommendations, as illustrated in Figure 5. These were identified while reviewing the existing techniques for traffic control modeling. Various challenges remain in traffic control modeling and strategy design, and they require comprehensive research and investigation in order to design sophisticated algorithms that can test these strategies.

Figure 5: Open research challenges and recommendations.
### 6.1. Open Challenges
#### 6.1.1. Challenges in Traffic Control Methodologies
This section discusses the control methodologies of freeway traffic management systems. Traffic control highlights environmental and traffic safety issues, and improving traffic operation remains a complex control problem for the regulation of traffic management systems. In our opinion, researchers should focus on control models grounded in traffic analysis so as to design robust traffic control algorithms that can cope with the complexity of the control system and its objectives.

Several past works demonstrate that feedback control strategies and techniques are reliable schemes for reducing traffic emissions and enhancing traffic safety [72, 77]. These schemes usually require traffic measurements and are then integrated with traffic simulators to assess the performance of the calibrated control parameters.

Optimal control strategies rely on the solution of finite-horizon problems and receding-horizon schemes [48, 71], and model predictive control (MPC) strategies have produced improvements across different types of control methods [64]. However, optimization-based schemes require substantial computational time, which hampers real-time application; running such model-based schemes on adequately powerful computing systems can help resolve their complexity and computational issues. Furthermore, centralized model-based control schemes require traffic state measurements across a large part of the network, which is a critical obstacle to implementing them in practice: equipment cost and computational efficiency are their main drawbacks. To overcome these issues, optimal control schemes can be realized as predictive decentralized control techniques [10]. The ramp metering schemes proposed in Refs. [80] and [48] do not require a large amount of computational time for real-time applications and can also be used to overcome the limitations of feedback schemes.
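To make the receding-horizon idea concrete, the following is a minimal sketch of a generic MPC loop applied to a toy ramp-metering problem; the scalar density model, cost terms, and constants are illustrative assumptions and do not reproduce any controller cited above. The defining pattern is that the whole horizon is optimized at every step but only the first control is applied.

```python
from itertools import product

# Generic receding-horizon (MPC) loop for a toy ramp-metering controller.
# The scalar density model and all constants are illustrative assumptions.
rho_crit = 30.0   # critical density [veh/km]
horizon = 3       # prediction horizon (steps)
rates = [0.0, 0.5, 1.0]  # candidate metering rates (fraction of ramp demand admitted)

def predict(rho, rate):
    """Assumed one-step density dynamics: mainline outflow plus metered ramp inflow."""
    outflow = 0.2 * rho if rho <= rho_crit else 0.2 * rho_crit - 0.1 * (rho - rho_crit)
    return rho - outflow + 8.0 * rate

def cost(rho, rate):
    """Penalize congestion (density above critical) and ramp queuing (low rates)."""
    return max(rho - rho_crit, 0.0) ** 2 + 5.0 * (1.0 - rate)

def mpc_step(rho):
    """Enumerate all rate sequences over the horizon; return only the first action."""
    best_seq, best_cost = None, float("inf")
    for seq in product(rates, repeat=horizon):
        r, total = rho, 0.0
        for rate in seq:
            r = predict(r, rate)
            total += cost(r, rate)
        if total < best_cost:
            best_cost, best_seq = total, seq
    return best_seq[0]

rho = 45.0  # start congested
for k in range(10):
    rate = mpc_step(rho)        # optimize over the whole horizon ...
    rho = predict(rho, rate)    # ... but apply only the first control (receding horizon)
    print(f"step {k}: metering rate {rate:.1f}, density {rho:.1f} veh/km")
```

The exhaustive enumeration here is what makes real MPC expensive; the parameterized and decentralized variants discussed above exist precisely to avoid re-solving this search at every step on a full network.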
#### 6.1.2. Challenges in the Modeling Framework
The selection of a modeling framework remains a critical issue: the framework must be reliable and robust enough to address the traffic control problems at hand. For instance, the pollution emissions and fuel consumption of vehicles bear directly on the control model's ability to estimate and predict traffic flow on the road. In this regard, the first-order macroscopic traffic flow model does not provide satisfactory results and is not the best choice for traffic control aimed at reducing emissions [82]. Vehicle speed, in turn, plays an important role in analyzing and determining the severity of traffic incidents; traffic safety models usually correlate incident occurrence with factors such as traffic flow volume, road characteristics, and vehicle speed. Consequently, a first-order macroscopic traffic flow model can be adopted to analyze and estimate the crash risk associated with traffic incidents.

The second-order macroscopic traffic flow model captures the evolution of traffic speed more faithfully and can be used both to significantly reduce traffic emissions and to enhance traffic and road safety. In past years, various studies have increased the precision of emission estimates and of the average accelerations derived from traffic flow models [48, 63, 64, 80]. Macroscopic simulation tools, more specifically, yield accurate analyses of traffic safety and emissions; however, such tools cannot be applied directly to develop model-based control strategies.

An appropriate selection step should be taken to obtain the required level of detail in the traffic control model, but more detailed models require more information. For instance, microscopic simulation requires parameter settings for vehicle types (e.g., buses and public transport), fuel capacity, road structure, and environmental conditions (temperature, humidity, and air). A dispersion model requires prior knowledge of the traffic evolution: it takes as input the quantity of pollution generated by vehicles, together with other useful traffic information such as fuel consumption, wind direction, and road structure. Traffic safety models usually rely on processing traffic data obtained from loop detectors; the unusual events caused by incidents are difficult to describe and predict, so safety methods correlate crash events with different traffic parameters such as traffic flow and density, driver behavior, and weather conditions.
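To ground the distinction between model orders, the sketch below implements a minimal first-order macroscopic model: a cell transmission model with a triangular fundamental diagram, with all parameters assumed for illustration. Speed here is a static function of density; a second-order model would add a separate dynamic equation for speed, which is what makes it better suited to acceleration-sensitive emission estimation.

```python
# First-order macroscopic traffic flow: a minimal cell transmission model (CTM)
# with a triangular fundamental diagram. All parameters are illustrative.
n_cells = 10
dt, dx = 10.0 / 3600.0, 0.5        # time step [h], cell length [km]
v_free, w = 100.0, 20.0            # free-flow and congestion wave speeds [km/h]
rho_max, q_max = 150.0, 2000.0     # jam density [veh/km], capacity [veh/h]

rho = [20.0] * n_cells
rho[5] = 120.0  # an initial congestion pocket in the middle of the stretch

def flux_between(rho_up, rho_down):
    """Flow across a cell boundary: min of upstream demand and downstream supply."""
    demand = min(v_free * rho_up, q_max)
    supply = min(w * (rho_max - rho_down), q_max)
    return min(demand, supply)

for step in range(200):
    # boundary flows: fixed inflow demand upstream, free outflow downstream
    inflow = [min(1500.0, w * (rho_max - rho[0]))]
    inflow += [flux_between(rho[i - 1], rho[i]) for i in range(1, n_cells)]
    outflow = inflow[1:] + [min(v_free * rho[-1], q_max)]
    # conservation law: density changes with the net flow into each cell
    rho = [r + dt / dx * (fi - fo) for r, fi, fo in zip(rho, inflow, outflow)]

print("densities after 200 steps:", [round(r, 1) for r in rho])
```

Note that emissions estimated from such a model can only reflect density and flow, not the sharp accelerations at the congestion boundary, which is the limitation of first-order models flagged above.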
#### 6.1.3. Challenges in the Control Strategy
The fundamental step of the traffic control framework is to select the type of control strategy best able to achieve both the safety and the sustainability goals discussed in this survey. From the literature review one can observe which traffic control strategies suit which control objectives. For example, ramp metering combined with other control techniques can yield better performance and help achieve sustainability goals; note, however, that many such applications can lead to long queues on the road, thereby increasing emissions and the likelihood of traffic incidents and crashes. Several route guidance approaches have been applied successfully, leading to reduced crash risk [48, 58, 65], and these schemes have also been applied successfully to reduce emissions and fuel consumption. Variable speed limits have been combined with ramp metering to improve traffic flow and reduce emissions [63, 65, 76]; such strategies can improve traffic and road safety by reducing risky interactions among vehicles.

References [71, 77] note that the effectiveness of variable speed limits remains a critical issue that depends on the speed level the vehicles actually accept. We also observed that traffic control issues persist across results obtained from different simulation tools. These traffic control problems aim to reduce environmental impact by improving performance indicators and enhancing safety indicators. Nevertheless, such schemes aim to minimize the number of traffic incidents in freeway environments and do not perform real-time crash analysis.

The aforementioned schemes can be extended into multiclass frameworks and evaluated against traditional traffic control techniques, which allows traffic control parameters to be defined per vehicle class. Pasquale et al. [75] presented a multiclass ramp metering technique: they modeled two types of vehicles in a multiclass framework and determined the class of each vehicle type. Simulation results demonstrate that the method provides feasible directions for a multiclass framework. Pasquale et al. [44] proposed a multiclass routing control algorithm that assigns priorities to specific vehicle classes in a predefined structure. Note that multiclass traffic control modeling requires more accurate modeling strategies than the single-class control model; multiclass safety models can, however, assess the impact of each vehicle class on the total number of incidents. Such multiclass models require a large amount of data for calibration.

Most of the multiclass schemes covered in this survey aim to improve traffic safety, reduce traffic congestion, and achieve sustainability in urban cities. The main advantage of multiclass schemes is the ability to evaluate whether sustainability objectives conflict; it is then necessary to choose reliable cost function parameters in order to obtain good traffic control algorithms when these objectives compete with each other.
Zegeye et al. [63] introduced a method that reduces traffic emissions by minimizing the total time spent; however, it can deteriorate traffic flow, so the objectives conflict. Pasquale et al. [75] investigated reducing travel times and emissions, treating them as nonconflicting sustainability objectives, and implemented control strategies that significantly reduce traffic congestion and improve traffic flow. Lee et al.'s scheme in Ref. [77] demonstrated that improving traffic safety through speed variation may increase vehicle traveling time. In Ref. [48], however, Pasquale indicated that results from multiramp metering control show that mitigating traffic congestion brings significant improvements in vehicle traveling times and in traffic and road safety conditions; nevertheless, the gains across these objectives differ between solutions, and the objectives can still behave competitively.
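When such objectives do compete, a common way to trade them off is weighted-sum scalarization, as in the biobjective routing of [99]. The sketch below applies the idea to a handful of hypothetical control strategies; the strategy names and all numbers are invented for illustration.

```python
# Weighted-sum scalarization of two conflicting objectives (emissions vs. travel
# time) over a small set of candidate control solutions; numbers are illustrative.
candidates = {
    "aggressive metering": {"emissions": 4.0, "travel_time": 9.0},
    "mild metering":       {"emissions": 6.0, "travel_time": 7.0},
    "speed limits only":   {"emissions": 5.0, "travel_time": 8.0},
    "no control":          {"emissions": 9.0, "travel_time": 6.0},
}

def scalarized_cost(metrics, w):
    """Weighted sum J = w * emissions + (1 - w) * travel_time."""
    return w * metrics["emissions"] + (1.0 - w) * metrics["travel_time"]

# Sweeping the weight traces out the trade-off between the two objectives.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = min(candidates, key=lambda name: scalarized_cost(candidates[name], w))
    print(f"weight on emissions {w:.2f}: best strategy -> {best}")
```

Choosing the weight is exactly the "reliable cost function parameters" problem noted above: different weights select different strategies, so the choice encodes how the conflicting sustainability objectives are valued.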
### 6.2. Recommendations for Obtaining Sustainability Goals
This section discusses recommendations that could help achieve sustainability goals in the freeway traffic environment. Developments in Internet of Vehicles technology and automated-driving technology can significantly improve traffic safety and reduce traffic emissions, and these technologies should be adopted to meet the future sustainability goals of traffic systems.
#### 6.2.1. Technology Transformations
The automotive industry has undergone significant changes in recent years and is shifting its focus toward electric and automated vehicles, since traffic safety, environmental, and sustainability issues are harder to address with traditional vehicles. Car manufacturers now produce automated vehicles that embed automated components and comprise various intelligent features. These features can significantly improve road safety, reduce fuel consumption, and enhance the overall driving experience.
#### 6.2.2. Sensing Equipment Technologies
Vehicular technologies such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [70, 123–125] provide a connected environment in which vehicles communicate directly with other vehicles, infrastructure, and the network, perform various control operations, and significantly reduce traffic emissions [126]. Recently, several control strategies have been applied in logistics and freight transportation that employ truck platooning policies to substantially reduce fuel consumption [127, 128].
#### 6.2.3. Connected and Automated Vehicles Technologies
Connected and autonomous vehicle (CAV) technologies bring significant improvements to traffic control and management systems, including reducing the collision risks caused by driver negligence. CAVs can improve self-driving abilities and provide fast, efficient communication between vehicles, which reduces vehicle travel time, improves road and traffic safety, reduces traffic emissions and energy consumption [129], and provides speed guidance in different traffic environments [130]. The CAV is considered an essential product of intelligent transportation systems (ITS) and comprises features such as an advanced decision-making system, a recognition model, and a control model [131]. These features help drivers make safe driving decisions while maintaining road safety and reducing environmental impacts [132].
#### 6.2.4. Machine Learning Methods
Recently, machine learning (ML) approaches have gained significant attention from the research community because of their ability to analyze data at scale, which can help manage large data operations, reduce vehicle emissions, and limit fuel consumption. Neural networks, such as wavelet neural networks [133], are widely used ML methods for estimating traffic emissions and the fuel consumption required by a vehicle. Reinforcement learning (RL) methods have been applied successfully to reduce traffic congestion and emissions and can be deployed according to the actuator type.
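As a minimal illustration of the neural estimation idea (a plain feedforward network rather than the wavelet networks of [133]), the sketch below fits fuel consumption as a function of speed and acceleration on synthetic data; the physics-inspired target function and all constants are assumptions.

```python
import numpy as np

# Minimal neural sketch: estimate fuel rate from (speed, acceleration).
# One hidden layer, trained by plain gradient descent on synthetic data.
rng = np.random.default_rng(0)

# Synthetic training data from an assumed physics-inspired fuel-rate proxy.
v = rng.uniform(0.0, 30.0, size=(1000, 1))      # speed [m/s]
a = rng.uniform(-3.0, 3.0, size=(1000, 1))      # acceleration [m/s^2]
X = np.hstack([v, a]) / np.array([30.0, 3.0])   # normalized network inputs
y = 0.5 + 0.02 * v + 0.1 * np.maximum(a, 0) * v / 10.0  # fuel rate [mL/s], assumed

# One-hidden-layer network: X -> tanh -> linear output.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backward pass (mean squared error loss)
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"training MSE: {float((err**2).mean()):.4f} (mL/s)^2")
```

A model of this kind, once trained on measured drive cycles rather than synthetic data, can serve as the fast emission estimator that the control strategies surveyed above need in their cost functions.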
## 6.1. Open Challenges
### 6.1.1. Challenges in Traffic Control Methodologies
This section discusses the control methodologies of freeway traffic management system. The traffic control highlights environmental and traffic safety issues. An improvement in traffic operation remains a complex control problem for the regulation of traffic management systems. In our opinion, researchers should focus on control models based on traffic analysis to design and develop robust traffic control algorithms in order to solve the complexity of the control system and its objectives.In the past, several works have been presented, which demonstrates that the feedback control strategies and techniques were reliable schemes for reducing traffic emission and enhancing traffic safety [72, 77]. These schemes usually require traffic measurements and then integrate with traffic simulators to assess the performance of calibrated control parameters.The optimum control strategy relies on the solution of finite horizon problems and receding horizon schemes [48, 71]. Also, the model control predictive (MPC) strategies produce better improvement in different types of control methods [64]. However, the optimization-based schemes required numerous amounts of computational time to process the real-time application. These schemes could be based on models using an adequate power system, which can resolve the complexity and computational issue of optimization-based schemes. Furthermore, centralized and model-based control schemes use large traffic state measurements on the network, which is considered a critical issue for implementing traffic control schemes for practical applications. The equipment costs and computational efficiency are the main drawbacks of these schemes. To overcome these issues, optimum control schemes can be processed as predictive decentralized control techniques [10]. In Ref [80] and Ref [48], the authors proposed ramp metering schemes, which do not require a large amount of computational time for real-time applications. Also, these can be used to overcome the limitation of feedback schemes.
#### 6.1.2. Challenges in the Modeling Framework

The selection of a modeling framework remains a critical issue: the framework must be reliable and robust enough to address traffic control problems. For instance, the pollution and fuel consumed by vehicles bear directly on the ability of the control model to estimate and predict traffic flow on the road. In this regard, the first-order macroscopic traffic flow model does not provide satisfactory results and is not the best choice for traffic control aimed at reducing emissions [82]. Vehicle speed plays an important role in analyzing and determining the severity of traffic incidents, and traffic safety models usually correlate the occurrence of an incident with factors such as traffic flow volume, road characteristics, and vehicle speed. Consequently, a first-order macroscopic traffic flow model can still be adopted to analyze and estimate the crash risk associated with traffic incidents.

The second-order macroscopic traffic flow model provides a robust description of the evolution of traffic speed and can be used to significantly reduce traffic emissions and enhance traffic and road safety. In past years, various studies have improved the precision of emission estimation by deriving average acceleration from the traffic flow models [48, 63, 64, 80]. More specifically, macroscopic simulation tools yield accurate analyses of traffic safety and emissions; however, such tools cannot be applied directly to develop model-based control strategies.

An appropriate selection step should be taken to obtain the necessary level of detail in the traffic control model, bearing in mind that more detailed models require more information. For instance, microscopic simulation requires parameter settings for vehicle types (e.g., buses and public transport), fuel capacity, road structure, and environmental conditions (temperature, humidity, and air quality). A dispersion model requires prior knowledge of the traffic evolution: it maps its inputs to the pollution quantity generated by vehicles, together with other useful information such as fuel consumption, wind direction, and road structure. Traffic safety models usually rely on traffic data obtained from loop detectors, yet unusual events caused by incidents are difficult to describe and predict from such data. Therefore, safety methods correlate crash events with different traffic parameters such as traffic flow, traffic density, driver behavior, and weather conditions.
#### 6.1.3. Challenges in the Control Strategy

A fundamental step in the traffic control framework is to select the type of control strategy best able to achieve both the safety and sustainability goals discussed in this survey. From the literature review, one can identify which traffic control strategies suit which control objectives. For example, ramp metering combined with other control techniques can deliver better performance and help achieve sustainability goals; note, however, that many such applications can create long queues on the road, thereby increasing emissions and the likelihood of traffic incidents and crashes. Several route guidance approaches have been applied successfully, reducing crash risk [48, 58, 65] as well as emissions and fuel consumption. Variable speed limits have been combined with ramp metering to smooth traffic flow and reduce emissions [63, 65, 76]; these strategies can also improve traffic and road safety by reducing risky interactions among vehicles.

References [71, 77] note that the effectiveness of variable speed limits remains a critical issue that depends on the speed levels that vehicles actually accept. We also observed that traffic control issues persist when results are obtained from different simulation tools. The traffic control problems reviewed here aim to reduce environmental impact, as captured by performance indicators, and to enhance safety indicators; nevertheless, these schemes aim to minimize the number of traffic incidents in freeway environments rather than to perform real-time crash analysis.

The aforementioned schemes can be extended into multiclass frameworks and evaluated against traditional traffic control techniques, which permits the definition of control parameters per vehicle class. Pasquale et al. [75] presented a multiclass ramp metering technique: they modeled two vehicle types in a multiclass framework and determined the class of each vehicle type, and their simulation results demonstrated that the method provides a feasible direction for multiclass frameworks. Pasquale et al. [44] proposed a multiclass routing control algorithm that assigns priorities to specific vehicle classes in a predefined structure. Note that multiclass traffic control requires more accurate modeling than single-class control. Multiclass safety models can also assess the impact of each vehicle class on the total number of incidents; however, this kind of multiclass model requires a large amount of data for calibration.

Most of the multiclass schemes covered in this survey aim to improve traffic safety, reduce traffic congestion, and achieve sustainability in urban cities. A key benefit of multiclass schemes is the ability to evaluate whether sustainability objectives conflict; reliable cost function parameters must then be chosen to balance these objectives against each other in the control algorithm. Zegeye et al. [63] introduced a method that reduces traffic emissions while accounting for the total time spent; however, reducing emissions can deteriorate traffic flow, so the objectives may conflict. Pasquale et al. [75] investigated reducing travel times and emissions as nonconflicting sustainability objectives and implemented control strategies that significantly reduce congestion and improve traffic flow. In Ref. [77], Lee et al.'s scheme demonstrated that improving traffic safety through speed variation may increase vehicle travel time. In Ref. [48], however, Pasquale et al. showed with multiramp metering control that mitigating traffic congestion brings significant improvements to vehicle travel times and to traffic and road safety conditions; still, when the objectives compete, the improvement achievable in each of them is smaller.
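The balancing of such objectives is usually encoded as a weighted finite-horizon cost. A generic form (not the exact functionals of [48, 63, 75]) is

```latex
J(u) = \sum_{k=0}^{K-1} \Bigl[ \alpha\,\mathrm{TTS}(k) + \beta\,\mathrm{TE}(k) + \gamma\,R(k) \Bigr],
```

where u collects the control inputs (metering rates, speed limits, routing splits), TTS(k) is the total time spent, TE(k) the total emissions, R(k) a crash-risk index, and the weights α, β, γ ≥ 0 determine how conflicting objectives are traded off against each other.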
### 6.2. Recommendations for Obtaining Sustainability Goals

This section discusses recommendations for attaining sustainability goals in the freeway traffic environment. Developments in Internet-of-vehicles and automated-driving technology can significantly improve traffic safety and reduce traffic emissions, and these technologies should be adopted to meet the future sustainability goals of traffic systems.
#### 6.2.1. Technology Transformations

The automotive industry has undergone significant changes in recent years and is shifting its focus toward electric and automated vehicles, since traffic safety, environmental, and sustainability issues are harder to address with traditional vehicles. Car manufacturers now produce automated vehicles embedded with automated components and various intelligent features. These features can significantly improve road safety, reduce fuel consumption, and enhance the overall driving experience.
#### 6.2.2. Sensing Equipment Technologies

Vehicular technologies such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [70, 123–125] provide a connected environment in which vehicles communicate directly with other vehicles, infrastructure, and the network, perform various control operations, and significantly reduce traffic emissions [126]. Recently, several control strategies have been applied in logistics and freight transportation that employ truck platooning policies to substantially reduce fuel consumption [127, 128].
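A common building block of such platooning policies is a constant-time-gap spacing law for each follower. The sketch below is a generic CACC-style controller under assumed gains, not a controller from the cited studies.

```python
def platoon_follower_accel(gap, v_self, v_lead,
                           s0=5.0, h=0.8, kp=0.45, kd=0.25, a_max=2.0):
    """Constant-time-gap follower law (generic sketch).
    gap: bumper-to-bumper spacing [m]; speeds in [m/s].
    s0: standstill distance, h: time headway [s]; gains kp, kd are assumed."""
    desired_gap = s0 + h * v_self           # desired spacing grows with own speed
    spacing_error = gap - desired_gap
    accel = kp * spacing_error + kd * (v_lead - v_self)
    return max(-a_max, min(a_max, accel))   # saturate to comfortable limits

# Example: follower 1 m short of its desired gap while closing at 0.5 m/s.
print(platoon_follower_accel(gap=24.0, v_self=25.0, v_lead=24.5))
```

The short, speed-dependent gaps that V2V communication makes safe are what yield the aerodynamic fuel savings reported for truck platoons.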
#### 6.2.3. Connected and Automated Vehicles Technologies

Connected and autonomous vehicle (CAV) technologies bring significant improvements to traffic control and management systems, including reducing the risk of collisions caused by driver negligence. CAVs offer self-driving capabilities and fast, efficient vehicle-to-vehicle communication, which reduce travel time, improve road and traffic safety, reduce traffic emissions and energy consumption [129], and provide speed guidance in different traffic environments [130]. The CAV is considered an essential component of intelligent transportation systems (ITS) and comprises features such as advanced decision-making, recognition, and control modules [131]. These features help drivers make safe driving decisions while maintaining road safety and reducing environmental impacts [132].
#### 6.2.4. Machine Learning Methods

Recently, machine learning (ML) approaches have gained significant attention from the research community because of their ability to analyze large volumes of traffic data, which can help reduce vehicle emissions and limit fuel consumption. Neural networks, such as wavelet neural networks [133], are widely used ML methods for estimating traffic emissions and the fuel consumption of a vehicle. Reinforcement learning (RL) methods have been applied successfully to reduce traffic congestion and emissions and can be tailored to the available actuators (e.g., ramp meters or variable speed limits).
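As a concrete (and deliberately minimal) illustration of the RL formulation, the skeleton below implements tabular Q-learning for a metering-style agent. The state discretization, the three-level action set, and the reward signal are illustrative assumptions rather than a method from the cited works; deep RL variants replace the table with a neural network.

```python
import random
from collections import defaultdict

ACTIONS = [0, 1, 2]        # illustrative: low / medium / high metering rate
Q = defaultdict(float)     # Q[(state, action)]; states are discretized densities

def choose_action(state, eps=0.1):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One Q-learning step; the reward would penalize emissions and delay."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative interaction: observe a congested state, act, receive reward.
a = choose_action("congested")
q_update("congested", a, reward=-2.0, next_state="free_flow")
```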
## 7. Conclusion
This survey has presented a comprehensive investigation of traffic control strategies for the freeway traffic environment, based on a thorough review of recent papers. Such strategies play an important role in achieving sustainable objectives by reducing traffic emissions, collision risk, and fuel consumption. The literature review shows that traffic control strategies have been studied intensively in recent years, reflecting strong interest from the research community driven by rapid advances in electronics and communication devices; this transformation encourages both researchers and the automobile industry to design robust traffic control systems. We first introduced traffic control modeling approaches, which provide a sound basis for achieving efficient and sustainable mobility. We then discussed various control strategies that can help researchers design robust freeway traffic controllers; these strategies can improve traffic flow, strengthen the traffic management system, and reduce congestion. A comprehensive analysis of existing methods for vehicle control design and traffic control design was presented; adopting these strategies can help reduce the energy consumed by vehicles. Finally, open research challenges for traffic control in the freeway network were discussed, together with recommendations for achieving sustainable goals. The survey reveals a need for focused research on traffic control systems that can address safety challenges such as traffic incidents and crashes while also reducing environmental effects. In short, this survey covers traffic control techniques in the freeway environment, fills gaps left by existing surveys, and incorporates recent trends and approaches in traffic control.
---
# A Comprehensive Review on Traffic Control Modeling for Obtaining Sustainable Objectives in a Freeway Traffic Environment

**Authors:** Muhammad Sameer Sheikh; Yinqiao Peng

**Journal:** Journal of Advanced Transportation
(2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1012206

---
## Abstract
Traffic control strategy plays a significant role in obtaining sustainable objectives because it not only improves traffic mobility but also enhances traffic management systems. Such strategies have been developed and applied by the research community in recent years, yet they still present challenges and issues that require the attention of researchers and engineers. Recent technological developments toward connected and automated vehicles are beneficial for improving traffic safety and achieving sustainable goals. A survey of traffic control techniques is therefore needed that captures recent developments in traffic control strategy and supports the pursuit of sustainable goals. This survey presents a comprehensive investigation of traffic control techniques, carefully reviewing existing methods from a new perspective and examining the traffic control strategies that play an important role in achieving sustainable objectives. First, we present traffic control modeling techniques that provide a robust basis for efficient and sustainable mobility and can help enhance traffic flow in a freeway environment, and we discuss traffic control strategies that help researchers and practitioners design robust freeway traffic controllers. Second, we present a comprehensive review of recent state-of-the-art methods on vehicle control design, followed by traffic control design; both aim to reduce traffic emissions and vehicle energy consumption. Finally, we present open research challenges and outline recommendations that could be beneficial for obtaining sustainable goals in traffic systems and that help researchers understand the technical aspects of deploying traffic control systems.
---
## Body
## 1. Introduction
Nowadays, environmental pollution caused by transportation systems has received significant attention from the research community [1, 2]. The significant increase in population and the economic expansion of developing economies are considered the main drivers of air pollution and rising energy demand [3]. A report of the Ministry of Ecology and Environment of China revealed that traffic emissions have become a major source of air pollution and can cause disease [4]. They are considered one of the main causes of premature deaths [5], most of which result from prolonged exposure to substances such as carbon dioxide (CO2) and nitrogen oxides (NOx).

According to the US Energy Information Administration, the automobile industry consumes 55% of the total fuel in the world [6]. This share could increase over the next couple of decades because of the growing number of vehicles on the road. The issues of traffic emissions and sustainability have therefore drawn increasing attention from the ecological and environmental committees of many countries worldwide.

Recently, the sustainability issue has been addressed in various aspects of human activity [7, 8]. Obtaining sustainability objectives is a complex task with intensive requirements that must be fulfilled. The development of sustainable cities is one of the objectives identified during the United Nations meeting held in 2017 and must be accomplished as part of the sustainable goals of the 2030 Agenda [9]. To accomplish the 2030 Agenda, road transportation systems need to adopt social equity and safer traffic mobility by reducing air pollution and providing environmentally friendly vehicle movement.

Sustainable transportation systems have changed people's lives through improved technologies. The core concept of traffic control for sustainable mobility involves protecting the environment and improving economic and social development [10]. The aim of sustainable transport is to improve the transportation system and enhance people's lives by providing better access to services and facilities. Various issues related to sustainability in transportation have been investigated by the research community over the last couple of decades [1, 2].

Achieving cleaner and sustainable transportation could significantly reduce traffic accidents and congestion. In particular, traffic accidents are the main cause of nonrecurrent traffic congestion and also cause serious and fatal injuries. A road safety report by the World Health Organization indicates that 1.35 million people die in road traffic accidents every year [11] and that road accidents are a leading cause of death among young people aged between 15 and 29 years. Traffic accidents are therefore a critical issue that can cause serious health problems and affect a country's economy. The fatality rate can be reduced by taking precautionary measures on both vehicles and roads. A rapid increase in traffic flow leads to greater congestion, which increases vehicle travel times and makes the traffic system less reliable for drivers. However, existing algorithms cannot simply be modified to increase traffic flow.
In this regard, a robust traffic control and management system is needed that can make effective use of existing road conditions without requiring substantial new traffic infrastructure and that can comprehensively analyze the impact of system challenges such as traffic management and security [12].

Traffic control methods are receiving continuous attention from transport researchers and practitioners. They aim to improve road safety by significantly reducing traffic congestion and accidents causing severe injuries, and to provide cleaner, more sustainable transportation by reducing traffic emissions. In recent years, various traffic control strategies addressing sustainability issues have been studied [10, 13, 14]. These studies can help nurture traffic control methods and strategies for the freeway environment and can be applied to improve traffic safety and reduce environmental impacts. The traffic control strategy aims to reduce traffic congestion caused by various incidents in the freeway traffic environment. The development of traffic mobility systems for passengers and freight has contributed significantly to economic prosperity. However, it can also worsen traffic mobility, causing frequent congestion, long queues on the road, increased travel times, and road rage incidents. Frequent congestion frustrates drivers, who spend considerable time reaching their destinations that could otherwise be used for more productive activities [15, 16].

Traffic congestion is classified into recurrent and nonrecurrent congestion. Nonrecurrent congestion events are usually caused by traffic accidents, signal malfunctions, and other events that disrupt normal traffic flow and reduce road capacity [17]. Both recurrent and nonrecurrent congestion sharply increase traffic volumes on the road. The intensive use of fossil fuels and the large number of vehicles on the road are the main sources of harmful emissions [10]. It is widely evident that vehicular traffic contributes significantly to traffic emissions and fuel consumption, with substances such as carbon dioxide, carbon monoxide, volatile organic compounds, nitrogen oxides, and particulate matter being the main pollutants. Some of these substances dissipate into the environment, leading to air pollution and smog that undermine sustainability; they can also cause severe health problems such as respiratory and cardiovascular diseases [18]. The effect of these factors therefore needs to be reduced in order to achieve cleaner, healthier, and sustainable transport [19]. Despite rapid technological development in recent years, traffic emissions from fossil fuels are still increasing because of the large number of vehicles on the road [20]. Limiting vehicle emissions is therefore necessary for a sustainable smart city.
Although significant technological achievements have reduced vehicle emissions and fuel consumption in recent years [21], fossil-fuel use still needs to be brought within standard ranges, which can be achieved by using cleaner technologies that reduce traffic pollutants.

Much of the freeway traffic environment cannot meet current mobility demands, resulting in more road users, long queues on the road, increasing traffic emissions, large bottlenecks, and security concerns. The design and development of safety models remains a focal issue for researchers, since traffic accidents cause nonrecurrent congestion: congestion in one or more lanes reduces capacity, and deceleration is caused by drivers observing accidents or participating in rescue operations [22]. There is therefore a need to improve existing traffic control models and to recast them from a new perspective to achieve sustainable goals. In addition, an effective road network can help reduce traffic accidents and congestion, thereby reducing fuel consumption.

A couple of surveys addressing traffic control and strategies with sustainability issues were presented in 2019 [10, 13], and there have been significant developments in traffic control strategies since then. To the best of our knowledge, there is no comprehensive survey on traffic control and modeling for obtaining sustainable transportation. Pasquale et al. [10] surveyed traffic control strategies and the associated sustainability and control issues for the freeway environment; they discussed control strategies in terms of sustainability objectives, comprehensively covered traffic emission and safety models, and highlighted various research challenges. Othman et al. [13] presented a survey on traffic modeling and control strategies for a sustainable environment; they reviewed existing modeling techniques for estimating traffic emissions and energy consumption, reviewed transportation issues and traffic control strategies for the urban environment, and outlined the challenges and future directions of eco-traffic management systems.

A rapid increase in the number of vehicles on the road often causes traffic incidents and congestion, resulting in significant increases in traffic emissions and fuel consumption [23]. In recent years, several traffic control algorithms have been proposed for the freeway environment. However, the large number of vehicles on the road each day demands higher traffic mobility, improved road structure, and an enhanced traffic management system. Thus, a new set of traffic control strategies should be introduced that achieves these objectives while minimizing traffic emissions and fuel consumption.

Traffic control strategies play an important role in obtaining sustainable goals because they not only improve traffic mobility but also enhance traffic management systems. In this paper, we carry out a comprehensive review of published works that provide different solutions for the traffic control system.
The purpose of this survey is to provide a roadmap for those who want to conduct research on traffic control and strategy in a freeway environment. This survey comprehensively discusses not only traffic control modeling techniques but also traffic control strategies. We classify the traffic control modeling techniques into three categories: traffic flow models, traffic emission and fuel consumption models, and safety models. These techniques can help enhance traffic flow in a freeway traffic environment. We comprehensively discuss traffic control strategies in the freeway environment, which provide useful information for enhancing urban traffic and safety management systems. We then present a comprehensive review of recent state-of-the-art methods on the vehicle control design strategy and the traffic control design strategy. Finally, we outline open research challenges and recommend traffic control strategies for achieving sustainable goals. Compared with previous surveys, the contributions of this paper are as follows:

(i) We present a comprehensive review of different traffic control modeling techniques, which provide a reliable basis for obtaining efficient traffic and sustainable mobility and can improve traffic flow in the freeway traffic environment.

(ii) We discuss various traffic control strategies that help researchers and practitioners design a robust traffic controller and provide useful traffic information for improving traffic flow and the overall performance of the traffic management system.

(iii) We comprehensively discuss recent state-of-the-art techniques on the vehicle control design strategy and the traffic control design strategy; adopting these strategies can help reduce the energy consumption required by a vehicle.

(iv) We discuss open research challenges that help researchers tackle issues in designing a traffic control system, and we recommend control strategies for obtaining sustainable objectives in traffic systems.

(v) In sum, the proposed survey fills the gap left by existing surveys by presenting a comprehensive discussion of traffic control modeling techniques and traffic control strategies, helping researchers and practitioners choose the best research directions for their future work.

The rest of this survey is organized as follows. Section 2 presents traffic control modeling, covering traffic flow models, traffic emission and fuel consumption models, and safety models; these models can perform well in real-time applications and provide accurate estimation of traffic flow and dynamics. Section 3 presents different traffic control strategies in the freeway traffic environment. Section 4 discusses the vehicle control design strategy for reducing traffic emissions and fuel consumption, whereas Section 5 discusses traffic control design strategies. Section 6 presents research challenges and recommendations for the traffic control system. Finally, Section 7 concludes the study.
## 2. Traffic Control Modeling
Identifying appropriate traffic control measures is key to obtaining efficient traffic and sustainable mobility. Various types of control actions have been used to regulate traffic flow in different traffic environments [24]: ramp management through traffic lights at on- and off-ramps [25], mainstream control, lane-changing warnings, incident notifications, route guidance at intersections, and so on. Traffic control modeling techniques can be classified as traffic flow models, traffic emission and fuel consumption models, and safety models, as shown in Figure 1.

Figure 1: Methods of traffic control modeling.

A traffic control modeling framework is useful for developing various control measures and needs to be defined in terms of the traffic flow description and urban sustainability-related issues. Figure 1 shows a block diagram of freeway traffic control methods, including traffic flow models and traffic safety models. The traffic control mechanism should perform well in real-time applications and provide accurate estimation of traffic flow and dynamics. Note that traffic safety depends on the characteristics and features of traffic flow; these can be obtained from various traffic models, and more input information is required to design a robust traffic safety system.

Most safety models analyze crash risk based on road features, weather conditions, and similar factors. The validation and calibration of safety models remains a critical issue because it requires collecting a large amount of traffic data over a long period in the freeway environment, owing to the rarity of the events that lead to traffic incidents. Researchers should therefore focus on choosing the optimum traffic model, one that provides accurate estimation and detection of events while keeping the system computationally efficient.
### 2.1. Traffic Flow Modeling Technique
In this section, we discuss traffic flow modeling schemes. Traffic flow (TF) models capture the dynamic behavior of real traffic systems through mathematical relationships. In an intelligent traffic management system, traffic flow prediction can be used for traffic planning, improving traffic and road safety, and simulating specific control measures [10]. Since the pioneering work of Lighthill and Whitham [26], a wide range of traffic flow models has been developed for different fields of application. Traffic models can be classified according to different criteria [27, 28]. Figure 2 shows the classification of traffic flow models into microscopic, macroscopic, and mesoscopic models.
Figure 2: Traffic flow models.

Traffic flow models are thus classified as microscopic, macroscopic, or mesoscopic [28, 29]; these model families are distinguished from each other by their level of detail.
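For orientation, the macroscopic description that originates with Lighthill and Whitham [26] rests on the conservation of vehicles. In its standard continuous (LWR) textbook form,

```latex
\frac{\partial \rho(x,t)}{\partial t}
  + \frac{\partial}{\partial x}\Bigl[\rho(x,t)\, v\bigl(\rho(x,t)\bigr)\Bigr] = 0,
```

where ρ(x, t) is the traffic density and v(ρ) is an equilibrium speed-density relation. The discrete models discussed below can be viewed as numerical schemes for equations of this kind.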
#### 2.1.1. Microscopic Traffic Models
Microscopic models are computer-based models that represent the behavior of each vehicle and its driver in a road network [30, 31]. Their output depends on the number of generated vehicles, the defined network routing, and the evaluated vehicle behavior; because of this variability, the model must be run several times to obtain reliable results. Microscopic models are very accurate and usually run on simulation platforms; however, they can be computationally expensive when applied to control operations [10].
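As a concrete example of the car-following laws that drive such simulators, the following is a minimal implementation of the widely used Intelligent Driver Model (chosen here for illustration; it is not a model singled out by this survey, and the parameter values are textbook-style defaults):

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model: acceleration of a follower [m/s^2].
    v, v_lead: follower and leader speeds [m/s]; gap: net spacing [m].
    v0: desired speed, T: time headway, a/b: max accel. / comfortable decel.,
    s0: standstill distance. Parameter values are illustrative defaults."""
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: follower at 25 m/s closing on a leader at 20 m/s with a 30 m gap
# yields a strong deceleration.
print(idm_acceleration(25.0, 20.0, 30.0))
```

Updating every vehicle with such a law at each time step is what makes microscopic simulation accurate but computationally heavy for control.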
#### 2.1.2. Macroscopic Traffic Models
Macroscopic traffic flow models are mathematical models that represent aggregate traffic dynamics such as traffic density, flow, and the traffic stream. They are obtained by aggregating microscopic traffic flow models, converting individual-vehicle characteristics into system-level characteristics [30]. Macroscopic models offer flexible calibration and are computationally cheaper than microscopic models [10].

Macroscopic models are further categorized into continuous and discrete traffic models; discrete models are the ones commonly used in traffic networks. Discrete macroscopic models can be further divided according to the number of state variables they accommodate [10]. First-order macroscopic traffic flow models are the simplest and describe the dynamics of aggregate vehicle density, which represents the traffic volume [27]. The most commonly used first-order discrete model is the cell transmission model (CTM), which has been widely adopted by the research community over the last decades [32, 33]. The CTM is a nonlinear model commonly used for control applications [34, 35].

Second-order macroscopic traffic flow models consist of two dynamic equations, the first describing density and the second the mean vehicle speed [36]. METANET is one of the most established discrete second-order models [37]. It is a nonlinear model used for control applications; however, it is more complex and computationally expensive than the CTM. First-order and second-order models have also been extended to represent the heterogeneous features of traffic flow, leading to multiclass traffic models [10]. These discriminate user categories by vehicle type (e.g., car, truck, and bus) and capture relevant features that single-class models cannot.

Recently, various multiclass discrete first-order models have been proposed. Roncoli et al. [38] introduced a first-order multilane macroscopic traffic flow model for the motorway environment; they extended the CTM dynamics and considered scenarios such as lane changes to compute lateral and longitudinal traffic flows, obtaining good accuracy on real-time traffic data. Liu et al. [39] integrated bus-class vehicles into the CTM, applying the BUS-CTM on road links to capture comprehensive network information; numerical simulations show reliable performance compared with traditional CTM variants. Qian et al. [40] proposed a macroscopic heterogeneous traffic flow model for controlling traffic mobility, considering vehicle classes that follow homogeneous car-following behaviors and vehicle attributes. Boyles and Boyles [41] modeled arbitrary shared-road situations using the CTM; their model relies on the variation of traffic capacity and backward wave speed with the class proportions within each cell and performs better when the proportion of autonomous vehicles is higher.

Several methods have been proposed for discrete multiclass second-order models. Deo et al. [42] extended METANET to heterogeneous traffic flow by defining the features and class of each vehicle. Liu et al. [43] proposed a multiclass METANET model, an extension of the single-class METANET, and used a predictive control technique for online traffic control; simulations show better performance than the single-class METANET model. Pasquale et al. [44] proposed a multiclass control technique for freeway traffic networks, combining ramp metering and route guidance to reduce emissions; simulation results show that the method provides an effective control framework for different vehicle types.
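For orientation, the standard single-class textbook form of the CTM update mentioned above, for a cell i of length L_i and sample time T, is

```latex
\rho_i(k+1) = \rho_i(k) + \frac{T}{L_i}\,\bigl[q_{i-1}(k) - q_i(k)\bigr],
\qquad
q_i(k) = \min\Bigl\{ v_f\,\rho_i(k),\; Q_i,\; w\,\bigl[\rho_{\max} - \rho_{i+1}(k)\bigr] \Bigr\},
```

where ρ_i is the density of cell i, q_i the flow leaving it, v_f the free-flow speed, Q_i the cell capacity, w the backward (congestion) wave speed, and ρ_max the jam density. This is the generic form, not the specific multiclass variants of [38–41].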
#### 2.1.3. Mesoscopic Traffic Models
Mesoscopic traffic models provide an intermediate level of detail, describing vehicle flow in terms of probability distributions; they include cluster models and gas-kinetic models.

Traffic models are also distinguished as continuous or discrete. In continuous models, space and time are continuous and the system dynamics are expressed with differential equations; in discrete models, space and time are discretized and difference equations describe the dynamics. Discrete models are usually used for real-time control schemes in freeway traffic networks, while in recent years researchers have also focused on continuous microscopic models for controlling the traffic flow system.
### 2.2. Traffic Safety Modeling Technique
In recent years, several safety models have been proposed with the aim of designing systems that improve traffic and road safety. The design and development of safety models remains a focal issue for researchers because traffic accidents cause nonrecurrent congestion: congestion in one or more lanes reduces capacity, and deceleration is caused by drivers observing accidents or participating in rescue operations [22].

Recently, various studies have focused on the statistical analysis of historical crash data in order to determine the specific traffic conditions and other factors that lead to an incident, such as road structure, driver behavior, and environmental factors [45]. Lord et al. [46] examined the correlation between traffic safety levels and traffic conditions in a freeway environment, discussing the relationship between crashes and traffic data, such as flow and density, at a Canadian site. Potts et al. [47] first studied the relationship between traffic safety and traffic density. In Ref. [48], Pasquale et al. introduced a risk indicator that can estimate the number of crashes in a freeway traffic environment within a specific time horizon; as shown there, the index can be added as an objective in the cost function of the control problem. The number of crashes is obtained by combining two terms related to the on-ramps and the mainstream. Ramp control may lead to long queues, which can increase the number of crashing vehicles at on-ramp sites [10].

Yeo et al. [49] introduced a method to examine the relationship between traffic states and crashes in the freeway environment. They first characterized different traffic states according to their characteristics and patterns for each freeway network and then integrated the crash data with the traffic states based on upstream and downstream traffic. The method was tested on a 32-mile section of the California I-880 freeway, and the results show that it captures crash involvement across different traffic states. Chang and Xiang [50] analyzed the probability of a crash as a function of traffic flow. Golob et al. [51] examined different safety levels in the freeway environment, using data from single loop detectors to monitor different traffic conditions; their study covered over 1700 accidents on freeways in Orange County, California. Lee et al. [52] studied the characteristics of traffic flow that result in crashes (crash precursors) in the freeway environment, using data from 38 loop detectors on an expressway in Toronto; the results show that crash potential can be determined from precursors collected in real-time data. Pasquale et al. [48] derived a risk indicator intended mainly for traffic control applications. The authors defined a nonlinear optimal control problem that aims to limit the number of incidents and crashes, and they developed a global safety index that quantifies incidents and crashes as a function of the current traffic state in the freeway environment; the index complements performance indicators used to evaluate traffic delay and queue length. The resulting traffic control strategy is a nonlinear control problem in the control variables, which can be solved by employing a gradient-based algorithm. A drawback is that the proposed control may lead to long queues at both on-ramps and off-ramps, which increases the risk of crashes.
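The gradient-based solution mentioned above typically amounts to projected-gradient descent on a box-constrained control vector (e.g., metering rates normalized to [0, 1]). The following is a generic sketch under those assumptions, not the specific algorithm of Ref. [48]:

```python
def projected_gradient(u0, grad, lo=0.0, hi=1.0, step=0.05, iters=200):
    """Minimize a smooth cost over box-constrained controls by gradient
    steps followed by projection onto [lo, hi] (generic sketch)."""
    u = list(u0)
    for _ in range(iters):
        g = grad(u)  # gradient of the cost (e.g., delay + risk index) w.r.t. u
        u = [min(hi, max(lo, ui - step * gi)) for ui, gi in zip(u, g)]
    return u

# Toy usage: each component is driven to the unconstrained optimum 0.3.
print(projected_gradient([0.9, 0.1], lambda u: [2.0 * (ui - 0.3) for ui in u]))
```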
### 2.3. Traffic Emission and Fuel Consumption Models
Traffic emissions and the dispersion of fossil-fuel pollutants into the environment are major consequences of growing vehicular traffic, so algorithms are needed to determine the emissions generated by traffic flow. Traffic emission and fuel consumption models are regarded as central to developing a sustainable smart city. These models support emission reduction by quantifying the pollution released into the air and the rate of fuel consumption under different traffic conditions, such as traffic flow, vehicle speed, and acceleration. These parameters can be obtained from loop detectors placed on the road network or from simulated data generated by different traffic flow models [36, 53].

Generally, traffic emissions and fuel consumption depend on the operating conditions of the vehicle configuration and on the driver's attitude, including decisions such as whether to pass through a signalized intersection [54]; they also depend on acceleration, deceleration, and vehicle speed. Traffic emission depends not only on vehicle dynamics but also on the adopted fuel and the mechanical features and characteristics of the vehicle, while environmental factors such as temperature and humidity also affect sustainability. Recently, several methods have been proposed to support sustainable smart cities by estimating the traffic emissions caused by vehicles and their fuel consumption. As indicated by Treiber and Kesting [53], a traffic emission model generates local emissions as quantified amounts (kilograms). Researchers choose an emission model whose descriptive power meets their application requirements: microscopic models are commonly used for offline evaluation, while macroscopic models are generally used in traffic control applications because they analyze the traffic management system comprehensively within an efficient computational framework.

COPERT is the most common macroscopic emission model used for traffic control in the freeway environment [55, 56]. The COPERT model computes local emission factors for a range of pollutants and vehicle types using average-speed emission relations, in contrast to emission models based on technology embedded on board the vehicle. COPERT provides good estimates under different traffic conditions with little computational time, making it a robust and suitable modeling approach for online control schemes.

Various approaches have been employed to overcome the limitations of the COPERT model, such as macroscopic forms of microscopic emission models: the VERSIT+ and VT-micro models have been extended to the macroscopic case, yielding VT-macro [57] and macroscopic VERSIT+ [58]. These regression-based models exploit the relationship between speed and acceleration through linear regression [10]. They differ from COPERT by considering acceleration effects to obtain more accurate traffic emissions. VT-macro and macroscopic VERSIT+ can be used in single-class and multiclass forms depending on the traffic control system and traffic model.

In Ref. [57], Zegeye et al. introduced a macroscopic framework for solving traffic control problems. They integrated macroscopic and microscopic emission models, demonstrated the framework by combining the METANET and VT-macro models, quantified the error produced by the VT-macro model relative to the original VT-micro model, and assessed the performance and computational time of the approach on the Dutch A12 highway. The macroscopic VERSIT+ model is characterized by a limited number of parameters and a simple computational method, so it can be implemented in online traffic control schemes. In the multiclass domain, the macroscopic VERSIT+ model computes emission factors for the mainstream traffic flow and for flows entering from on-ramps and leaving at off-ramps, based on average vehicle speed and acceleration, with the parameters aggregated by vehicle class. Pasquale et al. [59] introduced a two-class macroscopic emission model to address the traffic pollution generated on freeways; they employed a two-class local traffic controller based on ramp metering to minimize traffic emissions and congestion, and simulation results show a good reduction in emissions.

Recently, a few dispersion models have been proposed to address traffic emissions with the aim of enhancing the sustainable smart city. Buckland and Middleton [60] introduced a dispersion model that can capture high-level complexity by considering different environments, such as atmospheric obstacles. To develop robust traffic control strategies for sustainability objectives, the traffic dispersion model can be formulated as highlighted in Ref. [61], where Csikós et al. proposed a dynamic model for the dispersion of highway traffic emissions. They integrated a Gaussian plume model and transformed it into discrete time and space; the discrete model is computationally efficient and performs well when applied to traffic control systems, supporting the transformation toward a sustainable smart city. Zegeye et al. [62] introduced a model-based traffic control system that adjusts vehicle speed limits to reduce freeway traffic emissions; they aimed to reduce emission dispersion levels toward a nearby public area along the freeway while accounting for travel times and wind speed and direction. Simulation results reveal that the system achieves a favorable dispersion of traffic emissions.
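To illustrate how an average-speed emission model of the COPERT type enters a control computation, the sketch below evaluates segment emissions from an emission-factor curve EF(v) expressed in g/veh/km. The quadratic-over-speed shape and all coefficients are purely illustrative placeholders, not actual COPERT factors.

```python
def emission_factor(v_kmh, coeff=(900.0, -15.0, 0.1)):
    """Illustrative average-speed emission factor in g/veh/km.
    Real COPERT factors are pollutant- and fleet-specific fits;
    the coefficients here are placeholders for the sketch."""
    a, b, c = coeff
    return max(0.0, (a + b * v_kmh + c * v_kmh ** 2) / v_kmh)

def segment_emissions(flow_veh_h, v_kmh, length_km, dt_h):
    """Grams emitted on one segment during one control interval:
    (g/veh/km) x (vehicles passing) x (km traveled per vehicle)."""
    return emission_factor(v_kmh) * flow_veh_h * dt_h * length_km

# Example: 1500 veh/h at 60 km/h over a 1 km segment for a 5-minute interval.
print(segment_emissions(1500.0, 60.0, 1.0, 5.0 / 60.0))
```

Summing such terms over segments and time steps is how emission objectives like those of [57–59] enter the controller's cost function.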
## 2.1. Traffic Flow Modeling Technique
In this section, we discuss traffic flow modeling schemes. Traffic flow (TF) models capture the dynamic behavior of real traffic systems through mathematical relationships. In an intelligent traffic management system, traffic flow prediction can be used for traffic planning, for improving traffic and road safety, and for simulating specific control measures [10]. Lighthill and Whitham [26] laid the groundwork for traffic flow modeling, which has since found a wide range of applications. Traffic models can be classified according to different criteria [27, 28]. Figure 2 shows the classification of traffic flow models into microscopic, macroscopic, and mesoscopic models.

Figure 2: Traffic flow models.

The main classes of traffic flow models are therefore microscopic, macroscopic, and mesoscopic [28, 29]. These models are distinguished from each other by their level of detail.
### 2.1.1. Microscopic Traffic Models
Microscopic models are computer-based models that represent the behavior of each individual vehicle and its driver in a road network [30, 31]. Their outputs depend on the number of generated vehicles, the defined network routing, and the modeled vehicle behavior; because of this stochastic variation, the model must typically be run several times to obtain reliable results. Microscopic models are very accurate and usually run on simulation platforms, but they can be computationally expensive when applied to control operations [10].
### 2.1.2. Macroscopic Traffic Models
Macroscopic traffic flow models are mathematical models that represent aggregate traffic dynamics such as the density, flow, and mean speed of the traffic stream. They can be obtained by aggregating microscopic traffic flow models, converting individual-vehicle characteristics into system-level characteristics [30]. Macroscopic models are flexible to calibrate and are computationally cheaper than microscopic models [10].

Macroscopic models are further categorized into continuous and discrete traffic models; discrete models are the ones most commonly used in traffic networks. Discrete macroscopic models can be divided further according to the number of state variables they accommodate [10]. First-order macroscopic traffic flow models are the simplest: they describe only the dynamics of the aggregate vehicle density, which represents the traffic volume [27]. The most commonly used first-order discrete model is the cell transmission model (CTM), which has been widely adopted by the research community over the last decades [32, 33]. The CTM is a nonlinear model commonly used in control applications [34, 35]; a minimal sketch of the CTM update is given at the end of this subsection.

Second-order macroscopic traffic flow models consist of two dynamic equations, the first representing the density and the second the mean vehicle speed [36]. METANET is one of the most established discrete second-order models [37]. It is also a nonlinear model used for control applications, but it is more complex and computationally expensive than the CTM. Both first-order and second-order models have been extended to represent the heterogeneous features of traffic flow, leading to multiclass traffic models [10]. These discriminate user categories by vehicle type, such as car, truck, and bus, and allow the description of relevant features that single-class models cannot capture.

Various multiclass discrete first-order models have recently been proposed. Roncoli et al. [38] introduced a first-order multilane macroscopic traffic flow model for the motorway environment; they extended the CTM dynamics to compute lateral and longitudinal flows under different traffic scenarios such as lane changing, and the model achieved good accuracy on real-time traffic data. Liu et al. [39] integrated a bus vehicle class into the CTM and applied the resulting BUS-CTM to road links to determine comprehensive network information; numerical simulations show that it performs reliably compared with traditional CTM variants. Qian et al. [40] proposed a macroscopic heterogeneous traffic flow model for controlling traffic mobility, in which the various vehicle classes follow homogeneous car-following behaviors and vehicle attributes. Boyles and Boyles [41] modeled arbitrary shared-road situations with the CTM; their model captures variations in capacity and backward wave speed as functions of the class proportions within each cell, and it performs better when the proportion of autonomous vehicles is higher.

Several discrete multiclass second-order models have also been proposed. Deo et al. [42] extended METANET to heterogeneous traffic flow by defining the features and class of each vehicle. Liu et al. [43] proposed a multiclass METANET model, an extended version of the single-class macroscopic METANET model, and used a predictive control technique for online traffic control; the simulation results show better performance than the single-class METANET model. Pasquale et al. [44] proposed a multiclass control technique for freeway traffic networks that combines ramp metering and route guidance to reduce emissions; the simulation results show that it provides an effective control framework for different vehicle types.
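To make the first-order dynamics concrete, the following is a minimal sketch of a CTM density update, assuming a single homogeneous stretch whose first and last cells act as fixed boundary conditions. The free-flow speed, wave speed, capacity, and jam density values are illustrative assumptions, not calibrated data.

```python
# Minimal sketch of a first-order cell transmission model (CTM) update.
# All parameter values are illustrative assumptions, not calibrated data.

def ctm_step(rho, T=10.0, L=500.0, v=30.0, w=6.0, q_max=0.6, rho_jam=0.12):
    """One CTM time step.

    rho     : list of cell densities [veh/m]
    T, L    : time step [s] and cell length [m]
    v, w    : free-flow speed and backward wave speed [m/s]
    q_max   : capacity [veh/s]; rho_jam : jam density [veh/m]
    """
    n = len(rho)
    # Flow from cell i to i+1 = min(demand of cell i, supply of cell i+1).
    flows = []
    for i in range(n - 1):
        demand = min(v * rho[i], q_max)
        supply = min(w * (rho_jam - rho[i + 1]), q_max)
        flows.append(min(demand, supply))
    # Conservation of vehicles: update interior cells only (ends are boundaries).
    new_rho = rho[:]
    for i in range(1, n - 1):
        new_rho[i] = rho[i] + (T / L) * (flows[i - 1] - flows[i])
    return new_rho

# Example: a dense, near-jam cell upstream of lighter cells relaxes over time.
rho = [0.02, 0.10, 0.02, 0.02]
for _ in range(5):
    rho = ctm_step(rho)
print([round(r, 4) for r in rho])
```

The `min(demand, supply)` rule is what makes the CTM nonlinear: flow is limited either by what the upstream cell can send or by what the downstream cell can absorb.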
### 2.1.3. Mesoscopic Traffic Models
Mesoscopic traffic models provide an intermediate level of detail, describing vehicle flow in terms of probability distributions; they include cluster models and gas-kinetic models.

Traffic models can also be distinguished as continuous or discrete. In continuous models, space and time are continuous and the system dynamics are described by differential equations. In discrete models, space and time are discretized and the dynamics are described by difference equations. Discrete models are usually preferred for real-time control schemes in freeway traffic networks, although researchers with a communication and technology background have recently also focused on continuous microscopic models for controlling the traffic flow system.
## 2.2. Traffic Safety Modeling Technique
In recent years, several safety models have been proposed with the aim of designing systems that improve traffic and road safety. The design and development of safety models remains a focal issue for researchers with a communication engineering background, because traffic accidents cause nonrecurrent congestion, block one or more lanes and thus reduce capacity, and induce deceleration as drivers observe accidents or participate in rescue operations [22].

Various studies have focused on the statistical analysis of historical crash data in order to determine the specific traffic conditions and other factors that lead to an incident, such as road structure, driver behavior, and environmental factors [45]. Lord et al. [46] examined the correlation between traffic safety levels and traffic conditions in a freeway environment, discussing the relationship between crashes and traffic data (flow and density) from a Canadian site. Potts et al. [47] were among the first to relate traffic safety to traffic density. In Ref. [48], Pasquale et al. introduced a risk indicator that estimates the number of crashes in a freeway traffic environment within a specific time window. As shown in Ref. [48], the index can be added as an objective in the cost function of the control problem. The number of crashes is obtained by combining two terms, one related to the on-ramps and one to the mainstream; this matters because ramp control may create long queues, which can increase crashes at on-ramp sites [10].

Yeo et al. [49] introduced a method to examine the relationship between traffic states and crashes in the freeway environment. They first characterized the different traffic states of each freeway section according to their characteristics and patterns, and then linked the crash data with the traffic states of the upstream and downstream traffic. The method was tested on a 32-mile section of the California I-880 freeway, and the results show how crash involvement differs across traffic states. Chang and Xiang [50] analyzed crash probability as a function of traffic flow. Golob et al. [51] examined different safety levels in the freeway environment using data from single loop detectors for monitoring traffic conditions; they analyzed over 1700 accidents on the freeways of Orange County, California. Lee et al. [52] studied the characteristics of traffic flow that precede crashes (crash precursors) in the freeway environment, using data from 38 loop detectors on an expressway in Toronto; the results show that crash potential can be assessed from precursors extracted from real-time data. Pasquale et al. [48] derived a risk indicator intended mainly for traffic control applications. The authors defined a nonlinear optimal control problem that aims to limit the estimated number of incidents and crashes, and developed a global safety index that quantifies them as a function of the current traffic state in the freeway environment. The index is implemented as a performance indicator that is also used to evaluate traffic delay and queue length.
The proposed traffic control strategy is formulated as a nonlinear optimal control problem in the control variables, which can be solved with a gradient-based algorithm. A caveat is that ramp control can produce long queues at both on-ramps and off-ramps, which in turn increases the risk of crashes there.
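To illustrate the structure of such a risk indicator, the following is a minimal sketch combining a mainstream term with an on-ramp queue term. The functional forms, weights, and thresholds are assumptions chosen purely for illustration; they are not the actual index of Pasquale et al. [48].

```python
# Illustrative sketch of a global safety (risk) index with one mainstream
# term and one on-ramp term, as described above. All functional forms and
# weights are assumptions, not the index of Pasquale et al. [48].

def mainstream_risk(density, speed, crit_density=0.05):
    # Risk is assumed to grow as density approaches the critical value
    # while speeds remain high (unstable, crash-prone conditions).
    return (density / crit_density) * (speed / 30.0)

def ramp_risk(queue_len, max_queue=50.0):
    # Long on-ramp queues are assumed to raise the risk of rear-end crashes.
    return queue_len / max_queue

def global_safety_index(segments, ramps, alpha=1.0, beta=0.5):
    """segments: list of (density [veh/m], speed [m/s]); ramps: queue lengths [veh]."""
    risk = alpha * sum(mainstream_risk(d, v) for d, v in segments)
    risk += beta * sum(ramp_risk(q) for q in ramps)
    return risk

# Example: three freeway segments and two metered on-ramps.
print(global_safety_index([(0.02, 28.0), (0.045, 25.0), (0.05, 15.0)], [12.0, 40.0]))
```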
## 2.3. Traffic Emission and Fuel Consumption Models
Increasing vehicular traffic is a major cause of traffic emissions and of the fossil-fuel pollutants dispersed into the environment, so algorithms are needed to determine the emissions produced by a given traffic flow. Traffic emission and fuel consumption models are therefore regarded as a key issue in developing a sustainable smart city. Such models support emission reduction by quantifying the pollution released into the air and the rate of fuel consumption under different traffic situations characterized by traffic flow, vehicle speed, and acceleration. These parameters can be obtained from loop detectors placed on the road network or from simulated data generated by traffic flow models [36, 53].

Generally, traffic emission and fuel consumption depend on the operating conditions of the vehicle, on the driver's attitude towards driving, and on decisions such as whether to pass through a signalized intersection [54]. They also depend on acceleration, deceleration, and vehicle speed. Traffic emission depends not only on the vehicle dynamics but also on the adopted fuel and on the mechanical features and characteristics of the vehicle; environmental factors such as temperature and humidity affect it as well. Recently, several methods have been proposed that support sustainable smart cities by estimating the traffic emissions caused by vehicles and the amount of fuel consumed. As indicated by Treiber and Kesting [53], a traffic emission model quantifies the local emissions, for example in kilograms of pollutant. When building a traffic emission model, researchers choose the model whose descriptive power matches their application requirements: microscopic models are commonly used for offline evaluation, while macroscopic models are generally used for traffic control applications because they analyze the traffic system comprehensively within an efficient computational framework.

COPERT is the most common macroscopic emission model used for traffic control in the freeway environment [55, 56]. It computes local emission factors for a range of pollutants and various kinds of vehicles and belongs to the family of average-speed emission models; this distinguishes it from emission models embedded in on-board vehicle control technology. The COPERT model also provides good estimates under different traffic conditions at low computational cost, which makes it a robust and suitable modeling approach for online control schemes. A sketch of an average-speed emission computation of this kind is given at the end of this subsection.

To overcome the limitations of COPERT, macroscopic forms of microscopic emission models have been derived: the VT-micro and VERSIT+ models were extended to the macroscopic case, yielding VT-macro [57] and macroscopic VERSIT+ [58]. These regression-based models capture the relationship between emissions, speed, and acceleration through linear regression [10]. Unlike COPERT, they account for acceleration effects and therefore estimate traffic emissions more accurately. VT-macro and macroscopic VERSIT+ can be used in single-class or multiclass form, depending on the traffic control system and the traffic model.

In Ref. [57], Zegeye et al. introduced a macroscopic framework for solving traffic control problems. They integrated macroscopic traffic flow and microscopic emission models, demonstrating the framework with the METANET and VT-macro models. They then quantified the approximation error of the VT-macro model with respect to the original VT-micro model, and finally assessed the performance of the proposed method by analyzing this error and the computational time on a case study of the Dutch A12 highway. The macroscopic VERSIT+ model requires only a limited number of parameters and a simple computation, so it can be implemented in online traffic control schemes. In its multiclass form, it computes the traffic emission factors for the mainstream traffic flow and for the flows entering at on-ramps and leaving at off-ramps, based on the average vehicle speed and acceleration; these quantities are aggregated per vehicle class. Pasquale et al. [59] introduced a two-class macroscopic emission model to counter the traffic pollution generated on freeways. They employed a two-class local traffic controller based on ramp metering to minimize traffic emissions and congestion, and the simulation results show that the model achieves a substantial emission reduction.

A few dispersion models have also been proposed, which describe how traffic emissions spread and thereby support the sustainable smart city. Buckland and Middleton [60] introduced dispersion models that can capture high levels of complexity by considering different environments, including atmospheric obstacles. To develop robust traffic control strategies for sustainability objectives, the traffic dispersion model can be formulated as highlighted in Ref. [61], where Csikós et al. proposed a dynamic model for the dispersion of highway traffic emissions. They integrated a Gaussian plume model and discretized it in time and space; the discrete model is computationally efficient, performs well when applied to traffic control systems, and supports the transition to a sustainable smart city. Zegeye et al. [62] introduced a model-based traffic control system that adjusts variable speed limits to reduce freeway traffic emissions. They aimed to limit the emission dispersion levels reaching a public area near the freeway while accounting for travel times and the wind speed and direction; the simulation results show that the system successfully reduces the dispersion of traffic emissions toward that area.
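As an illustration of how an average-speed emission model is evaluated in practice, the following sketch computes segment emissions from the mean speed, flow, and segment length. The coefficient values and the functional form are invented for illustration and are not official COPERT factors.

```python
# Minimal sketch of an average-speed emission model of the COPERT type:
# the emission factor is a function of mean speed, and total emissions
# follow from flow, segment length, and that factor. The coefficients
# below are illustrative assumptions, not official COPERT values.

def emission_factor(v_kmh, a=0.6, b=-0.01, c=8e-5, d=20.0):
    """Grams of pollutant per vehicle-kilometer at mean speed v_kmh."""
    return a + b * v_kmh + c * v_kmh ** 2 + d / v_kmh

def segment_emissions(flow_veh_h, length_km, v_kmh, interval_h=1.0):
    """Total emissions [g] on one segment over one measurement interval."""
    return flow_veh_h * interval_h * length_km * emission_factor(v_kmh)

# Example: congested traffic (low mean speed) emits more per vehicle-km
# than free-flowing traffic on the same segment.
for v in (20.0, 60.0, 100.0):
    print(v, round(segment_emissions(1500.0, 2.0, v), 1))
```

The U-shaped speed dependence is the reason acceleration-aware models such as VT-macro can be more accurate: an average-speed factor cannot distinguish smooth driving from stop-and-go driving at the same mean speed.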
## 3. Traffic Control Strategies on Freeways
In recent years, freeway traffic control has attracted a great deal of attention from researchers with communication and technology backgrounds. The existing schemes can be categorized by traffic modeling framework, control mechanism, and control strategy type. These techniques play an important role in designing freeway traffic controllers and provide relevant information for improving the sustainability of the urban traffic system. Table 1 summarizes the research works on traffic control strategies.
Table 1: Summary of the research works on traffic control strategies (✓ indicates coverage of: control strategy, control method, emission, sustainability issue).

| Reference | Year | Features/objectives | Coverage (✓) |
| --- | --- | --- | --- |
| Pasquale et al. [44] | 2017 | Introduced a multiclass traffic control method that combines two control strategies | ✓✓✓✓ |
| Ferrara et al. [58] | 2017 | Introduced a control system to regulate traffic flow in the freeway traffic network | ✓✓✓✓ |
| Zegeye et al. [63] | 2012 | Introduced a predictive traffic controller using parameterized control policies | ✓✓✓ |
| Groot et al. [64] | 2013 | Investigated integrated METANET freeway and VT-macro emission models | ✓✓✓ |
| Csikós et al. [65] | 2018 | Discussed methods for reducing jam waves | ✓✓✓ |
| Liu et al. [66] | 2017 | Added an endpoint to the multiclass traffic flow model to identify the behavior of traffic patterns | ✓✓✓ |
| Wang et al. [67] | 2016 | Estimated different traffic conditions | ✓✓ |
| Ahn and Rakha [68] | 2013 | Examined the impacts of using eco-routing strategies | ✓✓✓✓ |
| Abdel-Aty et al. [69] | 2006 | Discussed various speed limit strategies for improving safety | ✓✓✓ |
| Sheikh et al. [70] | 2020 | Introduced an incident detection technique using the V2I model | ✓✓✓ |
| Yu and Abdel-Aty [71] | 2014 | Examined the feasibility of using VSL | ✓✓✓ |
| Pasquale et al. [72] | 2014 | Proposed a two-class traffic control strategy | ✓✓✓✓ |
| Li et al. [73] | 2014 | Used a generic model to solve the optimization problem | ✓✓✓ |
| Groot et al. [74] | 2015 | Employed a Stackelberg game to reduce traffic congestion | ✓✓✓ |
| Pasquale et al. [75] | 2015 | Employed a multiclass ramp metering technique to reduce traffic emission | ✓✓✓✓ |
### 3.1. Modeling Framework Classification
In freeway traffic control, different types of models can be used to investigate traffic control strategies. These models form the core of model-based control techniques and can be used effectively to simulate and validate different traffic scenarios.

Several studies have combined the METANET traffic flow model with the VT-macro emission model [63]. In Ref. [63], Zegeye et al. introduced a predictive traffic controller using parameterized control policies; they adopted different control measures to capture different traffic conditions and features, and the approach significantly reduces the computational time. Groot et al. [64] investigated the integrated METANET freeway and VT-macro emission models using the model-based predictive control (MPC) technique. They approximated the nonlinear METANET model in piecewise-affine (PWA) form to make it suitable for real-time control, achieving better computational speed at comparable cost function values. Csikós et al. [65] proposed a control system based on the second-order METANET model to reduce jam waves on the motorway, designing different controllers that realize predefined control modes. Ferrara et al. [58] introduced a control scheme to regulate traffic flow in freeway networks; they used ramp metering to detect and reduce traffic congestion and combined the METANET and macroscopic VERSIT+ models to improve traffic regulation.

The multiclass METANET model has been combined with COPERT, and with the macroscopic multiclass VERSIT+ model, to evaluate traffic emissions [44, 48]. Liu et al. [66] compared extended versions of the multiclass METANET, FASTLANE, and multiclass VERSIT+ models, adding an endpoint to these multiclass traffic flow models to identify the behavior of traffic patterns. Ahn and Rakha [76] estimated traffic emissions and fuel consumption using data obtained from probe vehicles. Wang et al. [67] introduced an efficient multiple-model particle filter (EMMPF) that uses GPS-equipped probe vehicles to estimate and detect different traffic conditions. Ahn and Rakha [68] applied the VT-micro emission model to determine emissions and used a microscopic model to simulate various traffic dynamics.

Microscopic traffic simulation has also been used to evaluate variable speed limits. Lee et al. [77] proposed automatic control strategies that aim to reduce the likelihood of a crash on freeways; they used a microscopic simulation model with variable speed limits and an integrated crash prediction model, and the simulations showed that the method can reduce crash risk by 5–17% by defusing risky traffic situations. Abdel-Aty et al. [69] proposed variable speed limit strategies for improving safety in the freeway environment; their system was most effective in medium- to high-speed situations. Sheikh et al. [70] proposed an improved incident detection method using vehicle-to-infrastructure (V2I) communication. First, they established a connection between the vehicle and the roadside unit (RSU). Second, they used a probabilistic approach to obtain traffic information through V2I communication. Third, a hybrid observer was employed to estimate the possible occurrence of traffic incidents [78, 79]. Finally, a V2I-based lane-changing speed mechanism was developed to detect traffic incidents; the simulation results show good incident detection, reduced crash risk, and faster dissipation of traffic congestion. Yu and Abdel-Aty [71] examined the feasibility of using variable speed limits (VSL) within an active traffic management system (ATS) to enhance traffic flow on freeways. First, they used an extended METANET model to evaluate the effects of VSL on traffic flow. Second, a real-time crash risk model was applied to quantify the associated risk. Finally, an optimization technique was employed to determine the VSL strategies. The simulation results show that the system can reduce crash risk and thereby improve traffic flow. A receding-horizon sketch in the spirit of these MPC-based controllers is given at the end of this subsection.
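The receding-horizon logic shared by these MPC-based controllers can be sketched as follows. The scalar "density" prediction model, the candidate metering rates, and the cost weights are toy assumptions standing in for a METANET/VERSIT+-type prediction model, not any controller from the cited works.

```python
# Minimal sketch of receding-horizon (MPC) control: at each step, enumerate
# candidate control sequences over a short horizon on a prediction model,
# apply the first move of the best sequence, and repeat. The scalar density
# model and the cost function are illustrative assumptions.

from itertools import product

def predict(density, ramp_rate, demand=0.4, T=0.1):
    # Toy prediction model: density rises with demand plus ramp inflow
    # and falls through the mainstream outflow.
    outflow = min(1.0, 2.0 * density)
    return density + T * (demand + ramp_rate - outflow)

def mpc_step(density, horizon=3, rates=(0.0, 0.2, 0.4), target=0.3):
    best_cost, best_first = float("inf"), rates[0]
    for seq in product(rates, repeat=horizon):   # enumerate control sequences
        d, cost = density, 0.0
        for r in seq:
            d = predict(d, r)
            # Penalize deviation from the target density plus the queue
            # buildup implied by metering below the maximum rate.
            cost += (d - target) ** 2 + 0.05 * (rates[-1] - r)
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first                            # apply only the first move

density = 0.5
for k in range(10):
    r = mpc_step(density)
    density = predict(density, r)
    print(k, round(r, 2), round(density, 3))
```

The exhaustive enumeration keeps the sketch short; real MPC schemes solve this step with numerical optimization, which is why they are computationally expensive for real-time use [10].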
### 3.2. Classification Based on Control Theory
Freeway traffic control schemes can also be classified from a control-theoretic perspective, according to the control system and control strategies used and their impact on developing sustainable traffic systems. Several works use simple control rules to build robust control algorithms. Pasquale et al. [72] proposed a two-class traffic control strategy in which different vehicle types are represented in the dynamic model and controlled separately; they adopted the PI-ALINEA control strategy to reduce traffic emissions and alleviate congestion (a minimal sketch of the underlying ALINEA feedback law is given at the end of this subsection). A feedback controller was also proposed by Pasquale et al. [80], in which the control mechanism is used to predict and control the multiclass traffic model.

Other research works are based on optimization control techniques [48, 71]; Li et al. [73], for instance, proposed a generic model for solving the optimization problem. Applying optimization-based techniques under real-time conditions leads to the model predictive control (MPC) technique [62, 64, 65]. MPC techniques are generally computationally expensive for real-time applications [10]. In Ref. [63], Zegeye et al. proposed a predictive traffic controller with parameterized control laws; they employed the MPC technique to control freeway traffic and obtained a significant reduction in the controller's computational load. Groot et al. [74] proposed different techniques that extend the Stackelberg game to reduce traffic congestion; in their system, the traffic authorities can induce drivers to follow a desired traffic pattern, and the mechanism achieves near-optimal behavior with a heterogeneous driver class. Some earlier papers do not use a model-based control mechanism at all, but instead examine traffic control through various simulation tools to evaluate urban traffic issues [69, 76], and some investigate the effects of speed limits and ramp metering [69, 81].
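For reference, the ALINEA feedback law that PI-ALINEA extends can be sketched in a few lines: the metering rate is corrected in proportion to the error between a target occupancy and the measured downstream occupancy. The gain, occupancy setpoint, and rate bounds below are illustrative values, not those used in [72].

```python
# Minimal sketch of the ALINEA ramp-metering feedback law:
#   r(k) = r(k-1) + K_R * (o_target - o(k))
# Gain, setpoint, and bounds are illustrative assumptions.

def alinea(r_prev, occ, occ_target=18.0, K_R=70.0, r_min=200.0, r_max=1800.0):
    """One ALINEA update: metering rates in veh/h, occupancy in percent."""
    r = r_prev + K_R * (occ_target - occ)   # proportional feedback on occupancy error
    return max(r_min, min(r_max, r))        # saturate to feasible metering rates

# Example: occupancy above the setpoint drives the metering rate down,
# and the rate recovers once occupancy falls below the setpoint.
r = 1200.0
for occ in (25.0, 22.0, 19.0, 17.0, 16.0):
    r = alinea(r, occ)
    print(round(r))
```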
### 3.3. Classification Based on the Control Strategy Type
Selecting and implementing control strategies requires thorough study if a sustainable traffic control system is to reach its goal, and researchers and practitioners should weigh these strategies when designing traffic control models. The literature shows that some combinations are very effective; for example, ramp metering combined with other control methods helps reach sustainability goals. Note, however, that ramp metering can cause long vehicle queues on the ramps, which produce emissions and increase the likelihood of traffic incidents and crashes. Several methods have therefore been proposed to reduce the pollutant emissions and the risk of incidents and crashes at the ramps [48, 58, 66].

Ferrara et al. [58] proposed a congestion and emission reduction scheme for freeway networks based on ramp metering. They employed a supervisory traffic control layer, which receives measurements from the entire network to predict the system performance; the supervisor decides, following a triggering logic, when the controller needs to change. Liu et al. [66] employed a macroscopic traffic flow and emission model to predict the evolution of traffic networks; the results show that the emission model improves traffic control performance in terms of total emissions and reduces large queue lengths compared with other approaches. Pasquale et al. [48] introduced a control system for reducing traffic congestion and enhancing traffic safety, developing a safety index that estimates the likely number of crashes as a function of the current traffic state; the simulation results show that the index helps mitigate congestion and improves the performance of the traffic management system. These schemes apply ramp control strategies, analyze the risks associated with on-ramp merging areas, and have been applied successfully to traffic emission and incident problems.

Several traffic and emission models reduce traffic emissions more effectively than traditional methods [58, 76, 82]. Various studies combine variable speed limits with ramp metering to manage traffic flow and emissions [62, 63, 65]. These approaches produce robust results, especially when employed to improve traffic safety and the management system, and the control strategies they implement significantly reduce the number of traffic incidents and crashes. Note that the effectiveness of variable speed limits in reducing incidents and crashes depends on the recommended speed level [71, 77]. Overall, the surveyed traffic control techniques are generally used to reduce traffic emissions and their environmental impact, while also reducing the number of traffic incidents in the freeway environment and maintaining a sufficient safety level.

The aforementioned methods can be extended to a multiclass framework, which has been assessed against traditional traffic control methods for specific control tasks. Pasquale et al. [75] employed a multiclass ramp metering technique to reduce traffic congestion and emissions. Their method allows heavy vehicles to enter the freeway without waiting at the on-ramps, limiting the heavy-vehicle queues that would otherwise be a significant source of emissions; this markedly reduces congestion and emissions. Pasquale et al. [44] introduced a multiclass traffic control method that combines two control strategies to reduce congestion and emissions, evaluating the controller by predicting traffic scenarios and measuring the system state. Multiclass control schemes require more comprehensive strategies and more accurate system modeling than single-class methods, and the traffic safety and management system needs more robust safety models capable of identifying the impact of traffic incidents and crashes on each vehicle class.

Route guidance has become one of the most successful techniques for reducing traffic emissions and crashes in freeway environments and is considered an eco-routing strategy; the environmental and energy effects of the generated routes, and of the routes drivers actually choose, are analyzed in depth in [76]. Ahn and Rakha [68] examined the impacts of eco-routing strategies, investigating various congestion and penetration levels in tests on real traffic conditions in Cleveland and Columbus, Ohio, USA. Their eco-routing system can reduce traffic emissions and fuel consumption, mainly by reducing travel distance.
## 4. Vehicle Control Design Strategy
This section discusses vehicle control strategies for reducing traffic emissions and energy consumption. This goal can be achieved with an eco-driving system, which analyzes and computes a vehicle trajectory that reduces the emissions and energy consumption along a given route, or with an eco-routing system, which plans the route that requires minimum energy and emissions. Recently, a few works have discussed the different vehicle control strategies [13, 83] that could reduce traffic emissions, as shown in Figure 3. The research works on the vehicle control design strategy are summarized in Table 2.
Figure 3: Vehicle control design strategy.

Table 2: Summary of the vehicle control design strategy.

| Reference | Year | Technique | Features/objectives | Performance | Application |
| --- | --- | --- | --- | --- | --- |
| Sciarreta et al. [84] | 2015 | Eco-driving | Considering different road conditions, such as online and offline, for real-time analysis and estimation | Reduces traffic emission and fuel consumption | Sustainable smart city |
| Ozatay et al. [85] | 2014 | Eco-driving | Reducing energy consumption based on the velocity optimization problem | Significantly reduces energy consumption | Sustainable smart city |
| Dib et al. [86] | 2012 | Eco-driving | Employing performance metrics to determine the energy efficiency of intelligent eco-driving methods | Helps in obtaining better energy efficiency | Sustainable smart city |
| Hellström et al. [87] | 2009 | Eco-driving | Minimizing trip time and fuel consumption using the on-board optimization controller | Reduces the amount of fuel consumption | Sustainable smart city |
| Dimitrakopoulos and Demestichas [88] | 2010 | Eco-driving | Notifying the driver about traffic light cycles prior to arriving at the signalized intersection | Provides better traffic light cycle notification at the signalized intersection | ITS |
| Ozatay et al. [89] | 2014 | Eco-driving | Considering traffic lights as stop signs to optimize the speed trajectory | Better optimization of the vehicle speed trajectory | ITS |
| Maher and Vahidi [90] | 2012 | Eco-driving | Using signal phase and timing information to minimize vehicle energy consumption | Obtains better energy efficiency and less computational time | Sustainable smart city |
| Sun et al. [91] | 2018 | Eco-driving | Investigating speed planning when CVs communicate with traffic lights | Improves traffic flow and significantly reduces traffic congestion | ITS |
| Miyatake et al. [92] | 2011 | Eco-driving | A dynamic-programming method for eco-driving that considers traffic signals on the road | Reduces traffic congestion and enhances traffic flow | Sustainable smart city/ITS |
| HomChaudhuri et al. [93] | 2017 | Eco-driving | Decentralized control in which each vehicle forms its own strategy based on neighboring vehicles | Enhances the traffic management system by reducing congestion and providing lane-changing warnings | ITS |
| De Nunzio et al. [94] | 2016 | Eco-driving | Solving the nonconvex control problem using a suboptimal strategy | Enhances traffic flow with less computational time | Sustainable smart city/ITS |
| Zhang and Cassandras [95] | 2018 | Eco-driving | Introducing a control strategy to reduce energy consumption based on the maximum throughput criterion | Significantly reduces the amount of energy consumption | Sustainable smart city |
| Boriboonsomsin et al. [96] | 2012 | Eco-routing | Using the eco-routing navigation system to determine the route between trip origins and destinations | Improves the vehicle navigation system | ITS |
| Ericsson et al. [97] | 2006 | Eco-routing | Using eco-routing to identify and classify road networks into different groups based on GPS devices | Reduces a substantial amount of fuel consumption | Sustainable smart city/ITS |
| Liu [98] | 2015 | Eco-routing | Integrating a microscopic vehicle emission model into the Markov decision process to address signalized traffic issues | Improves traffic flow at signalized intersections | Sustainable smart city |
| De Nunzio et al. [99] | 2017 | Eco-routing | A real-time search algorithm which provides drivers with different sets of solutions | Reduces traffic congestion and improves vehicle traveling time | Sustainable smart city |
| Kluge et al. [100] | 2013 | Eco-routing | Solving time-dependent eco-routing using Dijkstra's algorithm | Efficient energy consumption in all road networks | Sustainable smart city/ITS |
| Nannicini et al. [101] | 2012 | Eco-routing | Addressing vehicle traveling time and distance | Significantly reduces the complexity of route planning | ITS |
### 4.1. Vehicle Eco-Driving
Eco-driving is a modern and efficient style of driving that reduces fuel consumption and improves traffic safety. It computes and analyzes candidate vehicle trajectories via embedded algorithms, taking into account the road structure, traffic flow and congestion, and various constraints such as the trip time and the maximum vehicle speed. Some of these constraints depend on the driver's attitude towards driving, such as how the vehicle is driven while traffic signal lights are changing [83].

In eco-driving, the ego-connected vehicle can also cooperate with other vehicles on the road, for instance as part of a group of vehicles (a platoon) that travel together closely and safely at high speed. The aim is to reduce fuel consumption and aerodynamic drag; when eco-driving involves such a multivehicle scenario, the information processing is more complex than in the single-vehicle case.

Let the vehicle state vector at time step $t$ be $q_t = [m_t, v_t]^T$, where $m$ represents the vehicle position along the specific route and $v$ denotes the vehicle speed. The aim of eco-driving is to choose at each time step $t$ an input vector $z_t = [H_{em,t}, H_{en,t}]^T$, where $H_{em}$ and $H_{en}$ represent the traction force and the mechanical brake force, respectively. The input vector $z_t$ can significantly reduce the traffic emission or energy consumption of the vehicle [13]. The eco-driving optimization problem is formulated as follows [84]:

$$\min_{z_0, \ldots, z_{m-1}} \; \sum_{t=0}^{m-1} g(q_t, z_t). \tag{1}$$

Reference [13] indicates that the vehicle state at time step $t+1$ can be written as

$$q_{t+1} = f(q_t, z_t) = \begin{bmatrix} m_t + \vartheta\, v_t \\ v_t + \vartheta\, \dfrac{H_{em,t} - H_{en,t} - H_{re,t}}{M} \end{bmatrix}, \tag{2}$$

where $H_{re,t}$ denotes the resistance force acting on the driven vehicle, $M$ the vehicle mass, and $\vartheta$ the sampling time.

A practical issue of this formulation is that the traction force $H_{em}$ and the mechanical brake force $H_{en}$ are applied directly as inputs. This suits autonomous and connected vehicles, which can act precisely on the vehicle position and directions (longitudinal and lateral); for human drivers, however, the optimization must instead return a speed profile that the vehicle user can follow [13].
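The following is a minimal numerical sketch of the formulation in (1) and (2): it simulates the longitudinal dynamics under a candidate force profile and accumulates a stage cost. The vehicle mass, sampling time, resistance coefficients, and the fuel-proxy stage cost (positive traction power) are assumptions made for illustration, not values from [13, 84].

```python
# Minimal sketch of the eco-driving formulation (1)-(2): simulate the
# dynamics under a force profile and accumulate a stage cost g. All
# parameter values and the fuel-proxy cost are illustrative assumptions.

def resistance(v, c0=120.0, c2=0.45):
    return c0 + c2 * v * v          # H_re: rolling + aerodynamic drag [N]

def simulate(forces, m0=0.0, v0=12.0, M=1500.0, theta=1.0):
    """Apply (2) step by step; g penalizes positive traction power (fuel proxy)."""
    m, v, cost = m0, v0, 0.0
    for h_em, h_en in forces:                    # traction and brake force [N]
        cost += max(h_em * v, 0.0) * theta       # stage cost g(q_t, z_t)
        m = m + theta * v                        # position update from (2)
        v = v + theta * (h_em - h_en - resistance(v)) / M  # speed update from (2)
    return m, v, cost

# Example: a smooth traction profile covers a comparable distance at a much
# lower fuel-proxy cost than an accelerate-then-brake profile.
smooth = [(260.0, 0.0)] * 20
aggressive = [(900.0, 0.0)] * 10 + [(0.0, 700.0)] * 10
for name, prof in (("smooth", smooth), ("aggressive", aggressive)):
    m, v, cost = simulate(prof)
    print(name, round(m, 1), round(v, 2), round(cost))
```

An actual eco-driving controller would search over such force profiles, for example with dynamic programming as in the works cited below, rather than compare two hand-picked candidates.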
Sciarreta et al. [84] introduced several methods to address eco-driving control problems, with the aim of reducing the emissions caused by transportation energy use; they considered both online and offline road conditions for real-time analysis and estimation. Various methods have been proposed for energy efficiency through offline optimization. Ozatay et al. [85] provided a solution for reducing energy consumption based on a velocity optimization problem; they incorporated the road conditions (road structure and grade) into the optimization and generated a vehicle speed trajectory for a given route. Tested on various problems and compared with a dynamic programming solution, the method improves the vehicle trajectory by about 10% relative to the cruise speed control method. Dib et al. [86] introduced an approach for evaluating the energy use of electric vehicles, using performance metrics to determine the energy efficiency of intelligent eco-driving methods.

Additionally, a few online solutions have been presented in recent years [87]. Hellström et al. [87] introduced a method for minimizing trip time and fuel consumption. They used an on-board optimization controller that accounts for the road slope, with a GPS device providing the road geometry and conditions. Experiments with a heavy truck in a freeway environment show that the method can significantly reduce the fuel consumption of an eco-driven vehicle.

In an urban traffic environment, eco-driving is complex and challenging because traffic flow is highly nonlinear. At a signalized intersection, it is difficult to know the traffic light state before arriving, because the phase duration depends on the amount of traffic on the street. As noted by Dimitrakopoulos and Demestichas [88], intelligent transportation systems and urban traffic management systems can mitigate these issues by notifying the driver about the traffic light cycles before the vehicle arrives at the intersection. Ozatay et al. [89] proposed a method for optimizing the vehicle speed trajectory that treats traffic lights as stop signs. The driver sends traffic information to the cloud; the cloud server generates the routes and collects the corresponding traffic information (e.g., the number of vehicles at the signalized intersection); and the optimization problem is then solved with a dynamic programming method. The system acts as a speed advisory, so the driver may choose not to follow the velocity generated by the algorithm when the traffic light is green.

The irregularity and uncertainty of traffic light cycles at signalized intersections remains a challenging issue. Maher and Vahidi [90] presented a planning algorithm for predicting the optimal velocity that uses signal phase and timing information to minimize vehicle energy consumption; the case of no prior phase or timing knowledge represents an unaware driver and serves as a baseline for the minimum energy a vehicle requires. The prediction model was evaluated with both average-time and real-time data, and the numerical simulations show that it achieves efficient energy use. Sun et al. [91] examined speed planning when connected vehicles (CVs) communicate with traffic lights. They cast the eco-driving problem as a data-driven optimization problem, modeled the duration of the red light as a random variable, and analyzed the time required by a vehicle to pass through the signalized intersection.

Several further methods have been proposed to address eco-driving issues [92–94]. In Ref. [92], Miyatake et al. presented an eco-driving method based on dynamic programming; they evaluated its effectiveness on a simulated road with a traffic signal and obtained good performance. HomChaudhuri et al. [93] developed a model predictive control method for connected vehicles in urban traffic; the control system is decentralized, with each vehicle forming its own strategy based on its neighboring vehicles, and the experimental results show that it is computationally effective. De Nunzio et al. [94] proposed a method for consuming less energy as a vehicle travels through signalized intersections, solving the nonconvex control problem with a suboptimal strategy.
After retrieving convexity, they solved the optimization problem using a given route to determine the vehicle crossing time at each signal intersection. The proposed method produces a better result which could be used for online verification and obtained a lower computational processing time.In order to improve the traffic safety and avoid traffic incident and crashes, Zhang and Cassandras [95] introduced a control strategy method. It aimed to reduce energy consumption based on the maximum throughput criteria. First, they highlighted the problem between controlling connected vehicles (CVs) and nonCVs traveling on the road to reduce energy consumptions. The simulation results demonstrated that the proposed method significantly reduces energy consumption by increasing penetration rates of CVs on the road. The problem associated with eco-driving algorithms is that they need an accurate traffic condition such as number of traffic flow, road strategy, and safety conditions. These traffic conditions could be obtained by various equipment that are placed on the road such as electronic sensors, loop detectors, and a macroscopic traffic model. Obtaining parameter values from these equipment remain a challenging issue due to uncertainty and difficulty to predict driver’s decision-making for selecting the traffic route and to analyze the safety margin for pedestrians.Autonomous and connected vehicles provide a significant reduction in traffic consumption since they can accurately receive information and guidance from eco-driving algorithms [102]. When connected vehicles form a platoon and communicate with each other, they can reduce energy consumption along a given traffic route at which the platoons were formed even if they have different traveling destinations [103].
### 4.2. Vehicle Eco-Routing
Eco-routing plays a significant role in planning and determining energy-efficient routes. It determines an optimal route based on user requirements, road maps, and network attributes such as traffic flow, traffic speed, and fuel consumption [83]. A cost function $g$ is associated with each link, representing the traffic emissions of a vehicle traveling on that link. In general, $g$ depends on time $t$, since traffic network conditions change rapidly; in static eco-routing algorithms, $g$ depends only on the link [13].

Boriboonsomsin et al. [96] introduced an eco-routing navigation system using real-time traffic information. The eco-routing algorithm determines the route between trip origin and destination; a dynamic roadway database is maintained by a data fusion algorithm, and real-time vehicle trajectories are evaluated to estimate the energy consumption of each link. Ericsson et al. [97] introduced a method for estimating potential reductions in fuel consumption. Their eco-routing algorithm identifies and classifies the roads of the network into groups based on GPS data. The analysis used a large database of real traffic patterns collected from the road network, from which different routes were extracted to evaluate the fuel-saving navigation system; model performance was assessed during peak and off-peak hours over the entire day.

Generally, eco-routing algorithms consider only the cost of the links along a vehicle route and ignore vehicle behavior at signalized intersections, even though this behavior plays an important role in traffic emissions and fuel consumption. Several methods have therefore been proposed that explicitly model energy consumption at road intersections. Liu [98] proposed an eco-routing algorithm for signalized traffic that integrates a microscopic vehicle emission model into a Markov decision process; high-resolution traffic data consisting of vehicle entry and exit records were used to evaluate its performance. De Nunzio et al. [99] proposed a biobjective eco-routing method for urban traffic environments. They formulated the routing problem via weighted-sum optimization and presented a real-time search algorithm that offers drivers different sets of solutions; the simulation results show that these strategies can reduce both energy consumption and traveling time.

Kluge et al. [100] computed energy-efficient routes in an urban road network. They first analyzed the energy consumption of the network, derived traffic measurements from a mesoscopic traffic model, and then solved a time-dependent eco-routing problem using Dijkstra's algorithm. Heuristic search can also be used to determine energy-efficient routes [101]; by working on time-dependent graphs, it overcomes the route-planning complexity caused by uncertain vehicle arrival times at the destination [101].

Eco-routing algorithms require considerable computational time to plan an energy-efficient route. To reduce this time, one can additionally aim to reduce vehicle traveling time or minimize route distance.
Moreover, the computational time can also be reduced by employing a multiobjective eco-routing algorithm, which aims not only to minimize traffic emissions but also to reduce vehicle traveling time and distance; a few works in this direction have been presented [99, 104].
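To ground the link-cost formulation above, the following minimal Python sketch runs Dijkstra's algorithm over a toy road graph whose edge weights are static per-link energy costs $g$. The graph topology and energy values are illustrative assumptions, not data from the cited studies.

```python
import heapq

# Minimal static eco-routing sketch: Dijkstra's algorithm over a road
# graph whose link weights are energy costs g(link) instead of distances.

def eco_route(graph, source, target):
    """graph: {node: [(neighbor, energy_cost), ...]}.
    Returns (total energy, node list) of the minimum-energy route."""
    queue = [(0.0, source, [source])]
    best = {source: 0.0}
    while queue:
        energy, node, path = heapq.heappop(queue)
        if node == target:
            return energy, path
        for nxt, cost in graph.get(node, []):
            cand = energy + cost
            if cand < best.get(nxt, float("inf")):
                best[nxt] = cand
                heapq.heappush(queue, (cand, nxt, path + [nxt]))
    return float("inf"), []

# Toy network: the route A-B-D is shorter in hops but costs more energy
# (e.g., B-D climbs a steep grade), so A-C-D wins.
road_graph = {
    "A": [("B", 1.2), ("C", 1.5)],
    "B": [("D", 3.0)],
    "C": [("D", 1.4)],
}
print(eco_route(road_graph, "A", "D"))  # -> (2.9, ['A', 'C', 'D'])
```

Replacing the scalar weights with time-indexed cost functions would turn this into the time-dependent variant discussed in [100, 101].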
## 5. Traffic Control Design Strategy
This section discusses traffic control design strategies that aim to reduce traffic emissions and the energy consumed by vehicles. Such strategies can improve traffic flow and traffic management by controlling vehicle speed limits and traffic lights, splitting traffic flow at signalized intersections, and using different actuators. Traffic control strategies rely on various actuators, as illustrated in Figure 4; below we discuss the actuators that can be used to implement control strategies in different traffic environments.

Figure 4: Traffic control design strategy.

Traffic control strategies minimize traffic emissions and energy by applying different optimization methods. In the past, several algorithms were proposed to alleviate traffic incidents and congestion and to eliminate shock waves through approaches such as flow equalization and homogenization, rather than by minimizing traffic emissions or energy explicitly [13]. Most of these approaches are designed to reduce vehicle accelerations, which in turn reduces traffic emissions and fuel consumption [105]; they can also reduce vehicle speeds [106]. Before implementing such approaches, one should analyze comprehensively how much emission or energy reduction they can actually deliver. Table 3 summarizes research works on traffic control design and strategy.

Table 3: Summary of works on traffic control design and strategy.

| Reference | Paper | Year | Strategy | Features/objectives | Application |
| --- | --- | --- | --- | --- | --- |
| [107] | Walraven et al. | 2016 | Speed limit control | Traffic flow optimization based on reinforcement learning. | ITS |
| [108] | Hegyi et al. | 2008 | Speed limit control | A framework for speed limit control using shock wave theory. | Sustainable smart city/ITS |
| [109] | Zu et al. | 2018 | Speed limit control | Reducing vehicle fuel consumption using the COPERT model on a freeway traffic network. | Sustainable smart city |
| [63] | Zegeye et al. | 2012 | Speed limit control | A predictive traffic controller optimizing control law parameters to determine control inputs. | Sustainable smart city/ITS |
| [110] | Van den Berg et al. | 2007 | Speed limit control | An MPC method based on optimal control inputs for urban and freeway traffic networks. | ITS |
| [111] | Tajalli and Hajbabaie | 2018 | Speed limit control | An MPC approach for handling variations in traffic demand. | ITS |
| [112] | De Nunzio et al. | 2014 | Speed limit control | Reducing energy consumption using a macroscopic steady-state analysis. | Sustainable smart city/ITS |
| [113] | Liu and Tate | 2004 | Speed limit control | Determining network effects of an intelligent speed adaptation (ISA) system. | ITS |
| [114] | Panis et al. | 2006 | Speed limit control | Emission model built from empirical measurements and the vehicle emission type. | Sustainable smart city |
| [115] | Zhu and Ukkusuri | 2014 | Speed limit control | Tackling traffic demand uncertainty with a speed limit control model. | Sustainable smart city/ITS |
| [116] | Khondaker and Kattan | 2015 | Speed limit control | An overview of mechanisms for controlling speed limits. | ITS |
| [117] | Stern et al. | 2019 | Mobile actuators | Improving air quality using autonomous vehicles. | Sustainable smart city/ITS |
| [118] | Yang and Jin | 2014 | Mobile actuators | A control-theoretic formulation based on intervehicle communication. | ITS |
| [119] | Wu et al. | 2018 | Mobile actuators | Stabilizing traffic flow with an autonomous vehicle via string stability and frequency-domain analysis. | ITS |
| [120] | Liu et al. | 2019 | Mobile actuators | A country-level evaluation of greenhouse gas emissions. | Sustainable smart city |
| [121] | Xu et al. | 2011 | Dynamic routing | Integrated traffic control based on MPC. | Sustainable smart city/ITS |
| [82] | Luo et al. | 2016 | Dynamic routing | A multiobjective route diversion method based on MPC. | ITS |
| [122] | Wang et al. | 2018 | Dynamic routing | Review of techniques for sustainable transportation systems and smart city applications. | Sustainable smart city |
### 5.1. Speed Limit Control
Speed limit control is used to regulate traffic flow. It aims to minimize traffic emissions and energy consumption by adjusting speed limits, which may differ across locations in the road network.

Various methods have been proposed to control vehicle speed limits in the freeway traffic environment. A few works focused on eliminating shock waves rather than directly reducing traffic emissions and energy. Walraven et al. [107] proposed a traffic flow optimization method based on reinforcement learning. They formulated the traffic flow problem as a Markov decision process and employed the Q-learning algorithm to determine the maximum driving speed allowed on the highway. The simulation results revealed that the method reduces congestion in heavy traffic. Hegyi et al. [108] introduced a speed limit control method based on shock wave theory: a traffic control algorithm detects the shock wave and then adjusts the speed limit whenever the shock wave is deemed resolvable.

Several other methods aim to reduce traffic emissions and energy directly, employing optimization techniques that can significantly reduce vehicle traveling time while controlling the speed limits. Zu et al. [109] solved a convex optimization problem for macroscopic traffic control, aiming to reduce vehicle fuel consumption with the COPERT model on a freeway network; the convex formulation yields energy-efficient scenarios under real-time traffic conditions. Zegeye et al. [63] introduced a predictive traffic controller using parameterized control policies. The method combines MPC with a state feedback control law and optimizes the control law parameters that determine the control inputs, which significantly reduces computational complexity. The controller was validated in a freeway traffic environment.

Model predictive control offers numerous opportunities for controlling traffic lights and limiting vehicle speed. It accommodates different traffic conditions and models and can handle nonconvex optimization problems. However, its computational burden must be reduced for real-time traffic scenarios; a parameterized MPC is therefore useful for MPC-based macroscopic traffic flow control without compromising computational performance [63].

Van den Berg et al. [110] introduced an integrated approach for urban and freeway traffic networks, employing MPC with optimal control inputs obtained from numerical optimization; the simulations show a substantial reduction in congestion. Tajalli and Hajbabaie [111] proposed an MPC approach to handle variations in traffic demand, designing a mathematical model for dynamic speed harmonization in urban traffic networks to enhance traffic flow. A few works focused specifically on reducing traffic emissions and fuel consumption: De Nunzio et al. [112] presented a method for reducing traffic energy consumption using a macroscopic steady-state analysis, assessing the system behavior via boundary conditions based on traffic light timing and a control policy that varies the vehicle speed limits. The effectiveness of the model was demonstrated on a microscopic simulation network.

Liu and Tate [113] studied the network effects of an intelligent speed adaptation (ISA) system, refining a traffic microsimulation model to represent ISA throughout the road network. The system was evaluated on a real-world traffic network, assessing its impact on congestion and speed distribution; the results show that ISA is effective across different traffic conditions. Its main limitation is that simulating ISA on microscopic traffic models requires large amounts of traffic data, and hence substantial computational time. Panis et al. [114] introduced a model relating traffic emissions and speed limits: an emission model built from empirical measurements for each vehicle emission type, with the traffic control model obtaining the instantaneous speed and acceleration of each vehicle on the road network. The model was tested at Ghentbrugge, Belgium.

Learning-based methods have also been used to control speed limits. Zhu and Ukkusuri [115] proposed a speed limit control model for tackling traffic demand uncertainty. They first developed a link dynamic model that simulates traffic flow propagation under speed limit control, then formulated the speed limit problem as a Markov decision process (MDP) and solved it as a real-time traffic control method. A case study on the Sioux Falls network demonstrated the model's effectiveness. Khondaker and Kattan [116] present a detailed overview of mechanisms for controlling speed limits.
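As a concrete illustration of the reinforcement-learning formulation used in [107], the following minimal sketch applies tabular Q-learning to choose a speed limit from a discretized congestion state. The states, actions, reward model, and transition logic are toy assumptions of ours, not the simulation environment of the cited work.

```python
import random

# Tabular Q-learning sketch: the agent picks a speed limit for a freeway
# segment based on the observed congestion level.
STATES = ["free_flow", "dense", "congested"]
ACTIONS = [60, 80, 100, 120]          # candidate speed limits [km/h]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def simulate(state, limit):
    """Toy environment; reward is a negative delay/emission proxy."""
    if state == "congested":
        # Low limits help dissolve the jam; high limits sustain it.
        return (-1.0, "dense") if limit <= 80 else (-3.0, "congested")
    if state == "dense":
        # High limits risk a breakdown into congestion.
        if limit <= 100:
            return -1.0, random.choice(["free_flow", "dense"])
        return -2.0, "congested"
    # Free flow: low limits just slow everyone down.
    if limit >= 100:
        return -0.5, random.choice(["free_flow", "dense"])
    return -1.5, "free_flow"

state = "free_flow"
for _ in range(20000):
    if random.random() < EPSILON:                      # explore
        action = random.choice(ACTIONS)
    else:                                              # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, next_state = simulate(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

for s in STATES:                                       # learned greedy policy
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]), "km/h")
```

After training, the greedy policy recovers the intuitive rule of lowering the limit as congestion builds up.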
### 5.2. Mobile Actuators
This section discusses mobile actuators, which rely on the movement of the vehicles themselves to control the traffic around them. Vehicles acting as mobile actuators aim to reduce traffic emissions and fuel consumption.

Stern et al. [117] proposed a method for improving air quality using autonomous vehicles. The authors examined how far vehicle emissions across the whole traffic flow can be reduced; they collected velocity and acceleration data in experiments in which a single autonomous-capable vehicle dampened traffic waves among roughly 21 human-piloted vehicles. Yang and Jin [118] proposed a control-theoretic formulation based on intervehicle communication. They designed a control variable that follows the speed of the vehicle ahead without changing the average speed, and they analyzed one independent constant strategy and three cooperative green driving strategies. Wu et al. [119] proposed a method for stabilizing traffic flow with an autonomous vehicle. They formulated the problem in terms of string stability and optimal traffic conditions using frequency-domain analysis, and determined traffic stability while enforcing safety constraints on the autonomous vehicle.

Liu et al. [120] presented a country-level evaluation of greenhouse gas emissions, examining the effects of deployed autonomous vehicles, including projected vehicle penetration rates by 2050 and the resulting changes in fuel consumption.
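The following minimal sketch illustrates the wave-damping principle exploited in [117, 119]: an automated follower tracks a moving-average (smoothed) version of its leader's speed rather than reacting to every fluctuation. The leader profile, smoothing window, and acceleration bounds are illustrative assumptions of ours, and spacing/safety constraints are omitted for brevity.

```python
import numpy as np

DT = 0.1                                             # simulation step [s]
t = np.arange(0.0, 60.0, DT)
# Oscillating stop-and-go-like leader speed, alternating 12 and 18 m/s.
leader_v = 15.0 + 3.0 * np.sign(np.sin(0.5 * t))

window = int(10.0 / DT)                              # average last 10 s of leader speed
follower_v = np.empty_like(leader_v)
follower_v[0] = 15.0
for k in range(1, len(t)):
    target = leader_v[max(0, k - window):k].mean()   # smoothed target speed
    # First-order speed tracking with bounded acceleration [m/s^2].
    accel = np.clip((target - follower_v[k - 1]) / 2.0, -1.5, 1.5)
    follower_v[k] = follower_v[k - 1] + DT * accel

print("leader speed std:   %.2f m/s" % leader_v.std())
print("follower speed std: %.2f m/s" % follower_v.std())  # much smaller
```

The follower's much smaller speed variance is the mechanism by which a single automated vehicle can keep a wave from propagating further upstream.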
### 5.3. Dynamic Routing
This section discusses dynamic routing, another approach for reducing traffic emissions and fuel consumption. It consists of reorganizing traffic flow over the road network efficiently by controlling split ratios [13]. The controller first analyzes and predicts the optimal routes for different traffic flow directions and communicates them to vehicle users through radio communication devices, variable message signs, and similar channels [53].

The dynamic routing problem is typically cast as a regulated system optimization. Xu et al. [121] proposed an integrated traffic control model based on MPC. It minimizes traffic congestion, with the user equilibrium characterized by a density distribution over all used routes; the drivers' information acquisition is modeled using adaptive Kalman filtering theory. A case study shows that the model can improve traffic efficiency and reduce the cost of the traffic management system. Luo et al. [82] introduced a multiobjective route diversion method based on MPC. Routes recommended by the traffic authority serve as the control variable, and the split ratio is determined from the route recommendations via the driver compliance rate; the diversion controller solves the MPC problem with a parallel tabu search algorithm.

Traffic emissions and energy cost can also influence dynamic routing, which then selects main routes that significantly reduce traffic emissions and energy consumption. Wang et al. [122] discussed various dynamic road pricing schemes and reviewed techniques used for sustainable transportation systems and smart city applications.
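To illustrate the split-ratio control loop described above, the following receding-horizon sketch chooses, at every step, the fraction of incoming demand to send down each of two alternative routes by minimizing a predicted queue cost. The queue dynamics, demand, and capacities are toy assumptions, not the MPC formulations of [121] or [82].

```python
import numpy as np

HORIZON = 5                          # prediction horizon [steps]
DEMAND = 40.0                        # vehicles entering per step
CAPACITY = np.array([30.0, 25.0])    # service rates of route 1 and route 2

def predict_cost(queues, beta):
    """Simulate HORIZON steps with a fixed split ratio beta and return the
    accumulated queue length (a delay/emission proxy)."""
    q, cost = queues.copy(), 0.0
    for _ in range(HORIZON):
        inflow = np.array([beta, 1.0 - beta]) * DEMAND
        q = np.maximum(q + inflow - CAPACITY, 0.0)
        cost += q.sum()
    return cost

def mpc_split(queues, candidates=np.linspace(0.0, 1.0, 21)):
    """Pick the candidate split ratio with the lowest predicted cost."""
    return min(candidates, key=lambda b: predict_cost(queues, b))

# Closed loop: apply only the first move, re-measure, re-optimize.
queues = np.array([20.0, 0.0])       # route 1 starts with a backlog
for step in range(10):
    beta = mpc_split(queues)
    inflow = np.array([beta, 1.0 - beta]) * DEMAND
    queues = np.maximum(queues + inflow - CAPACITY, 0.0)
    print(f"step {step}: split={beta:.2f}, queues={queues.round(1)}")
```

Re-optimizing at every step from the measured queue state, rather than committing to a fixed plan, is the receding-horizon idea underlying the MPC-based routing schemes surveyed here.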
## 6. Open Research Challenges and Recommendations
We wrap up our survey by discussing open research challenges and recommendations, as illustrated in Figure 5. These were distilled from our review of existing traffic control modeling techniques. We found that various challenges still exist in traffic control modeling and strategy, and that comprehensive research is needed to design sophisticated algorithms that address them.

Figure 5: Open research challenges and recommendations.
### 6.1. Open Challenges
#### 6.1.1. Challenges in Traffic Control Methodologies
This section discusses the control methodologies of freeway traffic management systems. Traffic control must address both environmental and traffic safety issues, and improving traffic operations remains a complex control problem for the regulation of traffic management systems. In our opinion, researchers should focus on control models grounded in traffic analysis to design robust traffic control algorithms that master the complexity of the control system and its objectives.

Several past works demonstrate that feedback control strategies are reliable schemes for reducing traffic emissions and enhancing traffic safety [72, 77]. These schemes usually require traffic measurements and are then integrated with traffic simulators to assess the performance of the calibrated control parameters.

Optimal control strategies rely on the solution of finite-horizon problems and receding-horizon schemes [48, 71], and model predictive control (MPC) strategies yield improvements across different types of control methods [64]. However, optimization-based schemes require substantial computational time, which hinders real-time application; adequate computational resources can mitigate their complexity and computational burden. Furthermore, centralized and model-based control schemes need traffic state measurements across a large part of the network, which is a critical obstacle to implementing them in practical applications; equipment costs and computational efficiency are their main drawbacks. To overcome these issues, optimal control schemes can be implemented as predictive decentralized control techniques [10]. In Refs. [80] and [48], the authors proposed ramp metering schemes that do not require large computational times for real-time applications and that can also be used to overcome the limitations of feedback schemes.
#### 6.1.2. Challenges in the Modeling Framework
The selection of a modeling framework remains a critical issue: the framework must be reliable and robust enough to address and overcome traffic control problems. For instance, estimates of pollutant emissions and fuel consumption depend directly on the ability of the control model to estimate and predict traffic flow on the road. In this regard, a first-order macroscopic traffic flow model does not provide satisfactory results and is not the best choice for traffic control aimed at reducing emissions [82]. Vehicle speed plays an important role in analyzing and determining the severity of traffic incidents, and traffic safety models usually correlate incident occurrence with factors such as traffic flow volume, road characteristics, and vehicle speed; consequently, a first-order macroscopic traffic flow model can still be adopted to analyze and estimate the crash risk associated with traffic incidents.

A second-order macroscopic traffic flow model captures the evolution of traffic speed more faithfully and can be used to significantly reduce traffic emissions while also enhancing traffic and road safety. In past years, various studies have increased the precision of emission estimation by deriving average accelerations from the traffic flow models [48, 63, 64, 80]. More specifically, macroscopic simulation tools provide accurate analyses of traffic safety and emissions; however, such tools cannot be used directly to develop model-based control strategies.

The level of detail of the traffic control model must also be selected carefully, since more detailed models require more information. For instance, microscopic simulation requires parameter settings covering vehicle types (bus and public transport), fuel capacity, road structure, and environmental conditions (temperature, humidity, and air quality). A dispersion model requires prior knowledge of the traffic evolution: its inputs are the pollutant quantities generated by vehicles and other useful information such as fuel consumption, wind direction, and road structure. Traffic safety models usually rely on processing traffic data obtained from loop detectors; the unusual events caused by incidents are difficult to describe and predict, so safety methods correlate crash events with traffic parameters such as traffic flow and density, driver behavior, and weather conditions.
#### 6.1.3. Challenges in the Control Strategy
The fundamental step of the traffic control framework is to select the type of control strategy best able to achieve both the safety and the sustainability goals discussed in this survey. From the literature review, one can identify the control strategies suitable for each traffic control objective. For example, ramp metering combined with other control techniques can yield better performance and help achieve sustainability goals; note, however, that many such applications can lead to long queues on the road, thereby increasing emissions and the likelihood of traffic incidents and crashes. Route guidance approaches have been applied successfully, leading to reduced crash risk [48, 58, 65] as well as reduced emissions and fuel consumption. Variable speed limits have been combined with ramp metering to smooth traffic flow and reduce emissions [63, 65, 76]; these strategies can also improve traffic and road safety by reducing risky interactions among vehicles.

References [71, 77] note that the effectiveness of variable speed limits remains a critical issue that depends on the speed levels vehicles will accept. We also observed that traffic control issues persist across results obtained from different simulation tools. These traffic control problems target environmental goals by reducing emission-related performance indicators and enhancing safety indicators; nevertheless, such schemes aim to minimize the number of traffic incidents in freeway environments rather than perform real-time crash analysis.

The aforementioned schemes can be extended into multiclass frameworks and evaluated against traditional traffic control techniques, allowing traffic control parameters to be defined per vehicle class. Pasquale et al. [75] presented a multiclass ramp metering technique: they modeled two types of vehicles in a multiclass framework and determined the class of each vehicle type, and the simulation results demonstrated that the method offers feasible directions for multiclass frameworks. Pasquale et al. [44] proposed a multiclass routing control algorithm that assigns priorities to specific vehicle classes in a predefined structure. Note that multiclass traffic control modeling requires more accurate modeling strategies than single-class control models; multiclass safety models can assess the impact of each vehicle class on total incident numbers, but they require large amounts of data for calibration.

Most of the multiclass schemes covered in this survey aim to improve traffic safety, reduce traffic congestion, and achieve sustainability in urban cities. The main advantage of multiclass schemes is the ability to evaluate whether or not sustainability objectives conflict; reliable cost function parameters must then be chosen to trade these objectives off against each other and obtain better traffic control algorithms. Zegeye et al. [63] introduced a method that reduces traffic emissions together with the total time spent; however, these objectives can conflict and traffic flow may deteriorate. Pasquale et al. [75] investigated reducing travel times and emissions, treating them as nonconflicting sustainability objectives, and implemented control strategies that significantly reduce traffic congestion and improve traffic flow. In Ref. [77], Lee et al.'s scheme demonstrated that improving traffic safety through speed variation may increase vehicle traveling time. However, in Ref. [48], Pasquale et al. showed with multiramp metering control that congestion mitigation brings significant improvements in traveling times and in traffic and road safety, although the degree to which each objective is met varies across solutions, reflecting their competitive behavior.
### 6.2. Recommendations for Obtaining Sustainability Goals
This section presents recommendations for achieving sustainability goals in the freeway traffic environment. Internet of Vehicles technology and automated driving technology can significantly improve traffic safety and reduce traffic emissions, and they should be adopted to meet the future sustainability goals of traffic systems.
#### 6.2.1. Technology Transformations
The automotive industry has undergone significant changes in recent years and is shifting its focus toward electric and automated vehicles, since traffic safety, environmental, and sustainability issues are more challenging for traditional vehicles. Car manufacturers now produce automated vehicles equipped with automated components and various intelligent features. These features can significantly improve road safety, reduce fuel consumption, and enhance the overall driving experience.
#### 6.2.2. Sensing Equipment Technologies
Vehicular technologies such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [70, 123–125] provide a connected environment in which vehicles communicate directly with other vehicles, infrastructure, and the network, perform various control operations, and significantly reduce traffic emissions [126]. Recently, several control strategies employing truck platooning policies have been applied in logistics and freight transportation to achieve substantial reductions in fuel consumption [127, 128].
#### 6.2.3. Connected and Automated Vehicles Technologies
Connected and autonomous vehicle (CAV) technologies bring significant improvements to traffic control and management, including a reduced risk of collisions caused by driver negligence. CAVs improve self-driving abilities and provide fast, efficient communication between vehicles, which reduces vehicle travel time, improves road and traffic safety, reduces traffic emissions and energy consumption [129], and provides speed guidance in different traffic environments [130]. The CAV is considered an essential product of intelligent transportation systems (ITS) and comprises features such as advanced decision-making, recognition, and control models [131]. These features help drivers make safe driving decisions while maintaining road safety and reducing environmental impacts [132].
#### 6.2.4. Machine Learning Methods
Recently, machine learning (ML) approaches have gained significant attention from the research community owing to their ability to analyze data, which helps manage large data operations, reduce vehicle emissions, and limit fuel consumption. Neural networks, such as wavelet neural networks [133], are widely used ML methods for estimating traffic emissions and the fuel consumption of a vehicle. Reinforcement learning (RL) methods have been applied successfully to reduce traffic congestion and emissions and can be employed depending on the actuator type.
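As a toy illustration of the emission-estimation idea above, the following sketch fits an ordinary least-squares regressor (a lightweight stand-in for the neural network models cited) mapping speed and acceleration to an emission rate. The synthetic data, feature map, and coefficients are illustrative assumptions, not a calibrated model such as COPERT or a wavelet neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
speed = rng.uniform(0, 30, 500)            # speeds [m/s]
accel = rng.uniform(-3, 3, 500)            # accelerations [m/s^2]
# Synthetic "ground truth": emissions grow with speed and harsh acceleration.
emission = 0.5 + 0.05 * speed + 0.4 * np.maximum(accel, 0) ** 2
emission += rng.normal(0, 0.05, 500)       # measurement noise

# Quadratic feature map and ordinary least-squares fit.
X = np.column_stack([np.ones_like(speed), speed, accel,
                     speed * accel, speed ** 2, accel ** 2])
coeffs, *_ = np.linalg.lstsq(X, emission, rcond=None)

def predict(v, a):
    """Predicted emission rate for a (speed, acceleration) pair."""
    return np.array([1.0, v, a, v * a, v ** 2, a ** 2]) @ coeffs

print("predicted emission at 20 m/s, +2 m/s^2:", predict(20.0, 2.0))
```

In practice, such a learned emission model would be trained on measured driving data and then embedded in the eco-driving and traffic control loops discussed earlier.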
## 6.1. Open Challenges
### 6.1.1. Challenges in Traffic Control Methodologies
This section discusses the control methodologies of freeway traffic management system. The traffic control highlights environmental and traffic safety issues. An improvement in traffic operation remains a complex control problem for the regulation of traffic management systems. In our opinion, researchers should focus on control models based on traffic analysis to design and develop robust traffic control algorithms in order to solve the complexity of the control system and its objectives.In the past, several works have been presented, which demonstrates that the feedback control strategies and techniques were reliable schemes for reducing traffic emission and enhancing traffic safety [72, 77]. These schemes usually require traffic measurements and then integrate with traffic simulators to assess the performance of calibrated control parameters.The optimum control strategy relies on the solution of finite horizon problems and receding horizon schemes [48, 71]. Also, the model control predictive (MPC) strategies produce better improvement in different types of control methods [64]. However, the optimization-based schemes required numerous amounts of computational time to process the real-time application. These schemes could be based on models using an adequate power system, which can resolve the complexity and computational issue of optimization-based schemes. Furthermore, centralized and model-based control schemes use large traffic state measurements on the network, which is considered a critical issue for implementing traffic control schemes for practical applications. The equipment costs and computational efficiency are the main drawbacks of these schemes. To overcome these issues, optimum control schemes can be processed as predictive decentralized control techniques [10]. In Ref [80] and Ref [48], the authors proposed ramp metering schemes, which do not require a large amount of computational time for real-time applications. Also, these can be used to overcome the limitation of feedback schemes.
### 6.1.2. Challenges in the Modeling Framework
The selection of a modeling framework remains a critical issue, which must be reliable and robust to address and overcome traffic control problems. For instance, the pollution emission and fuel consumption by the vehicles contributes directly to assessing the ability of the control model to estimate and predict traffic flow on the road. In this regard, the first-order macroscopic traffic flow model does not provide satisfactory results and it is not the best choice for traffic control used for reducing traffic emissions [82]. The speed of a vehicle plays an important role to analyze and determine the severity of traffic incidents. Traffic safety models usually correlate with the occurrence of an incident event such as traffic flow volume, road characteristics, and vehicle speed. Consequently, a first-order macroscopic traffic flow model could be adopted to analyze and estimate the crash risk associated with traffic incidents.The second-order macroscopic traffic flow model provides robust evolution of traffic speed and could be used to significantly reduce traffic emission and also enhance traffic and road safety. In the past years, various studies have been performed by increasing the precision of emission estimation and average acceleration was reduced from the traffic flow models [48, 63, 64, 80]. More specifically, macroscopic simulation tools obtained an accurate analysis of traffic safety and emissions. However, they cannot apply the tools to develop model-based control strategies.An appropriate selection step should be considered for obtaining detail of the traffic control model. However, these models required more information to process traffic control models. For instance, the microscopic simulation requires parameter settings in terms of vehicle types (bus and public transport), fuel capacity, road structure, and environment conditions (temperature, humidity, and air). In the dispersion model, prior knowledge of traffic evolution is required. It processes the input to the pollution quantity, which is generated by vehicles, and other useful traffic information such as fuel consumption, wind direction, and road structure. Traffic safety models usually rely on the processing of traffic data obtained from loop detectors. The data processing and description of unusual events caused by incidents are difficult to predict. Therefore, the safety methods correlate with events caused by crashes. These events can be evaluated with different traffic parameters such as traffic flow and traffic density, driver behavior, and weather conditions.
### 6.1.3. Challenges in the Control Strategy
The fundamental step of the traffic control framework is to select the type of control strategy best able to achieve the safety and sustainability goals discussed in this survey. From the literature review, one can identify which control strategies suit which control objectives. For example, ramp metering combined with other control techniques can achieve better performance and help meet sustainability goals; note, however, that many such applications can create long on-ramp queues, thereby increasing emissions and the likelihood of traffic incidents and crashes. Several route guidance approaches have been applied successfully and consequently reduced crash risk [48, 58, 65], as well as emissions and fuel consumption. Variable speed limits have been combined with ramp metering to smooth traffic flow and reduce emissions [63, 65, 76]; these strategies can also improve traffic and road safety by reducing risky interactions among vehicles. References [71, 77] note that the effectiveness of variable speed limits remains a critical issue that depends on the speed levels drivers actually accept. We also observed that control issues persist when results are obtained from different simulation tools. The reviewed control problems aim to reduce environmental impact and enhance safety indicators; nevertheless, these schemes seek to minimize the number of traffic incidents in freeway environments rather than to perform real-time crash analysis. The aforementioned schemes can be extended into multiclass frameworks and compared with traditional traffic control techniques, which allows control parameters to be defined per vehicle class. Pasquale et al. [75] presented a multiclass ramp metering technique: they modeled two vehicle types in a multiclass framework and determined the class of each vehicle type, and their simulation results demonstrated that the method provides a feasible direction for multiclass control. Pasquale et al. [44] proposed a multiclass routing control algorithm that assigns priorities to specific vehicle classes in a predefined structure. Note that multiclass traffic control modeling requires more accurate modeling strategies than single-class control models; on the other hand, multiclass safety models can assess the impact of each vehicle class on the total number of incidents. Such multiclass models require a large amount of data and substantial calibration effort. Most of the multiclass schemes covered in this survey aim to improve traffic safety, reduce traffic congestion, and achieve sustainability in urban areas. The main advantage of multiclass schemes is that they make it possible to evaluate whether sustainability objectives conflict; it is then necessary to choose reliable cost-function parameters that trade these objectives off against each other. Zegeye et al. [63] introduced a method that reduces traffic emissions by minimizing the total time spent; however, it can deteriorate traffic flow and introduce conflicting objectives. Pasquale et al. [75] investigated the reduction of travel times and emissions as non-conflicting sustainability objectives and implemented control strategies that significantly reduce congestion and improve traffic flow. In Ref. [77], Lee et al.'s scheme demonstrated that improving traffic safety through speed variation may increase vehicle travel time, whereas in Ref. [48] Pasquale et al. showed that multi-ramp metering control mitigates congestion and thereby yields significant improvements in travel times and in traffic and road safety conditions. Nevertheless, the different solutions achieve these objectives to different degrees, which reflects their competitive behavior.
## 6.2. Recommendations for Obtaining Sustainability Goals
This section presents recommendations that could help achieve sustainability goals in the freeway traffic environment. Developments in Internet-of-Vehicles technology and automated-driving technology could significantly improve traffic safety and reduce traffic emissions; these technologies should be adopted to meet the future sustainability goals of traffic systems.
### 6.2.1. Technology Transformations
The automotive industry has undergone significant changes in recent years and is shifting its focus toward electric and automated vehicles, since traffic safety, environmental, and sustainability issues are more challenging with traditional vehicles. Car manufacturers now produce automated vehicles that embed automated components and various intelligent features. These features can significantly improve road safety, reduce fuel consumption, and enhance the overall driving experience.
### 6.2.2. Sensing Equipment Technologies
Vehicular technologies such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication [70, 123–125] provide a connected environment in which vehicles communicate directly with other vehicles, with infrastructure, and with the network, enabling various control operations and significantly reducing traffic emissions [126]. Recently, several control strategies employing truck platooning policies have been applied in logistics and freight transportation to substantially reduce fuel consumption [127, 128].
### 6.2.3. Connected and Automated Vehicles Technologies
Connected and autonomous vehicle (CAV) technologies bring significant improvements to traffic control and management, including a reduced risk of collisions caused by driver negligence. CAVs improve self-driving abilities and provide fast, efficient communication between vehicles, which reduces travel time, improves road and traffic safety, reduces traffic emissions and energy consumption [129], and provides speed guidance in different traffic environments [130]. The CAV is considered an essential product of intelligent transportation systems (ITS) and comprises features such as an advanced decision-making system, a recognition model, and a control model [131]. These features can help drivers make safe driving decisions while maintaining road safety and reducing environmental impacts [132].
### 6.2.4. Machine Learning Methods
Recently, machine learning (ML) approaches have gained significant attention from the research community because of their ability to analyze large volumes of data, which can help manage large-scale data operations, reduce vehicle emissions, and limit fuel consumption. Neural networks, such as wavelet neural networks [133], are widely used ML methods for estimating traffic emissions and vehicle fuel consumption. Reinforcement learning (RL) methods have been applied successfully to reduce traffic congestion and emissions and can be deployed with different actuator types.
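As a minimal illustration of the reinforcement-learning approach mentioned above, the sketch below performs one tabular Q-learning update; the state and action here (a discretized density level and a speed-limit choice) are placeholders of our own, not a setup taken from the cited studies.

```python
def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.95):
    """One tabular Q-learning update; Q maps (state, action) -> value.

    alpha is the learning rate and gamma the discount factor; both
    values are illustrative assumptions.
    """
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Example: reward an emission reduction observed after lowering the limit.
Q = {}
q_update(Q, state="high_density", action="limit_80", reward=1.0,
         next_state="medium_density", actions=["limit_80", "limit_100"])
```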
## 7. Conclusion
In this survey, we have presented a comprehensive investigation of traffic control strategies for the freeway traffic environment, based on a thorough review of recent papers. Such strategies play an important role in achieving sustainability objectives by reducing traffic emissions, collision risk, and fuel consumption. The literature shows that traffic control strategies have been discussed intensively in recent years, reflecting the research community's significant interest in this area, driven by the rapid transformation of electronics and communication devices; this transformation encourages both the research community and the automobile industry to design and develop robust traffic control systems. We first introduced traffic control modeling approaches, which provide a sound basis for achieving reasonable traffic conditions and sustainable mobility. We then comprehensively discussed various control strategies that can help researchers design robust freeway traffic controllers; these strategies can enhance traffic flow and traffic management and reduce congestion. A comprehensive analysis of existing vehicle control design and traffic control design strategies was presented, whose adoption can help reduce the energy consumed by vehicles. Finally, we covered open research challenges and issues for traffic control in freeway networks, together with recommendations for achieving sustainability goals. This survey reveals the need for focused research on traffic control systems that can overcome safety challenges such as traffic incidents and crashes while also reducing environmental effects. In short, the survey covers traffic control techniques in the freeway traffic environment, fills gaps left by existing surveys, and incorporates recent trends and approaches in traffic control.
---
*Source: 1012206-2022-07-20.xml*
# An Impulsive Periodic Single-Species Logistic System with Diffusion
**Authors:** Chenxue Yang; Mao Ye; Zijian Liu
**Journal:** Journal of Applied Mathematics
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101238
---
## Abstract
We study a single-species periodic logistic type dispersal system in a patchy environment with impulses. On the basis of inequality estimation technique, sufficient conditions of integrable form for the permanence and extinction of the system are obtained. By constructing an appropriate Lyapunov function, conditions for the existence of a unique globally attractively positive periodic solution are also established. Numerical examples are shown to verify the validity of our results and to further discuss the model.
---
## Body
## 1. Introduction
In the practical world, owing to natural enemies, severe competition, or deterioration of the patch environment, dispersal of a species between two or more patches is one of the most prevalent phenomena in nature, and much empirical work and many monographs on population dynamics in spatially heterogeneous environments have appeared (see [1–9] and the references cited therein). On the other hand, many natural and man-made factors (e.g., fire, drought, flooding, deforestation, hunting, harvesting, breeding, etc.) lead to rapid decreases or increases of population numbers at fixed moments. Such sudden changes can often be characterized mathematically in the form of impulses. With the development of the theory of impulsive differential equations [10], various population dynamical models based on impulsive differential equations have been proposed and studied extensively; many important and interesting results on permanence, persistence, extinction, global stability, the existence of positive periodic solutions, bifurcation, and dynamical complexity can be found in [11–17] and the references cited therein. Although considerable research on dispersal and impulses has been reported in the literature, few papers investigate the dynamical behavior of population systems in which both dispersal and impulses are present, even though dispersing species subject to impulses are also prevalent in nature. In our previous paper [18], an impulsive periodic predator-prey system with diffusion was studied, and conditions for permanence, extinction, and the existence of a unique globally stable periodic solution were established. In this paper, we present and study a single-species periodic logistic system with impulses and dispersal among n different patches. Our model takes the form
(1)x˙i(t)=xi(t)(ri(t)-ai(t)xi(t))+∑j=1nDij(t)(xj(t)-xi(t)),t≠tk,xi(tk+)=hikxi(tk),i=1,2,…,n,k=1,2,…,
where ri(t) and ai(t)(i∈I={1,2,…,n}) represent the intrinsic growth rates and the density-dependent coefficients in patch i, respectively. Dij(t)⩾0(i,j∈I) denotes the dispersal rate of the species from patch j to patch i. hikxi(tk)(i∈I) is the regular pulse at time tk of species x in patch i. Throughout this paper, we always assume the following.(C1)
ri(t),ai(t), and Dij(t)(i,j∈I) are continuously periodic functions with common period T, defined on R+=[0,∞) and ai(t)⩾0,D2⩾Dij(t)⩾D1>0(i≠j),Dii(t)≡0 for all i,j∈I and t∈R+.(C2)
hik>0 for all i∈I,k=1,2,… and there exists a positive integer q such that tk+q=tk+T and hi(k+q)=hik for any i∈I.The organization of this paper is as follows. In the next section, some sufficient conditions for the permanence and extinction of system (1) are obtained. In Section 3, conditions for the existence of a unique globally attractively positive periodic solution are also established. Finally, some numerical simulations are proposed to illustrate the feasibility of our results and discuss the model further.
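For intuition before the analysis, system (1) can be integrated numerically between pulse times, applying the multiplicative jump xi(tk+)=hikxi(tk) at each tk. The following Euler sketch is our own illustration (with hypothetical argument names), not code from the paper.

```python
import numpy as np

def simulate_impulsive_dispersal(r, a, D, h, t_pulse, x0, t_end, dt=1e-3):
    """Euler simulation of the impulsive logistic dispersal model (1).

    r(t), a(t): callables returning length-n arrays; D(t): callable returning
    an (n, n) dispersal-rate matrix with zero diagonal; h[k] is the length-n
    array of pulse factors h_{ik} applied at time t_pulse[k] (sorted).
    """
    x = np.asarray(x0, dtype=float)
    t, k, traj = 0.0, 0, [(0.0, x.copy())]
    while t < t_end:
        while k < len(t_pulse) and t_pulse[k] <= t:  # impulse x_i(t_k^+) = h_ik x_i(t_k)
            x = np.asarray(h[k]) * x
            k += 1
        Dm = D(t)
        growth = x * (r(t) - a(t) * x)               # logistic term
        dispersal = Dm @ x - Dm.sum(axis=1) * x      # sum_j D_ij(t) (x_j - x_i)
        x = x + dt * (growth + dispersal)
        t += dt
        traj.append((t, x.copy()))
    return traj
```

The fixed-step scheme applies each pulse at the first grid point at or after tk, which is adequate for visualizing permanence or extinction but not for precision studies.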
## 2. Permanence and Extinction
In this section, applying inequality estimation technique, we get some sufficient conditions on the permanence and extinction of system (1).Theorem 1.
There exists a positive constantM such that limsupt→∞xi(t)<M for all i∈I if
(2)∫0Ta(t)dt>0,
where a(t)=mini∈I{ai(t)}.Proof.
Lethk=maxi∈I{hik} for any k=1,2,…, then we have hk+q=hk and there exists a positive constant H such that function
(3)|h(t,μ)|=|∑t⩽tk<t+μlnhk|⩽∑k=1q|lnhk|⩽H
for all t∈R+ and μ∈[0,T]. Choose r(t)=maxi∈I{ri(t)}, and r(t) is bounded for all t∈R+. Then from conditions (2) and (3), we have two positive constants τ and δ such that
(4)∫0T(r(t)-a(t)τ)dt+∑k=1qlnhk<-δ.
Define the functionV(t)=maxi∈I{xi(t)}. For any t∈R+, there is an i=i(t)∈I such that V(t)=xi(t). Calculating the upper-right derivative of V(t), we obtain
(5)D+V(t)⩽xi(t)(ri(t)-ai(t)xi(t))⩽V(t)(r(t)-a(t)V(t)).
When t=tk, we have V(tk+)=max{xi(tk+)}=max{hikxi(tk)}⩽max{hik}max{xi(tk)}=hkV(tk).
Consider the following auxiliary system:(6)D+w(t)=w(t)(r(t)-a(t)w(t)),w(tk+)=hkw(tk),k=1,2,…
with the initial condition V(0)⩽w(0). If there is a constant M0>0 such that
(7)limsupt→∞w(t)<M0
for any positive solution w(t) of system (6), then, according to the comparison theorem of impulsive differential equations [10], we have V(t)⩽w(t) for all t⩾0. Therefore, choose M=M0, and we will finally have limsupt→∞xi(t)⩽limsupt→∞V(t)⩽limsupt→∞w(t)<M for all i∈I.
Next, we will prove that (7) holds. In fact, for any positive solution w(t) of system (6), we only need to consider the following three cases.
Case
1. There is a t0⩾0 such that w(t)⩾τ for all t⩾t0.
Case
2. There is a t0⩾0 such that w(t)⩽τ for all t⩾t0.
Case
3. w(t) is oscillatory about τ for all t⩾0.
We first consider Case 1. Sincew(t)⩾τ for all t⩾t0, then for t=t0+lT, where l is any positive integer, integrating system (6) from t0 to t, from (4) we have
(8)
$$
w(t)=w(t_0)\exp\!\Big(\int_{t_0}^{t}\big(r(s)-a(s)w(s)\big)\,ds+\sum_{t_0\leqslant t_k<t}\ln h_k\Big)
\leqslant w(t_0)\exp\!\Big(\int_{t_0}^{t_0+T}\big(r(s)-a(s)\tau\big)\,ds+\cdots+\int_{t_0+(l-1)T}^{t_0+lT}\big(r(s)-a(s)\tau\big)\,ds+l\sum_{k=1}^{q}\ln h_k\Big)
\leqslant w(t_0)\exp(-l\delta).
$$
Hence, w(t)→0 as l→∞, which leads to a contradiction.
Then, we consider Case 3. From the oscillation ofw(t) about τ, we can choose two sequences {ρn} and {ρn*} satisfying 0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(9)w(ρn)⩽τ,w(ρn+)⩾τ,w(ρn*)⩾τ,w(ρn*+)⩽τ,w(t)⩾τ∀t∈(ρn,ρn*),w(t)⩽τ∀t∈(ρn*,ρn+1).
For any t⩾ρ1, if t∈(ρn,ρn*] for some integer n, then we can choose integer l⩾0 and constant 0⩽ν<T such that t=ρn+lT+ν. Since D+w(t)⩽w(t)(r(t)-a(t)τ) for all t∈(ρn,ρn*) and t≠tk, integrating this inequality from ρn to t, by (3) and (4) we obtain
(10)w(t)⩽w(ρn)exp(∫ρnt(r(s)-a(s)τ)ds+∑ρn⩽tk<tlnhk)⩽τexp(l(∫0T(r(t)-a(t)τ)dt+∑k=1qlnhk)+∫ρnρn+ν(r(t)-a(t)τ)dt+∑ρn⩽tk<ρn+νlnhk)⩽τexp(-lδ+rT+H)⩽τexp(rT+H),
where r=supt∈[0,T]{|r(t)|+a(t)τ}. If there is an integer n such that t∈(ρn*,ρn+1], then we have w(t)⩽τ⩽τexp(rT+H). Therefore, for Case 3 we always have w(t)⩽τexp(rT+H) for all t⩾ρ1.
Lastly, if Case 2 holds, then we directly havew(t)⩽τexp(rT+H) for all t⩾0.
Choose constantM0=τexp(rT+H)+1, then we see that (7) holds. This completes the proof.Remark 2.
It can be seen from Theorem1 that, in one time period T, if the density-dependent coefficient in patch i (i∈I) is strictly greater than zero and the impulsive coefficient hik is bounded in the same time period, the dispersal species x is always ultimately bounded.Theorem 3.
Assume that all conditions of Theorem1 hold. In addition, there is an i0∈I such that
(11)∫0T(ri0(t)-∑j=1nDi0j(t))dt+∑k=1qlnhi0k>0.
Then system (1) is permanent.Proof.
The ultimate boundedness of system (1) has been proved in Theorem 1. In the following, we mainly prove the permanence of the system; that is, there is a constant m>0 such that
(12)liminft→∞xi(t)>m
for each i∈I and any positive solution x(t)=(x1(t),x2(t),…,xn(t)) of system (1).
From assumptionhi(k+q)=hik, we obtain that there exists a constant H>0 such that function
(13)|hi(t,μ)|=|∑t⩽tk<t+μlnhik|⩽∑k=1q|lnhik|⩽H
for any i∈I,t∈R+ and μ∈[0,T].
Fori=i0, by condition (11) and the boundedness of ai0(t), there are two positive constants τ- and δ- such that
(14)∫0T(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-)dt+∑k=1qlnhi0k>δ-.
Letx(t)=(x1(t),x2(t),…,xn(t)) be any positive solution of system (1). Since
(15)
$$
\dot{x}_{i_0}(t)\geqslant x_{i_0}(t)\Big(r_{i_0}(t)-\sum_{j=1}^{n}D_{i_0 j}(t)-a_{i_0}(t)x_{i_0}(t)\Big),\quad t\neq t_k,
$$
by the comparison theorem of impulsive differential equations, we obtain xi0(t)⩾u(t) for all t⩾0, where u(t) is the positive solution of system
(16)u˙(t)=u(t)(ri0(t)-∑j=1nDi0j(t)-ai0(t)u(t)),u(tk+)=hi0ku(tk),k=1,2,…
with initial condition u(0)=xi0(0).
In the following, we first prove that there is a constantm->0 such that
(17)liminft→∞u(t)>m-
for any positive solution u(t) of system (16). We only need to consider the following three cases.
Case
1. There is a t-⩾0 such that u(t)⩽τ- for all t⩾t-.
Case
2. There is a t-⩾0 such that u(t)⩾τ- for all t⩾t-.
Case
3. u(t) is oscillatory about τ- for all t⩾0.
For Case 1, lett=t-+lT, where l⩾0 is any integer. From (14), we obtain
(18)
$$
u(t)=u(\bar{t})\exp\!\Big(\int_{\bar{t}}^{t}\big(r_{i_0}(s)-\sum_{j=1}^{n}D_{i_0 j}(s)-a_{i_0}(s)u(s)\big)\,ds+\sum_{\bar{t}\leqslant t_k<t}\ln h_{i_0 k}\Big)
\geqslant u(\bar{t})\exp\!\Big(\int_{\bar{t}}^{\bar{t}+T}\big(r_{i_0}(s)-\sum_{j=1}^{n}D_{i_0 j}(s)-a_{i_0}(s)\bar{\tau}\big)\,ds+\cdots+\int_{\bar{t}+(l-1)T}^{\bar{t}+lT}\big(r_{i_0}(s)-\sum_{j=1}^{n}D_{i_0 j}(s)-a_{i_0}(s)\bar{\tau}\big)\,ds+l\sum_{k=1}^{q}\ln h_{i_0 k}\Big)
\geqslant u(\bar{t})\exp(l\bar{\delta}).
$$
(Here $\bar{t}$, $\bar{\tau}$, and $\bar{\delta}$ denote the constants written as t-, τ-, and δ- in the surrounding text.)
Hence, u(t)→∞ as l→∞, which leads to a contradiction.
For Case 3, we choose two sequences{ρn} and {ρn*} satisfying 0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(19)u(ρn)⩾τ-,u(ρn+)⩽τ-,u(ρn*)⩽τ-,u(ρn*+)⩾τ-,u(t)⩽τ-∀t∈(ρn,ρn*),u(t)⩾τ-∀t∈(ρn*,ρn+1).
For any t⩾ρ1, if t∈(ρn,ρn*] for some integer n, then we can choose integer l⩾0 and constant 0⩽ν-<T such that t=ρn+lT+ν-. Since, for all t∈(ρn,ρn*) and t≠tk, we have u˙(t)⩾u(t)(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-), integrating this inequality from ρn to t, then from (13) and (14) we obtain
(20)
$$
u(t)\geqslant u(\rho_n)\exp\!\Big(\int_{\rho_n}^{t}\big(r_{i_0}(s)-\sum_{j=1}^{n}D_{i_0 j}(s)-a_{i_0}(s)\bar{\tau}\big)\,ds+\sum_{\rho_n\leqslant t_k<t}\ln h_{i_0 k}\Big)
\geqslant\bar{\tau}\exp\!\Big(l\Big(\int_{0}^{T}\big(r_{i_0}(t)-\sum_{j=1}^{n}D_{i_0 j}(t)-a_{i_0}(t)\bar{\tau}\big)\,dt+\sum_{k=1}^{q}\ln h_{i_0 k}\Big)+\int_{\rho_n}^{\rho_n+\bar{\nu}}\big(r_{i_0}(t)-\sum_{j=1}^{n}D_{i_0 j}(t)-a_{i_0}(t)\bar{\tau}\big)\,dt+\sum_{\rho_n\leqslant t_k<\rho_n+\bar{\nu}}\ln h_{i_0 k}\Big)
\geqslant\bar{\tau}\exp(l\bar{\delta}-\beta_{i_0}T-H)\geqslant\bar{\tau}\exp(-\beta_{i_0}T-H),
$$
where βi0=supt∈[0,T]{|ri0(t)|+∑j=1nDi0j(t)+ai0(t)τ-}. If there is an integer n such that t∈(ρn*,ρn+1], obviously we have u(t)⩾τ-⩾τ-exp(-βi0T-H). Therefore, for Case 3 we always have u(t)⩾τ-exp(-βi0T-H) for all t⩾ρ1. Let constant m-=τ-exp(-βi0T-H-1). Then m- is independent of any positive solution of system (16) and we finally have that (17) holds.
Lastly, if Case 2 holds, then fromu(t)⩾τ- for all t⩾0, we directly have that (17) holds.
From the fact thatxi0(t)⩾u(t) for all t⩾0, then we have
(21)liminft→∞xi0(t)⩾liminft→∞u(t)>m-.
It follows immediately from (21) that there is a T-0>0 such that ∑i=1nxi(t)⩾m- for all t⩾T-0. Then for any i∈I, when t⩾T-0 and t≠tk we have
(22)x˙i(t)=xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t))+∑j=1nDij(t)xj(t)⩾xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t)-D1)+D1m-.
Obviously, there is a constant β>0 and β is independent of any positive solution of system (1), such that for any i∈I, 0⩽xi(t)⩽β, and all t∈R+ the following inequality holds
(23)xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t)-D1)+D1m->β.
Hence, for any i∈I, by (22) we have
(24)x˙i(t)>β,t≠tk,xi(tk+)⩾h~ixi(tk),k=1,2,…
for all 0⩽xi(t)⩽β and t⩾T-0, where h~i=min1⩽k⩽q{hik}>0.
In order to prove that (12) holds, we only need to consider the following three cases.
Case
1. There is a T-1⩾T-0 such that xi(t)⩾β for all t⩾T-1.
Case
2. There is a T-1⩾T-0 such that xi(t)⩽β for all t⩾T-1.
Case
3. xi(t) is oscillatory about β for all t⩾T-0.
Equation (12) is obviously true if Case 1 holds.
For Case 2, there exists an impulsive timetq*⩾T-1 for some integer q*>0. For any t>tq*+1>tq*⩾T-1, there is an integer p⩾q*+1 such that t∈(tp,tp+1], and from system (24) we have
(25)xi(t)>xi(tp+)+β(t-tp)⩾h~ixi(tp)+β(t-tp)⩾h~i[xi(tp-1+)+β(tp-tp-1)]+β(t-tp)>h~iβτ0,
where τ0=min1⩽k⩽q{tk-tk-1}>0. Therefore, we obtain
(26)liminft→∞xi(t)>h~iβτ0
for any i∈I.
Then, we consider Case 3. We choose two sequences{ρn} and {ρn*} satisfying T-0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(27)xi(ρn)⩾β,xi(ρn+)⩽β,xi(ρn*)⩽β,xi(ρn*+)⩾β,xi(t)⩽β∀t∈(ρn,ρn*),xi(t)⩾β∀t∈(ρn*,ρn+1).
For anyt⩾ρ1, if t∈(ρn,ρn*] for some integer n, we first of all show that ρn must be an impulsive time. Otherwise, there exists a positive constant δ such that there is no pulse in the interval (ρn,ρn+δ)⊂(ρn,ρn*]. Then for any t∈(ρn,ρn+δ), from system (24) we have xi(t)>xi(ρn+)+β(t-ρn)>xi(ρn)⩾β, which leads to a contradiction.
If there is only one impulsive time in the interval(ρn,ρn*](which must be ρn), then from system (24) we get
(28)xi(t)>xi(ρn+)+β(t-ρn)⩾h~ixi(ρn)+β(t-ρn)>h~iβ.
If there are at least twice pulses in(ρn,ρn*], then we can denote (ρn,ρn*]=⋃p=p0q*-1(tp,tp+1]⋃(tq*,ρn*], where tp0=ρn, q*>p0, p0,p and q* are some positive integers. Hence, for any t∈(tp,tp+1], there is xi(t)>xi(tp+)+β(t-tp) from system (24). Moreover, if p=p0, we have
(29)xi(t)>xi(tp+)+β(t-tp)⩾h~ixi(ρn)+β(t-ρn)>h~iβ.
If p>p0, then
(30)xi(t)>xi(tp+)+β(t-tp)⩾h~i[xi(tp-1+)+β(tp-tp-1)]+β(t-tp)>h~iβτ0.
However, when t∈(tq*,ρn*], then
(31)xi(t)>xi(tq*+)+β(t-tq*)⩾h~i[xi(tq*-1+)+β(tq*-tq*-1)]+β(t-tq*)>h~iβτ0.
It follows from (28)–(31) that
(32)xi(t)⩾min{h~iβ,h~iβτ0}
for all t∈(ρn,ρn*] and any i∈I.
If there is an integern such that t∈(ρn*,ρn+1], obviously we have xi(t)⩾β. Therefore, for Case 3 we always have
(33)xi(t)⩾min{h~iβ,h~iβτ0,β}
for all t⩾ρ1 and any i∈I.
From (26) and (33), choose m=mini∈I{h~iβ,h~iβτ0,β}/2. Then we finally have that (12) holds. System (1) is permanent. The proof of Theorem 3 is completed.Remark 4.
It follows from Theorem3 that system (1) is permanent if there is a positive average growth rate (which does not include the dispersal entrance) in one time period T in any patch i0(i0∈I). In paper [4], the authors showed an interesting result that the dispersal species without impulses is permanent in all other patches if it is permanent in a patch. However, we extend this result to a periodic case with impulses.Remark 5.
In paper [18], we studied an impulsive periodic predator-prey system with Holling type III functional response and diffusion; that paper mainly considers the influence of the Holling type functional response and of the impulses, and the conditions of its main result (Theorem 3 there) involve the minima of the coefficients. In the present paper we consider a single-species logistic system. Although the model, having no predator, is simpler than the one in [18], a more accurate and reasonable condition is established here: in a dispersal system, a species with impulses is permanent in all patches if it is permanent in one patch, which improves the minimum-type conditions of the previous paper.Theorem 6.
System (1) is extinct if conditions
(34)∫0Ta(t)dt>0,∫0Tγ(t)dt+∑k=1qlnhk⩽0
hold, where γ(t)=maxi∈I{ri(t)-∑j=1nDij(t)+∑j=1nDji(t)} for all t∈[0,T], and a(t) and hk are defined in Theorem 1.Proof.
In fact, from (34), for any constant ε>0, there is a positive constant δ~ such that
(35)∫0T(γ(t)-εa(t)n)dt+∑k=1qlnhk<-δ~.
Define V(t)=∑i=1nxi(t). When t≠tk, calculating the right-upper derivative of V(t), we have
(36)D+V(t)=∑i=1nx˙i(t)=∑i=1nxi(t)(ri(t)-∑j=1nDij(t)+∑j=1nDji(t)-ai(t)xi(t))⩽γ(t)V(t)-a(t)∑i=1nxi2(t)⩽V(t)(γ(t)-a(t)nV(t)).
When t=tk, we obtain V(tk+)=∑i=1nxi(tk+)=∑i=1nhikxi(tk)⩽maxi∈I{hik}V(tk)=hkV(tk). From this and (35), by an argument similar to the proof of (7), we can obtain V(t)⩽εexp(γ(t)T+H) for t⩾0. Then, by the arbitrariness of ε, we obtain V(t)→0 as t→∞. Finally, we have xi(t)→0 as t→∞ for all i∈I. This completes the proof of Theorem 6.Remark 7.
It can be seen from Theorem6 that system (1) is always extinct if, in one time period T, there are a positive density-dependent coefficient and a nonpositive average growth rate (which includes the dispersal entrance) in the time period in each patch i(i∈I).
## 3. Periodic Solutions
In this section, by constructing an appropriate Lyapunov function, sufficient conditions for the existence of the unique globally attractively positiveT-periodic solution of system (1) are established.Letx(t)=(x1(t),x2(t),…,xn(t)) and x*(t)=(x1*(t),x2*(t),…,xn*(t)) be any two positive solutions of system (1). From Theorem 3, we can obtain that there are constants A>0 and B>0 such that
(37)A⩽xi(t),xi*(t)⩽B∀t⩾0,i∈I.Theorem 8.
Suppose all the conditions of Theorem3 hold. Moreover, if
(38)∫0Tβ(t)dt>0,
where β(t)=mini∈I{ai(t)-∑j=1nDji(t)/A}⩾0, t∈[0,T], then system (1) has a unique globally attractively positive T-periodic solution x*(t)=(x1*(t),x2*(t),…,xn*(t)); that is, any positive solution x(t)=(x1(t),x2(t),…,xn(t)) of system (1) satisfies
(39)limt→∞(xi(t)-xi*(t))=0,i∈I.Proof.
Choose Lyapunov functionV(t)=∑i=1n|lnxi(t)-lnxi*(t)|. Since for any impulsive time tk we have
(40)V(tk+)=∑i=1n|lnxi(tk+)-lnxi*(tk+)|=∑i=1n|lnhikxi(tk)-lnhikxi*(tk)|=V(tk),
then V(t) is continuous for all t⩾0. On the other hand, from (37) we can obtain that for any t∈R+ and t≠tk(41)1B|xi(t)-xi*(t)|⩽|lnxi(t)-lnxi*(t)|⩽1A|xi(t)-xi*(t)|.
For any t∈R+ and t≠tk, calculating the derivative of V(t), we obtain
(42)D+V(t)=∑i=1nsgn(xi(t)-xi*(t))(x˙i(t)xi(t)-x˙i*(t)xi*(t))⩽∑i=1n(-ai(t)|xi(t)-xi*(t)|)+∑i=1n∑j=1nD-ij(t),
where
(43)D-ij(t)={Dij(t)(xj(t)xi(t)-xj*(t)xi*(t)),xi(t)>xi*(t),Dij(t)(xj*(t)xi*(t)-xj(t)xi(t)),xi(t)<xi*(t).
For all t⩾0, we estimate D-ij(t) under the following two cases.(i)
If xi(t)⩾xi*(t), then D-ij(t)⩽(Dij(t)/xi(t))(xj(t)-xj*(t))⩽(Dij(t)/A)|xj(t)-xj*(t)|.(ii)
If xi(t)<xi*(t), then D-ij(t)⩽(Dij(t)/xi*(t))(xj*(t)-xj(t))⩽(Dij(t)/A)|xj(t)-xj*(t)|.
It follows from the estimation ofD-ij(t) and (41) that
(44)D+V(t)⩽∑i=1n(-ai(t)|xi(t)-xi*(t)|)+∑i=1n∑j=1nDij(t)A|xj(t)-xj*(t)|⩽-∑i=1n(ai(t)-∑j=1nDji(t)A)|xi(t)-xi*(t)|⩽-β(t)AV(t).
From this and condition (38), we have V(t)⩽V(0)exp(-A∫0tβ(s)ds)→0 as t→∞. Further more, from (41) we have that (39) holds.
Now let us consider the sequence(x1*(mT,z0),x2*(mT,z0),…,xn*(mT,z0))=z(mT,z0), where m=1,2,… and z0=(x1*(0),x2*(0),…,xn*(0)). It is compact in the domain [A,B]n since A⩽xi*(t)⩽B for all t⩾0 and i=1,2,…,n. Let z- be a limit point of this sequence, with z-=limn→∞z(mnT,z0). Then z(T,z-)=z-. Indeed, since z(T,z(mnT,z0))=z(mnT,z(T,z0)) and z(mnT,z(T,z0))-z(mnT,z0)→0 as mn→∞, we get
(45)∥z(T,z-)-z-∥[A,B]n⩽∥z(T,z-)-z(T,z(mnT,z0))∥[A,B]n+∥z(T,z(mnT,z0))-z(mnT,z0)∥[A,B]n+∥z(mnT,z0)-z-∥[A,B]n⟶0,n⟶∞.
The sequence z(mT,z0),m=1,2,… has a unique limit point. On the contrary, let the sequence have two limit points z-=limn→∞z(mnT,z0) and z~=limn→∞z(mnT,z0). Then, taking into account (39) and z~=z(mnT,z~), we have
(46)∥z--z~∥[A,B]n⩽∥z--z(mnT,z0)∥[A,B]n+∥z(mnT,z0)-z~∥[A,B]n⟶0,n⟶∞,
and hence z-=z~. The solution (x1*(t,z-),x2*(t,z-),…,xn*(t,z-)) is the unique periodic solution of system (1). By (39), it is globally attractive. This completes the proof of Theorem 8.
## 4. Numerical Simulation and Discussion
In this paper, we have investigated a class of single-species periodic logistic systems with impulses and dispersal in n different patches. By means of an inequality estimation technique and a Lyapunov function, we gave criteria for the permanence, extinction, and existence of a unique globally stable positive periodic solution of system (1). To verify the validity of our results and to pose a deeper problem for further discussion, we consider the following two-patch T-periodic dispersal system:
(47)x˙1(t)=x1(t)(r1(t)-a1(t)x1(t))+D12(t)(x2(t)-x1(t)),x˙2(t)=x2(t)(r2(t)-a2(t)x2(t))+D21(t)(x1(t)-x2(t)),t≠tk,x1(tk+)=h1kx1(tk),x2(tk+)=h2kx2(tk),k=1,2,….

We take r1(t)=5-|sin(π/2)t|, r2(t)=2.5+0.5cosπt, a1(t)=1.5, a2(t)=0.8, D12(t)=1.2, D21(t)=0.7, h1k=1.2, h2k=0.8, and tk=0.1k, k=1,2,…. Obviously, r(t)=max{r1(t),r2(t)}=5-|sin(π/2)t|, a(t)=min{a1(t),a2(t)}=0.8, hk=max{h1k,h2k}=1.2, and system (47) is periodic with period T=2. For q=20, we have tk+q=tk+T, h1(k+q)=h1k, and h2(k+q)=h2k for all k=1,2,…. It is easy to verify that h(t,μ) and hi(t,μ) (i=1,2) are bounded for all t∈R+ and μ∈[0,T]. Furthermore, since
(48)∫0Ta(t)dt=1.6>0,∫0T(r1(t)-D12(t))dt+∑k=1qlnh1k=9.9732>0(i0=1),
all the conditions of Theorem 3 are satisfied. Hence, system (47) is permanent. See Figures 1 and 2.

Figure 1: The time series of the permanence of species x.

Figure 2: The phase of the permanence of species x.

However, if the survival environment of the two patches is austere, the intrinsic growth rates will be negative. Hence, if we take r1(t)=-1-|sin(π/2)t|, r2(t)=-4+0.5cosπt and retain all other parameters, then we obtain γ(t)=max{r1(t)-D12(t)+D21(t),r2(t)-D21(t)+D12(t)}=-1.5-|sin(π/2)t| and
(49)∫0Tγ(t)dt+∑k=1qlnhk=-0.6268<0,
hence the conditions of Theorem 6 are satisfied, and we find that any positive solution of system (47) will be extinct. See Figure 3.

Figure 3: The time series and phase of the extinction of species x.

From the illustrations of the theorems, we note that there is a great difference in the choice of the intrinsic growth rates ri(t) (i=1,2) that guarantee the system is permanent or extinct. These differences make us ask what happens if all the parameters satisfy
(50)∫0T(ri(t)-∑j=12Dij(t))dt+∑k=1qlnhik⩽0i=1,2,∫0Tγ(t)dt+∑k=1qlnhk>0.
To this end, we choose r1(t)=2.55-5|sin(π/2)t| and r2(t)=2+0.5cosπt, retaining all other parameters; then
(51)∫0T(r1(t)-D12(t))dt+∑k=1qlnh1k=-0.0198<0,∫0T(r2(t)-D21(t))dt+∑k=1qlnh2k=-1.8629<0,∫0Tγ(t)dt+∑k=1qlnhk=11.3732>0,
so neither the conditions of Theorem 3 nor those of Theorem 6 are satisfied. But from Figures 4 and 5 we find that the system is permanent.

Figure 4: The time series of the permanence of species x.

Figure 5: The phase of the permanence of species x.

Furthermore, if we choose r1(t)=-0.5-|sin(π/2)t|, r2(t)=-4+0.5cosπt and keep all other parameters, then we have
(52)∫0T(r1(t)-D12(t))dt+∑k=1qlnh1k=-1.0268<0,∫0T(r2(t)-D21(t))dt+∑k=1qlnh2k=-12.4629<0,∫0Tγ(t)dt+∑k=1qlnhk=1.3732>0,
which do not satisfy the conditions of any theorem. But from Figure 6 we see that any positive solution of system (47) is extinct.

Figure 6: The time series and phase of the extinction of species x.

Remark 9.
The above analysis shows that the sufficient conditions of our theorems leave a gap. A challenging open problem is to find necessary and sufficient conditions (such that the system is permanent when they hold and extinct otherwise) for the permanence and extinction of the system. Throughout Figures 1–6, we always take the initial condition x(0)=(x1(0),x2(0))=(2,3).
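As a quick numerical cross-check of the condition values reported in this section, the period integrals and pulse sums can be evaluated directly; below is a minimal sketch for the permanence condition (48) (the function names are ours, and the same pattern applies to (49), (51), and (52)).

```python
import numpy as np
from scipy.integrate import quad

T, q = 2.0, 20                                       # period; pulses per period (tk = 0.1k)

r1 = lambda t: 5.0 - abs(np.sin(np.pi * t / 2.0))    # intrinsic growth rate in patch 1
D12, h1k = 1.2, 1.2                                  # dispersal rate and pulse factor

integral, _ = quad(lambda t: r1(t) - D12, 0.0, T)    # \int_0^T (r1(t) - D12(t)) dt
value = integral + q * np.log(h1k)                   # add the pulse sum sum_k ln h_{1k}
print(f"condition (48) for i0 = 1: {value:.4f} > 0") # ~9.9732, as reported above
```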
---
*Source: 101238-2013-05-09.xml* | 101238-2013-05-09_101238-2013-05-09.md | 23,947 | An Impulsive Periodic Single-Species Logistic System with Diffusion | Chenxue Yang; Mao Ye; Zijian Liu | Journal of Applied Mathematics
(2013) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101238 | 101238-2013-05-09.xml | ---
## Abstract
We study a single-species periodic logistic type dispersal system in a patchy environment with impulses. On the basis of inequality estimation technique, sufficient conditions of integrable form for the permanence and extinction of the system are obtained. By constructing an appropriate Lyapunov function, conditions for the existence of a unique globally attractively positive periodic solution are also established. Numerical examples are shown to verify the validity of our results and to further discuss the model.
---
## Body
## 1. Introduction
In the practical world, on the one hand, owing to natural enemy, severe competition, or deterioration of the patch environment, species dispersal in two or more patches becomes one of the most prevalent phenomena of nature. Many empirical works and monographies on population dynamics in a spatial heterogeneous environment have been done (see [1–9] and the references cited therein). On the other hand, many natural and man-made factors (e.g., fire, drought, flooding deforestation, hunting, harvesting, breeding, etc.) always lead to rapid decrease or increase of population number at fixed moment. Such sudden changes can often be characterized mathematically in the form of impulses. With the development of the theory of impulsive differential equations [10], various population dynamical models of impulsive differential equations have been proposed and studied extensively. For example, many important and interesting results on the permanence, persistence, extinction, global stability, the existence of positive periodic solutions, bifurcation and dynamical complexity, and so forth can be found in [11–17] and the references cited therein.Although considerable researches on the dispersal and impulses of species have been reported in the literature, there are few papers that investigate the dynamical behavior of population systems under the circumstances in which both dispersal and impulse exist. However, dispersal species which undergoes impulses is also one of the most prevalent phenomena of nature. In our previous paper [18], an impulsive periodic predator-prey system with diffusion is studied, and some conditions for the permanence, extinction, and existence of a unique globally stable periodic solution are established. In this paper, we will present and study a single-species periodic logistic system with impulses and dispersal in n different patches. Our model takes the form
(1)x˙i(t)=xi(t)(ri(t)-ai(t)xi(t))+∑j=1nDij(t)(xj(t)-xi(t)),t≠tk,xi(tk+)=hikxi(tk),i=1,2,…,n,k=1,2,…,
where ri(t) and ai(t)(i∈I={1,2,…,n}) represent the intrinsic growth rates and the density-dependent coefficients in patch i, respectively. Dij(t)⩾0(i,j∈I) denotes the dispersal rate of the species from patch j to patch i. hikxi(tk)(i∈I) is the regular pulse at time tk of species x in patch i. Throughout this paper, we always assume the following.(C1)
ri(t),ai(t), and Dij(t)(i,j∈I) are continuously periodic functions with common period T, defined on R+=[0,∞) and ai(t)⩾0,D2⩾Dij(t)⩾D1>0(i≠j),Dii(t)≡0 for all i,j∈I and t∈R+.(C2)
hik>0 for all i∈I,k=1,2,… and there exists a positive integer q such that tk+q=tk+T and hi(k+q)=hik for any i∈I.The organization of this paper is as follows. In the next section, some sufficient conditions for the permanence and extinction of system (1) are obtained. In Section 3, conditions for the existence of a unique globally attractively positive periodic solution are also established. Finally, some numerical simulations are proposed to illustrate the feasibility of our results and discuss the model further.
## 2. Permanence and Extinction
In this section, applying inequality estimation technique, we get some sufficient conditions on the permanence and extinction of system (1).Theorem 1.
There exists a positive constantM such that limsupt→∞xi(t)<M for all i∈I if
(2)∫0Ta(t)dt>0,
where a(t)=mini∈I{ai(t)}.Proof.
Lethk=maxi∈I{hik} for any k=1,2,…, then we have hk+q=hk and there exists a positive constant H such that function
(3)|h(t,μ)|=|∑t⩽tk<t+μlnhk|⩽∑k=1q|lnhk|⩽H
for all t∈R+ and μ∈[0,T]. Choose r(t)=maxi∈I{ri(t)}, and r(t) is bounded for all t∈R+. Then from conditions (2) and (3), we have two positive constants τ and δ such that
(4)∫0T(r(t)-a(t)τ)dt+∑k=1qlnhk<-δ.
Define the functionV(t)=maxi∈I{xi(t)}. For any t∈R+, there is an i=i(t)∈I such that V(t)=xi(t). Calculating the upper-right derivative of V(t), we obtain
(5)D+V(t)⩽xi(t)(ri(t)-ai(t)xi(t))⩽V(t)(r(t)-a(t)V(t)).
When t=tk, we have V(tk+)=max{xi(tk+)}=max{hikxi(tk)}⩽max{hik}max{xi(tk)}=hkV(tk).
Consider the following auxiliary system:(6)D+w(t)=w(t)(r(t)-a(t)w(t)),w(tk+)=hkw(tk),k=1,2,…
with the initial condition V(0)⩽w(0). If there is a constant M0>0 such that
(7)limsupt→∞w(t)<M0
for any positive solution w(t) of system (6), then, according to the comparison theorem of impulsive differential equations [10], we have V(t)⩽w(t) for all t⩾0. Therefore, choose M=M0, and we will finally have limsupt→∞xi(t)⩽limsupt→∞V(t)⩽limsupt→∞w(t)<M for all i∈I.
Next, we will prove that (7) holds. In fact, for any positive solution w(t) of system (6), we only need to consider the following three cases.
Case
1. There is a t0⩾0 such that w(t)⩾τ for all t⩾t0.
Case
2. There is a t0⩾0 such that w(t)⩽τ for all t⩾t0.
Case
3. w(t) is oscillatory about τ for all t⩾0.
We first consider Case 1. Sincew(t)⩾τ for all t⩾t0, then for t=t0+lT, where l is any positive integer, integrating system (6) from t0 to t, from (4) we have
(8)w(t)=w(t0)exp(∫t0t(r(s)-a(s)w(s))ds+∑t0⩽tk<tlnhk)⩽w(t0)exp(∑k=1q∫t0t0+T(r(s)-a(s)τ)ds+⋯+∫t0+(l-1)Tt0+lT(r(s)-a(s)τ)ds∫t0t0+T(r(s)-a(s)τ)+l∑k=1qlnhk∑k=1q)⩽w(t0)exp(-lδ).
Hence, w(t)→0 as l→∞, which leads to a contradiction.
Then, we consider Case 3. From the oscillation ofw(t) about τ, we can choose two sequences {ρn} and {ρn*} satisfying 0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(9)w(ρn)⩽τ,w(ρn+)⩾τ,w(ρn*)⩾τ,w(ρn*+)⩽τ,w(t)⩾τ∀t∈(ρn,ρn*),w(t)⩽τ∀t∈(ρn*,ρn+1).
For any t⩾ρ1, if t∈(ρn,ρn*] for some integer n, then we can choose integer l⩾0 and constant 0⩽ν<T such that t=ρn+lT+ν. Since D+w(t)⩽w(t)(r(t)-a(t)τ) for all t∈(ρn,ρn*) and t≠tk, integrating this inequality from ρn to t, by (3) and (4) we obtain
(10)w(t)⩽w(ρn)exp(∫ρnt(r(s)-a(s)τ)ds+∑ρn⩽tk<tlnhk)⩽τexp(l(∫0T(r(t)-a(t)τ)dt+∑k=1qlnhk)+∫ρnρn+ν(r(t)-a(t)τ)dt+∑ρn⩽tk<ρn+νlnhk)⩽τexp(-lδ+rT+H)⩽τexp(rT+H),
where r=supt∈[0,T]{|r(t)|+a(t)τ}. If there is an integer n such that t∈(ρn*,ρn+1], then we have w(t)⩽τ⩽τexp(rT+H). Therefore, for Case 3 we always have w(t)⩽τexp(rT+H) for all t⩾ρ1.
Lastly, if Case 2 holds, then we directly havew(t)⩽τexp(rT+H) for all t⩾0.
Choose constantM0=τexp(rT+H)+1, then we see that (7) holds. This completes the proof.Remark 2.
It can be seen from Theorem1 that, in one time period T, if the density-dependent coefficient in patch i (i∈I) is strictly greater than zero and the impulsive coefficient hik is bounded in the same time period, the dispersal species x is always ultimately bounded.Theorem 3.
Assume that all conditions of Theorem1 hold. In addition, there is an i0∈I such that
(11)∫0T(ri0(t)-∑j=1nDi0j(t))dt+∑k=1qlnhi0k>0.
Then system (1) is permanent.Proof.
The ultimate boundedness of system (1) has been proved in Theorem 1. In the following, we mainly prove the permanence of the system; that is, there is a constant m>0 such that
(12)liminft→∞xi(t)>m
for each i∈I and any positive solution x(t)=(x1(t),x2(t),…,xn(t)) of system (1).
From assumptionhi(k+q)=hik, we obtain that there exists a constant H>0 such that function
(13)|hi(t,μ)|=|∑t⩽tk<t+μlnhik|⩽∑k=1q|lnhik|⩽H
for any i∈I,t∈R+ and μ∈[0,T].
Fori=i0, by condition (11) and the boundedness of ai0(t), there are two positive constants τ- and δ- such that
(14)∫0T(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-)dt+∑k=1qlnhi0k>δ-.
Letx(t)=(x1(t),x2(t),…,xn(t)) be any positive solution of system (1). Since
(15)x˙i0(t)⩾xi0(t)(ri0(t)-∑j=1nDi0j(t)-ai0(t)xi0(t)),(ri0(t)-∑j=1nDi0j(t)-ai0(t)xi0(t))t≠tk,
by the comparison theorem of impulsive differential equations, we obtain xi0(t)⩾u(t) for all t⩾0, where u(t) is the positive solution of system
(16)u˙(t)=u(t)(ri0(t)-∑j=1nDi0j(t)-ai0(t)u(t)),u(tk+)=hi0ku(tk),k=1,2,…
with initial condition u(0)=xi0(0).
In the following, we first prove that there is a constantm->0 such that
(17)liminft→∞u(t)>m-
for any positive solution u(t) of system (16). We only need to consider the following three cases.
Case
1. There is a t-⩾0 such that u(t)⩽τ- for all t⩾t-.
Case
2. There is a t-⩾0 such that u(t)⩾τ- for all t⩾t-.
Case
3. u(t) is oscillatory about τ- for all t⩾0.
For Case 1, lett=t-+lT, where l⩾0 is any integer. From (14), we obtain
(18)u(t)=u(t-)×exp(∑t-⩽tk<tlnhi0k∫t-t(ri0(s)-∑j=1nDi0j(s)-ai0(s)u(s))∫t-t(ri0(s)-∑j=1nDi0j(s)-ai0(s)u(s))ds+∑t-⩽tk<tlnhi0k∫t-t(ri0(s)-∑j=1nDi0j(s)-ai0(s)u(s)))⩾u(t-)exp(∑k=1qlnhi0k∫t-t-+T(ri0(s)-∑j=1nDi0j(s)-ai0(s)τ-)ds+⋯+∫t-+(l-1)Tt-+lT(ri0(s)-∑j=1nDi0j(s)-ai0(s)τ-∑j=1n)ds+l∑k=1qlnhi0k∫t-t-+T(ri0(s)-∑j=1nDi0j(s)-ai0(s)τ-))⩾u(t-)exp(lδ-).
Hence, u(t)→∞ as l→∞, which leads to a contradiction.
For Case 3, we choose two sequences{ρn} and {ρn*} satisfying 0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(19)u(ρn)⩾τ-,u(ρn+)⩽τ-,u(ρn*)⩽τ-,u(ρn*+)⩾τ-,u(t)⩽τ-∀t∈(ρn,ρn*),u(t)⩾τ-∀t∈(ρn*,ρn+1).
For any t⩾ρ1, if t∈(ρn,ρn*] for some integer n, then we can choose integer l⩾0 and constant 0⩽ν-<T such that t=ρn+lT+ν-. Since, for all t∈(ρn,ρn*) and t≠tk, we have u˙(t)⩾u(t)(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-), integrating this inequality from ρn to t, then from (13) and (14) we obtain
(20)u(t)⩾u(ρn)×exp(∫ρnt(ri0(s)-∑j=1nDi0j(s)-ai0(s)τ-)ds+∑ρn⩽tk<tlnhi0k)⩾τ-exp(∑ρn⩽tk<ρn+ν-l(∫0T(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-)dt+∑k=1qlnhi0k∫0T(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-))+∫ρnρn+ν-(ri0(t)-∑j=1nDi0j(t)-ai0(t)τ-)dt+∑ρn⩽tk<ρn+ν-lnhi0k)⩾τ-exp(lδ--βi0T-H)⩾τ-exp(-βi0T-H),
where βi0=supt∈[0,T]{|ri0(t)|+∑j=1nDi0j(t)+ai0(t)τ-}. If there is an integer n such that t∈(ρn*,ρn+1], obviously we have u(t)⩾τ-⩾τ-exp(-βi0T-H). Therefore, for Case 3 we always have u(t)⩾τ-exp(-βi0T-H) for all t⩾ρ1. Let constant m-=τ-exp(-βi0T-H-1). Then m- is independent of any positive solution of system (16) and we finally have that (17) holds.
Lastly, if Case 2 holds, then fromu(t)⩾τ- for all t⩾0, we directly have that (17) holds.
From the fact thatxi0(t)⩾u(t) for all t⩾0, then we have
(21)liminft→∞xi0(t)⩾liminft→∞u(t)>m-.
It follows immediately from (21) that there is a T-0>0 such that ∑i=1nxi(t)⩾m- for all t⩾T-0. Then for any i∈I, when t⩾T-0 and t≠tk we have
(22)x˙i(t)=xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t))+∑j=1nDij(t)xj(t)⩾xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t)-D1)+D1m-.
Obviously, there is a constant β>0 and β is independent of any positive solution of system (1), such that for any i∈I, 0⩽xi(t)⩽β, and all t∈R+ the following inequality holds
(23)xi(t)(ri(t)-ai(t)xi(t)-∑j=1nDij(t)-D1)+D1m->β.
Hence, for any i∈I, by (22) we have
(24)x˙i(t)>β,t≠tk,xi(tk+)⩾h~ixi(tk),k=1,2,…
for all 0⩽xi(t)⩽β and t⩾T-0, where h~i=min1⩽k⩽q{hik}>0.
In order to prove that (12) holds, we only need to consider the following three cases.
Case
1. There is a T-1⩾T-0 such that xi(t)⩾β for all t⩾T-1.
Case
2. There is a T-1⩾T-0 such that xi(t)⩽β for all t⩾T-1.
Case
3. xi(t) is oscillatory about β for all t⩾T-0.
Equation (12) is obviously true if Case 1 holds.
For Case 2, there exists an impulsive timetq*⩾T-1 for some integer q*>0. For any t>tq*+1>tq*⩾T-1, there is an integer p⩾q*+1 such that t∈(tp,tp+1], and from system (24) we have
(25)xi(t)>xi(tp+)+β(t-tp)⩾h~ixi(tp)+β(t-tp)⩾h~i[xi(tp-1+)+β(tp-tp-1)]+β(t-tp)>h~iβτ0,
where τ0=min1⩽k⩽q{tk-tk-1}>0. Therefore, we obtain
(26)liminft→∞xi(t)>h~iβτ0
for any i∈I.
Then, we consider Case 3. We choose two sequences{ρn} and {ρn*} satisfying T-0<ρ1<ρ1*<⋯<ρn<ρn*<⋯ and limn→∞ρn=limn→∞ρn*=∞ such that
(27)xi(ρn)⩾β,xi(ρn+)⩽β,xi(ρn*)⩽β,xi(ρn*+)⩾β,xi(t)⩽β∀t∈(ρn,ρn*),xi(t)⩾β∀t∈(ρn*,ρn+1).
For any $t\ge\rho_1$, if $t\in(\rho_n,\rho_n^*]$ for some integer $n$, we first show that $\rho_n$ must be an impulsive time. Otherwise, there exists a positive constant $\delta$ such that there is no pulse in the interval $(\rho_n,\rho_n+\delta)\subset(\rho_n,\rho_n^*]$. Then, for any $t\in(\rho_n,\rho_n+\delta)$, from system (24) we have $x_i(t)>x_i(\rho_n^+)+\beta(t-\rho_n)>x_i(\rho_n)\ge\beta$, which leads to a contradiction.

If there is only one impulsive time in the interval $[\rho_n,\rho_n^*]$ (which must be $\rho_n$), then from system (24) we get

$$x_i(t)>x_i(\rho_n^+)+\beta(t-\rho_n)\ge\tilde h_i x_i(\rho_n)+\beta(t-\rho_n)>\tilde h_i\beta. \tag{28}$$

If there are at least two pulses in $(\rho_n,\rho_n^*]$, then we can write $(\rho_n,\rho_n^*]=\bigcup_{p=p_0}^{q^*-1}(t_p,t_{p+1}]\cup(t_{q^*},\rho_n^*]$, where $t_{p_0}=\rho_n$, $q^*>p_0$, and $p_0$, $p$, and $q^*$ are positive integers. Hence, for any $t\in(t_p,t_{p+1}]$, system (24) gives $x_i(t)>x_i(t_p^+)+\beta(t-t_p)$. Moreover, if $p=p_0$, we have

$$x_i(t)>x_i(t_p^+)+\beta(t-t_p)\ge\tilde h_i x_i(\rho_n)+\beta(t-\rho_n)>\tilde h_i\beta. \tag{29}$$

If $p>p_0$, then

$$x_i(t)>x_i(t_p^+)+\beta(t-t_p)\ge\tilde h_i\big[x_i(t_{p-1}^+)+\beta(t_p-t_{p-1})\big]+\beta(t-t_p)>\tilde h_i\beta\tau_0. \tag{30}$$

Finally, when $t\in(t_{q^*},\rho_n^*]$,

$$x_i(t)>x_i(t_{q^*}^+)+\beta(t-t_{q^*})\ge\tilde h_i\big[x_i(t_{q^*-1}^+)+\beta(t_{q^*}-t_{q^*-1})\big]+\beta(t-t_{q^*})>\tilde h_i\beta\tau_0. \tag{31}$$

It follows from (28)–(31) that

$$x_i(t)\ge\min\{\tilde h_i\beta,\ \tilde h_i\beta\tau_0\} \tag{32}$$

for all $t\in(\rho_n,\rho_n^*]$ and any $i\in I$.
If there is an integer $n$ such that $t\in(\rho_n^*,\rho_{n+1}]$, we obviously have $x_i(t)\ge\beta$. Therefore, for Case 3 we always have

$$x_i(t)\ge\min\{\tilde h_i\beta,\ \tilde h_i\beta\tau_0,\ \beta\} \tag{33}$$

for all $t\ge\rho_1$ and any $i\in I$.

From (26) and (33), choose $m=\min_{i\in I}\{\tilde h_i\beta,\tilde h_i\beta\tau_0,\beta\}/2$. Then we finally have that (12) holds, so system (1) is permanent. The proof of Theorem 3 is completed.

Remark 4.
It follows from Theorem 3 that system (1) is permanent if there is a positive average growth rate (which does not include the dispersal entrance) over one time period $T$ in some patch $i_0$ ($i_0\in I$). In paper [4], the authors showed the interesting result that a dispersing species without impulses is permanent in all other patches if it is permanent in one patch. Here, we extend this result to the periodic case with impulses.

Remark 5.

In paper [18], we studied an impulsive periodic predator-prey system with Holling type III functional response and diffusion; that paper mainly considers the influences of the Holling-type functional response and of the impulses, and the conditions of its main result require the minimum of the coefficients. In the present paper, we consider a single-species logistic system. Although no predator is involved, so the model is simpler than the one in [18], a more accurate and reasonable condition is established here: in a dispersal system, a species with impulses is permanent in all patches if it is permanent in one patch. This improves the minimum conditions of the previous paper.

Theorem 6.
System (1) is extinct if the conditions

$$\int_0^T a(t)\,dt>0,\qquad \int_0^T\gamma(t)\,dt+\sum_{k=1}^{q}\ln h_k\le 0 \tag{34}$$

hold, where $\gamma(t)=\max_{i\in I}\{r_i(t)-\sum_{j=1}^{n}D_{ij}(t)+\sum_{j=1}^{n}D_{ji}(t)\}$ for all $t\in[0,T]$, and $a(t)$ and $h_k$ are defined in Theorem 1.

Proof.
In fact, from (34), for any constant $\varepsilon>0$ there is a positive constant $\tilde\delta$ such that

$$\int_0^T\Big(\gamma(t)-\frac{\varepsilon a(t)}{n}\Big)dt+\sum_{k=1}^{q}\ln h_k<-\tilde\delta. \tag{35}$$

Define $V(t)=\sum_{i=1}^{n}x_i(t)$. When $t\ne t_k$, calculating the upper right derivative of $V(t)$, we have

$$\begin{aligned}D^+V(t)&=\sum_{i=1}^{n}\dot x_i(t)=\sum_{i=1}^{n}x_i(t)\Big(r_i(t)-\sum_{j=1}^{n}D_{ij}(t)+\sum_{j=1}^{n}D_{ji}(t)-a_i(t)x_i(t)\Big)\\ &\le\gamma(t)V(t)-a(t)\sum_{i=1}^{n}x_i^2(t)\le V(t)\Big(\gamma(t)-\frac{a(t)}{n}V(t)\Big).\end{aligned}\tag{36}$$

When $t=t_k$, we obtain $V(t_k^+)=\sum_{i=1}^{n}x_i(t_k^+)=\sum_{i=1}^{n}h_{ik}x_i(t_k)\le\max_{i\in I}\{h_{ik}\}V(t_k)=h_kV(t_k)$. From this and (35), by an argument similar to that in the proof of (7), we can obtain $V(t)\le\varepsilon\exp(\gamma(t)T+H)$ for $t\ge 0$. Then, from the arbitrariness of $\varepsilon$, we obtain $V(t)\to 0$ as $t\to\infty$. Finally, we have $x_i(t)\to 0$ as $t\to\infty$ for all $i\in I$. This completes the proof of Theorem 6.

Remark 7.

It can be seen from Theorem 6 that system (1) is always extinct if, over one time period $T$, there is a positive density-dependent coefficient and a nonpositive average growth rate (which includes the dispersal entrance) in each patch $i$ ($i\in I$).
## 3. Periodic Solutions
In this section, by constructing an appropriate Lyapunov function, we establish sufficient conditions for the existence of a unique globally attractive positive $T$-periodic solution of system (1).

Let $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))$ and $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))$ be any two positive solutions of system (1). From Theorem 3, there are constants $A>0$ and $B>0$ such that

$$A\le x_i(t),\ x_i^*(t)\le B\qquad \forall t\ge 0,\ i\in I. \tag{37}$$

Theorem 8.

Suppose all the conditions of Theorem 3 hold. Moreover, if

$$\int_0^T\beta(t)\,dt>0, \tag{38}$$

where $\beta(t)=\min_{i\in I}\{a_i(t)-\sum_{j=1}^{n}D_{ji}(t)/A\}\ge 0$, $t\in[0,T]$, then system (1) has a unique globally attractive positive $T$-periodic solution $x^*(t)=(x_1^*(t),x_2^*(t),\ldots,x_n^*(t))$; that is, any positive solution $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))$ of system (1) satisfies

$$\lim_{t\to\infty}\big(x_i(t)-x_i^*(t)\big)=0,\qquad i\in I. \tag{39}$$

Proof.
Choose the Lyapunov function $V(t)=\sum_{i=1}^{n}|\ln x_i(t)-\ln x_i^*(t)|$. Since for any impulsive time $t_k$ we have

$$V(t_k^+)=\sum_{i=1}^{n}\big|\ln x_i(t_k^+)-\ln x_i^*(t_k^+)\big|=\sum_{i=1}^{n}\big|\ln h_{ik}x_i(t_k)-\ln h_{ik}x_i^*(t_k)\big|=V(t_k), \tag{40}$$

$V(t)$ is continuous for all $t\ge 0$. On the other hand, from (37) we obtain, for any $t\in\mathbb R_+$ with $t\ne t_k$,

$$\frac1B\big|x_i(t)-x_i^*(t)\big|\le\big|\ln x_i(t)-\ln x_i^*(t)\big|\le\frac1A\big|x_i(t)-x_i^*(t)\big|. \tag{41}$$

For any $t\in\mathbb R_+$ with $t\ne t_k$, calculating the upper right derivative of $V(t)$, we obtain

$$D^+V(t)=\sum_{i=1}^{n}\operatorname{sgn}\big(x_i(t)-x_i^*(t)\big)\Big(\frac{\dot x_i(t)}{x_i(t)}-\frac{\dot x_i^*(t)}{x_i^*(t)}\Big)\le\sum_{i=1}^{n}\big(-a_i(t)\big|x_i(t)-x_i^*(t)\big|\big)+\sum_{i=1}^{n}\sum_{j=1}^{n}\bar D_{ij}(t), \tag{42}$$

where

$$\bar D_{ij}(t)=\begin{cases}D_{ij}(t)\Big(\dfrac{x_j(t)}{x_i(t)}-\dfrac{x_j^*(t)}{x_i^*(t)}\Big),& x_i(t)>x_i^*(t),\\[2mm] D_{ij}(t)\Big(\dfrac{x_j^*(t)}{x_i^*(t)}-\dfrac{x_j(t)}{x_i(t)}\Big),& x_i(t)<x_i^*(t).\end{cases} \tag{43}$$

For all $t\ge 0$, we estimate $\bar D_{ij}(t)$ in the following two cases:

(i) if $x_i(t)\ge x_i^*(t)$, then $\bar D_{ij}(t)\le\dfrac{D_{ij}(t)}{x_i(t)}\big(x_j(t)-x_j^*(t)\big)\le\dfrac{D_{ij}(t)}{A}\big|x_j(t)-x_j^*(t)\big|$;

(ii) if $x_i(t)<x_i^*(t)$, then $\bar D_{ij}(t)\le\dfrac{D_{ij}(t)}{x_i^*(t)}\big(x_j^*(t)-x_j(t)\big)\le\dfrac{D_{ij}(t)}{A}\big|x_j(t)-x_j^*(t)\big|$.

It follows from the estimates of $\bar D_{ij}(t)$ and (41) that

$$\begin{aligned}D^+V(t)&\le\sum_{i=1}^{n}\big(-a_i(t)\big|x_i(t)-x_i^*(t)\big|\big)+\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{D_{ij}(t)}{A}\big|x_j(t)-x_j^*(t)\big|\\ &\le-\sum_{i=1}^{n}\Big(a_i(t)-\sum_{j=1}^{n}\frac{D_{ji}(t)}{A}\Big)\big|x_i(t)-x_i^*(t)\big|\le-A\beta(t)V(t).\end{aligned}\tag{44}$$

From this and condition (38), we have $V(t)\le V(0)\exp\big(-A\int_0^t\beta(s)\,ds\big)\to 0$ as $t\to\infty$. Furthermore, from (41) we have that (39) holds.
Now consider the sequence $z(mT,z_0)=(x_1^*(mT,z_0),x_2^*(mT,z_0),\ldots,x_n^*(mT,z_0))$, where $m=1,2,\ldots$ and $z_0=(x_1^*(0),x_2^*(0),\ldots,x_n^*(0))$. It lies in the compact domain $[A,B]^n$, since $A\le x_i^*(t)\le B$ for all $t\ge 0$ and $i=1,2,\ldots,n$. Let $\bar z$ be a limit point of this sequence, with $\bar z=\lim_{n\to\infty}z(m_nT,z_0)$. Then $z(T,\bar z)=\bar z$. Indeed, since $z(T,z(m_nT,z_0))=z(m_nT,z(T,z_0))$ and $z(m_nT,z(T,z_0))-z(m_nT,z_0)\to 0$ as $m_n\to\infty$, we get

$$\|z(T,\bar z)-\bar z\|\le\|z(T,\bar z)-z(T,z(m_nT,z_0))\|+\|z(T,z(m_nT,z_0))-z(m_nT,z_0)\|+\|z(m_nT,z_0)-\bar z\|\longrightarrow 0,\quad n\to\infty. \tag{45}$$

Moreover, the sequence $z(mT,z_0)$, $m=1,2,\ldots$, has a unique limit point. Suppose, on the contrary, that it has two limit points $\bar z=\lim_{n\to\infty}z(m_nT,z_0)$ and $\tilde z=\lim_{n\to\infty}z(\tilde m_nT,z_0)$ along two subsequences. Then, taking into account (39) and $\tilde z=z(m_nT,\tilde z)$, we have

$$\|\bar z-\tilde z\|\le\|\bar z-z(m_nT,z_0)\|+\|z(m_nT,z_0)-\tilde z\|\longrightarrow 0,\quad n\to\infty, \tag{46}$$

and hence $\bar z=\tilde z$. The solution $(x_1^*(t,\bar z),x_2^*(t,\bar z),\ldots,x_n^*(t,\bar z))$ is then the unique periodic solution of system (1), and by (39) it is globally attractive. This completes the proof of Theorem 8.
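To make the contraction behind Theorem 8 concrete, the following Python sketch integrates two positive solutions of a two-patch impulsive logistic system (forward Euler between pulses, multiplicative jumps at the pulse times) and tracks the Lyapunov distance $V(t)=\sum_i|\ln x_i(t)-\ln x_i^*(t)|$ used in the proof. This is a minimal illustration under assumed parameter functions and pulse constants of our own choosing, not code or data from the paper; when a condition like (38) holds, $V$ should decay toward zero.

```python
import math

# Illustrative two-patch impulsive logistic system; the parameter functions
# and pulse constants below are our own assumptions, not values from the paper.
r = [lambda t: 3.0 - abs(math.sin(math.pi * t / 2)),
     lambda t: 2.0 + 0.5 * math.cos(math.pi * t)]
a = [lambda t: 1.5, lambda t: 0.8]
D = [[lambda t: 0.0, lambda t: 0.4],
     [lambda t: 0.3, lambda t: 0.0]]
h = [1.1, 0.9]      # pulse multipliers h_{ik}, taken k-independent here
PULSE_DT = 0.1      # pulses at t_k = 0.1 k

def step(x, t, dt):
    """One forward-Euler step of the dispersal system between pulses."""
    dx = [x[i] * (r[i](t) - a[i](t) * x[i])
          + sum(D[i][j](t) * (x[j] - x[i]) for j in range(2))
          for i in range(2)]
    return [x[i] + dt * dx[i] for i in range(2)]

def simulate(x, t_end=40.0, dt=0.001):
    """Integrate, applying the impulsive jumps x_i(t_k^+) = h_i x_i(t_k)."""
    traj, t, next_pulse = [x[:]], 0.0, PULSE_DT
    while t < t_end:
        x = step(x, t, dt)
        t += dt
        if t >= next_pulse - 1e-9:
            x = [h[i] * x[i] for i in range(2)]
            next_pulse += PULSE_DT
        traj.append(x[:])
    return traj

xa, xb = simulate([2.0, 3.0]), simulate([0.5, 0.1])
V = [sum(abs(math.log(p) - math.log(q)) for p, q in zip(u, v))
     for u, v in zip(xa, xb)]
print(f"V(0) = {V[0]:.4f}, V(40) = {V[-1]:.6f}")  # V should shrink toward 0
```

Because $V$ is unchanged by the jumps (see (40)), any decay observed numerically comes entirely from the continuous part of the dynamics.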
## 4. Numerical Simulation and Discussion
In this paper, we have investigated a class of single-species periodic logistic systems with impulses and dispersal among $n$ different patches. By means of inequality-estimation techniques and a Lyapunov function, we gave criteria for the permanence, extinction, and existence of a unique globally stable positive periodic solution of system (1).

In order to verify the validity of our results and to raise a further problem for discussion, we consider the following two-patch $T$-periodic dispersal system:

$$\begin{aligned}\dot x_1(t)&=x_1(t)\big(r_1(t)-a_1(t)x_1(t)\big)+D_{12}(t)\big(x_2(t)-x_1(t)\big),\\ \dot x_2(t)&=x_2(t)\big(r_2(t)-a_2(t)x_2(t)\big)+D_{21}(t)\big(x_1(t)-x_2(t)\big),\qquad t\ne t_k,\\ x_1(t_k^+)&=h_{1k}x_1(t_k),\qquad x_2(t_k^+)=h_{2k}x_2(t_k),\qquad k=1,2,\ldots.\end{aligned}\tag{47}$$

We take $r_1(t)=5-|\sin(\pi t/2)|$, $r_2(t)=2.5+0.5\cos\pi t$, $a_1(t)=1.5$, $a_2(t)=0.8$, $D_{12}(t)=1.2$, $D_{21}(t)=0.7$, $h_{1k}=1.2$, $h_{2k}=0.8$, and $t_k=0.1k$, $k=1,2,\ldots$. Obviously, $r(t)=\max\{r_1(t),r_2(t)\}=5-|\sin(\pi t/2)|$, $a(t)=\min\{a_1(t),a_2(t)\}=0.8$, $h_k=\max\{h_{1k},h_{2k}\}=1.2$, and system (47) is periodic with period $T=2$. For $q=20$, we have $t_{k+q}=t_k+T$, $h_{1(k+q)}=h_{1k}$, and $h_{2(k+q)}=h_{2k}$ for all $k=1,2,\ldots$. It is easy to verify that $h_k(t,\mu)$ and $h_{ik}(t,\mu)$ ($i=1,2$) are bounded for all $t\in\mathbb R_+$ and $\mu\in[0,T]$. Furthermore, since

$$\int_0^T a(t)\,dt=1.6>0,\qquad \int_0^T\big(r_1(t)-D_{12}(t)\big)dt+\sum_{k=1}^{q}\ln h_{1k}=9.9732>0\quad(i_0=1), \tag{48}$$

all the conditions of Theorem 3 are satisfied. Hence, system (47) is permanent; see Figures 1 and 2.

Figure 1: The time series of the permanence of species $x$.

Figure 2: The phase portrait of the permanence of species $x$.

However, if the survival environment of the two patches is austere, the intrinsic growth rates will be negative. Hence, if we take $r_1(t)=-1-|\sin(\pi t/2)|$, $r_2(t)=-4+0.5\cos\pi t$ and retain all other parameters, then we obtain $\gamma(t)=\max\{r_1(t)-D_{12}(t)+D_{21}(t),\ r_2(t)-D_{21}(t)+D_{12}(t)\}=-1.5-|\sin(\pi t/2)|$ and
$$\int_0^T\gamma(t)\,dt+\sum_{k=1}^{q}\ln h_k=-0.6268<0; \tag{49}$$

hence the conditions of Theorem 6 are satisfied, and any positive solution of system (47) will be extinct. See Figure 3.

Figure 3: The time series and phase portrait of the extinction of species $x$.

From these illustrations of the theorems, we note that there is a great difference in the choices of the intrinsic growth rates $r_i(t)$ ($i=1,2$) that guarantee permanence or extinction of the system. These differences make us ask what happens if all the parameters satisfy

$$\int_0^T\Big(r_i(t)-\sum_{j=1}^{2}D_{ij}(t)\Big)dt+\sum_{k=1}^{q}\ln h_{ik}\le 0\quad(i=1,2),\qquad \int_0^T\gamma(t)\,dt+\sum_{k=1}^{q}\ln h_k>0. \tag{50}$$
To this end, we choose $r_1(t)=2.55-5|\sin(\pi t/2)|$, $r_2(t)=2+0.5\cos\pi t$ and retain all other parameters; then

$$\int_0^T\big(r_1(t)-D_{12}(t)\big)dt+\sum_{k=1}^{q}\ln h_{1k}=-0.0198<0,\qquad \int_0^T\big(r_2(t)-D_{21}(t)\big)dt+\sum_{k=1}^{q}\ln h_{2k}=-1.8629<0,\qquad \int_0^T\gamma(t)\,dt+\sum_{k=1}^{q}\ln h_k=11.3732>0, \tag{51}$$

so neither the conditions of Theorem 3 nor those of Theorem 6 are satisfied. Nevertheless, Figures 4 and 5 show that the system is permanent.

Figure 4: The time series of the permanence of species $x$.

Figure 5: The phase portrait of the permanence of species $x$.

Furthermore, if we choose $r_1(t)=-0.5-|\sin(\pi t/2)|$, $r_2(t)=-4+0.5\cos\pi t$ and keep all other parameters, then we have
$$\int_0^T\big(r_1(t)-D_{12}(t)\big)dt+\sum_{k=1}^{q}\ln h_{1k}=-1.0268<0,\qquad \int_0^T\big(r_2(t)-D_{21}(t)\big)dt+\sum_{k=1}^{q}\ln h_{2k}=-12.4629<0,\qquad \int_0^T\gamma(t)\,dt+\sum_{k=1}^{q}\ln h_k=1.3732>0, \tag{52}$$

which again satisfy the conditions of neither theorem. But, from Figure 6, we see that any positive solution of system (1) is extinct.

Figure 6: The time series and phase portrait of the extinction of species $x$.

Remark 9.

The above analysis shows that there is a gap in the sufficient conditions of our theorems. A challenging problem is to find necessary and sufficient conditions (conditions under which the system is permanent if they hold and extinct otherwise) for the permanence and extinction of the system.

Throughout Figures 1–6, we always take the initial condition $x(0)=(x_1(0),x_2(0))=(2,3)$.
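As a rough numerical cross-check of the example above, the sketch below evaluates condition (48) by midpoint-rule quadrature and integrates system (47) itself with the stated parameters. It is a minimal forward-Euler illustration, assuming our own step sizes and horizon; it is not the code used to produce Figures 1–6.

```python
import math

# Parameter functions of system (47), as given in the text.
r1 = lambda t: 5 - abs(math.sin(math.pi * t / 2))
r2 = lambda t: 2.5 + 0.5 * math.cos(math.pi * t)
a1, a2 = 1.5, 0.8
D12, D21 = 1.2, 0.7
h1, h2 = 1.2, 0.8   # pulse multipliers; t_k = 0.1 k, so q = 20 per period T = 2
T, q = 2.0, 20

def quad(f, lo, hi, steps=200_000):
    """Midpoint-rule quadrature."""
    dt = (hi - lo) / steps
    return sum(f(lo + (k + 0.5) * dt) for k in range(steps)) * dt

# Condition (48): positive average growth rate in patch 1 (i_0 = 1).
cond48 = quad(lambda t: r1(t) - D12, 0.0, T) + q * math.log(h1)
print(f"(48) evaluates to {cond48:.4f}  (the text reports 9.9732 > 0)")

def simulate(x1, x2, t_end=40.0, dt=0.001):
    """Forward-Euler integration of (47) with pulses at t_k = 0.1 k."""
    t, next_pulse = 0.0, 0.1
    while t < t_end:
        dx1 = x1 * (r1(t) - a1 * x1) + D12 * (x2 - x1)
        dx2 = x2 * (r2(t) - a2 * x2) + D21 * (x1 - x2)
        x1, x2, t = x1 + dt * dx1, x2 + dt * dx2, t + dt
        if t >= next_pulse - 1e-9:
            x1, x2, next_pulse = h1 * x1, h2 * x2, next_pulse + 0.1
    return x1, x2

print("x(40) =", simulate(2.0, 3.0))  # stays bounded away from zero: permanence
```

Replacing $r_1$ and $r_2$ with the negative growth rates used for (49) drives the simulated solutions toward zero, matching the extinction predicted by Theorem 6.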
---
*Source: 101238-2013-05-09.xml* | 2013 |
# Spirochetes in the Liver: An Unusual Presentation of a Common STI
**Authors:** Natasha Narang; Layth Al-Jashaami; Nayan Patel
**Journal:** Case Reports in Medicine
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1012405
---
## Abstract
It is estimated that 10% of patients with secondary syphilis have liver enzyme elevations, but clinical hepatitis is rare. However, in HIV-positive patients, syphilitic hepatitis may be much more common. We report a case of a 67-year-old male who developed progressively elevated liver enzymes, followed by development of neurological symptoms and then rash. Though the timeline of his symptom development was unusual, his constellation of symptoms prompted an RPR and FTA-ABS which returned reactive. He was additionally found to be HIV positive with a CD4 count of 946. He was treated with IV Penicillin, and his hepatitis improved thereafter.
---
## Body
## 1. Introduction
Often referred to as “the great imitator,” syphilis continues to be one of the most common sexually transmitted infections to this day, with a substantial proportion of cases occurring in men who have sex with men [1]. The HIV population is particularly susceptible, with an estimated one-quarter of all cases occurring in coinfected persons [2]. Though the separate phases of syphilis are well defined, clinical manifestations can vary greatly. Syphilis most commonly presents as painless genital ulcerations in the primary stage and progresses to rash, fever, and lymphadenopathy in the second [1, 2]. If secondary (disseminated) syphilis remains untreated, disease can span years and affect numerous organ systems. It has been reported that syphilis may affect the mucocutaneous, gastrointestinal, pulmonary, renal, neurologic, and hepatic systems [1, 3]. When secondary syphilis signs recede, the patient enters the latent phase, in which serologies remain positive without any overt signs or symptoms of infection. Tertiary syphilis may develop in about one-third of those with latent disease, presenting as neurosyphilis, aortic root insufficiency, or gummatous lesions [1, 4]. Overall, tertiary syphilis is scarce in our postantibiotic era.
## 2. Case Report
A 67-year-old male was admitted for progressive liver enzyme elevation. His symptoms began three months prior to this admission, when he presented to the emergency department with fatigue, decreased appetite, and abdominal pain and was found to have elevated transaminases. Initial evaluation by his outpatient gastroenterologist, including workup for viral hepatitis, alpha-1 antitrypsin deficiency, primary biliary cirrhosis, Wilson’s disease, and autoimmune hepatitis, was largely inconclusive. Subsequently, he developed weakness and numbness that began distal to his axillae and progressed to his torso and lower extremities. The lower extremity symptoms worsened, and he developed ataxia, requiring a walker for ambulation. Two months after symptoms began, he underwent neurological workup for ataxia, right-sided weakness, and sporadic severe radiating low back pain. Imaging of his head, brain, and spine was unremarkable. On presentation, he attested to anorexia, an 18 lb weight loss, weakness, lower extremity edema, “rusty” colored urine, and frequent episodes of “sharp” pain in his back, groin, and legs lasting minutes to hours. He also identified a nonitchy, painless rash that began ten days prior on his arms and then spread to his torso, palms, and thighs. Past medical history was noncontributory. He denied use of alcohol, tobacco, or drugs. He admitted to being sexually active with 5–10 male partners in the past year. There was no recent international travel, no sick contacts, and no use of antibiotics or herbal supplements.

On physical examination, he had mild scleral icterus, bilateral pitting lower extremity edema, and diminished sensation to pinprick and light touch in his bilateral lower extremities. His skin had a nontender maculopapular rash, most notable on the palms, thighs, chest, and scalp (Figures 1 and 2). A 1-2 cm nontender chancre was found on the posterior penile shaft.

Figure 1: Palmar rash.

Figure 2: Trunk and abdominal rash.

Admitting labs were significant for total bilirubin 5.9 mg/dL, AST 201 IU/L, ALT 116 IU/L, and alkaline phosphatase 1048 IU/L. Abdominal CT scan showed hepatomegaly with heterogeneous attenuation, patent hepatic vasculature, no focal lesions, and mild splenomegaly. HIDA scan showed patent cystic and common bile ducts. MRCP showed no extrahepatic biliary obstruction. Liver biopsy was performed.

The coexistence of dermatologic, neurologic, and hepatic signs and symptoms prompted evaluation for syphilis. The patient had a reactive RPR titer of 1 : 256, a reactive TPPA, and a syphilis total antibody ratio of 15.8. Additionally, HIV screening was positive, with a viral load of 650,493 copies/mL and a CD4+ count of 946 cells/mm³. Liver pathology showed macrovesicular and microvesicular steatosis with focal hepatocellular ballooning and Mallory–Denk bodies, patchy PAS-D positive cytoplasmic hyaline globules, and periportal and sinusoidal fibrosis. The diagnosis of syphilitic hepatitis was confirmed by immunostain showing numerous treponemal spirochetes (Figures 3 and 4). A lumbar puncture showed a cell count of 7, a nonreactive CSF VDRL titer, protein of 55 mg/dL, and glucose of 85 mg/dL, thus ruling out neurosyphilis. He was started on Penicillin G, and his liver enzymes improved impressively (Table 1).

Figure 3
Immunohistochemistry for syphilis highlighting the organisms in sinusoidal, hepatocyte, and biliary epithelial cells.Figure 4
Treponemal immunostain of the large septal bile duct.

Table 1
Pertinent labs prior to and after treatment.
| Days from treatment | AST (IU/L) | ALT (IU/L) | Alkaline phosphatase (IU/L) | Total bilirubin (mg/dL) | PLT | INR |
| --- | --- | --- | --- | --- | --- | --- |
| T − 4 | 193 | 118 | 1074 | 5.9 | 171 | 1.6 |
| T − 3 | 201 | 126 | 1144 | 9.1 | 166 | 1.7 |
| T | 330 | 125 | 1149 | 8.7 | 148 | 1.8 |
| T + 4 | 277 | 120 | 835 | 8.3 | 137 | 1.6 |
| T + 7 | 226 | 116 | 866 | 8.0 | 153 | 1.6 |
| T + 12 | 217 | 127 | 894 | 4.4 | 159 | 1.6 |
| T + 16 | 208 | 119 | 816 | 3.3 | 163 | 1.6 |
T = day of penicillin treatment initiation.
## 3. Discussion
The first recognized case of hepatitis attributable to syphilis was reported in 1585 and termed “luetic jaundice” [4]. While syphilitic hepatitis has since been an established diagnosis in the medical literature as a component of secondary syphilis, it is not a commonly encountered etiology in patients seen for transaminitis, much less clinical hepatitis [5]. Reported cases contain variable presentations, including jaundice, dark urine, arthralgias, and generalized weakness [6].

Hepatic involvement has been characteristically described as a cholestatic pattern of injury, with disproportionately elevated alkaline phosphatase compared to transaminases [1, 3, 7]. The preferential elevation of alkaline phosphatase is suspected to be due to pericholangiolar inflammation [2, 5]. Histologically, syphilitic hepatitis appears as inflammatory infiltration in the portal region stimulating intralobular bile duct collapse and hepatocellular periportal necrosis [1]. Our patient had the distinguishing liver enzyme abnormalities plus the specific pathology findings, both diagnostic for syphilitic hepatitis. Additionally, the liver biopsy showed numerous treponemes, a finding that is variable and relatively infrequent among published reports of syphilitic hepatitis [3].

Liver involvement in secondary syphilis is especially prevalent in patients with concurrent HIV infection, likely due to similar risk factors and the degree of immunosuppression [2, 8]. The notably high rates of coinfection can be attributed to parallel risk factors including unprotected sexual activity, men who have sex with men, and intravenous drug use [4]. The most current CDC STD treatment guidelines emphasize the importance of routine HIV screening in all patients who pursue evaluation and therapy for any STD [9]. The patient discussed in this case was immediately screened for additional sexually transmitted infections when the syphilis diagnosis was made, resulting in the discovery of his HIV-positive status. A case study and review by Mullick et al. identified a linear relationship between RPR titer and absolute CD4+ T-lymphocyte count [8]. This supports the presumption that clinical manifestations of hepatitis due to syphilitic periportal inflammation are more likely to be apparent in those with a preserved host inflammatory response.

The causative role of Treponema pallidum in hepatocellular damage is supported by the resolution of laboratory and clinical aberrations following treatment with intramuscular or intravenous Penicillin G [8]. Early identification of this infrequent presentation of syphilis is therefore important, because it is easily reversible and progression to further stages can be prevented. This case additionally emphasizes the importance of keeping infectious etiologies in the differential diagnosis of elevated liver function tests.
---
*Source: 1012405-2019-12-11.xml* | 2019 |
# Mechanism of Resistance to Dietary Cholesterol
**Authors:** Lindsey R. Boone; Patricia A. Brooks; Melissa I. Niesen; Gene C. Ness
**Journal:** Journal of Lipids
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101242
---
## Abstract
Background. Alterations in expression of hepatic genes that could contribute to resistance to dietary cholesterol were investigated in Sprague-Dawley rats, which are known to be resistant to the serum cholesterol raising action of dietary cholesterol. Methods. Microarray analysis was used to provide a comprehensive analysis of changes in hepatic gene expression in rats in response to dietary cholesterol. Changes were confirmed by RT-PCR analysis. Western blotting was employed to measure changes in hepatic cholesterol 7α hydroxylase protein. Results. Of the 28,000 genes examined using the Affymetrix rat microarray, relatively few were significantly altered. As expected, decreases were observed for several genes that encode enzymes of the cholesterol biosynthetic pathway. The largest decreases were seen for squalene epoxidase and lanosterol 14α demethylase (CYP 51A1). These changes were confirmed by quantitative RT-PCR. LDL receptor expression was not altered by dietary cholesterol. Critically, the expression of cholesterol 7α hydroxylase, which catalyzes the rate-limiting step in bile acid synthesis, was increased over 4-fold in livers of rats fed diets containing 1% cholesterol. In contrast, mice, which are not resistant to dietary cholesterol, exhibited lower hepatic cholesterol 7α hydroxylase (CYP7A1) protein levels, which were not increased in response to diets containing 2% cholesterol.
---
## Body
## 1. Introduction
There is considerable variation among animals and humans in their responses to consumption of excess dietary cholesterol. Consumption of a high-cholesterol diet does not automatically result in elevated serum cholesterol levels, owing to the operation of adaptive responses. Rabbits, hamsters, and C57BL/6 mice are not resistant to dietary cholesterol and exhibit marked elevations in serum cholesterol levels when given diets supplemented with cholesterol [1–3]. On the other hand, rats such as Sprague-Dawley, Wistar-Furth, Spontaneously Hypertensive, or Fischer 344 show very little if any increase in serum cholesterol levels when given a similar cholesterol challenge [2]. For most humans, consumption of increased amounts of dietary cholesterol produces only small increases in both LDL and HDL cholesterol, with little effect on the ratio of LDL to HDL [4, 5]. In a 14-year study of over 80,000 female nurses, egg consumption was unrelated to the risk of coronary heart disease [4]. On balance, extensive epidemiologic studies show that dietary cholesterol is not a contributor to increased heart disease risk in humans [6]. Clearly, in many rat strains and most people, adaptive responses are operating that keep serum cholesterol levels within the normal range.

Multiple mechanisms may operate to provide a person or animal with resistance to the serum cholesterol-raising action of dietary cholesterol. These include decreasing the rate of cholesterol biosynthesis, decreasing the rate of cholesterol absorption, increasing the rate of cholesterol excretion, increasing the conversion of cholesterol to bile acids, and increasing the rate of removal of serum cholesterol via liver lipoprotein receptors [7, 8].

In order to obtain a comprehensive and unbiased analysis of adaptive responses in hepatic gene expression to dietary cholesterol, we carried out microarray analysis of the changes in Sprague-Dawley rat liver gene expression elicited by a 1% cholesterol diet. These animals are known to have adaptive responses that render them resistant to dietary cholesterol; in order to induce atherosclerotic plaques in these animals, the rats must be rendered hypothyroid [9]. Thus, this animal is a reasonable model for human responses to dietary cholesterol. In a previous microarray study, C57BL/6 mice were used [10]. These mice become hypercholesterolemic and develop fatty lesions in their ascending aortas when given diets supplemented with cholesterol [11, 12].

Liver was selected for extensive study because this tissue not only synthesizes cholesterol but is also responsible for bile acid production and excretion of cholesterol, and it expresses the majority of the body’s LDL receptors [13].
## 2. Methods
### 2.1. Animals
Male Sprague-Dawley rats, 150–200 g (Harlan, Madison, Wis, USA), were fed Harlan Teklad 22/5 rodent chow with or without 1% cholesterol for five days. The animals were kept in a reversed-lighting-cycle room (12 hrs dark/12 hrs light) and sacrificed at 0900–1000 hrs, corresponding to the third to fourth hour of the dark period, when cholesterol biosynthesis is maximal. Twelve-week-old C57BL/6J mice were fed chow with or without 2% cholesterol for 5 days. Liver samples were obtained, and microsomes were prepared from both rats and mice [14]. All procedures were carried out according to the regulations and oversight of the University of South Florida Institutional Animal Care and Use Committee (IACUC), protocol 2953.
### 2.2. Cholesterol Analysis
Trunk blood was collected and allowed to clot. The samples were centrifuged at 5,000 ×g for 5 min, and the serum was removed with a Pasteur pipette. Total serum cholesterol levels were determined by a cholesterol oxidase method using Infinity Cholesterol Reagent (Sigma) [15]. Values are expressed as mg/dL of serum. Liver cholesterol levels were determined as previously described [16]. Briefly, weighed liver samples were saponified, extracted with petroleum ether, and subjected to reverse-phase high-performance liquid chromatography on a Spheri-5, RP-18, 5 reverse-phase column (Altech Associates). Values are expressed as mg/g of liver.
### 2.3. RNA Isolation
A portion of about 200 mg was quickly excised from livers of the rats and immediately homogenized in 4 mL of Tri-Reagent from Molecular Research Center (Cincinnati, Ohio, USA) using a Polytron homogenizer at room temperature. The remainder of the isolation steps was carried out using volumes corresponding to 4x the manufacturer’s recommendations. RNA concentrations were determined by diluting each sample 1 : 100 and measuring its absorbance at 260 nm.
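As a small worked example of this step, the sketch below converts a measured A260 into an RNA concentration. It assumes the standard approximation that one A260 unit corresponds to about 40 μg/mL of RNA (a conversion factor of ours, not stated in the text) together with the 1 : 100 dilution described above.

```python
# Estimate RNA concentration from absorbance at 260 nm.
# Assumes the standard conversion 1.0 A260 unit ~ 40 ug/mL for RNA;
# the 1:100 dilution factor follows the protocol described above.
def rna_concentration_ug_per_ml(a260: float, dilution_factor: int = 100) -> float:
    return a260 * 40.0 * dilution_factor

print(rna_concentration_ug_per_ml(0.25))  # 0.25 A260 -> ~1000 ug/mL in the stock
```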
### 2.4. Microarray Analysis
Isolated RNA was further purified using the RNeasy kit from Qiagen. To demonstrate that the RNA was indeed free of RNase activity, samples were incubated in water at 42°C for 1 hr and then examined on 1% agarose gels. An identical pattern of bands in unheated and heated samples was obtained, showing a lack of RNase activity. Microarray analysis was performed by the Moffitt Core Facility (Tampa, FL) using the Affymetrix GeneChip Instrument system and the protocol established by Affymetrix, Inc. Ten μg of RNA each from the livers of 3 control and 3 cholesterol-fed rats was used in the analysis. The RNA was converted to double-stranded cDNA using an oligo(dT)24 primer containing a T7 RNA polymerase recognition sequence. The product was transcribed into biotin-labeled cRNA using T7 RNA polymerase. The biotinylated cRNA was hybridized to Affymetrix GeneChip Rat Genome 230 Plus 2.0 arrays, which detect about 28,000 genes. Multiple oligos were used for each gene, with the data averaged. Scanned chip images were analyzed using GeneChip algorithms.
### 2.5. Real Time RT-PCR Analysis
To validate the microarray results, we assessed the expression of a subset of genes via real-time PCR, essentially as described previously [14]. Total RNA was resuspended in diethyl pyrocarbonate-treated water. Twenty micrograms of RNA was then DNAse-treated using the TURBO DNA-Free Kit from Ambion. cDNA was prepared from 1 μg of DNAse-treated RNA using the Reverse Transcription System from Promega. The final product was brought up to 100 μL, and a total of 2 μL of the reverse transcription reaction was then used for real-time PCR analysis. The primer sequences used are given in Table 1. PCR was carried out according to the protocol from ABI Power SYBR Green Master Mix, using a Chromo-4 DNA Engine (Bio-Rad) with minor modifications. The program used for amplification was (i) 95°C for 5 minutes, (ii) 95°C for 15 seconds, (iii) 61°C for 1 minute (collect data), (iv) go to step (ii) 40 times, and (v) 55°C + 0.5°C every 10 seconds, ×80 (melt curve). The results were quantified by the ΔΔCt method using Microsoft Excel statistical programs and SigmaPlot 8.0 (a worked ΔΔCt example is sketched after Table 1). As a housekeeping gene, 18S ribosomal RNA was used; values are expressed relative to it.

Table 1
Oligonucleotide sequences used for RT-PCR analysis.
| Gene | Sense (5′→3′) | Antisense (5′→3′) |
| --- | --- | --- |
| SQLE | AGTGAACAAACGAGGCGTCCTACT | AAAGCGACTGTCATTCCTCCACCA |
| CYP51 | TTAGGTGACAACCTGACACACGCT | TGCTTACTGTCTTGCTCCTGGTGT |
| ABCG8 | GATGCTGGCTATCATAGGGAGC | TCTCTGCCTGTGATAACGTCGA |
| ABCG5 | TGAGCTCTTCCACCACTTCGACAA | TGTCCACCGATGTCAAGTCCATGT |
| ACAT2 | TTGTGCCAGTGCACGTGTCTTCTA | GCTTCAGCTTGCTCATGGCTTCAA |
| CYP7A | TGAAAGCGGGAAAGCAAAGACCAC | TCTTGGACGGCAAAGAGTCTTCCA |
| 7DHCR | TCAGCTTCCAGGTGCTGCTTTACT | ACAATCCCTGCTGGAGTTATGGCA |
| D14SR | AATGGTTTCCAGGCTCTGGTGCTA | ATAAAGCTGGTGAGAGTGGTCGCA |
| HMGCR | ATTGCACCGACAAGAAACCTGCTG | TTCTCTCACCACCTTGGCTGGAAT |
| HMGCS | TTGGTAGTTGCAGGAGACATCGCT | AGCATTTGGCCCAATTAGCAGAGC |
| IGFBP1 | AGAGGAACAGCTGCTGGATAGCTT | AGGGCTCCTTCCATTTCTTGAGGT |
| LANS | ACTCTACGATGCTGTGGCTGTGTT | AAATACCCGCCACGCTTAGTCTCA |
| LIPIN2 | TCTGCCATGGACTTGCCTGATGTA | ACTCGTGGTACGTGATGATGTGCT |
| PPARα | AGACCTTGTGCATGGCTGAGAAGA | AATCGGACCTCTGCCTCCTTGTTT |
| PPARγ | CAATGCCATCAGGTTTGGGCGAAT | ATACAAATGCTTTGCCAGGGCTCG |
| SC4MEOX | ACCTGGCACTATTTCCTGCACAGA | AGCCTGGAACTCGTGATGGACTTC |
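The ΔΔCt quantification referenced above reduces to simple arithmetic on threshold-cycle (Ct) values. The sketch below is a minimal illustration with made-up Ct numbers (the actual Ct data are not given in the paper), normalizing each sample to the 18S reference and expressing fold change as $2^{-\Delta\Delta C_t}$.

```python
# Minimal delta-delta-Ct fold-change calculation (illustrative Ct values, not study data).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    delta_treated = ct_target_treated - ct_ref_treated   # normalize to 18S in treated
    delta_control = ct_target_control - ct_ref_control   # normalize to 18S in control
    ddct = delta_treated - delta_control
    return 2 ** (-ddct)

# Hypothetical example: the target gene's normalized Ct rises by 3 cycles,
# corresponding to roughly an 8-fold decrease in expression.
print(fold_change(ct_target_treated=25.0, ct_ref_treated=10.0,
                  ct_target_control=22.0, ct_ref_control=10.0))  # -> 0.125
```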
### 2.6. Western Blot Analysis
At the time of sacrifice, a portion of liver was excised for protein analysis. Lysosome-free liver microsomes were prepared according to the procedure previously described [17]. Liver microsomes (50 μg of protein per lane) from rats and mice were subjected to SDS-PAGE and western blotting. Membranes were incubated overnight with a 1 : 2,000 dilution of a cholesterol 7α hydroxylase primary antibody generated in rabbits (generously provided by Dr. Mats Rudling) in 5% PBST nonfat dry milk. A 1 : 10,000 dilution of sheep anti-rabbit IgG was used as the secondary antibody. The West Pico Chemiluminescence kit was used for detection, with exposure times ranging from 5 to 20 seconds. The blots were then stripped and reprobed with an antibody to β-actin, and the results were expressed relative to the β-actin signal.
### 2.7. Statistics
For the microarray data, a 2-tailed, equal-variance t-test was used. The RT-PCR data are presented as means ± standard errors; P values from a 2-tailed t-test are given.
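For readers who want to reproduce this kind of comparison, the snippet below runs a two-tailed, equal-variance t-test on two small groups of expression values using scipy. The numbers are placeholders of our own, since the raw per-animal intensities are not reproduced here.

```python
from scipy import stats

# Placeholder expression values for 3 control and 3 cholesterol-fed rats.
control = [1250.0, 1100.0, 1320.0]
cholesterol_fed = [160.0, 140.0, 190.0]

# Two-tailed t-test assuming equal variances, as described in Section 2.7.
t_stat, p_value = stats.ttest_ind(control, cholesterol_fed, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```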
## 3. Results
### 3.1. Microarray Analysis
In order to comprehensively identify the genes that exhibit significant alterations in rates of transcription in response to dietary cholesterol, microarray analysis was performed on hepatic RNA from Sprague-Dawley rats fed either normal rodent chow or chow supplemented with 1% cholesterol for 5 days. This dietary regimen does not raise serum cholesterol levels in these male Sprague-Dawley rats [2, 18]. Liver cholesterol levels are increased about 2-fold (Table 2). This is similar to our previous findings with 2% cholesterol [16]. Thus, this treatment appears to constitute a useful model for studying adaptive responses.

Table 2
Serum and liver cholesterol levels.
| Condition | Serum (mg/100 mL) | Liver (mg/g) |
| --- | --- | --- |
| Normal | 128 ± 5 | 3.2 ± 0.3 |
| Cholesterol fed | 102 ± 4 | 6.2 ± 0.5** |

**P<0.01 compared with normal chow-fed rats. A 2-tailed distribution t-test, equal variance was used.

Surprisingly, the rates of transcription of relatively few genes were altered more than 2-fold (Table 3). The largest increase was observed for insulin-like growth factor-binding protein 1; this increase of over 4-fold was confirmed by RT-PCR analysis (Table 5). A prominent increase of nearly 3-fold in hepatic cholesterol 7α hydroxylase expression was observed. This would provide for increased production of bile acids and more efficient elimination of biliary cholesterol, since this enzyme catalyzes the rate-limiting step of bile acid synthesis. ABCG5 and ABCG8 expression was actually decreased (Table 4). These ATP-binding cassette (ABC) transport proteins promote biliary secretion of neutral sterols [19]. Also, acetyl CoA acetyltransferase 2 (ACAT2), which esterifies excess cholesterol, was significantly decreased. Strikingly, there was no change in hepatic LDL receptor expression.

Table 3
Microarray analysis identifying genes upregulated by dietary cholesterol.
| Fold change | Gene |
| --- | --- |
| +4.5** | Insulin-like growth factor binding protein 1 |
| +3.9* | B-cell leukemia/lymphoma 6 (predicted) |
| +2.7** | Cholesterol 7α hydroxylase |
| +2.5* | Similar to Ran-binding protein 2 (predicted) |
| +2.8** | Zinc finger protein 354A |
| +2.4* | PPARα |
| +2.4** | Lipin 2 (predicted) |
| +2.3* | Zinc finger, matrin-like (predicted) |
| +2.1* | HGPRTase |
| +2.0 | Glucose-6-phosphatase |

*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.

Table 4
Microarray analysis identifying genes downregulated by dietary cholesterol.
| Fold change | Gene |
| --- | --- |
| −7.9** | Squalene epoxidase |
| −4.8** | ABCG8 |
| −4.5 | Hypo protein XP_580018 |
| −3.8* | Acetyl CoA acetyltransferase 2 |
| −3.5** | Lanosterol 14α demethylase |
| −3.4** | Delta 14 sterol reductase |
| −3.3** | Cytokine inducible SH2 protein |
| −3.1** | Nuc factor, erythroid derived 2 |
| −3.1** | Ephrin A5 |
| −3.0 | Dual specificity phosphatase 1 |
| −2.8** | Sterol C4 methyl oxidase-like |
| −2.6** | 7-Dehydrocholesterol reductase |
| −2.6** | Solute carrier family 25 mem 30 |
| −2.5** | HMG-CoA synthase 1 |
| −2.5** | Farnesyl diphosphate synthase |
| −2.5** | Lanosterol synthase |
| −2.6** | Farnesyl diphos transferase 1 |
| −2.3* | ABCG5 |
| −2.2** | Fatty acid desaturase 1 |
| −2.1* | Hemato expressed homeobox |
| −2.1** | Acetoacetyl CoA synthase |
| −2.0** | Hypo protein XP_579849 |
| −2.0** | HMGCR |
| −2.0** | Forkhead box A2 |
| −2.0** | Mevalonate kinase |

*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.

Table 5
RT-PCR analysis.
| Gene | Normal | Cholesterol | Fold | P value |
| --- | --- | --- | --- | --- |
| ABCG5 | 1.00 ± 0.07 | 0.67 ± 0.09 | −1.49 | 0.018 |
| ABCG8 | 1.02 ± 0.17 | 0.17 ± 0.03 | −6.00 | 0.003 |
| HMG-CoA synthase | 1.03 ± 0.23 | 0.23 ± 0.01 | −4.48 | 0.008 |
| Squalene epoxidase | 1.17 ± 0.26 | 0.14 ± 0.03 | −8.29 | 0.002 |
| Lanosterol 14α demeth | 0.71 ± 0.28 | 0.16 ± 0.01 | −4.53 | 0.027 |
| Delta 14 sterol reductase | 1.03 ± 0.31 | 0.19 ± 0.15 | −5.33 | 0.013 |
| Lanosterol synthase | 1.12 ± 0.58 | 0.33 ± 0.14 | −3.39 | 0.083 |
| ACAT-2 | 1.00 ± 0.08 | 0.31 ± 0.07 | −3.20 | 0.001 |
| Sterol C4 Me Ox like | 1.04 ± 0.31 | 0.39 ± 0.50 | −2.65 | 0.128 |
| 7-Dehydrocholesterol red | 1.01 ± 0.13 | 0.50 ± 0.06 | −2.00 | 0.004 |
| HMG-CoA reductase | 1.02 ± 0.26 | 0.73 ± 0.28 | −1.40 | 0.261 |
| PPAR gamma | 1.07 ± 0.40 | 1.02 ± 0.89 | −1.04 | 0.944 |
| PPAR alpha | 1.01 ± 0.18 | 1.06 ± 0.44 | +1.05 | 0.852 |
| Lipin 2 (predicted) | 1.03 ± 0.29 | 1.64 ± 0.42 | +1.59 | 0.105 |
| Chol 7α hydroxylase | 1.17 ± 0.35 | 3.27 ± 0.97 | +2.79 | 0.044 |
| IGFBP1 | 1.02 ± 0.31 | 4.30 ± 1.90 | +4.22 | 0.042 |

A 2-tailed distribution t-test, equal variance was used.

Many of the genes exhibiting downregulated expression in response to dietary cholesterol (Table 4) catalyze reactions of the cholesterol biosynthetic pathway. Levels of mRNA for the enzyme that catalyzes the rate-limiting reaction, HMG-CoA reductase, were only decreased 2-fold. RT-PCR analysis (Table 5) showed only a 1.4-fold decrease in HMG-CoA reductase, in agreement with previous results from Northern blotting analysis [20]. The largest decrease (7.9-fold) was observed for squalene epoxidase. This enzyme regulates lanosterol synthesis [21]. The next largest decrease among cholesterol biosynthetic enzymes was seen for lanosterol 14α demethylase. The expression of several cholesterol biosynthetic enzymes, including Δ14 sterol reductase, sterol C4 methyl oxidase-like, 7-dehydrocholesterol reductase, HMG-CoA synthase 1, farnesyl diphosphate synthase, lanosterol synthase, acetoacetyl CoA synthase, and mevalonate kinase, was decreased 2- to 4-fold (Table 4). Other cholesterol biosynthetic enzymes such as sterol-C5-desaturase, phosphomevalonate kinase, diphosphomevalonate decarboxylase, 24-dehydrocholesterol reductase, and dehydrogenase/reductase (SDR family) member 7 were decreased 1.5- to 1.9-fold. Other genes of interest whose expression was decreased by dietary cholesterol were SREBP-2 (1.4-fold, P=0.028) and PCSK9 (1.5-fold, P=0.022).
### 3.2. RT-PCR Analysis
Quantitative RT-PCR was utilized to examine a subset of the altered genes identified via microarray analysis. The results are presented in Table 5 and correlate with those seen in the microarray analysis. For example, the expression of squalene epoxidase (Sqle) and lanosterol 14α demethylase (Cyp51A1) was decreased by 8.29- and 4.53-fold, respectively (Table 5), as compared with 7.9- and 3.5-fold in the microarray analysis (Table 4). The relative fold changes in ABCG5 and ABCG8, HMG-CoA synthase, ACAT-2, lanosterol synthase, sterol C4 methyl oxidase-like, lipin 2, cholesterol 7α hydroxylase, and 7-dehydrocholesterol reductase also agree very closely, providing further verification of the data obtained from the microarray analysis.
### 3.3. Western Blotting Analysis
Since the increase in rat liver cholesterol 7α hydroxylase mRNA caused by dietary cholesterol could be important for the elimination of cholesterol from the body, we wished to determine whether levels of this protein are actually increased. For comparison purposes, we examined the effect of dietary cholesterol on mouse liver cholesterol 7α hydroxylase protein. Cholesterol 7α hydroxylase protein levels were much higher in rat liver than in mouse liver (Figure 1). Supplementing the chow with 1% cholesterol increased hepatic cholesterol 7α hydroxylase protein levels in rats from 1.22 ± 0.78 to 6.52 ± 2.27 (4 chow-fed rats compared with 5 cholesterol-fed animals, P=0.011). In contrast, little effect on the levels in mouse liver was seen even when 2% cholesterol was added to the diet (Figure 1).

Figure 1
Comparison of the effects of dietary cholesterol on hepatic cholesterol 7α hydroxylase protein levels in rats and Mice. Rats (R) and mice (M) were fed normal (N) chow diets with or without 1% or 2% cholesterol (C) for 5 days. A representative Western blot of hepatic microsomal cholesterol 7α hydroxylase protein is presented.
### 3.4. Time-Course Experiment
A time-course experiment was conducted to determine how rapidly dietary cholesterol reduces the expression of Sqle and Cyp51. RT-PCR analysis, as shown in Figures 2 and 3, demonstrates that the decrease in Sqle occurred more rapidly. Both Sqle and Cyp51 decreased progressively over a 3-day period. This agrees with the time course for the reduction in translation of HMG-CoA reductase mRNA [22].

Figure 2
Effect of cholesterol feeding on hepatic squalene epoxidase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± standard error for 3 to 6 rats per time point. *P<0.05; **P<0.01.

Figure 3
Effect of cholesterol feeding on hepatic lanosterol 14α demethylase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± Standard Error for 3 to 6 rats per time point. *P<0.05; **P<0.01.
## 3.1. Microarray Analysis
In order to comprehensively identify the genes that exhibit significant alterations in rates of transcription in response to dietary cholesterol, microarray analysis was performed on hepatic RNA from Sprague-Dawley rats fed either normal rodent chow or chow supplemented with 1% cholesterol for 5 days. This dietary regimen does not raise serum cholesterol levels in these male Sprague Dawley rats [2, 18]. Liver cholesterol levels are increased about 2-fold (Table 2). This is similar to our previous findings with 2% cholesterol [16]. Thus, this treatment appears to constitute a useful model for studying adaptive responses.Table 2
Serum and liver cholesterol levels.
ConditionSerumLiverMg/100 mLMg/gNormal128 ± 53.2 ± 0.3Cholesterol fed102 ± 46.2 ± 0.5**P<0.01 compared with normal chow fed rats. A 2-tailed distribution t-test, equal variance was used.Surprisingly, the rates of transcription of relatively few genes were altered more than 2-fold (Table3). The largest increase was observed for insulin-like growth factor-binding protein 1. This increase of over 4-fold was confirmed by RT-PCR analysis (Table 5). A prominent increase of nearly 3-fold in hepatic cholesterol 7α hydroxylase expression was observed. This would provide for increased production of bile acids and more efficient elimination of biliary cholesterol, since this enzyme catalyzes the rate-limiting step of bile acid synthesis. ABCG5 and ABCG8 expression was actually decreased (Table 4). These ATP-binding cassette (ABC) transport proteins promote biliary secretion of neutral sterols [19]. Also, acetyl CoA acetyltransferase 2 (ACAT2), which esterifies excess cholesterol, was significantly decreased. Strikingly, there was no change in hepatic LDL receptor expression.Table 3
Microarray analysis identifying genes upregulated by dietary cholesterol.
NormalCholesterolFold diffGene386507439237712642347+4.5**Insulin-like growth factor binding protein 147645753623222418988+3.9*B-cell leukemia/lymphoma 6 (predicted)359753118493162871791412394+2.7**Cholesterol 7α hydroxylase73304546853747746+2.5*Similar to Ran-binding protein 2 (predicted)228717331841559057265166+2.8**Zinc finger protein 354A127387320756494773+2.4*PPARα253372250766759536+2.4**Lipin 2 (predicted)34131263332303344+2.3*Zinc finger, matrin-like (predicted)207684856135810061277+2.1*HGPRTase425528304209965449598308+2.0Glucose-6-phosphatase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 4
Microarray analysis identifying genes downregulated by dietary cholesterol.
NormalCholesterolFold diffGene118523811598219232203−7.9**Squalene epoxidase58878342614884143−4.8**ABCG85743731955114462−4.5Hypo protein XP_580018235330662285735783503−3.8*Acetyl CoA Acetyltransferase2103771167610752269033503329−3.5**Lanosterol 14α demethylase38904874366892912801426−3.4**Delta 14 sterol reductase10498931196186216542−3.3**Cytokine inducible SH2 protein2383162426511576−3.1**Nuc factor, erythroid derived 27331023760233232345−3.1**Ephrin A51998608524340343375−3.0Dual specificity phosphatase111250143229729351146364315−2.8**Sterol C4 methyl oxidase-like608363264997196125972103−2.6**7-Dehydrocholesterol reductase28512771211210278791055−2.6**Solute carrier family 25 mem 30140791766616412555562967154−2.5**HMG-CoA Synthase 1148901320812147514659205178−2.5**Farnesyl diphosphate synthase1857263617928821025635−2.5**Lanosterol synthase494655024267150324561667−2.6**Farnesyl diphos transferase 1169525181567870834814−2.3*ABCG5134761370214018604873575541−2.2**Fatty acid desaturase 11059981616620305042154797−2.1*Hemato expressed homeobox444591470225288205−2.1**Acetoacetyl CoA synthase126914651136604749548−2.0**Hypo protein XP_57984910087111358727422353045287−2.0**HMGCR251621772744113810491509−2.0**Forkhead box A28481080873538444432−2.0**Mevalonate kinase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 5
RT-PCR analysis.

| Gene | Normal | Cholesterol | Fold | P value |
| --- | --- | --- | --- | --- |
| ABCG5 | 1.00 ± 0.07 | 0.67 ± 0.09 | −1.49 | 0.018 |
| ABCG8 | 1.02 ± 0.17 | 0.17 ± 0.03 | −6.00 | 0.003 |
| HMG-CoA synthase | 1.03 ± 0.23 | 0.23 ± 0.01 | −4.48 | 0.008 |
| Squalene epoxidase | 1.17 ± 0.26 | 0.14 ± 0.03 | −8.29 | 0.002 |
| Lanosterol 14α demethylase | 0.71 ± 0.28 | 0.16 ± 0.01 | −4.53 | 0.027 |
| Delta 14 sterol reductase | 1.03 ± 0.31 | 0.19 ± 0.15 | −5.33 | 0.013 |
| Lanosterol synthase | 1.12 ± 0.58 | 0.33 ± 0.14 | −3.39 | 0.083 |
| ACAT-2 | 1.00 ± 0.08 | 0.31 ± 0.07 | −3.20 | 0.001 |
| Sterol C4 methyl oxidase-like | 1.04 ± 0.31 | 0.39 ± 0.50 | −2.65 | 0.128 |
| 7-Dehydrocholesterol reductase | 1.01 ± 0.13 | 0.50 ± 0.06 | −2.00 | 0.004 |
| HMG-CoA reductase | 1.02 ± 0.26 | 0.73 ± 0.28 | −1.40 | 0.261 |
| PPARγ | 1.07 ± 0.40 | 1.02 ± 0.89 | −1.04 | 0.944 |
| PPARα | 1.01 ± 0.18 | 1.06 ± 0.44 | +1.05 | 0.852 |
| Lipin 2 (predicted) | 1.03 ± 0.29 | 1.64 ± 0.42 | +1.59 | 0.105 |
| Cholesterol 7α hydroxylase | 1.17 ± 0.35 | 3.27 ± 0.97 | +2.79 | 0.044 |
| IGFBP1 | 1.02 ± 0.31 | 4.30 ± 1.90 | +4.22 | 0.042 |

A 2-tailed t-test assuming equal variance was used.

Many of the genes exhibiting downregulated expression in response to dietary cholesterol (Table 4) catalyze reactions of the cholesterol biosynthetic pathway. Levels of mRNA for the enzyme that catalyzes the rate-limiting reaction, HMG-CoA reductase, were decreased only 2-fold. RT-PCR analysis (Table 5) showed only a 1.4-fold decrease in HMG-CoA reductase, in agreement with previous results from Northern blotting analysis [20]. The largest decrease (7.9-fold) was observed for squalene epoxidase, the enzyme that regulates lanosterol synthesis [21]. The next largest decrease among cholesterol biosynthetic enzymes was seen for lanosterol 14α demethylase. The expression of several other cholesterol biosynthetic enzymes, including Δ14 sterol reductase, sterol C4 methyl oxidase-like, 7-dehydrocholesterol reductase, HMG-CoA synthase 1, farnesyl diphosphate synthase, lanosterol synthase, acetoacetyl CoA synthase, and mevalonate kinase, was decreased 2- to 4-fold (Table 4). Still other cholesterol biosynthetic enzymes, such as sterol-C5-desaturase, phosphomevalonate kinase, diphosphomevalonate decarboxylase, 24-dehydrocholesterol reductase, and dehydrogenase/reductase (SDR family) member 7, were decreased 1.5- to 1.9-fold. Other genes of interest whose expression was decreased by dietary cholesterol were SREBP-2 (1.4-fold, P=0.028) and PCSK9 (1.5-fold, P=0.022).
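Throughout Tables 2–5, group differences are assessed with a 2-tailed t-test assuming equal variance, and fold differences are signed ratios of group means (a decrease from 1.00 to 0.67, for instance, is reported as −1.49). The following is a minimal sketch of that arithmetic; the triplicate values are hypothetical, and scipy is assumed to be available:

```python
# Signed fold change and 2-tailed equal-variance t-test, as used for the
# values in Tables 2-5. The replicate values below are hypothetical.
import numpy as np
from scipy import stats

def signed_fold(normal, treated):
    """Ratio of group means; decreases are reported as -1/ratio."""
    ratio = np.mean(treated) / np.mean(normal)
    return ratio if ratio >= 1 else -1.0 / ratio

normal = np.array([0.93, 1.00, 1.07])    # hypothetical chow-fed replicates
treated = np.array([0.58, 0.67, 0.76])   # hypothetical cholesterol-fed replicates

t_stat, p_value = stats.ttest_ind(normal, treated, equal_var=True)  # 2-tailed
print(f"fold = {signed_fold(normal, treated):+.2f}, P = {p_value:.3f}")
# fold = -1.49, matching the sign convention of the ABCG5 row in Table 5
```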
## 3.2. RT-PCR Analysis
Quantitative RT-PCR was utilized to examine a subset of the altered genes identified via microarray analysis. The results are presented in Table 5 and correlate with those seen in the microarray analysis. For example, the expression of squalene epoxidase (Sqle) and of lanosterol 14α demethylase (Cyp51A1) was decreased 8.29- and 4.53-fold, respectively (Table 5), as compared with 7.9- and 3.5-fold in the microarray analysis (Table 4). The relative fold changes in ABCG5 and ABCG8, HMG-CoA synthase, ACAT-2, lanosterol synthase, sterol C4 methyl oxidase-like, lipin 2, cholesterol 7α hydroxylase, and 7-dehydrocholesterol reductase also agree very closely, providing further verification of the data obtained from the microarray analysis.
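The relative expression values in Table 5 follow the 2^(−ΔΔCt) calculation described in the Methods, with each target Ct normalized to 18S rRNA. A minimal sketch of that arithmetic, using hypothetical Ct values:

```python
# 2^(-ddCt) relative quantification (target normalized to 18S rRNA, per the
# Methods). All Ct values below are hypothetical illustration numbers.
def ddct_fold(ct_target_ctrl, ct_ref_ctrl, ct_target_trt, ct_ref_trt):
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, control group
    dct_trt = ct_target_trt - ct_ref_trt      # delta-Ct, treated group
    return 2.0 ** -(dct_trt - dct_ctrl)       # fold change vs. control

# A target whose Ct rises 3 cycles relative to 18S after cholesterol feeding:
print(ddct_fold(22.0, 9.0, 25.0, 9.0))  # 0.125, i.e., an 8-fold decrease
```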
## 3.3. Western Blotting Analysis
Since the increase in rat liver cholesterol 7α hydroxylase mRNA caused by dietary cholesterol could be important for the elimination of cholesterol from the body, we wished to determine whether levels of this protein are actually increased. For comparison, we also examined the effect of dietary cholesterol on mouse liver cholesterol 7α hydroxylase protein. Cholesterol 7α hydroxylase protein levels were much higher in rat liver than in mouse liver (Figure 1). Supplementing the chow with 1% cholesterol increased hepatic cholesterol 7α hydroxylase protein levels in rats from 1.22 ± 0.78 to 6.52 ± 2.27 (arbitrary units relative to β-actin; 4 chow-fed versus 5 cholesterol-fed rats, P=0.011), an approximately 5-fold increase. In contrast, little effect on the levels in mouse liver was seen even when 2% cholesterol was added to the diet (Figure 1).

Figure 1
Comparison of the effects of dietary cholesterol on hepatic cholesterol 7α hydroxylase protein levels in rats and mice. Rats (R) and mice (M) were fed normal (N) chow diets with or without 1% or 2% cholesterol (C) for 5 days. A representative Western blot of hepatic microsomal cholesterol 7α hydroxylase protein is presented.
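The protein values quoted above (1.22 ± 0.78 and 6.52 ± 2.27) are band intensities expressed relative to the β-actin signal of the same lane, per the Methods. A minimal sketch of that normalization and the mean ± standard error summary, using hypothetical densitometry readings:

```python
# Band quantification behind Figure 1: each cholesterol 7alpha hydroxylase
# signal is divided by its lane's beta-actin signal, then summarized as
# mean +/- standard error. All densitometry values below are hypothetical.
import numpy as np

cyp7a1 = np.array([5.1, 9.8, 4.4, 7.0, 6.3])  # hypothetical target bands
actin = np.array([1.0, 1.3, 0.9, 1.1, 1.0])   # matching beta-actin bands

ratios = cyp7a1 / actin
sem = ratios.std(ddof=1) / np.sqrt(ratios.size)  # standard error of the mean
print(f"{ratios.mean():.2f} +/- {sem:.2f} (relative to beta-actin)")
```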
## 3.4. Time-Course Experiment
A time-course experiment was conducted to determine how rapidly dietary cholesterol reduces the expression of Sqle and Cyp51. RT-PCR analysis, shown in Figures 2 and 3, demonstrates that the decrease in Sqle occurred more rapidly than that in Cyp51. Both Sqle and Cyp51 decreased progressively over a 3-day period. This agrees with the time course for the reduction in translation of HMG-CoA reductase mRNA [22].

Figure 2
Effect of cholesterol feeding on hepatic squalene epoxidase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± standard error for 3 to 6 rats per time point. *P<0.05; **P<0.01.

Figure 3
Effect of cholesterol feeding on hepatic lanosterol 14α demethylase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± standard error for 3 to 6 rats per time point. *P<0.05; **P<0.01.
## 4. Conclusion
Sprague-Dawley rats from Harlan fed the grain-based Teklad 22/5 rodent chow resist a high-cholesterol diet-induced increase in serum cholesterol, as reported previously [2, 16]. They do, however, show increases in liver cholesterol levels (Table 2). This indicates that dietary cholesterol is effectively taken up by the intestine, where Niemann-Pick C1-like 1 protein facilitates its uptake [23], and that the resulting chylomicron remnants are taken up by the liver. With the increased cholesterol from the diet, the activity of the hepatic LDL receptor, as determined from its rate of turnover, is markedly decreased [18]. Thus, the cycling of this receptor, located in the cholesterol-rich caveolae portion of the plasma membrane [24], virtually stops [18]. The levels of hepatic LDL receptor protein and mRNA do not, however, change [18]. Since the liver must play a key role in adapting to and resisting the increased load of cholesterol, we conducted a microarray study to investigate changes in hepatic mRNA levels.

Of all the hepatic genes examined, the most dramatic changes seen in response to dietary cholesterol were the decreases in Sqle and Cyp51. These changes, seen in the microarray experiment, were confirmed by RT-PCR. When lanosterol levels fall, as occurs when Sqle expression is decreased, the different Km values of the three reactions catalyzed by Cyp51 come into play. The Km for the formation of 3β-hydroxylanost-8-en-32-al is 56 μM, while the Km for its removal (conversion to 4,4-dimethylcholesta-8,14-dien-3β-ol) is 368 μM [25]. Although there would be sufficient lanosterol for the first reaction catalyzed by CYP51 (formation of 3β-hydroxylanost-8-en-32-al) because of its low Km, the subsequent conversion to 4,4-dimethylcholesta-8,14-dien-3β-ol would be slow owing to the high Km of that reaction. Thus, 3β-hydroxylanost-8-en-32-al would accumulate (see the sketch below). This oxylanosterol is known to decrease the rate of translation of HMG-CoA reductase mRNA in CHO cells [26]. The structural analogue 15α-fluoro-3β-hydroxylanost-7-en-32-aldehyde has also been reported to inhibit translation of HMG-CoA reductase mRNA in CHO cells [27]. In addition, it has been demonstrated that feeding the nonmetabolizable oxylanosterol analogue 15-oxa-32-vinyllanost-8-ene-3β,32-diol to rats results in a marked decrease in translation of hepatic HMG-CoA reductase mRNA, mimicking the effect of dietary cholesterol [28]. Thus, dietary cholesterol-mediated decreases in Sqle and Cyp51 transcription lead to decreased translation of hepatic HMG-CoA reductase mRNA.
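The kinetic argument above can be made concrete with a simple Michaelis-Menten comparison of the two CYP51 steps. Only the Km values (56 and 368 μM [25]) come from the text; the substrate concentration and the assumption of equal Vmax for both steps are illustrative simplifications:

```python
# Michaelis-Menten sketch of why 3beta-hydroxylanost-8-en-32-al accumulates
# when lanosterol supply falls. Only the Km values are from the text [25];
# the substrate level and equal-Vmax assumption are hypothetical.
def rate_fraction(s_um, km_um):
    """v / Vmax at substrate concentration s_um (Michaelis-Menten)."""
    return s_um / (km_um + s_um)

s = 50.0                                   # hypothetical sterol pool, in uM
formation = rate_fraction(s, km_um=56.0)   # aldehyde formation (low Km)
removal = rate_fraction(s, km_um=368.0)    # aldehyde removal (high Km)
print(f"formation at {formation:.0%} of Vmax, removal at {removal:.0%}")
# ~47% vs ~12%: the aldehyde is produced roughly 4x faster than it is
# consumed, so it builds up and can suppress HMG-CoA reductase translation.
```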
In this microarray study, a 2-fold decrease in HMG-CoA reductase expression was observed in response to dietary cholesterol. The SREs in the promoters of both Sqle and Cyp51 [29, 30] have been shown to be responsive to SREBP2, so both genes would be expected to be downregulated by cholesterol feeding. However, the SRE sequence in the HMG-CoA reductase promoter does not agree well with the consensus SRE, which may explain the only modest decrease in HMG-CoA reductase transcription in response to dietary cholesterol. Previous Northern blotting analysis and nuclear run-on studies likewise showed only slight decreases in mRNA levels and rates of transcription for hepatic HMG-CoA reductase [20, 31]. However, enzyme activity and protein levels drop to a few percent of control values [20, 31]. This is due primarily to a marked decrease in the rate of translation of HMG-CoA reductase mRNA caused by feeding cholesterol [22]. This decrease in translation reflects a shift of the mRNA from polysomes to monosomes caused by dietary cholesterol [32]. The present observations are predicted by the proposed two-step model for feedback regulation of hepatic HMG-CoA reductase gene expression [33]: increased hepatic cholesterol leads to decreased formation of mature SREBP2 and hence decreased transcription of Sqle and Cyp51; the resulting accumulation of 3β-hydroxylanost-8-en-32-al then acts to decrease translation of hepatic HMG-CoA reductase mRNA [20].

In addition to adapting to excess dietary cholesterol by decreasing the rate of hepatic cholesterol biosynthesis, adaptations in bile acid pathways also occurred. A 2.7-fold increase was observed in the expression of cholesterol 7α hydroxylase, the enzyme that catalyzes the rate-limiting step in bile acid synthesis (Table 3). We have previously observed by Northern blotting analysis that hepatic cholesterol 7α hydroxylase mRNA is markedly increased in cholesterol-fed rats [16]. Paradoxically, the microarray analysis did not show increases in expression of the ABCG5/G8 transporters. Instead, decreases in the expression of both ABCG5 and ABCG8 were observed, with ABCG8 expression in particular markedly decreased. These findings were confirmed by RT-PCR analysis (Table 5). This is in contrast to a study in mice in which ABCG5 expression was reported to be upregulated about 2-fold in response to dietary cholesterol [10]. In agreement with previous Northern blotting analysis, cholesterol feeding did not affect hepatic LDL receptor mRNA levels [18]. However, the rate of cycling, and hence the activity, of the receptor is markedly decreased in response to dietary cholesterol [18].

Western blotting analysis showed that dietary cholesterol increases hepatic cholesterol 7α hydroxylase protein levels in rats but not in mice (Figure 1). Mice also exhibited lower basal levels of cholesterol 7α hydroxylase protein in their livers. This is in agreement with a previous study [34] showing that dietary cholesterol caused a 3-fold increase in hepatic cholesterol 7α hydroxylase mRNA in rats with much less effect in C57BL/6J mice. Other reports showed that dietary cholesterol may act to increase or decrease cholesterol 7α hydroxylase mRNA levels in C57BL/6J mice depending on the type of fat added to the diet [35, 36]; diets supplemented with olive oil (monounsaturated fatty acids) caused increased expression of hepatic cholesterol 7α hydroxylase. Western blotting analysis of cholesterol 7α hydroxylase was not performed in these previous studies.

In a recent study comparing the responses of hamsters and rats to cholesterol-enriched diets, hepatic cholesterol 7α hydroxylase expression in hamsters was not increased in response to cholesterol feeding [37], in contrast to the increase seen in rats. However, hamsters do not adapt well to cholesterol-supplemented diets, as their serum cholesterol levels are increased about 2-fold in response to even a 0.1% cholesterol diet [37]. Also, hamsters express much lower basal levels of hepatic HMG-CoA reductase (only 2 percent of that of rats) [2], which renders them less able to resist the effects of dietary cholesterol [13].

The effect of a high-cholesterol diet on the expression of genes in livers of mice has been previously investigated by microarray analysis [10]. In that study, Sqle was found to be downregulated 3.4- and 2.8-fold in males and females, respectively.
This parallels, though is smaller than, the 7.9-fold decrease observed for hepatic Sqle expression in the present study. Apparently, no changes in CYP51 or cholesterol 7α hydroxylase expression were observed in that study [10]. HMG-CoA reductase was decreased 2-fold in males, with no effect seen in females [10]. The greatest change in the previous study was observed for a disintegrin and metalloprotease domain 11 family member (Adam 11), which was upregulated 5- to 11-fold in the microarray and TaqMan analyses. ADAM 11 is a specific aggrecanase induced by interleukin 6 [38], which would promote an inflammatory response. In the present analysis, no change in Adam 11 expression was observed. A significant increase in serum amyloid A3, an acute-phase reactant, was also observed in the earlier study [10]; an increase in this protein likewise indicates an inflammatory response to a high-cholesterol diet.

The largest cholesterol-induced increase in hepatic gene expression in the present study was observed for insulin-like growth factor-binding protein 1 (IGFBP1), which was measured at over 4-fold by both microarray and RT-PCR analysis. This protein is produced by the liver during inflammation [39], here possibly caused by dietary cholesterol. Increased IGFBP1 levels are associated with an elevated risk of cardiovascular disease in type 2 diabetic patients [40]. IGFBP1 may also be a marker for insulin resistance [41] and may be involved in the development of type 2 diabetes [42]. It may be protective against the growth of certain cancers by binding insulin-like growth factor-1, a potent mitogen [43].

In summary, a comprehensive examination of the adaptive hepatic transcriptional responses to dietary cholesterol was conducted using microarray and RT-PCR analysis. The data suggest that rats adapt, in part, to excess dietary cholesterol by markedly reducing hepatic cholesterol biosynthesis and by enhancing elimination of cholesterol as bile acids through induction of cholesterol 7α hydroxylase. A large increase in insulin-like growth factor-binding protein 1 was also observed, which could partially explain the inflammatory response associated with excess dietary cholesterol.
---
*Source: 101242-2011-10-05.xml*

## Abstract
Background. Alterations in expression of hepatic genes that could contribute to resistance to dietary cholesterol were investigated in Sprague-Dawley rats, which are known to be resistant to the serum cholesterol raising action of dietary cholesterol. Methods. Microarray analysis was used to provide a comprehensive analysis of changes in hepatic gene expression in rats in response to dietary cholesterol. Changes were confirmed by RT-PCR analysis. Western blotting was employed to measure changes in hepatic cholesterol 7α hydroxylase protein. Results. Of the 28,000 genes examined using the Affymetrix rat microarray, relatively few were significantly altered. As expected, decreases were observed for several genes that encode enzymes of the cholesterol biosynthetic pathway. The largest decreases were seen for squalene epoxidase and lanosterol 14α demethylase (CYP 51A1). These changes were confirmed by quantitative RT-PCR. LDL receptor expression was not altered by dietary cholesterol. Critically, the expression of cholesterol 7α hydroxylase, which catalyzes the rate-limiting step in bile acid synthesis, was increased over 4-fold in livers of rats fed diets containing 1% cholesterol. In contrast, mice, which are not resistant to dietary cholesterol, exhibited lower hepatic cholesterol 7α hydroxylase (CYP7A1) protein levels, which were not increased in response to diets containing 2% cholesterol.
---
## Body
## 1. Introduction
There is considerable variation among animals and humans in terms of their responses to consumption of excess dietary cholesterol. Consumption of a high-cholesterol diet does not automatically result in elevated serum cholesterol levels due to the operation of adaptive responses. Rabbits, hamsters, and C57BL/6 mice are not resistant to dietary cholesterol and exhibit marked elevations in serum cholesterol levels when given diets supplemented with cholesterol [1–3]. On the other hand, rats such as Sprague-Dawley, Wistar-Furth, Spontaneously Hypertensive or Fischer 344 show very little if any increase in serum cholesterol levels when given a similar cholesterol challenge [2]. For most humans, consumption of increased amounts of dietary cholesterol produces only small increases in both LDL and HDL cholesterol with little effect on the ratio of LDL to HDL [4, 5]. In a 14-year study of over 80,000 female nurses, egg consumption was unrelated to the risk of coronary heart disease [4]. On balance, extensive epidemiologic studies show that dietary cholesterol is not a contributor to increased heart disease risk in humans [6]. Clearly, in many rat strains and most people, adaptive responses are operating that keep serum cholesterol levels within the normal range.Multiple possible mechanisms may be operating to provide a person or animal with resistance to the serum cholesterol-raising action of dietary cholesterol. These include decreasing the rate of cholesterol biosynthesis, decreasing the rate of cholesterol absorption, increasing the rate of cholesterol excretion, increasing the conversion of cholesterol to bile acids, and increasing the rate of removal of serum cholesterol via liver lipoprotein receptors [7, 8].In order to obtain a comprehensive and unbiased analysis of adaptive responses in hepatic gene expression to dietary cholesterol, we carried out microarray analysis of changes in Sprague-Dawley rat liver gene expression elicited by a 1% cholesterol diet. These animals are known to have adaptive responses that render them resistant to dietary cholesterol. In order to induce atherosclerotic plaques in these animals, the rats must be rendered hypothyroid [9]. Thus, this animal is a reasonable model for human responses to dietary cholesterol. In a previous microarray study, C57BL/6 mice were used [10]. These mice become hypercholesterolemic and develop fatty lesions in their ascending aortas when given diets supplemented with cholesterol [11, 12].Liver was selected for extensive study, because this tissue not only synthesizes cholesterol but also is also responsible for bile acid production and excretion of cholesterol and expresses the majority of the body’s LDL receptors [13].
## 2. Methods
### 2.1. Animals
Male Sprague-Dawley rats, 150–200 g (Harlan, Madison, Wis, USA), were fed Harlan Teklad 22/5 rodent chow with or without 1% cholesterol for five days. The animals were kept in a reversed lighting cycle room (12 dark/12 hrs light) and sacrificed at 0900–1000 hrs, corresponding to the third to fourth hour of the dark period when cholesterol biosynthesis is maximal. Twelve-week old C57BL/6J mice were fed chow with or without 2% cholesterol for 5 days. Liver samples were obtained, and microsomes were prepared from both rats and mice [14]. All procedures were carried out according to the regulations and oversight of the University of South Florida Institutional Animal Care and Use Committee (IACUC), protocol 2953.
### 2.2. Cholesterol Analysis
Trunk blood was collected and allowed to clot. The samples were centrifuged at 5,000 xg for 5 min. The serum was removed with a Pasteur pipette. Total serum cholesterol levels were determined by a cholesterol oxidase method using Infinity Cholesterol Reagent (Sigma) [15]. Values are expressed as mg/dL of serum. Liver cholesterol levels were determined as previously described [16]. Briefly, weighed liver samples were saponified, extracted with petroleum ether and subjected to reverse-phase high-performance liquid chromatography on a Spheri-5, RP-18, 5 reverse-phase column (Altech Associates). Values are expressed as mg/g of liver.
### 2.3. RNA Isolation
A portion of about 200 mg was quickly excised from livers of the rats and immediately homogenized in 4 mL of Tri-Reagent from Molecular Research Center (Cincinnati, Ohio, USA) using a Polytron homogenizer at room temperature. The remainder of the isolation steps was carried out using volumes corresponding to 4x the manufacturer’s recommendations. RNA concentrations were determined by diluting each sample 1 : 100 and measuring its absorbance at 260 nm.
### 2.4. Microarray Analysis
Isolated RNA was further purified using the RNeasy kit from Qiagen. To demonstrate that the RNA was indeed free of RNase activity, samples were incubated in water at 42°C for 1 hr and then examined on 1% agarose gels. An identical pattern of bands in unheated and heated samples was obtained showing a lack of RNase activity. Microarray analysis was performed by the Moffitt Core Facility (Tampa, FL) using the Affymetrix GeneChip Instrument system and the protocol established by Affymetrix, Inc. Tenμg of RNA each from the livers of 3 control and 3 cholesterol-fed rats was used in the analysis. The RNA was converted to double-stranded cDNA using an oligo(dT)24 primer containing a T7 RNA polymerase recognition sequence. The product was transcribed into biotin-labeled cRNA using T7 RNA polymerase. The biotinylated cRNA was hybridized to Affymetrix GeneChip Rat Genome 230 Plus 2.0 arrays, which detects about 28,000 genes. Multiple oligos were used for each gene with the data averaged. Scanned chip images were analyzed using GeneChip algorithms.
### 2.5. Real Time RT-PCR Analysis
To validate the microarray results, we assessed the expression of a subset of genes via real-time PCR essentially as described previously [14]. Total RNA was resuspended in diethyl pyrocarbonate-treated water. Twenty micrograms of RNA was then DNAse-treated using the TURBO DNA-Free Kit from Ambion. cDNA was prepared from 1 μg of DNAse-treated RNA using the Reverse Transcription System from Promega. The final product was brought up to 100 μL, and a total of 2 μL of the reverse transcription reaction was then used for real-time PCR analysis. The primer sequences used are given in Table 1. PCR was carried out according to the protocol from ABI Power SYBR Green Master Mix, using a Chromo-4 DNA Engine (Bio-Rad) with minor modifications. The program used for amplification was (i) 95°C for 5 minutes, (ii) 95°C for 15 seconds, (iii) 61°C for 1 minute (collect data), and (iv) go to step (ii) 40 times, (v) 55°C + 0.5°C each 10 seconds, ×80 (melt curve). The results were quantified by the ΔΔCt method using Microsoft Excel statistical programs and SigmaPlot 8.0. As a housekeeping gene, 18S ribosomal RNA was used. Values are expressed relative to this.Table 1
Oligonucleotide sequences used for RT-PCR analysis.
GenePrimersSequence (5′→3′)SQLESenseAGTGAACAAACGAGGCGTCCTACTAntisenseAAAGCGACTGTCATTCCTCCACCACYP51SenseTTAGGTGACAACCTGACACACGCTAntisenseTGCTTACTGTCTTGCTCCTGGTGTABCG8SenseGATGCTGGCTATCATAGGGAGCAntisenseTCTCTGCCTGTGATAACGTCGAABCG5SenseTGAGCTCTTCCACCACTTCGACAAAntisenseTGTCCACCGATGTCAAGTCCATGTACAT2SenseTTGTGCCAGTGCACGTGTCTTCTAAntisenseGCTTCAGCTTGCTCATGGCTTCAACYP7ASenseTGAAAGCGGGAAAGCAAAGACCACAntisenseTCTTGGACGGCAAAGAGTCTTCCA7DHCRSenseTCAGCTTCCAGGTGCTGCTTTACTAntisenseACAATCCCTGCTGGAGTTATGGCAD14SRSenseAATGGTTTCCAGGCTCTGGTGCTAAntisenseATAAAGCTGGTGAGAGTGGTCGCAHMGCRSenseATTGCACCGACAAGAAACCTGCTGAntisenseTTCTCTCACCACCTTGGCTGGAATHMGCSSenseTTGGTAGTTGCAGGAGACATCGCTAntisenseAGCATTTGGCCCAATTAGCAGAGCIGFBP1SenseAGAGGAACAGCTGCTGGATAGCTTAntisenseAGGGCTCCTTCCATTTCTTGAGGTLANSSenseACTCTACGATGCTGTGGCTGTGTTAntisenseAAATACCCGCCACGCTTAGTCTCALIPIN2SenseTCTGCCATGGACTTGCCTGATGTAAntisenseACTCGTGGTACGTGATGATGTGCTPPARαSenseAGACCTTGTGCATGGCTGAGAAGAAntisenseAATCGGACCTCTGCCTCCTTGTTTPPARγSenseCAATGCCATCAGGTTTGGGCGAATAntisenseATACAAATGCTTTGCCAGGGCTCGSC4MEOXSenseACCTGGCACTATTTCCTGCACAGAAntisenseAGCCTGGAACTCGTGATGGACTTC
### 2.6. Western Blot Analysis
At time of sacrifice, a portion of liver was excised for protein analysis. Lysosome-free liver microsomes were prepared according to the procedure previously described [17]. Liver microsomes (50 μg of protein per lane) from rats and mice were subjected to SDS-PAGE and western blotting. Membranes were incubated overnight with 1 : 2,000 dilution of cholesterol 7α hydroxylase primary antibody generated in rabbits and generously provided to us by Dr. Mats Rudling in 5% PBST nonfat dry milk. A 1 : 10,000 dilution of Sheep anti Rabbit IGG was as the secondary antibody. The West Pico Chemiluminesence kit was used for detection with exposure times ranging from 5 to 20 seconds. The blots were then stripped and reprobed with an antibody to β-actin. The resulted were expressed relative to the β-actin signal.
### 2.7. Statistics
For the microarray data, a 2-tailed distributiont-test, equal variance was used. The RT-PCR data are presented as means ± standard errors. P values using a 2-tailed distribution t-test are given.
## 2.1. Animals
Male Sprague-Dawley rats, 150–200 g (Harlan, Madison, Wis, USA), were fed Harlan Teklad 22/5 rodent chow with or without 1% cholesterol for five days. The animals were kept in a reversed lighting cycle room (12 dark/12 hrs light) and sacrificed at 0900–1000 hrs, corresponding to the third to fourth hour of the dark period when cholesterol biosynthesis is maximal. Twelve-week old C57BL/6J mice were fed chow with or without 2% cholesterol for 5 days. Liver samples were obtained, and microsomes were prepared from both rats and mice [14]. All procedures were carried out according to the regulations and oversight of the University of South Florida Institutional Animal Care and Use Committee (IACUC), protocol 2953.
## 2.2. Cholesterol Analysis
Trunk blood was collected and allowed to clot. The samples were centrifuged at 5,000 xg for 5 min. The serum was removed with a Pasteur pipette. Total serum cholesterol levels were determined by a cholesterol oxidase method using Infinity Cholesterol Reagent (Sigma) [15]. Values are expressed as mg/dL of serum. Liver cholesterol levels were determined as previously described [16]. Briefly, weighed liver samples were saponified, extracted with petroleum ether and subjected to reverse-phase high-performance liquid chromatography on a Spheri-5, RP-18, 5 reverse-phase column (Altech Associates). Values are expressed as mg/g of liver.
## 2.3. RNA Isolation
A portion of about 200 mg was quickly excised from livers of the rats and immediately homogenized in 4 mL of Tri-Reagent from Molecular Research Center (Cincinnati, Ohio, USA) using a Polytron homogenizer at room temperature. The remainder of the isolation steps was carried out using volumes corresponding to 4x the manufacturer’s recommendations. RNA concentrations were determined by diluting each sample 1 : 100 and measuring its absorbance at 260 nm.
## 2.4. Microarray Analysis
Isolated RNA was further purified using the RNeasy kit from Qiagen. To demonstrate that the RNA was indeed free of RNase activity, samples were incubated in water at 42°C for 1 hr and then examined on 1% agarose gels. An identical pattern of bands in unheated and heated samples was obtained showing a lack of RNase activity. Microarray analysis was performed by the Moffitt Core Facility (Tampa, FL) using the Affymetrix GeneChip Instrument system and the protocol established by Affymetrix, Inc. Tenμg of RNA each from the livers of 3 control and 3 cholesterol-fed rats was used in the analysis. The RNA was converted to double-stranded cDNA using an oligo(dT)24 primer containing a T7 RNA polymerase recognition sequence. The product was transcribed into biotin-labeled cRNA using T7 RNA polymerase. The biotinylated cRNA was hybridized to Affymetrix GeneChip Rat Genome 230 Plus 2.0 arrays, which detects about 28,000 genes. Multiple oligos were used for each gene with the data averaged. Scanned chip images were analyzed using GeneChip algorithms.
## 2.5. Real Time RT-PCR Analysis
To validate the microarray results, we assessed the expression of a subset of genes via real-time PCR essentially as described previously [14]. Total RNA was resuspended in diethyl pyrocarbonate-treated water. Twenty micrograms of RNA was then DNAse-treated using the TURBO DNA-Free Kit from Ambion. cDNA was prepared from 1 μg of DNAse-treated RNA using the Reverse Transcription System from Promega. The final product was brought up to 100 μL, and a total of 2 μL of the reverse transcription reaction was then used for real-time PCR analysis. The primer sequences used are given in Table 1. PCR was carried out according to the protocol from ABI Power SYBR Green Master Mix, using a Chromo-4 DNA Engine (Bio-Rad) with minor modifications. The program used for amplification was (i) 95°C for 5 minutes, (ii) 95°C for 15 seconds, (iii) 61°C for 1 minute (collect data), and (iv) go to step (ii) 40 times, (v) 55°C + 0.5°C each 10 seconds, ×80 (melt curve). The results were quantified by the ΔΔCt method using Microsoft Excel statistical programs and SigmaPlot 8.0. As a housekeeping gene, 18S ribosomal RNA was used. Values are expressed relative to this.Table 1
Oligonucleotide sequences used for RT-PCR analysis.
GenePrimersSequence (5′→3′)SQLESenseAGTGAACAAACGAGGCGTCCTACTAntisenseAAAGCGACTGTCATTCCTCCACCACYP51SenseTTAGGTGACAACCTGACACACGCTAntisenseTGCTTACTGTCTTGCTCCTGGTGTABCG8SenseGATGCTGGCTATCATAGGGAGCAntisenseTCTCTGCCTGTGATAACGTCGAABCG5SenseTGAGCTCTTCCACCACTTCGACAAAntisenseTGTCCACCGATGTCAAGTCCATGTACAT2SenseTTGTGCCAGTGCACGTGTCTTCTAAntisenseGCTTCAGCTTGCTCATGGCTTCAACYP7ASenseTGAAAGCGGGAAAGCAAAGACCACAntisenseTCTTGGACGGCAAAGAGTCTTCCA7DHCRSenseTCAGCTTCCAGGTGCTGCTTTACTAntisenseACAATCCCTGCTGGAGTTATGGCAD14SRSenseAATGGTTTCCAGGCTCTGGTGCTAAntisenseATAAAGCTGGTGAGAGTGGTCGCAHMGCRSenseATTGCACCGACAAGAAACCTGCTGAntisenseTTCTCTCACCACCTTGGCTGGAATHMGCSSenseTTGGTAGTTGCAGGAGACATCGCTAntisenseAGCATTTGGCCCAATTAGCAGAGCIGFBP1SenseAGAGGAACAGCTGCTGGATAGCTTAntisenseAGGGCTCCTTCCATTTCTTGAGGTLANSSenseACTCTACGATGCTGTGGCTGTGTTAntisenseAAATACCCGCCACGCTTAGTCTCALIPIN2SenseTCTGCCATGGACTTGCCTGATGTAAntisenseACTCGTGGTACGTGATGATGTGCTPPARαSenseAGACCTTGTGCATGGCTGAGAAGAAntisenseAATCGGACCTCTGCCTCCTTGTTTPPARγSenseCAATGCCATCAGGTTTGGGCGAATAntisenseATACAAATGCTTTGCCAGGGCTCGSC4MEOXSenseACCTGGCACTATTTCCTGCACAGAAntisenseAGCCTGGAACTCGTGATGGACTTC
## 2.6. Western Blot Analysis
At time of sacrifice, a portion of liver was excised for protein analysis. Lysosome-free liver microsomes were prepared according to the procedure previously described [17]. Liver microsomes (50 μg of protein per lane) from rats and mice were subjected to SDS-PAGE and western blotting. Membranes were incubated overnight with 1 : 2,000 dilution of cholesterol 7α hydroxylase primary antibody generated in rabbits and generously provided to us by Dr. Mats Rudling in 5% PBST nonfat dry milk. A 1 : 10,000 dilution of Sheep anti Rabbit IGG was as the secondary antibody. The West Pico Chemiluminesence kit was used for detection with exposure times ranging from 5 to 20 seconds. The blots were then stripped and reprobed with an antibody to β-actin. The resulted were expressed relative to the β-actin signal.
## 2.7. Statistics
For the microarray data, a 2-tailed distributiont-test, equal variance was used. The RT-PCR data are presented as means ± standard errors. P values using a 2-tailed distribution t-test are given.
## 3. Results
### 3.1. Microarray Analysis
In order to comprehensively identify the genes that exhibit significant alterations in rates of transcription in response to dietary cholesterol, microarray analysis was performed on hepatic RNA from Sprague-Dawley rats fed either normal rodent chow or chow supplemented with 1% cholesterol for 5 days. This dietary regimen does not raise serum cholesterol levels in these male Sprague Dawley rats [2, 18]. Liver cholesterol levels are increased about 2-fold (Table 2). This is similar to our previous findings with 2% cholesterol [16]. Thus, this treatment appears to constitute a useful model for studying adaptive responses.Table 2
Serum and liver cholesterol levels.
ConditionSerumLiverMg/100 mLMg/gNormal128 ± 53.2 ± 0.3Cholesterol fed102 ± 46.2 ± 0.5**P<0.01 compared with normal chow fed rats. A 2-tailed distribution t-test, equal variance was used.Surprisingly, the rates of transcription of relatively few genes were altered more than 2-fold (Table3). The largest increase was observed for insulin-like growth factor-binding protein 1. This increase of over 4-fold was confirmed by RT-PCR analysis (Table 5). A prominent increase of nearly 3-fold in hepatic cholesterol 7α hydroxylase expression was observed. This would provide for increased production of bile acids and more efficient elimination of biliary cholesterol, since this enzyme catalyzes the rate-limiting step of bile acid synthesis. ABCG5 and ABCG8 expression was actually decreased (Table 4). These ATP-binding cassette (ABC) transport proteins promote biliary secretion of neutral sterols [19]. Also, acetyl CoA acetyltransferase 2 (ACAT2), which esterifies excess cholesterol, was significantly decreased. Strikingly, there was no change in hepatic LDL receptor expression.Table 3
Microarray analysis identifying genes upregulated by dietary cholesterol.
NormalCholesterolFold diffGene386507439237712642347+4.5**Insulin-like growth factor binding protein 147645753623222418988+3.9*B-cell leukemia/lymphoma 6 (predicted)359753118493162871791412394+2.7**Cholesterol 7α hydroxylase73304546853747746+2.5*Similar to Ran-binding protein 2 (predicted)228717331841559057265166+2.8**Zinc finger protein 354A127387320756494773+2.4*PPARα253372250766759536+2.4**Lipin 2 (predicted)34131263332303344+2.3*Zinc finger, matrin-like (predicted)207684856135810061277+2.1*HGPRTase425528304209965449598308+2.0Glucose-6-phosphatase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 4
Microarray analysis identifying genes downregulated by dietary cholesterol.
NormalCholesterolFold diffGene118523811598219232203−7.9**Squalene epoxidase58878342614884143−4.8**ABCG85743731955114462−4.5Hypo protein XP_580018235330662285735783503−3.8*Acetyl CoA Acetyltransferase2103771167610752269033503329−3.5**Lanosterol 14α demethylase38904874366892912801426−3.4**Delta 14 sterol reductase10498931196186216542−3.3**Cytokine inducible SH2 protein2383162426511576−3.1**Nuc factor, erythroid derived 27331023760233232345−3.1**Ephrin A51998608524340343375−3.0Dual specificity phosphatase111250143229729351146364315−2.8**Sterol C4 methyl oxidase-like608363264997196125972103−2.6**7-Dehydrocholesterol reductase28512771211210278791055−2.6**Solute carrier family 25 mem 30140791766616412555562967154−2.5**HMG-CoA Synthase 1148901320812147514659205178−2.5**Farnesyl diphosphate synthase1857263617928821025635−2.5**Lanosterol synthase494655024267150324561667−2.6**Farnesyl diphos transferase 1169525181567870834814−2.3*ABCG5134761370214018604873575541−2.2**Fatty acid desaturase 11059981616620305042154797−2.1*Hemato expressed homeobox444591470225288205−2.1**Acetoacetyl CoA synthase126914651136604749548−2.0**Hypo protein XP_57984910087111358727422353045287−2.0**HMGCR251621772744113810491509−2.0**Forkhead box A28481080873538444432−2.0**Mevalonate kinase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 5
RT-PCR analysis.
GeneNormalCholesterolFoldP valueABCG51.00 ± 0.070.67 ± 0.09−1.490.018ABCG81.02 ± 0.170.17 ± 0.03−6.000.003HMG-CoA synthase1.03 ± 0.230.23 ± 0.01−4.480.008Squalene epoxidase1.17 ± 0.260.14 ± 0.03−8.290.002Lanosterol 14α demeth0.71 ± 0.280.16 ± 0.01−4.530.027Delta 14 sterol reductase1.03 ± 0.310.19 ± 0.15−5.330.013Lanosterol synthase1.12 ± 0.580.33 ± 0.14−3.390.083ACAT-21.00 ± 0.080.31 ± 0.07−3.200.001Sterol C4 Me Ox like1.04 ± 0.310.39 ± 0.50−2.650.1287-Dehydrocholesterol red1.01 ± 0.130.50 ± 0.06−2.000.004HMG-CoA reductase1.02 ± 0.260.73 ± 0.28−1.400.261PPAR gamma1.07 ± 0.401.02 ± 0.89−1.040.944PPAR alpha1.01 ± 0.181.06 ± 0.44+1.050.852Lipin 2 (predicted)1.03 ± 0.291.64 ± 0.42+1.590.105Chol 7α hydroxylase1.17 ± 0.353.27 ± 0.97+2.790.044IGFBP11.02 ± 0.314.30 ± 1.90+4.220.042A 2-tailed distributiont-test, equal variance was used.Many of the genes exhibiting downregulated expression in response to dietary cholesterol (Table3) catalyze reactions of the cholesterol biosynthetic pathway. Levels of mRNA for the enzyme that catalyzes the rate-limiting reaction, HMG-CoA reductase, were only decreased 2-fold. RT-PCR analysis (Table 4) showed only a 1.4-fold decrease HMG-CoA reductase, in agreement with previous results from Northern blotting analysis [20]. The largest decrease (7.9-fold) was observed for squalene epoxidase. This enzyme regulates lanosterol synthesis [21]. The next largest decrease among cholesterol biosynthetic enzymes was seen for lanosterol 14α demethylase. The expression of several cholesterol biosynthetic enzymes including Δ14 sterol reductase, sterol C4 methyl oxidase-like, 7-dehydrocholesterol reductase, HMG-CoA synthase 1, farnesyl diphosphate synthase, lanosterol synthase, acetoacetyl CoA synthase, and mevalonate kinase were decreased 2 to 4-fold (Table 4). Other cholesterol biosynthetic enzymes such as sterol-C5-desaturase, phosphomevalonate kinase, diphosphomevalonate decarboxylase, 24-dehydrocholesterol reductase and dehydrogenase/reductase (SDR family) member 7 were decreased 1.5- to 1.9-fold. Other genes of interest whose expression was decreased by dietary cholesterol were SREBP-2 (1.4-fold, P=0.028) and PCSK9 (1.5-fold, P=0.022).
### 3.2. RT-PCR Analysis
Quantitative RT-PCR was utilized to examine a subset of the altered genes that were identified via microarray analysis. The results are presented in Table5 and correlate with those seen in the microarray analysis. For example, the expression of squalene epoxidase (Sqle) and lanosterol 14α demethylase (Cyp51A1) were decreased by 8.29 and 4.53-fold, respectively (Table 5) as compared with 7.9 and 3.5-fold in the microarray analysis (Table 4). The relative fold changes in ABCG5 and 8, HMG-CoA synthase, ACAT-2, lanosterol synthase, sterol C4 methyl oxidase-like, lipin 2, cholesterol 7α hydroxylase, and 7-dehydrocholesterol reductase also agree very closely, providing further verification of the data obtained from the microarray analysis.
### 3.3. Western Blotting Analysis
Since the increase in rat liver cholesterol 7α hydroxylase mRNA caused by dietary cholesterol could be important for the elimination of cholesterol from the body, we wished to determine whether levels of this protein are actually increased. For comparison purposes, we examined the effect of dietary cholesterol on mouse liver cholesterol 7α hydroxylase protein. Cholesterol 7α hydroxylase protein levels were much higher in rat liver than in mouse liver (Figure 1). Supplementing the chow with 1% cholesterol increased hepatic cholesterol 7α hydroxylase protein levels in rats from 1.22 ± 0.78 to 6.52 ± 2.27 (4 chow-fed rats compared with 5 cholesterol-fed animals with a P=0.011). In contrast, little effect on the levels in mouse liver was seen even when 2% cholesterol was added to the diet (Figure 1).Figure 1
Comparison of the effects of dietary cholesterol on hepatic cholesterol 7α hydroxylase protein levels in rats and Mice. Rats (R) and mice (M) were fed normal (N) chow diets with or without 1% or 2% cholesterol (C) for 5 days. A representative Western blot of hepatic microsomal cholesterol 7α hydroxylase protein is presented.
### 3.4. Time-Course Experiment
A time-course experiment was conducted to determine how rapidly dietary cholesterol reduces the expression of Sqle and Cyp51. RT-PCR analysis, as shown in Figures2 and 3, demonstrates that the decrease in Sqle occurred more rapidly. Both Sqle and Cyp51 decreased progressively over a 3-day period. This agrees with the time course for the reduction in translation of HMG-CoA reductase mRNA [22].Figure 2
Effect of cholesterol feeding on hepatic squalene epoxidase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± Standard Error for 3 to 6 rats per time point. *P<0.05; **P<0.01.Figure 3
Effect of cholesterol feeding on hepatic lanosterol 14α demethylase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± Standard Error for 3 to 6 rats per time point. *P<0.05; **P<0.01.
## 3.1. Microarray Analysis
In order to comprehensively identify the genes that exhibit significant alterations in rates of transcription in response to dietary cholesterol, microarray analysis was performed on hepatic RNA from Sprague-Dawley rats fed either normal rodent chow or chow supplemented with 1% cholesterol for 5 days. This dietary regimen does not raise serum cholesterol levels in these male Sprague Dawley rats [2, 18]. Liver cholesterol levels are increased about 2-fold (Table 2). This is similar to our previous findings with 2% cholesterol [16]. Thus, this treatment appears to constitute a useful model for studying adaptive responses.Table 2
Serum and liver cholesterol levels.
ConditionSerumLiverMg/100 mLMg/gNormal128 ± 53.2 ± 0.3Cholesterol fed102 ± 46.2 ± 0.5**P<0.01 compared with normal chow fed rats. A 2-tailed distribution t-test, equal variance was used.Surprisingly, the rates of transcription of relatively few genes were altered more than 2-fold (Table3). The largest increase was observed for insulin-like growth factor-binding protein 1. This increase of over 4-fold was confirmed by RT-PCR analysis (Table 5). A prominent increase of nearly 3-fold in hepatic cholesterol 7α hydroxylase expression was observed. This would provide for increased production of bile acids and more efficient elimination of biliary cholesterol, since this enzyme catalyzes the rate-limiting step of bile acid synthesis. ABCG5 and ABCG8 expression was actually decreased (Table 4). These ATP-binding cassette (ABC) transport proteins promote biliary secretion of neutral sterols [19]. Also, acetyl CoA acetyltransferase 2 (ACAT2), which esterifies excess cholesterol, was significantly decreased. Strikingly, there was no change in hepatic LDL receptor expression.Table 3
Microarray analysis identifying genes upregulated by dietary cholesterol.
NormalCholesterolFold diffGene386507439237712642347+4.5**Insulin-like growth factor binding protein 147645753623222418988+3.9*B-cell leukemia/lymphoma 6 (predicted)359753118493162871791412394+2.7**Cholesterol 7α hydroxylase73304546853747746+2.5*Similar to Ran-binding protein 2 (predicted)228717331841559057265166+2.8**Zinc finger protein 354A127387320756494773+2.4*PPARα253372250766759536+2.4**Lipin 2 (predicted)34131263332303344+2.3*Zinc finger, matrin-like (predicted)207684856135810061277+2.1*HGPRTase425528304209965449598308+2.0Glucose-6-phosphatase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 4
Microarray analysis identifying genes downregulated by dietary cholesterol.
NormalCholesterolFold diffGene118523811598219232203−7.9**Squalene epoxidase58878342614884143−4.8**ABCG85743731955114462−4.5Hypo protein XP_580018235330662285735783503−3.8*Acetyl CoA Acetyltransferase2103771167610752269033503329−3.5**Lanosterol 14α demethylase38904874366892912801426−3.4**Delta 14 sterol reductase10498931196186216542−3.3**Cytokine inducible SH2 protein2383162426511576−3.1**Nuc factor, erythroid derived 27331023760233232345−3.1**Ephrin A51998608524340343375−3.0Dual specificity phosphatase111250143229729351146364315−2.8**Sterol C4 methyl oxidase-like608363264997196125972103−2.6**7-Dehydrocholesterol reductase28512771211210278791055−2.6**Solute carrier family 25 mem 30140791766616412555562967154−2.5**HMG-CoA Synthase 1148901320812147514659205178−2.5**Farnesyl diphosphate synthase1857263617928821025635−2.5**Lanosterol synthase494655024267150324561667−2.6**Farnesyl diphos transferase 1169525181567870834814−2.3*ABCG5134761370214018604873575541−2.2**Fatty acid desaturase 11059981616620305042154797−2.1*Hemato expressed homeobox444591470225288205−2.1**Acetoacetyl CoA synthase126914651136604749548−2.0**Hypo protein XP_57984910087111358727422353045287−2.0**HMGCR251621772744113810491509−2.0**Forkhead box A28481080873538444432−2.0**Mevalonate kinase*P<0.05; **P<0.01. A 2-tailed distribution t-test, equal variance was used.Table 5
RT-PCR analysis.
GeneNormalCholesterolFoldP valueABCG51.00 ± 0.070.67 ± 0.09−1.490.018ABCG81.02 ± 0.170.17 ± 0.03−6.000.003HMG-CoA synthase1.03 ± 0.230.23 ± 0.01−4.480.008Squalene epoxidase1.17 ± 0.260.14 ± 0.03−8.290.002Lanosterol 14α demeth0.71 ± 0.280.16 ± 0.01−4.530.027Delta 14 sterol reductase1.03 ± 0.310.19 ± 0.15−5.330.013Lanosterol synthase1.12 ± 0.580.33 ± 0.14−3.390.083ACAT-21.00 ± 0.080.31 ± 0.07−3.200.001Sterol C4 Me Ox like1.04 ± 0.310.39 ± 0.50−2.650.1287-Dehydrocholesterol red1.01 ± 0.130.50 ± 0.06−2.000.004HMG-CoA reductase1.02 ± 0.260.73 ± 0.28−1.400.261PPAR gamma1.07 ± 0.401.02 ± 0.89−1.040.944PPAR alpha1.01 ± 0.181.06 ± 0.44+1.050.852Lipin 2 (predicted)1.03 ± 0.291.64 ± 0.42+1.590.105Chol 7α hydroxylase1.17 ± 0.353.27 ± 0.97+2.790.044IGFBP11.02 ± 0.314.30 ± 1.90+4.220.042A 2-tailed distributiont-test, equal variance was used.Many of the genes exhibiting downregulated expression in response to dietary cholesterol (Table3) catalyze reactions of the cholesterol biosynthetic pathway. Levels of mRNA for the enzyme that catalyzes the rate-limiting reaction, HMG-CoA reductase, were only decreased 2-fold. RT-PCR analysis (Table 4) showed only a 1.4-fold decrease HMG-CoA reductase, in agreement with previous results from Northern blotting analysis [20]. The largest decrease (7.9-fold) was observed for squalene epoxidase. This enzyme regulates lanosterol synthesis [21]. The next largest decrease among cholesterol biosynthetic enzymes was seen for lanosterol 14α demethylase. The expression of several cholesterol biosynthetic enzymes including Δ14 sterol reductase, sterol C4 methyl oxidase-like, 7-dehydrocholesterol reductase, HMG-CoA synthase 1, farnesyl diphosphate synthase, lanosterol synthase, acetoacetyl CoA synthase, and mevalonate kinase were decreased 2 to 4-fold (Table 4). Other cholesterol biosynthetic enzymes such as sterol-C5-desaturase, phosphomevalonate kinase, diphosphomevalonate decarboxylase, 24-dehydrocholesterol reductase and dehydrogenase/reductase (SDR family) member 7 were decreased 1.5- to 1.9-fold. Other genes of interest whose expression was decreased by dietary cholesterol were SREBP-2 (1.4-fold, P=0.028) and PCSK9 (1.5-fold, P=0.022).
## 3.2. RT-PCR Analysis
Quantitative RT-PCR was utilized to examine a subset of the altered genes that were identified via microarray analysis. The results are presented in Table5 and correlate with those seen in the microarray analysis. For example, the expression of squalene epoxidase (Sqle) and lanosterol 14α demethylase (Cyp51A1) were decreased by 8.29 and 4.53-fold, respectively (Table 5) as compared with 7.9 and 3.5-fold in the microarray analysis (Table 4). The relative fold changes in ABCG5 and 8, HMG-CoA synthase, ACAT-2, lanosterol synthase, sterol C4 methyl oxidase-like, lipin 2, cholesterol 7α hydroxylase, and 7-dehydrocholesterol reductase also agree very closely, providing further verification of the data obtained from the microarray analysis.
## 3.3. Western Blotting Analysis
Since the increase in rat liver cholesterol 7α hydroxylase mRNA caused by dietary cholesterol could be important for the elimination of cholesterol from the body, we wished to determine whether levels of this protein are actually increased. For comparison purposes, we examined the effect of dietary cholesterol on mouse liver cholesterol 7α hydroxylase protein. Cholesterol 7α hydroxylase protein levels were much higher in rat liver than in mouse liver (Figure 1). Supplementing the chow with 1% cholesterol increased hepatic cholesterol 7α hydroxylase protein levels in rats from 1.22 ± 0.78 to 6.52 ± 2.27 (4 chow-fed rats compared with 5 cholesterol-fed animals with a P=0.011). In contrast, little effect on the levels in mouse liver was seen even when 2% cholesterol was added to the diet (Figure 1).Figure 1
Comparison of the effects of dietary cholesterol on hepatic cholesterol 7α hydroxylase protein levels in rats and Mice. Rats (R) and mice (M) were fed normal (N) chow diets with or without 1% or 2% cholesterol (C) for 5 days. A representative Western blot of hepatic microsomal cholesterol 7α hydroxylase protein is presented.
## 3.4. Time-Course Experiment
A time-course experiment was conducted to determine how rapidly dietary cholesterol reduces the expression of Sqle and Cyp51. RT-PCR analysis, as shown in Figures2 and 3, demonstrates that the decrease in Sqle occurred more rapidly. Both Sqle and Cyp51 decreased progressively over a 3-day period. This agrees with the time course for the reduction in translation of HMG-CoA reductase mRNA [22].Figure 2
Effect of cholesterol feeding on hepatic squalene epoxidase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± Standard Error for 3 to 6 rats per time point. *P<0.05; **P<0.01.Figure 3
Effect of cholesterol feeding on hepatic lanosterol 14α demethylase mRNA levels. Rats were fed a diet containing 1% cholesterol for the indicated number of days. Hepatic mRNA levels were determined by RT-PCR. Values are given as means ± Standard Error for 3 to 6 rats per time point. *P<0.05; **P<0.01.
## 4. Conclusion
Sprague Dawley rats from Harlan fed the grain-based Tekland 22/5 rodent chow diet resist a high cholesterol diet-induced increase in serum cholesterol as reported previously [2, 16]. They do show increases in liver cholesterol levels (Table 2). This indicates that dietary cholesterol is effectively taken up by the intestine, where Niemann-Pick C1-like 1 protein facilitates its uptake [23]. The resulting chylomicron remanents are taken up by the liver. With the increased cholesterol from the diet, the activity of the hepatic LDL receptor, as determined from its rate of turnover, is markedly decreased [18]. Thus, the cycling of this receptor, located in the cholesterol-rich caveolae portion of the plasma membrane [24], virtually stops [18]. The levels of hepatic LDL receptor protein and mRNA do not, however, change [18]. Since the liver must play a key role in adapting to and resisting the increase load of cholesterol, we conducted a microarray study to investigate changes in hepatic mRNA levels.Of all the hepatic genes examined, the most dramatic changes seen in response to dietary cholesterol were the decreases in Sqle and Cyp51. These changes seen in the microarray experiment were confirmed by RT-PCR. When lanosterol levels are decreased as occurs when Sqle expression is decreased, the different Kms of the three reactions catalyzed by Cyp51 come into play. The Km for the formation of 3β-hydroxylanosterol-8-en-32-al is 56 μM while the Km for its destruction (conversion to 4 dimethylcholesta 8, 14 dien-3β-ol) is 368 μM [25]. Although there would be sufficient lanosterol for the first reaction catalyzed by CYP51 (formation of 3β-hydroxylanosterol-8-en-32-al), because of the low Km, its subsequent conversion to 4 dimethylcholesta 8, 14 dien-3β-ol would be slow due to the high Km for this reaction. Thus, 3β-hydroxylanosterol-8-en-32-al would accumulate. This oxylanosterol is known to act to decrease the rate of translation of hepatic HMG-CoA reductase in CHO cells [26]. The structural analogue, 15α-fluoro-3β-hydroxylanost-7-en-32-aldehyde has also been reported to inhibit translation of HMG-CoA reductase mRNA in CHO cells [27]. In addition, it has been demonstrated that feeding the nonmetabolizable oxylanosterol analogue, 15 oxa-32-vinyllanost-8-ene-3β, 32 diol to rats results in a marked decrease in translation of hepatic HMG-CoA reductase mRNA which mimics the effect of dietary cholesterol [28]. Thus, dietary cholesterol-mediated decreases in Sqle and Cyp51 transcription lead to decreased translation of hepatic HMG-CoA reductase mRNA.In this microarray study, a 2-fold decrease in HMG-CoA reductase expression was observed in response to dietary cholesterol. The SREs in the promoters of both Sqle and Cyp51 [29, 30] have been shown to be responsive to SREBP2 and would be expected to be downregulated by cholesterol feeding. However, the SRE sequence in the HMG-CoA reductase promoter does not agree well with the consensus SRE, which may explain the only modest decrease in HMG-CoA reductase transcription in response to dietary cholesterol. Previous Northern blotting analysis and nuclear run-on studies showed only slight decreases in mRNA levels and rates of transcription for hepatic HMG-CoA reductase [20, 31]. However, enzyme activity and protein levels drop to a couple percent of control values [20, 31]. This is due primarily to a marked decrease in the rate of translation of HMG-CoA reductase mRNA caused by feeding cholesterol [22]. 
The decrease in the rate of translation of HMG-CoA reductase mRNA is due to shift from association of the mRNA with polysomes to monosomes caused by dietary cholesterol [32]. The present observations are predicted by the proposed two-step model for feedback regulation of hepatic HMG-CoA reductase gene expression [33]. In this model, increased hepatic cholesterol leads to decreased formation of mature SREBP2 and decreased transcription of Sqle and Cyp51. This in turn results in accumulation of 3β-hydroxylanosterol-8-en-32-al, which acts to decrease translation of hepatic HMG-CoA reductase mRNA [20].In addition to adapting to excess dietary cholesterol by decreasing the rate of hepatic cholesterol biosynthesis, adaptations in bile acid pathways also occurred. A 2.7-fold increase was observed in cholesterol 7α hydroxylase expression, which is the enzyme that catalyzes the rate-limiting step in bile acid synthesis (Table 3). We have previously observed by Northern blotting analysis that hepatic cholesterol 7α hydroxylase mRNA is markedly increased in rats [16]. Paradoxically, the microarray analysis did not show increases in expression of the ABCG5/G8 transporters. Instead, decreases in the expression of both ABCG5 and ABCG8 were observed. The expression of ABCG8, in particular, was markedly decreased. These findings were confirmed by RT-PCR analysis (Table 4). This is in contrast to the study in mice where ABCG5 expression was reported to be upregulated about 2-fold in response to dietary cholesterol [10]. In agreement with previous Northern blotting analysis, cholesterol feeding did not affect hepatic LDL receptor mRNA levels [18]. However, the rate of cycling and hence activity of the receptor is markedly decreased in response to dietary cholesterol [18].Western blotting analysis showed that dietary cholesterol increases hepatic cholesterol 7α hydroxylase protein levels in rats but not in mice (Figure 1). Also, mice exhibited lower levels of cholesterol 7α hydroxylase protein in their livers. This is in agreement with a previous study [34] showing that dietary cholesterol caused a 3-fold increase in hepatic cholesterol 7α hydroxylase mRNA in rats with much less effect in C57BL/6J mice. Other reports showed that dietary cholesterol may act to increase or decrease cholesterol 7α hydroxylase mRNA levels in C57BL/6J mice depending on the type of fat added to the diet [35, 36]. Diets supplemented with olive oil (monounsaturated fatty acids) caused increased expression of hepatic cholesterol 7α hydroxylase. Western blotting analysis of cholesterol 7α hydroxylase was not preformed in these previous studies.In a recent study comparing the responses of hamsters and rats to cholesterol-enriched diets, it was found that hepatic cholesterol 7α hydroxylase expression in hamsters was not increased in response to cholesterol feeding [37]. This is in contrast with the increase seen in rats. However, hamsters do not adapt well to cholesterol supplemented diets, as their serum cholesterol levels are increased about 2-fold in response to a 0.1% cholesterol diet [37]. Also, hamsters express much lower (only 2 percent of that of rats) basal levels of hepatic HMG-CoA reductase [2], which renders them less able to resist the effects of dietary cholesterol [13].The effect of a high cholesterol diet on the expression of genes in livers of mice has been previously investigated by microarray analysis [10]. In that study, Sqle was found to be downregulated 3.4 and 2.8-fold in males and females, respectively. 
This correlates with the 7.9-fold decrease observed for hepatic Sqle expression in the present study. Apparently, no changes in Cyp51 or cholesterol 7α hydroxylase expression were observed in the previous study [10]. HMG-CoA reductase was decreased 2-fold in males, with no effect seen in females [10]. The greatest change in the previous study was observed for a disintegrin and metalloprotease domain 11 family member (Adam 11), which was upregulated 5- to 11-fold in the microarray and TaqMan analyses. ADAM 11 is a specific aggrecanase induced by interleukin 6 [38], which would promote an inflammatory response. In the present analysis, no change in Adam 11 expression was observed. A significant increase in serum amyloid A3, an acute phase reactant, was also observed in the earlier study [10]. An increase in this protein likewise indicates an inflammatory response to a high cholesterol diet.

The largest cholesterol-induced increase in hepatic gene expression in the present study was observed for insulin-like growth factor-binding protein 1 (IGFBP1), which was measured at over 4-fold by both microarray and RT-PCR analysis. This protein is produced by the liver during inflammation [39], possibly caused here by dietary cholesterol. Increased levels are associated with an elevated risk of cardiovascular disease in type 2 diabetic patients [40]. IGFBP1 may also be a marker for insulin resistance [41] and may be involved in the development of type 2 diabetes [42]. It may be protective against the growth of certain cancers by binding insulin-like growth factor-1, a potent mitogen [43].

In summary, a comprehensive examination of the adaptive hepatic transcriptional responses to dietary cholesterol was conducted using microarray and RT-PCR analysis. The data suggest that rats adapt, in part, to excess dietary cholesterol by markedly reducing hepatic cholesterol biosynthesis and by enhancing elimination of cholesterol as bile acids through induction of cholesterol 7α hydroxylase. A large increase in insulin-like growth factor-binding protein 1 was also observed, which could partially explain the inflammatory response associated with excess dietary cholesterol.
---
*Source: 101242-2011-10-05.xml* | 2011 |
# Raman Laser Polymerization of C60 Nanowhiskers
**Authors:** Ryoei Kato; Kun'ichi Miyazawa
**Journal:** Journal of Nanotechnology
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101243
---
## Abstract
Photopolymerization of C60 nanowhiskers (C60NWs) was investigated using a Raman spectrometer in air at room temperature, since polymerized C60NWs are expected to exhibit high mechanical strength and thermal stability. Short C60NWs with a mean length of 4.4 μm were synthesized by the liquid-liquid interfacial precipitation (LLIP) method. The Ag(2) peak of the C60NWs shifted to lower wavenumbers with increasing laser beam energy dose, and an energy dose of more than about 1520 J/mm² was found to be necessary to obtain photopolymerized C60NWs. However, excessive energy doses at high power densities increased the sample temperature and led to the thermal decomposition of the polymerized C60 molecules.
---
## Body
## 1. Introduction
C60 nanowhiskers (C60NWs) are single-crystal nanofibers composed of C60 molecules [1] and can be synthesized by a facile method called the “LLIP method” [2]. C60NWs have a variety of applications, such as field-effect transistors (FETs) [3], solar cells [4], and biosensors [5].

C60 molecules can be polymerized by electron beam irradiation [6]. Although as-grown C60NWs are composed of C60 molecules that are weakly bonded via van der Waals forces [7], C60NWs irradiated by electron beams showed stronger thermal stability [8] and a higher Young’s modulus [9] than pristine van der Waals C60 crystals. Hence, it is of great importance to study the polymerization of C60NWs in order to improve their mechanical and thermal properties.

Laser irradiation is a promising method for obtaining polymerized C60 molecules [7, 10]. We first showed the photopolymerization of C60NWs by Raman laser beam irradiation [7]. Rao et al. showed that the Ag(2) pentagonal pinch mode peak of C60 shifts downward from 1469 cm⁻¹ to 1459 cm⁻¹ upon photopolymerization [11], indicating that the shift of the Ag(2) peak is a good indicator of the polymerization of C60.

Alvarez-Zauco et al. studied the polymerization of C60 thin films in air by ultraviolet (UV) laser irradiation as a function of laser energy dose (fluence) from 10 to 50 mJ/cm² in order to optimize the photopolymerization of C60 films [12]. Likewise, the laser energy dose for the photopolymerization of C60NWs should be optimized. Hence, the present study aims to reveal how the polymerization of C60NWs proceeds as a function of the laser beam energy dose.
## 2. Experimental
C60NWs were synthesized by a modified liquid-liquid interfacial precipitation method. Isopropyl alcohol (IPA) was gently poured onto a toluene solution saturated with C60 (MTR Ltd., 99.5%) in a glass bottle to form a liquid-liquid interface, and the solution was then subjected to ultrasonication and stored in an incubator at 10°C to grow short C60NWs. The synthesized C60NWs were filtered and dried in vacuum at 100°C for 120 min to remove the solvents. For the Raman spectrometry analyses, the C60NWs were dispersed in ethyl alcohol, mounted on a slide glass, and dried in air.

A Raman spectrometer (JASCO NRS-3100) with a green laser of 532 nm excitation wavelength was used for the polymerization and structural analysis of the C60NWs in air. The power of the laser light illuminated onto the specimens was measured using a silicon photodiode (S2281, Hamamatsu Photonics K.K.). The laser beam power density (D) and the energy dose of the excitation laser beam were controlled by changing the ND (neutral density) filters, the defocus value of the objective lens, and the exposure time of the laser beam. D is defined in this paper by

(1) D (mW/mm²) = power of the laser beam (mW) / area of the laser beam exposed on the sample (mm²).
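As a quick sanity check of this bookkeeping, the following minimal Python sketch converts a measured beam power and exposed area into a power density and an energy dose. The 0.753 mW beam power is an assumed value, back-calculated from the quoted maximum power density and the 63.8 μm² just-focus spot reported later:

```python
# Eq. (1) bookkeeping: power density D = P / A; energy dose = D * exposure time.
def power_density_mw_per_mm2(power_mw: float, area_um2: float) -> float:
    return power_mw / (area_um2 * 1e-6)   # 1 mm^2 = 1e6 um^2

def energy_dose_j_per_mm2(d_mw_per_mm2: float, exposure_s: float) -> float:
    return d_mw_per_mm2 * 1e-3 * exposure_s   # mW -> W; W * s = J

# Just-focused beam: 63.8 um^2 spot (Figure 3(f)); 0.753 mW is an assumed power
# back-calculated from the quoted maximum power density of ~11800 mW/mm^2.
d = power_density_mw_per_mm2(power_mw=0.753, area_um2=63.8)
print(f"D ~ {d:.0f} mW/mm^2")                                            # ~11800
print(f"dose after 220 s ~ {energy_dose_j_per_mm2(d, 220):.0f} J/mm^2")  # ~2600
```

Note that at the highest power density, an exposure of about 220 s already delivers roughly 2600 J/mm², above the ~1520 J/mm² polymerization threshold derived in Section 3.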
## 3. Results and Discussion
Figure 1 shows examples of scanning electron microscopy (SEM) images and the size distributions of the synthesized C60NWs, which had a mean length of 4.4 ± 2.7 μm and a mean diameter of 540 ± 161 nm. The distribution of aspect ratios (length/diameter) is also shown. Most of the C60NWs were found to possess aspect ratios of less than 15.

Figure 1: (a) SEM images and (b) length, (c) diameter, and (d) aspect ratio (length/diameter) distributions of the synthesized C60NWs.
The power of the excitation laser beam can be changed by selecting ND filters. Figure 2 shows the relationship between the ND filter number and the power of the laser beam irradiated on the samples. The laser beam power could be varied widely between OD1 and OD3. The ND filters OD1 (attenuation rate 0.1), OD2 (0.01), and OD3 (0.001) were used in the experiment, since the other filters gave too strong or too weak laser beam energies. The excitation laser beam power density could be varied from about 0.53 to 11800 mW/mm² using the above ND filters and by controlling the irradiation area of the laser beam and the defocus value from 0 to 100 μm, as shown in Figure 3. The defocus value is defined as the distance from the actual image plane and was set to be positive as the distance between the objective lens and the sample surface decreased. The places of C60NWs exposed to the excitation laser beams can be recognized as the green circular areas marked in Figures 3(a)–3(f). The area of the laser beam on the samples could be changed from 63.8 to 9270 μm² by controlling the defocus value from 0 to 100 μm.

Figure 2: Relationship between the neutral density (ND) filter number and the laser beam power.

Figure 3: Optical microscopy images of the samples of C60NWs irradiated by the excitation laser beams for the defocus values (under focus) of (a) 100, (b) 80, (c) 60, (d) 40, (e) 20, and (f) 0 μm and for the arrowed exposed areas of (a) 9270, (b) 6630, (c) 3480, (d) 1470, (e) 617, and (f) 63.8 μm², respectively. Graph (g) shows the relationship between the defocus value and the exposed area.

The exposed area (y, μm²) and the defocus value (x, μm) were plotted as shown in Figure 3(g). The plotted points can be approximated by the fitted quadratic curve y = 0.88x² + 6.8x + 36. Figure 4 summarizes the relationship among the laser beam power density, the ND filter number, and the defocus value.

Figure 4: Power density of the Raman excitation laser beam measured as a function of ND filter number and defocus value.

Figure 5 shows examples of the Raman spectra of C60NWs taken using the ND filters OD1, OD2, and OD3 for an exposure time of about 220 s, where the spot size of the laser beam on the samples was 9 μm in diameter. The power densities of the excitation laser beam were (a) 11800, (b) 1660, and (c) 71.5 mW/mm², respectively. The Ag(2) peak around 1468 cm⁻¹ shifted to lower wavenumbers with increasing laser beam power density.

Figure 5: Raman spectra of C60NWs. The power density of the laser beam (D) is (a) 11800, (b) 1660, and (c) 71.5 mW/mm², respectively.

Figure 6 shows the Ag(2) peak positions of the Raman spectra of C60NWs as a function of the energy dose of the laser beam for each defocus value from 100 μm to 0 μm (just focus). The power density of the laser beam on the samples was changed by varying the defocus value and the ND filter number, as shown in Figure 4. The energy dose was changed by setting the beam exposure time to 215 ± 6 s, 441 ± 10 s, 665 ± 9 s, and 899 ± 29 s for each power density. Hence, 72 data points in total are plotted in Figure 6. As in Figure 5, the Raman shifts generally decrease with increasing energy dose. However, the Raman shifts were observed to increase along the red arrows at high energy doses in Figures 6(c), 6(d), 6(e), and 6(f). These phenomena are presumably explained by the temperature rise of the C60NWs exposed to the laser beams, since photopolymerized C60 molecules are known to decompose into their primary monomers and dimers on heating at temperatures higher than about 100°C [13].

Figure 6: Ag(2) peak positions of the Raman spectra of C60NWs under various exposure conditions at the defocus values of (a) 100 μm, (b) 80 μm, (c) 60 μm, (d) 40 μm, (e) 20 μm, and (f) 0 μm (just focus), corresponding to (a)–(f) of Figure 3.

The data points obtained using the highest power densities are indicated in each graph of Figure 6 by the black arrows for an exposure time of about 220 s. Figure 7 shows the relationship between the laser beam energy dose and the Ag(2) peak position for the arrowed data points of Figure 6. The fitted semilog curve is expressed as y = −2.2x + 1467, where x represents log10(laser beam energy dose) and y represents the Raman shift of the Ag(2) peak. Using this experimental formula, an energy dose of more than about 1520 J/mm² is found to be necessary for the photopolymerization of C60NWs in air when laser light with a wavelength of 532 nm is used.

Figure 7: Relationship between the Raman shift of the Ag(2) peak and the energy dose of C60NWs irradiated by the excitation laser beams.

Since it is known that the photopolymerization of C60 progresses through the formation of four-membered rings between adjacent C60 molecules [11], it is considered that C60 molecules are linearly polymerized by forming four-membered rings along the growth axis of the C60NWs, as was shown in Figure 6 of [2].

In the gas chromatography-mass spectrometry (GC-MS) measurement of the solvents contained in C60NWs prepared using toluene and IPA, the major residual solvent was toluene, and the content of IPA was very small compared with toluene [14]. Since the residual toluene of C60NWs was measured to be about 0.2% after drying in an Ar atmosphere at 100°C for 30 min [14], the residual toluene in the vacuum-dried C60NW samples of the present experiment is considered negligible and should not influence the Raman profiles.
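Returning to the Figure 7 fit, the 1520 J/mm² figure can be reproduced by inverting the fitted line. Below is a minimal sketch, assuming the polymerization criterion corresponds to an Ag(2) position of about 1460 cm⁻¹ (the value at which the fit yields the quoted dose; this criterion is our inference, not stated explicitly in the text):

```python
import math

# Empirical fit from Figure 7: y = -2.2 x + 1467, with x = log10(dose in J/mm^2)
# and y = Ag(2) Raman shift in cm^-1.
def ag2_shift(dose_j_per_mm2: float) -> float:
    return -2.2 * math.log10(dose_j_per_mm2) + 1467.0

def dose_for_shift(y_cm_inv: float) -> float:
    return 10.0 ** ((1467.0 - y_cm_inv) / 2.2)

# Assumed criterion: an Ag(2) peak down at ~1460 cm^-1 counts as polymerized;
# this choice reproduces the ~1520 J/mm^2 threshold quoted in the text.
print(f"threshold dose ~ {dose_for_shift(1460.0):.0f} J/mm^2")   # ~1520
print(f"Ag(2) at 1520 J/mm^2 ~ {ag2_shift(1520.0):.1f} cm^-1")   # ~1460.0
```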
## 4. Conclusions
The photopolymerization of C60NWs was investigated using a Raman laser beam of 532 nm wavelength in air under various exposure conditions of power density and exposure time.

The Ag(2) peak of the C60NWs shifted to lower wavenumbers relative to that of the as-grown dried C60NWs. However, the Ag(2) peaks were found to move back to higher wavenumbers from the polymerized positions upon irradiation at high energy doses and high power densities, indicating thermal dissociation of the polymerized C60 molecules owing to the temperature rise.

An energy dose larger than about 1520 J/mm² was found to be necessary for the 532 nm laser beam to obtain photopolymerized C60NWs.
---
*Source: 101243-2012-03-14.xml* | 2012 |
# Diagnostics of Thyroid Malignancy and Indications for Surgery in the Elderly and Younger Counterparts: Comparison of 3,749 Patients
**Authors:** Krzysztof Kaliszewski; Dorota Diakowska; Marta Strutyńska-Karpińska; Beata Wojtczak; Michał Aporowicz; Zdzisław Forkasiewicz; Waldemar Balcerzak; Tadeusz Łukieńczuk; Paweł Domosławski
**Journal:** BioMed Research International
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1012451
---
## Abstract
Background. It seems valuable for clinicians to know whether the diagnostics of thyroid malignancy (TM) and the indications for surgery in elderly patients differ from those in their younger counterparts. Materials and Methods. We retrospectively analyzed the medical records of 3,749 patients surgically treated for thyroid tumors. Data of patients with histopathologically confirmed TM (n=309) were studied. Results. The rate of cytological prediction of malignancy was more than three times higher in elderly women. Compression was the main reason for surgery in the elderly (p<0.0001). The rate of a final diagnosis of malignancy was significantly higher in older women (p=0.002). Clinical suspicion of malignancy was positively correlated with histopathological diagnosis in the total group of women (r=0.543, p<0.001) and the total group of men (r=0.560, p<0.001). The subgroup of the oldest TM patients included a significantly higher number of subjects with advanced cancer and primary tumor progression (p<0.0001). Distant metastases were significantly more frequent among the elderly patients (p=0.032). Conclusions. The rate of cytological prediction of malignancy in elderly women is high. Tracheal compression is a common surgical indication in elderly patients. Final diagnoses of malignancy predominate in elderly women. The oldest TM patients present a higher number of advanced thyroid tumors and distant metastases.
---
## Body
## 1. Introduction
The proportion of elderly people relative to the younger population has increased by 90% over the last 30 years, and, as some authors predict, by the year 2020 the proportion of people over 65 years will increase from 12.4% to 20% [1]. According to the same source, the number of older United States citizens will reach 80 million by 2050. Other authors estimated that while there were 600 million people aged 60 years or more worldwide in 2000, by 2050 there will be 2 billion [2]. The United States Census Bureau announced that the number of elderly Americans aged above 65 years will increase by 2.8% per year between 2010 and 2030 [1]. What is more, it is estimated that by 2050 the population above 85 years old will comprise 24% of the elderly in the United States and 5% of the total population in the world [3]. Some authors have noted that approximately 50% of all surgical interventions are performed in patients aged 65 or older [4].

Benign and malignant thyroid nodules occur with increasing frequency in elderly patients [5, 6]. Mazzaferri [7] assessed that, by the age of 65, about 50% of these patients have nodules revealed by ultrasound, and similar observations have been described on the basis of autopsies performed in the general population [8]. Other studies report that about 90% of women after the age of 60 years and about 60% of men after the age of 80 demonstrate thyroid nodules [2]. It has been estimated that about 5% of palpable nodules appear malignant on histopathological examination; however, some authors have revealed a strong association between age and the malignant potential of thyroid nodules [9]. Since 1973, a rapid increase in the incidence of thyroid malignancy has been observed in older patients, driven by a rapid increase in the incidence of papillary thyroid carcinoma [10]. Although this type of cancer has an excellent prognosis in the general population, many studies suggest that the well-differentiated thyroid carcinomas, comprising papillary, follicular, and Hürthle cell carcinomas, are more aggressive tumors in elderly patients and are more likely to recur. Sorrenti et al. [11] also confirmed that patients with well-differentiated thyroid carcinoma have a favourable prognosis but added that aggressive thyroid tumors are more frequently observed at geriatric ages, showing a higher disease-specific mortality. In older patients, sporadic medullary thyroid carcinomas are more characteristic than hereditary tumors, and older age is assessed as a poor prognostic factor [12]. Lerch et al. [13] noted that the age at diagnosis of thyroid carcinoma influences its prognosis. Anaplastic thyroid carcinoma tends to have an extremely poor prognosis in elderly people and is most often observed in the 6th-7th decades of life [14]. Sautter-Bihl et al. [15] revealed that none of the patients aged below 40 years died, while the ten-year survival rate in the group of patients over 60 years old was 79%.

Diagnostics and treatment of thyroid carcinoma rely on similar surgical procedures in elderly patients and in younger ones. Many studies have revealed that the most common indications for surgery in older patients are similar to those in younger counterparts, the most common being suspicion of thyroid carcinoma, multinodular goiter (MNG) with tracheal compression, or hyperthyroidism [16]. However, establishing an accurate diagnosis in elderly patients may be much more difficult.
The clinical dilemmas concern elderly patients with thyroid malignancy in the early stage of disease. Some authors suggest that older patients generally have much “stronger” or more urgent indications for surgery, such as severe compressive symptoms or tracheal infiltration due to thyroid carcinoma [17]. In light of some recently performed clinical analyses, proper diagnosis and accurate indications for surgery in older patients seem extremely valuable, especially because, as the authors note, thyroid surgery in older patients is associated with a higher risk of severe complications than in younger ones [18].

All these observations prompted the authors of this study to assess how much more difficult and complicated it is to establish the proper diagnosis of thyroid pathology in elderly patients and, when needed, to properly qualify them for surgery. The authors analyzed the most common thyroid pathologies and the most common indications for surgery in elderly patients.
## 2. Materials and Methods
Our study protocol was approved by the Bioethics Committee of Wroclaw Medical University (signature number: KB-487/2017).
### 2.1. Demographic and Clinical Characteristics
The terms “geriatric” and “elderly” are not clearly defined; the cutoff age for this group of patients may vary from 60 to 80 years or even older [18]. In our study, we defined elderly patients as those aged 65 and older.
### 2.2. Indication for Surgery
In light of studies [19] that have revealed a higher risk of perioperative complications among older patients undergoing thyroid surgery, the indications for strumectomy in our patients were carefully analyzed and based on classical standards.
### 2.3. Study Group
The study group consisted of 3,749 patients with solitary and multiple thyroid nodules who were surgically treated between January 2008 and December 2015 at the Department of General, Gastroenterological and Endocrine Surgery of Wroclaw Medical University, Poland. The average age of the patients was 51 ± 14 years, and the female to male ratio was 3,148/601. Before admission to the surgical ward, all patients were diagnosed clinically by ultrasonography (US), computed tomography (CT), or magnetic resonance (MR) imaging. All of the patients had undergone ultrasound-guided fine needle aspiration biopsy (UG-FNAB) of the thyroid tumor at least once, one month before admission. Clinical and pathological c/pTNM staging was estimated according to the AJCC/UICC 2010 7th edition. The characteristics of the patients in the study group are presented in Table 1.
Table 1: Baseline characteristics of 3,749 patients with thyroid nodules.

| Parameter | Mean ± SD or n (%) |
|---|---|
| Age (years) | 51.4 ± 14.4 |
| **Gender** | |
| Female | 3,148 (84.0) |
| Male | 601 (16.0) |
| **Cytological diagnosis according to TBSRTC** | |
| Stage II (normotype thyrocytes, lymphocytes, thyroiditis suspicion) | 3,206 (85.5) |
| Stage III (AUS/FLUS) | 84 (2.2) |
| Stage IV (follicular neoplasm) | 225 (6.0) |
| Stage V (malignancy suspicion) | 48 (1.3) |
| Stage V (malignancy/lymphoma suspicion) | 2 (0.1) |
| Stage VI (papillary carcinoma) | 178 (4.7) |
| Stage VI (medullary carcinoma) | 6 (0.2) |
| **Clinical suspicion of malignancy** | |
| No | 3,290 (87.8) |
| Yes | 459 (12.2) |
| **Histological diagnosis** | |
| Benign multinodular goiter | 2,946 (78.6) |
| Thyroiditis | 118 (3.1) |
| Follicular carcinoma | 25 (0.7) |
| Papillary carcinoma | 247 (6.6) |
| Medullary carcinoma | 10 (0.3) |
| Undifferentiated thyroid carcinoma | 9 (0.3) |
| Secondary malignant tumor | 4 (0.1) |
| Lymphoma | 9 (0.2) |
| Follicular adenoma | 375 (10.0) |
| Squamous cell carcinoma | 1 (0.03) |
| Abscess | 1 (0.03) |
| **Final diagnosis** | |
| Benign | 3,440 (91.8) |
| Malignant | 309 (8.2) |
TBSRTC: The Bethesda System for Reporting Thyroid Cytology, Second Edition, 2010, Bethesda, Maryland; AUS: atypia of undetermined significance; FLUS: follicular lesion of undetermined significance.

The total study group (n=3,749) was divided into three subgroups according to age. The first subgroup comprised patients below 45 years old (“<45”), the second patients aged from 45 to below 65 years (“≥45–<65”), and the third patients aged 65 years or above (“≥65”). Since the first statistical analysis showed that these three subgroups differed significantly in gender distribution (p=0.019), subsequent comparative statistics were performed separately for women and men.

In the second part of the research, the data of patients with histopathologically confirmed TM (n=309), selected from among the 3,749 subjects, were studied. This group was also divided into three subgroups according to age (the classification was identical to that used in the first part of the study). There were no significant differences in gender distribution between the subgroups of TM patients (p=0.520). Therefore, receiver operating characteristic (ROC) and comparative analyses of the demographic, clinical, and histopathological parameters were performed for these three subgroups.
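For clarity, the age stratification used throughout the paper can be written as a single rule; the sketch below (the function name is ours) makes the boundary conventions at 45 and 65 years explicit:

```python
def age_subgroup(age_years: int) -> str:
    """Age strata used throughout the study (boundaries at 45 and 65 years)."""
    if age_years < 45:
        return "<45"
    if age_years < 65:
        return ">=45-<65"
    return ">=65"

# Boundary behaviour: 44 -> "<45"; 45 and 64 -> ">=45-<65"; 65 -> ">=65".
assert [age_subgroup(a) for a in (44, 45, 64, 65)] == ["<45", ">=45-<65", ">=45-<65", ">=65"]
```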
### 2.4. Statistical Analysis
Numbers and percentages were calculated for qualitative variables, and means and standard deviations were calculated for quantitative variables. The data distribution was tested using the chi-square test. Frequencies were analyzed using Fisher's exact test or the chi-square test with Yates' correction. Correlation analysis was conducted with Spearman's test.

The diagnostic potential of clinical examinations for thyroid cancer occurrence in each subgroup was determined by ROC analysis. Results were expressed as the area under the ROC curve (AUC). The sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), likelihood ratio of positive results (LR(+)), likelihood ratio of negative results (LR(−)), and Youden's index were also calculated.

In a two-tailed test, a significance level of α ≤ 0.05 was considered statistically significant. Statistical analyses were performed using STATISTICA 12.0 software (Statistica, Tulsa, OK, USA) and MedCalc 16.8 software (MedCalc Statistical Software, Ostend, Belgium).
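As a minimal sketch of how these indices relate to a 2×2 table of clinical suspicion versus final histopathology, consider the Python snippet below. The counts are our back-calculation from the study totals (309 malignant, 459 clinically suspicious, 3,749 patients overall), not data reported directly by the authors:

```python
# Diagnostic indices of Section 2.4 from a 2x2 table of clinical suspicion
# (the "test") versus final histopathology (the reference standard).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)                   # sensitivity
    spec = tn / (tn + fp)                   # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "PPV": tp / (tp + fp),              # positive predictive value
        "NPV": tn / (tn + fn),              # negative predictive value
        "LR(+)": sens / (1.0 - spec),       # likelihood ratio of positive results
        "LR(-)": (1.0 - sens) / spec,       # likelihood ratio of negative results
        "Youden": sens + spec - 1.0,        # Youden's index
    }

# Counts back-calculated from the study totals (a reconstruction, see lead-in).
for name, value in diagnostic_metrics(tp=222, fp=237, fn=87, tn=3203).items():
    print(f"{name}: {value:.2f}")
```

Running this reproduces the total-group column of Table 4 below (sensitivity 0.72, specificity 0.93, PPV 0.48, LR(+) ≈ 10.4).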
## 3. Results
### 3.1. Demographic, Clinical, and Pathological Characteristics of Patients with Thyroid Nodules
We observed significant differences in the gender distribution between the three subgroups (female/male ratio in the “<45” group: 85/15, in the “≥45–<65” group: 85/15, and in the “≥65” group: 80/20; p=0.019), so we analyzed the data for women and men separately (Table 2).
Table 2: Distribution of cases according to gender and age parameters. Data are presented as number (percent).

| | Female, <45 years (n=995) | Female, ≥45–<65 years (n=1,584) | Female, ≥65 years (n=569) | p value | Male, <45 years (n=178) | Male, ≥45–<65 years (n=285) | Male, ≥65 years (n=138) | p value |
|---|---|---|---|---|---|---|---|---|
| **Cytological diagnosis according to TBSRTC** | | | | | | | | |
| Stage II (normotype thyrocytes, lymphocytes, thyroiditis suspicion) | 841 (84.5) | 1,382 (87.3) | 448 (78.7) | <0.0001∗ | 151 (84.8) | 261 (91.6) | 123 (89.1) | 0.254 |
| Stage III (AUS/FLUS) | 26 (2.6) | 34 (2.2) | 13 (2.3) | | 5 (2.8) | 4 (1.4) | 2 (1.5) | |
| Stage IV (follicular neoplasm) | 65 (6.5) | 87 (5.5) | 50 (8.8) | | 12 (6.7) | 8 (2.8) | 3 (2.2) | |
| Stage V (malignancy suspicion) | 3 (0.3) | 15 (1.0) | 21 (3.7) | | 3 (1.7) | 2 (0.7) | 4 (2.9) | |
| Stage V (malignancy/lymphoma suspicion) | 0 (0.0) | 0 (0.0) | 2 (0.4) | | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
| Stage VI (papillary carcinoma) | 59 (5.9) | 62 (3.9) | 34 (6.0) | | 7 (3.9) | 10 (3.5) | 6 (4.4) | |
| Stage VI (medullary carcinoma) | 1 (0.1) | 4 (0.3) | 1 (0.2) | | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
| **Clinical suspicion of malignancy** | | | | | | | | |
| No | 867 (87.1) | 1,416 (89.4) | 461 (81.0) | <0.0001∗ | 156 (87.6) | 265 (93.0) | 125 (90.6) | 0.157 |
| Yes | 128 (12.9) | 168 (10.6) | 108 (19.0) | | 22 (12.4) | 20 (7.0) | 13 (9.4) | |
| **Indication for surgery (out of malignancy)** | | | | | | | | |
| Trachea compression | 278 (32.1) | 720 (50.9) | 444 (96.3) | <0.0001∗ | 46 (30.0) | 117 (43.8) | 112 (89.6) | <0.0001∗ |
| Recent tumor/goiter enlargement | 205 (23.6) | 284 (20.0) | 8 (1.7) | | 79 (50.0) | 49 (18.8) | 7 (5.6) | |
| Urgent indication (acute respiratory failure/retrosternal goiter) | 0 (0.0) | 0 (0.0) | 7 (1.5) | | 0 (0.0) | 0 (0.0) | 4 (3.2) | |
| Cosmetic indication | 384 (44.3) | 412 (29.1) | 2 (0.4) | | 31 (20.0) | 99 (37.5) | 2 (1.6) | |
| **Histological follow-up** | | | | | | | | |
| Benign multinodular goiter | 759 (76.3) | 1,281 (81.0) | 433 (76.1) | <0.0001∗ | 131 (73.6) | 236 (82.8) | 105 (76.1) | 0.001∗ |
| Thyroiditis | 36 (3.6) | 50 (3.2) | 20 (3.5) | | 7 (3.9) | 3 (1.1) | 2 (1.5) | |
| Follicular carcinoma | 7 (0.7) | 5 (0.3) | 5 (0.9) | | 0 (0.0) | 1 (0.4) | 7 (5.1) | |
| Papillary carcinoma | 76 (7.6) | 96 (6.1) | 44 (7.7) | | 9 (5.1) | 18 (6.3) | 4 (2.9) | |
| Medullary carcinoma | 2 (0.2) | 2 (0.1) | 3 (0.5) | | 2 (1.1) | 1 (0.4) | 0 (0.0) | |
| Undifferentiated thyroid cancer | 0 (0.0) | 1 (0.1) | 6 (1.1) | | 0 (0.0) | 1 (0.4) | 1 (0.7) | |
| Secondary malignant tumor | 0 (0.0) | 1 (0.1) | 2 (0.4) | | 0 (0.0) | 0 (0.0) | 1 (0.7) | |
| Lymphoma | 0 (0.0) | 2 (0.1) | 6 (1.1) | | 0 (0.0) | 1 (0.4) | 0 (0.0) | |
| Squamous cell carcinoma | 0 (0.0) | 0 (0.0) | 1 (0.2) | | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
| Adenoma | 115 (11.6) | 142 (9.0) | 47 (8.3) | | 29 (16.3) | 24 (8.4) | 18 (13.0) | |
| Abscess | 0 (0.0) | 1 (0.1) | 0 (0.0) | | 0 (0.0) | 0 (0.0) | 0 (0.0) | |
| **Final diagnosis** | | | | | | | | |
| Benign | 910 (91.5) | 1,474 (93.1) | 501 (88.1) | 0.002∗ | 167 (93.8) | 263 (92.3) | 125 (90.6) | 0.561 |
| Malignant | 85 (8.5) | 110 (6.9) | 68 (11.9) | | 11 (6.2) | 22 (7.7) | 13 (9.4) | |
| **Recurrent goiter** | | | | | | | | |
| No | 995 (100.0) | 1,572 (99.2) | 472 (83.0) | <0.0001∗ | 178 (100.0) | 285 (100.0) | 130 (94.2) | <0.0001∗ |
| Yes | 0 (0.0) | 12 (0.8) | 97 (17.0) | | 0 (0.0) | 0 (0.0) | 8 (5.8) | |
TBSRTC: The Bethesda System for Reporting Thyroid Cytology, Second Edition, 2010, Bethesda, Maryland; AUS: atypia of undetermined significance; FLUS: follicular lesion of undetermined significance; ∗statistically significant.

The rate of cytological prediction of malignancy was more than three times higher in elderly women compared with women below 65 years old. Follicular neoplasm samples were also observed more frequently in women over 65 years old. As shown by the comparative analysis of clinical examinations, older women had significantly higher cytological prediction of and clinical suspicion of TM than women below 65 years (for both p<0.0001). These results were not observed in the subgroup of older men (for both p>0.05) (Table 2).

Goiter enlargement and cosmetic indications were significant factors for operation in women and men below 45 years old, whereas compression was the main reason for surgery in more than 85% of older women and men. These differences were statistically significant between the age subgroups in both women and men (for both p<0.0001) (Table 2).

The histopathological data demonstrated that significantly more patients below 65 years of age had benign results of diagnosis (simple goiter or adenoma; p<0.0001 and p<0.001, respectively, for women and men), and the percentage of final diagnoses of malignancy was significantly higher in older women than in women below 65 years old (p=0.002). There were no significant differences in final diagnosis between the age subgroups in men (Table 2).

Clinical suspicion of malignancy was positively correlated with histopathological diagnosis not only in the total group of women (r=0.543, p<0.001) and the total group of men (r=0.560, p<0.001) but also in each of the age subgroups (Table 3).
Table 3: Analyses of correlation between clinical suspicion of malignancy and final diagnosis of thyroid malignancy presence.

| Clinical suspicion of malignancy & final diagnosis | Total (n=3,148), r (p value) | <45 years (n=995), r (p value) | ≥45–<65 years (n=1,584), r (p value) | ≥65 years (n=569), r (p value) |
|---|---|---|---|---|
| Female | 0.543 (<0.0001∗) | 0.581 (<0.0001∗) | 0.503 (<0.0001∗) | 0.554 (<0.0001∗) |
| Male | 0.560 (<0.0001∗) | 0.542 (<0.0001∗) | 0.590 (<0.0001∗) | 0.575 (<0.0001∗) |

∗Statistically significant. (The n values in the column headers are those of the female subgroups, as printed in the original table.)
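As a sketch of the correlation reported in Table 3: with two binary variables (suspicion yes/no, final diagnosis malignant/benign), Spearman's r can be computed directly from the 2×2 counts. The counts below are our back-calculation from the study totals, pooling both sexes (an assumption; the paper reports sex-specific values):

```python
import numpy as np
from scipy.stats import spearmanr

# Both variables are binary (suspicion yes/no vs final diagnosis malignant/benign),
# so Spearman's r reduces to the phi-like rank correlation of the 2x2 table.
# Counts are back-calculated from the study totals, pooling both sexes.
suspicion = np.concatenate([np.ones(222), np.ones(237), np.zeros(87), np.zeros(3203)])
diagnosis = np.concatenate([np.ones(222), np.zeros(237), np.ones(87), np.zeros(3203)])

r, p = spearmanr(suspicion, diagnosis)
print(f"r = {r:.3f}, p = {p:.2e}")   # r ~ 0.55, p << 0.001
```

The result, r ≈ 0.55 with a vanishingly small p value, is in line with the 0.50–0.59 range reported for the sex-specific groups in Table 3.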
### 3.2. Diagnostic Potential of Clinical Examinations in Prediction of TM Presence in Patients with Thyroid Nodules
The diagnostic potential of clinical examinations for TM occurrence was evaluated in terms of the capacity to rule out malignancy in patients with benign thyroid disease (the controls). The overall accuracy of clinical examinations in predicting the presence of TM was high in all subgroups of patients (Table 4).
Table 4: Diagnostic potential of the clinical suspicion of malignancy parameter in prediction of thyroid malignant tumor presence in the total study group and age subgroups of patients with thyroid nodules (ROC analysis).

| | Total study group (n=3,749) | <45 years (n=1,173) | ≥45–<65 years (n=1,869) | ≥65 years (n=707) |
|---|---|---|---|---|
| AUC | 0.825 | 0.850 | 0.800 | 0.829 |
| 95% CI | 0.790–0.855 | 0.800–0.900 | 0.751–0.850 | 0.772–0.885 |
| p value | <0.0001∗ | <0.0001∗ | <0.0001∗ | <0.0001∗ |
| SE | 0.015 | 0.026 | 0.025 | 0.029 |
| Sensitivity | 0.72 | 0.77 | 0.66 | 0.75 |
| Specificity | 0.93 | 0.93 | 0.94 | 0.90 |
| Accuracy | 0.91 | 0.92 | 0.92 | 0.89 |
| LR(+) | 10.43 | 10.92 | 11.33 | 7.86 |
| LR(−) | 0.30 | 0.25 | 0.36 | 0.27 |
| PPV | 0.48 | 0.49 | 0.46 | 0.50 |
| NPV | 0.97 | 0.98 | 0.97 | 0.97 |
| Youden's index | 0.65 | 0.70 | 0.60 | 0.66 |

AUC: area under ROC curve; 95% CI: confidence interval; SE: standard error; LR(+): likelihood ratio of positive results; LR(−): likelihood ratio of negative results; PPV: positive predictive value; NPV: negative predictive value; ∗statistically significant.
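The PPV and NPV rows follow from the sensitivity, specificity, and the cohort prevalence (309/3,749 ≈ 8.2%) via Bayes' rule in odds form. A minimal sketch using the rounded total-group values:

```python
# PPV from sensitivity, specificity, and prevalence via Bayes' rule in odds form,
# using the rounded total-group values of Table 4.
sens, spec = 0.72, 0.93
prevalence = 309 / 3749                 # ~8.2% malignant in the cohort
lr_pos = sens / (1.0 - spec)            # ~10.3 here; Table 4 lists 10.43 (unrounded inputs)
pretest_odds = prevalence / (1.0 - prevalence)
posttest_odds = pretest_odds * lr_pos
ppv = posttest_odds / (1.0 + posttest_odds)
print(f"LR(+) = {lr_pos:.2f}, PPV = {ppv:.2f}")   # PPV ~ 0.48, matching Table 4
```

The small LR(+) discrepancy (10.29 versus 10.43) arises only from rounding the sensitivity and specificity to two decimals; the resulting PPV matches the table.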
### 3.3. Demographic, Clinical, and Pathological Characteristics of Patients with TM
As shown in Table 5, the subgroup of the oldest TM patients included a significantly higher number of subjects with advanced cancer (stage III or IV disease) and with primary tumor progression (T3 and T4) (for both p<0.0001). Distant metastases were significantly more frequent among the elderly patients than among patients below 65 years old (p=0.032). No significant differences were observed in the type of surgery, necessity of reoperation, type of nodule, or lymph node metastases between the three subgroups of TM patients (Table 5).
Table 5: Clinical and pathological characteristics of thyroid malignancy (TM) patients (n=309) divided into three subgroups according to age. Data are presented as n (%).

| | N | <45 years (n=96) | ≥45–<65 years (n=132) | ≥65 years (n=81) | p value |
|---|---|---|---|---|---|
| **Gender** | | | | | |
| Female | 263 | 85 (88.5) | 110 (83.3) | 68 (84.0) | 0.520 |
| Male | 46 | 11 (11.5) | 22 (16.7) | 13 (16.0) | |
| **Type of surgery** | | | | | |
| Radical | 202 | 65 (67.7) | 82 (66.7) | 46 (63.0) | 0.803 |
| Nonradical | 107 | 31 (32.3) | 41 (33.3) | 27 (37.0) | |
| **Reoperation** | | | | | |
| No | 217 | 70 (72.9) | 87 (70.7) | 50 (70.4) | 0.919 |
| Yes | 92 | 26 (27.1) | 36 (29.3) | 21 (29.6) | |
| **pTNM** | | | | | |
| I | 199 | 88 (91.7) | 84 (67.2) | 23 (31.9) | <0.0001∗ |
| II | 48 | 5 (5.2) | 21 (16.8) | 18 (25.0) | |
| III | 31 | 1 (1.0) | 11 (8.8) | 15 (20.8) | |
| IV | 31 | 2 (2.1) | 9 (7.2) | 16 (22.2) | |
| **pT** | | | | | |
| T1a | 125 | 45 (47.9) | 62 (49.2) | 16 (22.2) | <0.0001∗ |
| T1b | 68 | 31 (32.9) | 26 (20.6) | 8 (11.1) | |
| T2 | 56 | 15 (15.9) | 19 (15.1) | 19 (26.3) | |
| T3 | 25 | 1 (1.0) | 8 (6.4) | 13 (18.1) | |
| T4a | 17 | 2 (2.1) | 8 (6.4) | 4 (5.6) | |
| T4b | 18 | 0 (0.0) | 3 (2.3) | 12 (16.7) | |
| **pN** | | | | | |
| N0 | 148 | 49 (51.0) | 60 (47.6) | 35 (48.6) | 0.071 |
| N1a | 42 | 12 (12.5) | 12 (9.5) | 14 (19.4) | |
| N1b | 20 | 5 (5.2) | 4 (3.2) | 7 (9.7) | |
| Nx | 99 | 30 (31.3) | 50 (39.7) | 16 (22.2) | |
| **pM** | | | | | |
| M0 | 218 | 77 (80.2) | 87 (69.1) | 54 (75.0) | 0.032∗ |
| M1 | 6 | 0 (0.0) | 2 (1.6) | 4 (5.6) | |
| Mx | 85 | 19 (19.8) | 37 (29.3) | 14 (19.4) | |
| **Type of nodule** | | | | | |
| Solitary | 223 | 71 (73.9) | 88 (69.8) | 56 (77.8) | 0.468 |
| Multiple | 86 | 25 (26.1) | 38 (30.2) | 16 (22.2) | |

∗Statistically significant.
## 4. Discussion
The prevalence of thyroid malignancy increases with age, and in the elderly it is a more aggressive process [20, 21]. Papillary and follicular thyroid carcinomas have been estimated to behave more aggressively after the age of 45, and anaplastic thyroid carcinoma is extremely rarely observed before the age of 60 [22]. Some authors have shown that patients aged 65 and older present more aggressive tumors with extrathyroidal spread, multiple and larger lesions, and more advanced-stage disease [23]; they added that nonpapillary types of carcinoma were observed most often. Diagnostics and qualification for thyroid surgery in elderly patients remain a set of challenges. The effect of age on the risk of thyroid surgery is still under debate, and the literature contains opposing opinions: some authors suggest that elderly patients face no higher risk of complications after thyroid surgery [2], whereas others report that the probability of surgical complications is very high [21], and still others indicate that thyroid surgery carries various risks for older people [24]. On the basis of these opinions, we consider proper diagnostics and qualification for surgery in elderly patients extremely important. The majority of authors agree that there are two main indications for thyroid surgery in elderly patients: mechanical compression symptoms caused by a solitary tumor or goiter, often localized in the retrosternal space, and suspected or verified malignancy [21, 25–27]. Indications such as compressive syndrome, thyrotoxicosis, and recent gland enlargement have been described as more common in elderly patients than in the younger population (in patients above 70 years, the percentages were 38.2%, 30.9%, and 27.3%, respectively) [26]. UG-FNAB remains the basic diagnostic tool for ruling out or confirming malignancy in thyroid tumors; a large number of studies have demonstrated its high overall accuracy in the evaluation of thyroid nodules, particularly in patients with solitary nodules [28]. Some authors have reported that older patients underwent thyroid surgery more often for suspected or verified malignancy than younger patients (52.7% versus 30.3%) [27], with tracheal compression the next most common indication (38.2% versus 3.1%); the same authors note that benign, asymptomatic MNG, in contrast to the situation in the younger population, is a very rare indication for strumectomy in elderly patients (9.1% versus 66.6%). However, the literature offers no clear data on differences in the extent of surgery between age groups. In a very interesting study, Ríos et al. [19] analyzed 591 patients, including 81 individuals above 65 years old, who underwent thyroidectomy for MNG. They found that elderly patients presented compressive symptoms more often than younger ones (43% versus 21%) and less often suspicion of malignancy (19% versus 29%), recent goiter growth (1% versus 6%), or patient request (4% versus 12%). In our study, the main indication for surgery in the elderly group was compression symptoms; the second was a verified malignant tumor or suspicion of malignancy. The number of retrosternal goiters in elderly patients was significantly higher than in younger ones, and only in this group did we observe urgent indications for surgery, which had to be performed immediately after admission to our department.
Park et al. [29] performed a very interesting study comparing three age groups of patients who underwent surgery for well-differentiated thyroid carcinoma: patients aged 45–64, 65–79, and 80 years and older. They noticed that patients aged 65 years and older had more aggressive malignant disease, with multiple, larger tumors and more advanced-stage disease; in contrast to younger patients, this group showed many nonpapillary thyroid carcinomas and cases of extraglandular extension. The same authors observed that these elderly patients received less radical treatment, without radioiodine ablation therapy, even though American Thyroid Association guidelines recommended more aggressive treatment [30]. In a very similar observation, Panigrahi et al. [31] analyzed 2,033 patients surgically treated for medullary thyroid carcinoma. Among all patients without local invasion and distant metastases, those above 65 years had the lowest percentage of individuals receiving the recommended treatment: 65% in patients below 40 years old compared with only 45% in those above 65. In another study, the authors analyzed the most common indications for thyroid surgery in three age groups: 50–60 years (725 patients), 61–74 years (685 patients), and more than 75 years (221 patients). The most common indication in all groups was retrosternal goiter with tracheal compression, although it was least frequent in the oldest group, which also had the most remedial surgeries [25]. Raffaelli et al. [32] analyzed the indications for thyroid surgery in 320 patients aged 70 years and more; the most common indication was bilateral multinodular goiter, followed by suspected or confirmed malignancy and toxic goiter. In similar observations, Lang and Lo [26] confirmed that in patients aged 70 years and more the most common indication for thyroid surgery was retrosternal goiter, adding that goiter volume in this group was significantly higher. Another very important aspect of thyroid surgery in elderly patients is emergency thyroidectomy, which is indicated in cases of severe respiratory distress caused by airway compression. Miccoli et al. [33] found that the most common reason for acute airway compression in patients above 80 years was a malignant process. A retrosternal goiter, or its mediastinal location, very often compresses the trachea and causes respiratory failure. In our clinic, during the analyzed period, we treated 11 patients with huge retrosternal goiters causing acute compression symptoms. However, a retrosternal goiter does not always compress the trachea, and in asymptomatic patients, especially of geriatric age, the decision for surgery is extremely difficult. In light of recent studies suggesting a high risk of complications after thyroid surgery in the elderly, it is very valuable to identify the patients who will become symptomatic without surgery, those who benefit from observation alone, and finally those who need a rapid surgical procedure. In our study, we operated on 5 patients requiring emergency admission and emergency surgery because of acute airway obstruction from a large compressive goiter.
Another important issue in elderly patients is secondary operations, which are performed mainly in two clinical situations: local or lymph node recurrence of a thyroid malignant process, and recurrent goiter with compression symptoms. Some authors report that the number of secondary thyroid operations is significantly higher in elderly patients than in the younger group [25, 27], and our study confirmed these observations. Previous analyses also show that some authors pay attention to a very important aspect of older people's lives, namely, quality of life [34]. This problem seems especially important in elderly patients, in whom surgery does not always improve the quality of life. In our study, we treated 6 patients in whom the only surgical procedures we could perform were tracheostomy and gastrostomy; all had advanced malignant processes qualified as inoperable. Matsuyama et al. [34] performed 85 surgical procedures in elderly patients with thyroid carcinoma and showed that successful operation in these patients improved the quality of life. Mekel et al. [21] analyzed the indications for thyroid surgery in 332 patients, divided into octogenarians and younger patients, and found no differences in preoperative indications: both groups had equal numbers of indications such as benign disease, suspected malignancy, and follicular neoplasm. A final issue in thyroid surgery in elderly patients is the higher prevalence of more aggressive types of thyroid carcinoma than in younger patients. Dellal et al. [20] described observations in 933 patients and noticed that the rates of anaplastic and Hürthle cell cancers were increased in geriatric ages; they concluded that cytological evaluation of thyroid lesions deserves special consideration because of the increased tendency toward aggressive thyroid cancer types in elderly patients. We confirmed these observations in our study, noting a significantly higher number of individuals with aggressive thyroid malignant tumors in the subgroup of patients aged 65 and older.
## 5. Conclusions
The rate of cytological prediction of malignancy in elderly women is high. Tracheal compression is a common surgical indication in elderly patients. Final diagnoses of malignancy predominate in elderly women, but not in men. The oldest thyroid cancer patients present a higher number of advanced thyroid tumors and distant metastases, but not lymph node metastases. Undifferentiated carcinomas, sarcomas, secondary tumors, and lymphomas of the thyroid occur with increasing frequency compared with younger counterparts; however, well-differentiated carcinomas still predominate. Retrosternal goiter is a common surgical indication in elderly patients. Every thyroid pathology in elderly patients should therefore be considered carefully, given the higher prevalence of malignant tumors with a more aggressive course.
---
*Source: 1012451-2017-10-16.xml* | 1012451-2017-10-16_1012451-2017-10-16.md | 44,038 | Diagnostics of Thyroid Malignancy and Indications for Surgery in the Elderly and Younger Counterparts: Comparison of 3,749 Patients | Krzysztof Kaliszewski; Dorota Diakowska; Marta Strutyńska-Karpińska; Beata Wojtczak; Michał Aporowicz; Zdzisław Forkasiewicz; Waldemar Balcerzak; Tadeusz Łukieńczuk; Paweł Domosławski | BioMed Research International
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1012451 | 1012451-2017-10-16.xml | ---
*Source: 1012451-2017-10-16.xml* | 2017 |
# Bioanalytical Method Development and Validation of Memantine in Human Plasma by High Performance Liquid Chromatography with Tandem Mass Spectrometry: Application to Bioequivalence Study
**Authors:** Ravi Kumar Konda; B. R. Challa; Babu Rao Chandu; Kothapalli B. Chandrasekhar
**Journal:** Journal of Analytical Methods in Chemistry
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101249
---
## Abstract
A simple, sensitive, and rapid HPLC-MS/MS method was developed and validated for the quantitative estimation of memantine in human plasma. Chromatography was performed on a Zorbax SB-C18 (4.6 × 75 mm, 3.5 μm) column. Memantine (ME) and the internal standard memantine-d6 (MED6) were extracted by liquid-liquid extraction and analyzed by LC-ESI-MS/MS in multiple-reaction monitoring (MRM) mode. The assay exhibited a linear dynamic range of 50.00–50000.00 pg/mL for ME in human plasma. The method demonstrated intra- and interday precision within the ranges of 2.1–3.7% and 1.4–7.8%, respectively, and intra- and interday accuracy within the ranges of 95.6–99.8% and 95.7–99.1%, respectively. The mean recoveries of ME and MED6 were 86.07 ± 6.87% and 80.31 ± 5.70%, respectively. The described method was successfully employed in a bioequivalence study of ME in healthy Indian male volunteers under fasting conditions.
---
## Body
## 1. Introduction
Memantine (1-amino-3,5-dimethyladamantane hydrochloride) (Figure 1) acts on the glutamatergic system by blocking N-methyl-D-aspartate (NMDA) glutamate receptors [1]. Memantine (ME) is used in Parkinson’s disease and movement disorders, and it has recently been demonstrated to be useful in dementia syndrome. Its mode of action is thought to involve prevention of damage to retinal ganglion cells resulting from increased intraocular pressure. The accumulation of a drug in melanin-rich tissues may have serious physiological consequences, as it could lead to potentially toxic effects; despite several investigations into the nature of drug-melanin binding, the exact mechanism of the interaction remains unknown. ME is well absorbed, with peak plasma concentrations (Cmax) ranging from 22 to 46 ng/mL following a single 20 mg dose. The time to maximum plasma concentration (Tmax) following single doses of 10–40 mg ranges from 3 to 8 hr. The drug is 45% bound to plasma proteins, with a distribution volume of approximately 9–11 L/kg, which suggests extensive distribution into tissues. It is poorly metabolized by the liver, and 57–82% of the administered dose is excreted unchanged in the urine, with a mean terminal half-life of 70 hr [1].
Figure 1: Chemical structures of (a) memantine hydrochloride and (b) memantine-d6 hydrochloride.
A few methods were established previously to determine ME in a variety of matrices with different instruments, including LC-MS [1–4], HPLC [5–8], GC-MS [9], and micellar electrokinetic chromatography [10]. Among these, LC-MS [1–4] has gained the most importance. Liu et al. [1] developed a method with a linear concentration range of 0.2–200 ng/mL and a sensitivity of 0.2 ng/mL. This sensitivity was improved by Almeida et al. [2], whose method had a linear concentration range of 0.1 to 50 ng/mL and a sensitivity of 0.1 ng/mL. Pan et al. [3] developed a method with a linear concentration range of 0.1 to 25 ng/mL, using 0.5 mL of plasma to reach 0.1 ng/mL sensitivity. Koeberle et al. [4] developed a method for different melanins. None of the reported methods used a deuterated internal standard matched to the analyte, which is most important in bioanalytical method development, and all of them involve long run times and larger volumes of plasma for extraction. The purpose of this investigation was therefore to develop a rapid, simple, sensitive, and selective LC-MS/MS method for the quantitative estimation of ME in a smaller volume of human plasma using a deuterated internal standard. It is also expected that this method will provide an efficient solution for pharmacokinetic, bioavailability, and/or bioequivalence studies of ME.
## 2. Materials and Methods
### 2.1. Chemicals
ME (99.9%) was obtained from Varda Biotech Pvt. Ltd., Andheri, Mumbai, India. MED6 (99.0%) was obtained from Toronto Research Chemicals, Toronto, Canada. Blank plasma lots were purchased from Navjeevan Blood Bank, Hyderabad, India. HPLC-grade methanol and acetonitrile were purchased from J.T. Baker, USA. Diethyl ether and n-hexane were purchased from Lab Scan Asia Co. Ltd., Bangkok, Thailand. Formic acid and sodium hydroxide were purchased from Merck, Mumbai, India. HPLC-grade water from a Milli-Q system was used. All other chemicals were of analytical grade.
### 2.2. Instrumentation and Chromatographic Conditions
An HPLC system (1200 series, Agilent Technologies, Germany) was coupled to an API 4000 triple quadrupole mass spectrometer (ABI-SCIEX, Toronto, Canada) operated in multiple reaction monitoring (MRM) mode with a turbo electrospray interface in positive ionization mode. Data processing was performed with the Analyst 1.4.1 software package (SCIEX). Chromatography was performed on a Zorbax SB-C18 (4.6×75 mm, 3.5 μm) column (Agilent Technologies, Germany) at 40°C. The mobile phase was a mixture of 0.1% formic acid : acetonitrile (35 : 65 v/v) pumped at a flow rate of 0.6 mL/min without split.
### 2.3. Preparation of Calibration Standards and Quality Control Samples
Standard stock solutions of ME (100.00 μg/mL) and MED6 (100.00 μg/mL) were separately prepared in methanol. A MED6 working dilution (25.00 ng/mL) was made from the MED6 stock solution with diluent (methanol : water, 50 : 50 v/v). Standard stock solution of ME was added to drug-free human plasma to obtain ME calibration standards of 50.00, 100.00, 500.00, 1000.00, 5000.00, 10000.00, 20000.00, 30000.00, 40000.00, and 50000.00 pg/mL. Quality control (QC) samples were also prepared in bulk, from an independent weighing of the standard drug, at concentrations of 50.00 (LLOQ), 150.00 (LQC), 15000.00 (MQC), and 35000.00 pg/mL (HQC). The calibration standards and QC samples were divided into aliquots in 5 mL Ria vials and stored in the freezer below −30°C until analysis.
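The dilution arithmetic behind such spiked standards is straightforward to script. The sketch below is a minimal illustration; the working-solution concentration and final plasma volume are hypothetical values chosen for the example, not taken from the protocol.

```python
# Minimal sketch of spiking arithmetic for plasma calibration standards.
# The 1 ug/mL working solution and 10 mL plasma pool are illustrative assumptions.
TARGETS_PG_ML = [50, 100, 500, 1000, 5000, 10000, 20000, 30000, 40000, 50000]

def spike_volume_ul(target_pg_ml, final_volume_ml=10.0, working_pg_ml=1.0e6):
    """Volume (uL) of working solution needed to bring blank plasma to the target."""
    return target_pg_ml * final_volume_ml / working_pg_ml * 1000.0

for c in TARGETS_PG_ML:
    print(f"{c:>8} pg/mL -> spike {spike_volume_ul(c):7.2f} uL into 10 mL blank plasma")
```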
### 2.4. Sample Preparation
50 μL of MED6 working solution (25 ng/mL), 100 μL of plasma sample, and 100 μL of 10 mM NaOH were added to a 5 mL Ria vial and vortexed briefly. This was followed by the addition of 3 mL of extraction solvent (diethyl ether : n-hexane, 70 : 30 v/v) and vortexing for 10 min. The samples were then centrifuged at 4000 rpm for 5 min at ambient temperature. The supernatant from each sample was transferred into a labelled vial using the dry-ice/acetone flash-freeze technique and evaporated to dryness under a nitrogen stream at 40°C. The dried residue was reconstituted with 400 μL of 0.1% formic acid : acetonitrile (35 : 65 v/v) and vortexed until dissolved. Finally, 20 μL of each sample was transferred into an autosampler vial and injected into the HPLC coupled to the mass spectrometer.
### 2.5. Recovery
Recovery of ME was evaluated by comparing the mean peak area of six extracted low, medium, and high (150.00, 15000.00, and 35000.00 pg/mL) quality control samples to the mean peak area of six aqueous standards at the same concentrations. Similarly, the recovery of MED6 was evaluated by comparing the mean peak area of extracted quality control samples to the mean peak area of MED6 in aqueous standard samples at the same concentration.
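This comparison reduces to a ratio of mean peak areas. A minimal sketch follows; the peak-area values are placeholders for illustration, not measured data.

```python
import statistics

def percent_recovery(extracted_areas, aqueous_areas):
    """Extraction recovery: mean extracted response as a percentage of the mean
    unextracted (aqueous) response at the same nominal concentration."""
    return 100.0 * statistics.mean(extracted_areas) / statistics.mean(aqueous_areas)

# Hypothetical peak areas for six extracted LQC replicates vs. six aqueous standards
extracted = [8120, 8245, 7980, 8067, 8150, 8019]
aqueous = [10150, 10230, 10080, 10190, 10110, 10160]
print(f"LQC recovery: {percent_recovery(extracted, aqueous):.1f}%")  # ~79.7%
```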
### 2.6. Selectivity
The selectivity of the method was determined by analyzing blank human plasma samples from six different healthy human volunteers to test for potential interference from endogenous compounds coeluting with ME and MED6. The chromatographic peaks of ME and MED6 were identified on the basis of their retention times and MRM responses. The mean peak area in blank samples at the corresponding retention time should not exceed 20% of the LOQ peak area for ME and 5% for MED6.
### 2.7. Limit of Quantification (LOQ)
The LOQ was estimated by the baseline-noise method at a signal-to-noise ratio (S/N) of 5. It was experimentally determined by injecting six samples of ME at the LLOQ concentration. The acceptance criterion was S/N ≥ 5, calculated by selecting a noise region as close as possible to the signal peak and spanning at least 8 times the signal peak width at half height.
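An S/N estimate of this kind can be sketched in a few lines. The snippet below uses one common convention (baseline-corrected peak height over half the peak-to-peak noise); vendor software such as Analyst may compute it differently, and the simulated chromatogram is purely illustrative.

```python
import numpy as np

def signal_to_noise(trace, peak_slice, noise_slice):
    """Baseline-noise S/N: baseline-corrected peak height divided by half the
    peak-to-peak noise in a region adjacent to the peak (one common convention)."""
    noise = np.asarray(trace, float)[noise_slice]
    peak = np.asarray(trace, float)[peak_slice]
    height = peak.max() - noise.mean()
    return height / ((noise.max() - noise.min()) / 2.0)

# Simulated trace: flat baseline plus a peak at 1.45 min over a 3.5 min run;
# the noise window spans well over 8x the peak width at half height.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.5, 700)
trace = 5.0 + rng.normal(0.0, 1.0, t.size) + 60.0 * np.exp(-(((t - 1.45) / 0.03) ** 2))
print(f"S/N = {signal_to_noise(trace, slice(280, 300), slice(100, 260)):.1f}")
```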
### 2.8. Analytical Curves
The analytical curves of ME were constructed over the concentration range 50.00 to 50000.00 pg/mL in human plasma. The calibration curve was constructed by plotting the instrument response (ratio of ME peak area to MED6 peak area) against the ME concentration (pg/mL) on four consecutive days, using a weighted (1/x²) quadratic regression model. The fitness of the calibration curve was confirmed by back-calculating the concentrations of the calibration standards.
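A weighted (1/x²) quadratic fit of this kind is ordinary weighted least squares. The sketch below shows one way to reproduce it; the response ratios are synthetic values generated from the mean curve reported later in Section 3.4, standing in for real instrument data.

```python
import numpy as np

# Nominal calibration concentrations (pg/mL); responses are synthetic stand-ins
x = np.array([50, 100, 500, 1000, 5000, 10000, 20000, 30000, 40000, 50000], float)
y = -9.427e-11 * x**2 + 9.194e-5 * x + 2.989e-4

# Weighted LS for y = a*x^2 + b*x + c, minimizing sum(w_i * r_i^2) with w = 1/x^2:
# scale the design matrix and responses by sqrt(w), then solve ordinary LS.
w = 1.0 / x**2
V = np.column_stack([x**2, x, np.ones_like(x)])
sw = np.sqrt(w)
(a, b, c), *_ = np.linalg.lstsq(sw[:, None] * V, sw * y, rcond=None)
r2 = np.corrcoef(y, V @ np.array([a, b, c]))[0, 1] ** 2
print(f"a={a:.3e}, b={b:.3e}, c={c:.3e}, r^2={r2:.6f}")
```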
### 2.9. Calibration Curve Standards, Regression Model, Precision, and Accuracy Batches
Calibration curve standards and QC samples were prepared in replicates (n = 6) for analysis. Correlation coefficients (r²) were obtained with the quadratic regression model over the whole range of tested concentrations. The accuracy and precision of the back-calculated concentrations of the calibration points should be within ±15% of their nominal values, and within ±20% at the LLOQ.
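These acceptance checks come down to a %CV and a percent-of-nominal computation per concentration level. A minimal sketch follows; the replicate values are hypothetical.

```python
import numpy as np

def precision_accuracy(measured, nominal):
    """Precision as %CV and accuracy as % of nominal for back-calculated replicates."""
    m = np.asarray(measured, float)
    return 100.0 * m.std(ddof=1) / m.mean(), 100.0 * m.mean() / nominal

def passes(cv, acc, is_lloq=False):
    """Within +/-15% of nominal for standards and QCs, widened to +/-20% at the LLOQ."""
    limit = 20.0 if is_lloq else 15.0
    return cv <= limit and abs(acc - 100.0) <= limit

# Hypothetical LQC replicates (nominal 150.00 pg/mL)
cv, acc = precision_accuracy([143.4, 147.1, 140.2, 145.8, 141.9, 142.6], 150.0)
print(f"CV = {cv:.2f}%, accuracy = {acc:.2f}%, pass = {passes(cv, acc)}")
```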
### 2.10. Stability
Low and high QC samples (n = 6) were retrieved from the deep freezer and processed through three freeze/thaw cycles according to the clinical protocols. The samples were stored at −10°C to −30°C over three cycles of 24, 36, and 48 hr. In addition, the long-term stability of ME in QC samples was evaluated after 76 days of storage at −10°C to −30°C. Stability at refrigerated temperature was studied after a 79 hr storage period in the autosampler tray, and bench-top stability was studied over a 26-hour period. Stability samples were processed and extracted along with freshly spiked calibration curve standards. Stability of the stock solutions was demonstrated for 24 days. The precision and accuracy for the stability samples were required to be within 15% and ±15%, respectively, of their nominal concentrations.
### 2.11. Matrix Effect
The matrix effect (ion suppression or enhancement of the signal by the plasma matrix) was evaluated by comparing the absolute response of QC samples after pretreatment (liquid-liquid extraction) with the response of extracted blank plasma reconstituted and spiked with the analyte at the same concentration. Experiments were performed at low and high concentration levels in triplicate. The acceptance criterion for precision (%CV) was ≤15%.
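The comparison can be expressed as a matrix factor per replicate plus a %CV across replicates. The sketch below uses the common post-extraction-spike ratio convention and placeholder peak areas; it is an illustration, not the study's actual computation.

```python
import numpy as np

def matrix_effect(post_extraction_spiked, reference):
    """Matrix factor (%) per replicate: response of blank extract spiked with
    analyte relative to a reference response; 100% means no suppression/enhancement."""
    mf = 100.0 * np.asarray(post_extraction_spiked, float) / np.asarray(reference, float)
    cv = 100.0 * mf.std(ddof=1) / mf.mean()   # acceptance: %CV <= 15
    return mf, cv

# Hypothetical triplicate peak areas at one QC level
mf, cv = matrix_effect([9850, 10120, 9930], [10000, 10050, 9980])
print(f"matrix factors = {np.round(mf, 1)}, CV = {cv:.2f}%")
```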
### 2.12. Analysis of Human Plasma Samples
The bioanalytical method described above was applied to determine ME concentrations in plasma following oral administration to healthy adult male volunteers below 25 years of age. The volunteers were enrolled by Micro Therapeutics Research Labs Pvt. Ltd., Chennai, India. They were screened before participation in the study, and informed consent was obtained from each of them; the volunteers had not taken any other medication before the study. Each of the 20 volunteers received one tablet containing 10 mg of ME, administered orally with 240 mL of drinking water, and a proper diet was provided as per the protocol. The reference product (Namenda tablets, 10 mg, Forest Laboratories, Ireland) and a test product (Memantine tablets, 10 mg) were used in the study. The study protocol was approved by the IEC (Institutional Ethical Committee) and by the ICMR (Indian Council of Medical Research). Blood samples were collected predose (0 hr, 5 minutes prior to dosing), followed by further samples at 1, 2, 3, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 12, 24, 48, and 72 hr. At each preestablished time a 5 mL blood sample was collected in vacutainers containing K2EDTA. A total of 34 samples (17 time points each for reference and test) were collected, centrifuged at 3200 rpm and 10°C for 10 min, and stored at −30°C until analysis. Test and reference were administered to the same volunteers under fasting conditions in separate periods after a washout of 18 days, as per the IEC-approved protocol.
### 2.13. Pharmacokinetics and Statistical Analysis
Pharmacokinetic parameters were calculated from the plasma levels by applying a noncompartmental model using WinNonlin 5.0 software (Pharsight, USA). Following the Food and Drug Administration (FDA) guidelines [11, 12], blood samples were drawn over a period of three to five terminal elimination half-lives (t1/2), ensuring an AUC0-t/AUC0-∞ ratio higher than 80%. The Cmax and Tmax values were determined by visual inspection of the plasma ME concentration-time profiles. The area under the concentration-time curve up to the last measurable concentration (AUC0-t) was obtained by the trapezoidal method. The total area under the curve (AUC0-∞) was obtained by adding to AUC0-t the extrapolated portion calculated from the last measurable concentration and the terminal elimination rate constant (Ke). Ke was estimated from the slope of the terminal exponential phase of the plasma ME concentration-time curve by linear regression, and t1/2 was then calculated as 0.693/Ke. Bioequivalence for AUC0-t, AUC0-∞, and Cmax was assessed by analysis of variance (ANOVA), and the standard 90% confidence intervals (90% CIs) of the test/reference ratios were calculated after logarithmic transformation of the data. Bioequivalence was concluded when the ratio of averages of the log-transformed data was within 80–125% for AUC0-t, AUC0-∞, and Cmax [11, 12].
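The noncompartmental quantities and the 90% CI test described here can be sketched in a few lines. The snippet below is a simplified paired approximation of the crossover ANOVA actually used (WinNonlin); all concentration and AUC values in the usage example are illustrative, not study data.

```python
import numpy as np
from scipy import stats

def pk_params(t, c, n_terminal=4):
    """Noncompartmental PK from one concentration-time profile (t in hr, c in pg/mL)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax, tmax = c.max(), t[c.argmax()]
    auc_t = np.trapz(c, t)                                  # linear trapezoidal AUC(0-t)
    ke = -np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    auc_inf = auc_t + c[-1] / ke                            # extrapolate past last measurable point
    return dict(Cmax=cmax, Tmax=tmax, AUC0_t=auc_t, AUC0_inf=auc_inf, t_half=0.693 / ke)

def be_90ci(test, ref):
    """Simplified paired 90% CI for the test/reference geometric mean ratio;
    the study itself used ANOVA on log-transformed data."""
    d = np.log(np.asarray(test, float)) - np.log(np.asarray(ref, float))
    half = stats.t.ppf(0.95, d.size - 1) * d.std(ddof=1) / np.sqrt(d.size)
    return tuple(100.0 * np.exp(v) for v in (d.mean(), d.mean() - half, d.mean() + half))

# Illustrative profile over the study's sampling schedule
t = [1, 2, 3, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 12, 24, 48, 72]
c = [2000, 5200, 8000, 10500, 11500, 12600, 13400, 13900, 14100, 14368,
     14200, 13900, 11800, 8900, 6400, 4500]
print(pk_params(t, c))

# Hypothetical per-subject AUC(0-t) pairs; bioequivalent if the CI lies within 80-125%
ratio, lo, hi = be_90ci(test=[674000, 640000, 702000, 655000, 688000],
                        ref=[654000, 630000, 690000, 660000, 670000])
print(f"GMR = {ratio:.1f}%, 90% CI = ({lo:.1f}%, {hi:.1f}%)")
```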
## 3. Results and Discussion
### 3.1. Method Development and Validation
Tuning the mass spectrometry parameters, fragmentation pattern, and ionization mode was the main task in obtaining the respective fragment ions and responses for ME and MED6, shown in Figures 2(a)–2(d). ESI-LC-MS/MS is a powerful technique for pharmacokinetic studies because it meets the sensitivity and selectivity requirements of such analytical methods. The MRM technique was chosen for assay development, and the MRM parameters were optimized to maximize the response for the analyte.

Figure 2: (a) Mass spectrum of the Memantine parent ion (Q1). (b) Mass spectrum of the Memantine product ion (Q3). (c) Mass spectrum of the Memantine-D6 parent ion (Q1). (d) Mass spectrum of the Memantine-D6 product ion (Q3).
The instrumental parameters for mass spectrometry were optimized. The source temperature was 600°C; the gas pressures of the nebulizer, heater, curtain, and CAD gas were 40, 30, 20, and 4 psi, respectively. The ion spray voltage, entrance potential, declustering potential, collision energy, and collision cell exit potential were optimized at 5500, 10, 50, 32, and 12 V, respectively. The dwell time was 400 ms for both ME and MED6.

The product ion (Q3) mass spectra of ME and MED6 are shown in Figures 2(b) and 2(d). [M + H]+ was the predominant ion in the Q1 spectrum; the Q1 ions for ME and MED6 were at m/z 180.2 and 186.1, respectively, and were used as precursor ions to obtain the product ion spectra. The collisionally activated dissociation (CAD) mass spectrum of ME shows characteristic product ions at m/z 161.8, 163.2, and 165.1; the major product ion at m/z 163.2 can be explained by loss of ammonia (17 Da) from the protonated 1-amino-3,5-dimethyladamantane precursor. The CAD mass spectrum of MED6 shows a characteristic major product ion at m/z 169.2, arising analogously from the protonated 3,5-dimethyl-d6-1-adamantanamine precursor. The most sensitive mass transitions were m/z 180.2 → 163.2 for ME and m/z 186.1 → 169.2 for MED6; the proposed fragmentation patterns are shown in Figures 2(a)→2(b) and 2(c)→2(d). The inherent selectivity of MS/MS detection was also expected to be beneficial in developing a selective and sensitive method.

The chromatographic conditions, in particular the composition and flow rate of the mobile phase, the choice of column, the injection volume, the column oven and autosampler temperatures, and the splitting of the sample into the ion source, as well as a short run time, were optimized through several trials to achieve good resolution and symmetric peak shapes for ME and MED6. A mixture of 0.1% formic acid : acetonitrile (35 : 65 v/v) achieved this purpose and was finally adopted as the mobile phase. The formic acid was necessary to lower the pH and protonate ME, delivering good peak shape; its percentage was optimized to maintain this peak shape while remaining consistent with good ionization and fragmentation in the mass spectrometer. The high proportion of organic solvent eluted both ME and MED6 at a retention time of 1.45 ± 0.2 min at a flow rate of 0.6 mL/min, produced good peak shapes, and permitted a run time of 3.5 min.

Liquid-liquid extraction (LLE) was used for sample preparation in this work because it yields clean samples, which are essential for minimizing ion suppression and matrix effects in LC-MS/MS analyses. Several organic solvents and their mixtures in different combinations and ratios were evaluated; diethyl ether/n-hexane (70 : 30) was found to be optimal, producing a clean chromatogram for blank plasma and yielding the highest recovery of ME and MED6 from plasma. Memantine-D6 hydrochloride was used as the internal standard. Clean chromatograms were obtained, and no significant direct interferences were observed in the MRM channels at the relevant retention times.
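For reference, the acquisition and chromatographic settings reported above can be collected in one place. The block below is a plain-Python summary of the stated values for readability; it is not Analyst method syntax.

```python
# Plain-Python summary of the reported method settings (not Analyst syntax).
MRM_TRANSITIONS = {          # precursor m/z -> product m/z
    "ME":   (180.2, 163.2),
    "MED6": (186.1, 169.2),
}

SOURCE = dict(ion_spray_voltage_V=5500, temperature_C=600,
              nebulizer_psi=40, heater_psi=30, curtain_psi=20, cad_psi=4)

COMPOUND = dict(entrance_potential_V=10, declustering_potential_V=50,
                collision_energy_V=32, collision_cell_exit_potential_V=12,
                dwell_time_ms=400)

CHROMATOGRAPHY = dict(column="Zorbax SB-C18, 4.6 x 75 mm, 3.5 um",
                      mobile_phase="0.1% formic acid : acetonitrile (35:65 v/v)",
                      flow_mL_min=0.6, column_temp_C=40, run_time_min=3.5)
```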
### 3.2. Selectivity
The selectivity of the method was examined by analyzing blank human plasma extracts (n = 6). A representative blank plasma chromatogram is shown in Figure 3(a); the other samples studied showed a similar absence of interference, with no significant direct interference from endogenous substances in drug-free human plasma at the retention time of the analyte.

Figure 3: (a) MRM chromatogram of blank plasma. (b) Chromatogram at the LOQ.
### 3.3. Limit of Quantification (LOQ)
The signal-to-noise (S/N) value found for six injections of ME at the LOQ concentration was 11.93. Figure 3(b) shows a representative ion chromatogram at the LOQ (50 pg/mL) with a 20 μL injection volume.
### 3.4. Linearity, Precision, and Accuracy
The ten-point calibration curve was linear over the concentration range 50.00–50000.00 pg/mL. The calibration model was selected based on analysis of the data by quadratic regression with an intercept and a 1/x² weighting factor. The best fit was achieved with the 1/x² weighting factor, giving a mean regression equation of y = −9.427 × 10⁻¹¹x² + 9.194 × 10⁻⁵x + 2.989 × 10⁻⁴ (y = ax² + bx + c), where y is the peak-area ratio of ME to MED6 and x is the concentration of ME in plasma (Table 1); a back-calculation sketch using this mean curve follows Table 2. For the between-batch experiments, the precision and accuracy ranged from 1.4 to 2.7% and 95.7 to 99.1%, respectively (Table 2). In within-batch experiments, the precision and accuracy ranged from 2.1 to 2.3% and 95.6 to 99.8%, respectively.

Table 1: Concentration data from validation.

| Spiked plasma concentration (pg/mL) | Concentration measured (pg/mL), mean ± SD (n = 5) | Precision (%CV) | Accuracy (%) |
| --- | --- | --- | --- |
| 50.00 | 49.92 ± 0.44 | 0.80 | 99.23 |
| 100.00 | 100.21 ± 1.91 | 1.90 | 98.14 |
| 500.00 | 502.73 ± 6.83 | 1.40 | 98.65 |
| 1000.00 | 1000.54 ± 11.53 | 1.10 | 98.92 |
| 5000.00 | 5005.06 ± 38.75 | 0.80 | 99.28 |
| 10000.00 | 9978.95 ± 160.56 | 1.60 | 98.45 |
| 20000.00 | 19871.04 ± 303.46 | 1.50 | 98.53 |
| 30000.00 | 29759.82 ± 508.47 | 1.70 | 98.37 |
| 40000.00 | 40310.88 ± 123.85 | 0.30 | 99.74 |
| 50000.00 | 50084.85 ± 266.72 | 0.50 | 99.55 |

Table 2: Precision and accuracy (analysis with spiked plasma samples at three different concentrations).

| Spiked plasma concentration (pg/mL) | Within-run (n = 6), measured (pg/mL), mean ± SD | Within-run precision (%CV) | Within-run accuracy (%) | Between-run (n = 30), measured (pg/mL), mean ± SD | Between-run precision (%CV) | Between-run accuracy (%) |
| --- | --- | --- | --- | --- | --- | --- |
| 150.00 | 143.40 ± 3.20 | 2.20 | 95.60 | 143.50 ± 3.90 | 2.70 | 95.70 |
| 15000.00 | 14746.40 ± 338.40 | 2.30 | 98.30 | 14719.10 ± 248.30 | 1.70 | 98.10 |
| 35000.00 | 34935.20 ± 730.40 | 2.10 | 99.80 | 34699.20 ± 498.40 | 1.40 | 99.10 |
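Back-calculated concentrations such as those in Table 1 follow from inverting the mean calibration curve. A minimal sketch, using the reported coefficients and the quadratic-formula root that lies on the calibration branch:

```python
import numpy as np

# Mean calibration curve reported above: y = A*x^2 + B*x + C
A, B, C = -9.427e-11, 9.194e-5, 2.989e-4

def back_calculate(y_obs):
    """Solve A*x^2 + B*x + (C - y) = 0 for x; with A < 0 both roots are positive,
    and the smaller one lies on the monotonic 50-50000 pg/mL calibration branch
    (the vertex sits near 4.9e5 pg/mL, far above the calibrated range)."""
    roots = np.roots([A, B, C - y_obs])
    real = roots[np.isreal(roots)].real
    return real[real >= 0].min()

for nominal in (50.0, 15000.0, 50000.0):
    y = A * nominal**2 + B * nominal + C          # forward-predicted response
    print(nominal, round(back_calculate(y), 2))   # recovers the nominal concentration
```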
### 3.5. Recovery
The recoveries of ME at low (150.00 pg/mL), medium (15000.00 pg/mL), and high (35000.00 pg/mL) plasma concentrations, with six replicate injections each, were 79.45 ± 6.20%, 91.25 ± 5.9%, and 87.52 ± 2.59%, respectively. The overall recovery of ME was 86.07 ± 6.87%, and the extraction recovery of MED6 (25.00 ng/mL) was 80.31 ± 5.70%. Recoveries of ME and MED6 were high, precise, and reproducible; the assay therefore proved robust for high-throughput bioanalysis.
### 3.6. Stability Studies
Quantification of ME in plasma subjected to three freeze-thaw cycles (−30°C to room temperature) demonstrated the stability of the analyte, with concentrations ranging from 98.00 to 104.00% of nominal. No significant degradation was observed after a 79-hour storage period in the autosampler tray, where final concentrations were between 100.00 and 105.00%. The room temperature stability of ME in QC samples after 26 hr was also evaluated; concentrations ranged between 99.00 and 102.00%. In addition, the long-term stability in low and high QC samples after 76 days of storage at −30°C was evaluated, with concentrations ranging from 98.00 to 103.00%. These results confirm the stability of ME in human plasma for at least 76 days at −30°C (Table 3).

Table 3: Stability of Memantine in human plasma samples (n = 6 per level).

| Stability condition | Spiked concentration (pg/mL) | Concentration measured (pg/mL) | Precision (%CV) | Accuracy (%) |
| --- | --- | --- | --- | --- |
| Room temperature, 26 hr in plasma | 150.00 | 151.23 | 1.00 | 100.82 |
| | 35000.00 | 34926.50 | 0.80 | 99.79 |
| Three freeze-thaw cycles | 150.00 | 153.02 | 2.30 | 102.01 |
| | 35000.00 | 34818.00 | 0.90 | 99.48 |
| Autosampler, 79 hr | 150.00 | 154.59 | 1.40 | 103.06 |
| | 35000.00 | 35213.50 | 0.90 | 100.61 |
| 76 days at −30°C | 150.00 | 151.53 | 3.20 | 101.02 |
| | 35000.00 | 35080.50 | 1.40 | 100.23 |
### 3.7. Application to Biological Samples
The proposed method was applied to the determination of ME in plasma samples for the purpose of establishing the bioequivalence of a single dose (10 mg tablet) in 20 healthy human volunteers. Typical plasma concentration versus time profiles are shown in Figure 4. Plasma concentrations of ME were within the standard curve range and remained above the LLOQ for the entire sampling period. The pharmacokinetic parameters for the test and reference products are shown in Tables 4 and 5. The mean AUC0-t/AUC0-∞ ratio was higher than 90%, in accordance with the Food and Drug Administration bioequivalence guidelines [11, 12]. The test/reference (T/R) ratios and 90% confidence intervals (90% CIs) for the overall analysis fell within the stipulated range (80–125%). Therefore, it can be concluded that the two ME formulations (reference and test) are bioequivalent in terms of rate and extent of absorption under fasting conditions.

Table 4: Mean pharmacokinetic parameters of Memantine in 20 healthy human volunteers after oral administration of 10 mg test and reference products.

| Pharmacokinetic parameter | Reference (mean ± SD) | Test (mean ± SD) |
| --- | --- | --- |
| Cmax (pg/mL) | 14368.57 ± 4044.16 | 14328 ± 4324.76 |
| AUC0-t (pg·hr/mL) | 654545.5 ± 70423.12 | 674564.4 ± 67858.99 |
| AUC0-∞ (pg·hr/mL) | 1053469.0 ± 77690.79 | 1136607 ± 74862.04 |
| Tmax (hr) | 7.0 | 7.5 |
| t1/2 (hr) | 49.29 | 53.35 |

AUC0-∞: area under the curve extrapolated to infinity. AUC0-t: area under the curve up to the last sampling time. Cmax: maximum plasma concentration. Tmax: time to reach peak concentration.

Table 5: Test/reference ratios of pharmacokinetic parameters of memantine after administration of 10 mg test and reference products in 20 healthy human volunteers.

| | Cmax (T/R) | AUC0-t (T/R) | AUC0-∞ (T/R) |
| --- | --- | --- | --- |
| Test/Reference (%) | 99.72 | 103.06 | 107.89 |

Figure 4: Mean plasma concentrations of the test versus reference product after a single oral 10 mg dose (one 10 mg tablet) in 20 healthy volunteers.
## 4. Conclusion
A simple, highly sensitive, specific, rugged, and reproducible LC-MS/MS method for the determination of memantine in human plasma was developed and validated per FDA guidelines. The method was successfully applied in a bioequivalence study to evaluate plasma concentrations of ME in healthy human volunteers.
---
*Source: 101249-2012-03-22.xml* | 101249-2012-03-22_101249-2012-03-22.md | 45,474 | Bioanalytical Method Development and Validation of Memantine in Human Plasma by High Performance Liquid Chromatography with Tandem Mass Spectrometry: Application to Bioequivalence Study | Ravi Kumar Konda; B. R. Challa; Babu Rao Chandu; Kothapalli B. Chandrasekhar | Journal of Analytical Methods in Chemistry
(2012) | Chemistry and Chemical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101249 | 101249-2012-03-22.xml | ---
## Abstract
A simple, sensitive, and rapid HPLC-MS/MS method was developed and validated for quantitative estimation of memantine in human plasma. Chromatography was performed on Zorbax SB-C18 (4.6×75 mm, 3.5 μm) column. Memantine (ME) and internal standard Memantine-d6(MED6) were extracted by using liquid-liquid extraction and analyzed by LC-ESI-MS/MS using multiple-reaction monitoring (MRM) mode. The assay exhibited a linear dynamic range of 50.00–50000.00 pg/ml for ME in human plasma. This method demonstrated an intra- and interday precision within the range of 2.1–3.7 and 1.4–7.8%, respectively. Further intra- and interday accuracy was within the range of 95.6–99.8 and 95.7–99.1% correspondingly. The mean recovery of ME and MED6 was 86.07±6.87 and 80.31±5.70%, respectively. The described method was successfully employed in bioequivalence study of ME in Indian male healthy human volunteers under fasting conditions.
---
## Body
## 1. Introduction
Memantine (1-amino-3,5-dimethyladamantane hydrochloride) (Figure1) acting on the glutamatergic system by blocking N-methyl-D-aspartate (NMDA) glutamate receptors [1]. Memantine (ME) is used in Parkinson’s disease and movement disorders, and recently it has been demonstrated to be useful in dementia syndrome. The mode of action is thought to be due to prevention of damage to retinal ganglion as a result of increased intraocular pressure. The accumulation of a drug in melanin-rich tissues may have serious physiological consequences as it could lead to potentially toxic effects. Despite several investigations into the nature of drug melanin binding, the exact mechanism of the interaction remains unknown. ME is well absorbed, with peak plasma concentrations (Cmax) ranging from 22 to 46 ng/mL following a single dose of 20 mg. The time to achieve maximum plasma concentration (Tmax) following single doses of 10–40 mg ranges from 3 to 8 hr. The drug is 45% bound to plasma proteins presenting a distribution volume of approximately 9–11 L/kg, which suggests an extensive distribution into tissues. It is poorly metabolized by the liver, and 57–82% of the administered dose is excreted unchanged in the urine with a mean terminal half-life of 70 hr [1].Chemical structures of (a) Memantine hydrochloride and (b) Memantine-D6HCL.
(a)(b)There were few methods established previously to determine ME in a variety of matrices with different instruments. These methods include LC-MS [1–4], HPLC [5–8], GC-MS [9], and Micellar electrokinetic chromatography [10]. Among all methods LC-MS [1–4] has gained more importance.Liu et al. [1] developed the method with the linear concentration range of 0.2–200 ng/mL, with 0.2 ng/mL sensitivity. This sensitivity was improved by Almeida et al. [2]. They developed the method with the linear concentration range of 0.1 to 50 ng/mL, with 0.1 ng/mL sensitivity. Pan et al. [3] developed the method with the linear concentration range of 0.1 to 25 ng/mL. They used 0.5 mL plasma usage to get 0.1 ng/mL of sensitivity. Koeberle et al. [4] developed the method in different melanins.The reported methods do not show the usage of deuterated internal standard comparision with analyte which is most important in bioanalytical method development. All the reported methods develop the method with long run time and more amount of plasma sample for extraction.The purpose of this investigation was to develop a rapid, simple, sensitive, and selective LC-MS/MS method for the quantitative estimation of ME in less volume of human plasma using deuterated internal standard. It is also expected that this method would provide an efficient solution for pharmacokinetic, bioavailability, and/or bioequivalence studies of ME.
## 2. Materials and Methods
### 2.1. Chemicals
ME (99.9%) was obtained from Varda biotech Pvt. Ltd. Andheri, Mumbai, India. MED6 (99.0%) was obtained from the Toronto Research Chemicals, Toronto, Canada. Blank plasma lots were purchased from Navjeevan blood bank, Hyderabad. HPLC-grade methanol and acetonitrile were purchased from Jt. Baker, USA. Diethyl ether andn-hexane were purchased from Lab Scan, Asia Co. Ltd, Bangkok, Thailand. Formic acid and sodium hydroxide were purchased from Merck Mumbai, India. HPLC-grade water from Milli-Q System was used. All other chemicals used were analytical grade.
### 2.2. Instrumentation and Chromatographic Conditions
HPLC system (1200 series, Agilent Technologies, Germany) is connected with API 4000 triple quadrupole mass spectrometer (ABI-SCIEX, Toronto, Canada) using multiple reaction monitoring (MRM). A turbo electrospray interface in positive ionization mode was used. Data processing was performed on Analyst 1.4.1 software package (SCIEX). The chromatography was performed on a Zorbax SB-C18 (4.6×75 mm, 3.5 μm) (Agilent technologies,Germany) at 40°C temperature. The mobile phase composition was a mixture of 0.1% formic acid : acetonitrile (35 : 65 v/v) which was pumped at a flow-rate of 0.6 mL/min without split.
### 2.3. Preparation of Calibration Standards and Quality Control Samples
Standard stock solutions of ME (100.00μg/mL) and MED6 (100.00 μg/mL) were separately prepared in methanol. MED6 dilution (25.00 ng/mL) was made from MED6 standard stock solution with diluent (methanol: water 50 : 50 v/v). Standard stock solution of ME was added to drug-free human plasma to obtain ME calibration standards of 50.00, 100.00, 500.00, 1000.00, 5000.00, 10000.00, 20000.00, 30000.00, 40000.00, and 50000.00 pg/mL. Quality control (QC) samples were also prepared as a bulk on an independent weighing of standard drug at concentrations of 50.00 (LLOQ), 150.00 (LQC), 15000.00 (MQC), and 35000.00 pg/mL (HQC) from standard stock solutions of ME. The calibration standards and quality control samples were divided into aliquots in 5 mL Ria vials and stored in the freezer at below −30°C until analysis.
### 2.4. Sample Preparation
50μL of MED6 (25 ng/mL), 100 μL of plasma sample, and 100 μL of 10 mM NaOH were added into 5 mL Ria vials and vortexed briefly. This was followed by addition of 3 mL extraction solvent (diethyl ether : n-hexane 70 : 30 v/v) and vortexed for 10 min. Then samples were centrifuged at 4000 rpm for 5 min at ambient temperature conditions. Then, the supernatant from each sample was transferred into labelled vials by using the dry-ice acetone flash freeze technique and evaporated to dryness under nitrogen stream at 40°C. The dried residue was reconstituted with 400 μL of 0.1% of formic acid: acetonitrile (35 : 65 v/v) mixture and vortexed until dissolved. Finally, a 20 μL of each sample was transferred into auto sampler vials and injected into HPLC connected with mass spectrometer.
### 2.5. Recovery
Recovery of ME was evaluated by comparing the mean peak area of six extracted low, medium, and high (150.00, 1500.00, and 35000.00 pg/mL) quality control samples to the mean peak area of six aqueous standards with the same concentrations of low, medium, and high ME quality control samples.Similarly the recovery of MED6 was evaluated by comparing the mean peak area of extracted quality control samples to the mean peak area of MED6 in aqueous standards samples with the same concentrations of MED6.
### 2.6. Selectivity
The selectivity of the method was determined by blank human plasma samples from six different healthy human volunteers to test the potential interferences of endogenous compounds coeluted with ME and MED6. The Chromatographic peaks of ME and MED6 were identified on the basis of their retention times and MRM responses. The mean peak area of LOQ for ME and MED6 at corresponding retention time in blank samples should not be more than 20 and 5%, respectively.
### 2.7. Limit of Quantification (LOQ)
The LOQ was estimated in accordance with the baseline noise method at a signal-to-noise ratio (S/N) of 5. It was experimentally determined by injecting six samples with ME at the LLOQ concentration. The acceptance criterion for S/N was ≥5 and calculated by selecting the noise region as close as possible to the signal peak, which was at least 8 times of the signal peak width at half height.
### 2.8. Analytical Curves
The analytical curves of ME were constructed in the concentrations ranging from 50.00 to 50000.00 pg/mL in human plasma. The calibration curve was constructed by using instrument response (ratio of ME peak area to MED6 peak area) against the ME concentration (pg/mL) for four consecutive days by weighted1/x2 quadratic regression model. The fitness of calibration curve was confirmed by back-calculating the concentrations of calibration standards.
### 2.9. Calibration Curve Standards, Regression Model, Precision, and Accuracy Batches
Calibration curve standard samples and QC samples were prepared in replicates (n=6) for analysis. Correlation coefficients (r2) were obtained by using quadratic regression model in whole range of tested concentrations. The accuracy and precision for the back calculated concentrations of the calibration points should be within ±15% whereas those of LLOQ should be within ±20% of their nominal values.
### 2.10. Stability
Low and high QC samples (n=6) were retrieved from the deep freezer; samples were processed for three freeze/thaw cycles according to the clinical protocols. The samples were stored at −10°C to −30°C in three cycles of 24, 36, and 48 hr. In addition, the long-term stability of ME in QC samples was also evaluated after 76 days of storage at −10 to −30°C. The stability at refrigerated temperature was studied following 79 hr storage period in the autosampler tray. Bench top stability was studied for 26-hour period. Stability samples were processed and extracted along with the freshly spiked calibration curve standards. Stability of the stock solutions was proved for 24 days. The precision and accuracy for the stability samples were maintained within 15 and ±15%, respectively, of their nominal concentrations.
### 2.11. Matrix Effect
The matrix effect due to plasma matrix was used to evaluate ion suppression/enhancement in a signal by comparing the absolute response of QC samples after pretreatment (liquid-liquid extraction) with that of reconstituted samples extracted blank plasma sample spiked with analyte. Experiments were performed at low and high concentration levels in triplicate. The acceptable precision (%CV) should be ≤15%.
### 2.12. Analysis of Human Plasma Samples
The bioanalytical method described previously was applied to determine ME concentrations in plasma following oral administration to healthy adult human male volunteers below 25 years of age. The volunteers were contracted by Micro Therapeutics Research Labs Pvt Ltd., Chennai, India. They were screened before participation in the study and an informed consent was taken from them. These volunteers, were not undergone any other medication before conducting this study. To each of the 20 volunteers a tablet containing 10 mg of ME was orally administered along with a 240 mL of drinking water. Proper diet was provided to each volunteer as per the protocol. The reference product (Namenda tablets 10 mg, Forest laboratories, Ireland) and test product (Memantine tablets 10 mg) were used in the study. The study protocol was approved by IEC (Institutional Ethical Committee) and by ICMR (Indian Council of Medical Research). Blood samples were collected as predose (0 hr) 5 minutes prior to dosing followed by further samples at 1, 2, 3, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 12, 24, 48, and 72 hr. After dosing, a 5 mL blood sample was collected each preestablished time in vacutainers containing K2EDTA. A total of 34 samples (17 time points each for reference and test) were collected and centrifuged at 3200 rpm and10°C for 10 min. Then they were stored at −30°C until further analysis. Test and reference were administered to the same human volunteers under fasting conditions separately after a washing period of 18 days as per protocol approved by IEC.
### 2.13. Pharmacokinetics and Statistical Analysis
Pharmacokinetics parameters were calculated from plasma levels applying a noncompartmental statistics model using WinNon-Lin 5.0 software (Pharsight, USA). Following Food and Drug Administration (F.D.A) guideline [11, 12], blood samples were drawn up to a period of three to five times the terminal elimination half-life (t1/2) and it was considered as the area under the concentration time curve (AUC) ratio higher than 80%. The Cmax and Tmax values were determined by visual inspection of the plasma ME concentration-time profiles. The area under the concentration-time curve (AUC0-t) was obtained by the trapezoidal method. The total area under the curve (AUC0-∞) was calculated up to the last measureable concentration, and extrapolations were obtained by the last measureable concentration and the terminal elimination rate constant (Ke). The Ke was estimated from the slope of the terminal exponential phase of the plasma of ME concentration-time curve using linear regression method. The t1/2 was then calculated as 0.693/Ke. The AUC0-t, AUC0-∞, and Cmax bioequivalence were assessed by analysis of variance (ANOVA), and the standard 90% confidence intervals (90% CIs) of the ratios test/reference were calculated after transforming the data logarithmically. The bioequivalence was considered when the ratio of averages of log transformed data was within 80–125% for AUC0-t, AUC0-∞, and Cmax [11, 12].
## 2.1. Chemicals
ME (99.9%) was obtained from Varda biotech Pvt. Ltd. Andheri, Mumbai, India. MED6 (99.0%) was obtained from the Toronto Research Chemicals, Toronto, Canada. Blank plasma lots were purchased from Navjeevan blood bank, Hyderabad. HPLC-grade methanol and acetonitrile were purchased from Jt. Baker, USA. Diethyl ether andn-hexane were purchased from Lab Scan, Asia Co. Ltd, Bangkok, Thailand. Formic acid and sodium hydroxide were purchased from Merck Mumbai, India. HPLC-grade water from Milli-Q System was used. All other chemicals used were analytical grade.
## 2.2. Instrumentation and Chromatographic Conditions
HPLC system (1200 series, Agilent Technologies, Germany) is connected with API 4000 triple quadrupole mass spectrometer (ABI-SCIEX, Toronto, Canada) using multiple reaction monitoring (MRM). A turbo electrospray interface in positive ionization mode was used. Data processing was performed on Analyst 1.4.1 software package (SCIEX). The chromatography was performed on a Zorbax SB-C18 (4.6×75 mm, 3.5 μm) (Agilent technologies,Germany) at 40°C temperature. The mobile phase composition was a mixture of 0.1% formic acid : acetonitrile (35 : 65 v/v) which was pumped at a flow-rate of 0.6 mL/min without split.
## 2.3. Preparation of Calibration Standards and Quality Control Samples
Standard stock solutions of ME (100.00μg/mL) and MED6 (100.00 μg/mL) were separately prepared in methanol. MED6 dilution (25.00 ng/mL) was made from MED6 standard stock solution with diluent (methanol: water 50 : 50 v/v). Standard stock solution of ME was added to drug-free human plasma to obtain ME calibration standards of 50.00, 100.00, 500.00, 1000.00, 5000.00, 10000.00, 20000.00, 30000.00, 40000.00, and 50000.00 pg/mL. Quality control (QC) samples were also prepared as a bulk on an independent weighing of standard drug at concentrations of 50.00 (LLOQ), 150.00 (LQC), 15000.00 (MQC), and 35000.00 pg/mL (HQC) from standard stock solutions of ME. The calibration standards and quality control samples were divided into aliquots in 5 mL Ria vials and stored in the freezer at below −30°C until analysis.
## 2.4. Sample Preparation
50μL of MED6 (25 ng/mL), 100 μL of plasma sample, and 100 μL of 10 mM NaOH were added into 5 mL Ria vials and vortexed briefly. This was followed by addition of 3 mL extraction solvent (diethyl ether : n-hexane 70 : 30 v/v) and vortexed for 10 min. Then samples were centrifuged at 4000 rpm for 5 min at ambient temperature conditions. Then, the supernatant from each sample was transferred into labelled vials by using the dry-ice acetone flash freeze technique and evaporated to dryness under nitrogen stream at 40°C. The dried residue was reconstituted with 400 μL of 0.1% of formic acid: acetonitrile (35 : 65 v/v) mixture and vortexed until dissolved. Finally, a 20 μL of each sample was transferred into auto sampler vials and injected into HPLC connected with mass spectrometer.
## 2.5. Recovery
Recovery of ME was evaluated by comparing the mean peak area of six extracted low, medium, and high (150.00, 15000.00, and 35000.00 pg/mL) quality control samples to the mean peak area of six aqueous standards at the same concentrations. Similarly, the recovery of MED6 was evaluated by comparing the mean peak area of extracted quality control samples to the mean peak area of MED6 in aqueous standards at the same concentration.
## 2.6. Selectivity
The selectivity of the method was assessed using blank human plasma samples from six different healthy volunteers to test for potential interference from endogenous compounds coeluting with ME and MED6. The chromatographic peaks of ME and MED6 were identified on the basis of their retention times and MRM responses. In blank samples, the mean peak area at the retention time of ME should not exceed 20% of the LLOQ response, and that at the retention time of MED6 should not exceed 5% of the MED6 response.
## 2.7. Limit of Quantification (LOQ)
The LOQ was estimated by the baseline-noise method at a signal-to-noise ratio (S/N) of 5. It was determined experimentally by injecting six samples containing ME at the LLOQ concentration. The acceptance criterion was S/N ≥ 5, calculated by selecting a noise region as close as possible to the signal peak and at least 8 times the signal peak width at half height.
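For illustration, a minimal Python sketch of this baseline-noise S/N estimate (not the vendor software's algorithm; the chromatogram here is synthetic) might look as follows:

```python
import numpy as np

def signal_to_noise(intensity, peak_slice, noise_slice):
    """S/N = baseline-corrected peak height / peak-to-peak noise near the peak."""
    noise = intensity[noise_slice]
    peak_height = intensity[peak_slice].max() - noise.mean()
    return peak_height / (noise.max() - noise.min())

# Synthetic chromatogram: noisy baseline plus a Gaussian peak at t = 1.45 min.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 3.5, 700)
trace = 5.0 + rng.normal(0.0, 1.0, t.size) + 60.0 * np.exp(-((t - 1.45) / 0.03) ** 2)

sn = signal_to_noise(trace, peak_slice=slice(280, 300), noise_slice=slice(60, 220))
print(f"S/N = {sn:.1f} (acceptance criterion: >= 5)")
```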
## 2.8. Analytical Curves
Analytical curves for ME were constructed over the concentration range 50.00 to 50000.00 pg/mL in human plasma. The calibration curve was constructed by plotting the instrument response (ratio of ME peak area to MED6 peak area) against ME concentration (pg/mL) on four consecutive days, using a 1/x² weighted quadratic regression model. The fit of the calibration curve was confirmed by back-calculating the concentrations of the calibration standards.
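As an illustration of the weighted regression step, the following sketch fits a 1/x² weighted quadratic to simulated area-ratio data (the responses are made up, not the study's; note that NumPy's `polyfit` weights multiply the residuals, so `w = 1/x` yields squared-residual weights of 1/x²):

```python
import numpy as np

# Ten calibration levels (pg/mL) as in the method; responses are simulated
# from a small quadratic term plus noise and are illustrative only.
x = np.array([50, 100, 500, 1000, 5000, 10000, 20000, 30000, 40000, 50000], float)
rng = np.random.default_rng(1)
y = -9.4e-11 * x**2 + 9.2e-5 * x + 3.0e-4 + rng.normal(0, 2e-5, x.size) * np.sqrt(x / 50)

# np.polyfit's `w` multiplies the residuals, so w = 1/x gives the 1/x^2
# weighting of squared residuals used for this calibration model.
a, b, c = np.polyfit(x, y, deg=2, w=1.0 / x)
print(f"y = {a:.3e}*x^2 + {b:.3e}*x + {c:.3e}")
```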
## 2.9. Calibration Curve Standards, Regression Model, Precision, and Accuracy Batches
Calibration curve standards and QC samples were prepared in replicates (n=6) for analysis. Correlation coefficients (r²) were obtained using the quadratic regression model over the whole tested concentration range. The accuracy and precision of the back-calculated concentrations of the calibration points should be within ±15% of their nominal values, and within ±20% for the LLOQ.
## 2.10. Stability
Low and high QC samples (n=6) were retrieved from the deep freezer and subjected to three freeze/thaw cycles according to the clinical protocols, with storage at −10°C to −30°C for 24, 36, and 48 hr between cycles. In addition, the long-term stability of ME in QC samples was evaluated after 76 days of storage at −10°C to −30°C. Autosampler stability was studied after a 79 hr storage period in the autosampler tray, and bench-top stability was studied over a 26-hour period. Stability samples were processed and extracted along with freshly spiked calibration curve standards. Stability of the stock solutions was demonstrated for 24 days. The precision and accuracy of the stability samples were within ±15% of their nominal concentrations.
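A sketch of the acceptance check applied to such stability samples, using the freeze-thaw QC values later reported in Table 3 (the 85 to 115% window is the ±15% criterion restated):

```python
# Measured freeze-thaw QC values taken from Table 3; acceptance window assumed
# as 85-115% of nominal (i.e., +/-15%).
def percent_of_nominal(measured: float, nominal: float) -> float:
    return 100.0 * measured / nominal

for nominal, measured in [(150.00, 153.02), (35000.00, 34818.00)]:
    pct = percent_of_nominal(measured, nominal)
    print(f"{nominal:>9.2f} pg/mL -> {pct:.2f}% of nominal, pass={85.0 <= pct <= 115.0}")
```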
## 2.11. Matrix Effect
The matrix effect (ion suppression or enhancement of the signal by the plasma matrix) was evaluated by comparing the absolute response of samples prepared by spiking the analyte into blank plasma extract (after liquid-liquid extraction) with that of the analyte in neat reconstitution solution. Experiments were performed at low and high concentration levels in triplicate. The acceptable precision (%CV) should be ≤15%.
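The comparison can be sketched as follows (peak areas are illustrative; `matrix_factor` is a hypothetical name for the post-extraction/neat response ratio, not a term from the paper):

```python
import numpy as np

def percent_cv(values) -> float:
    values = np.asarray(values, float)
    return 100.0 * values.std(ddof=1) / values.mean()

post_extraction = [10250, 10410, 10120]  # blank extract spiked after LLE (one QC level)
neat            = [10890, 10760, 10930]  # same concentration in reconstitution solution

matrix_factor = np.mean(post_extraction) / np.mean(neat)  # <1 suppression, >1 enhancement
print(f"matrix factor = {matrix_factor:.2f}, %CV = {percent_cv(post_extraction):.1f}")
```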
## 2.12. Analysis of Human Plasma Samples
The bioanalytical method described above was applied to determine ME concentrations in plasma following oral administration to healthy adult male volunteers below 25 years of age. The volunteers were contracted by Micro Therapeutics Research Labs Pvt. Ltd., Chennai, India. They were screened before participation in the study, and informed consent was obtained from each of them. The volunteers received no other medication before the study. Each of the 20 volunteers received a tablet containing 10 mg of ME orally with 240 mL of drinking water, and a proper diet was provided to each volunteer as per the protocol. The reference product (Namenda tablets 10 mg, Forest Laboratories, Ireland) and the test product (memantine tablets 10 mg) were used in the study. The study protocol was approved by the IEC (Institutional Ethical Committee) and by the ICMR (Indian Council of Medical Research). A predose blood sample (0 hr) was collected 5 minutes before dosing, followed by further samples at 1, 2, 3, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 12, 24, 48, and 72 hr after dosing. At each preestablished time point, a 5 mL blood sample was collected in vacutainers containing K2EDTA. A total of 34 samples per volunteer (17 time points each for reference and test) were collected and centrifuged at 3200 rpm and 10°C for 10 min, and the plasma was stored at −30°C until analysis. Test and reference products were administered to the same volunteers under fasting conditions on separate occasions, with a washout period of 18 days as per the IEC-approved protocol.
## 2.13. Pharmacokinetics and Statistical Analysis
Pharmacokinetic parameters were calculated from the plasma levels by noncompartmental analysis using WinNonlin 5.0 software (Pharsight, USA). Following Food and Drug Administration (FDA) guidelines [11, 12], blood samples were drawn over a period of three to five times the terminal elimination half-life (t1/2), so that the ratio of AUC0-t to the total area under the concentration-time curve (AUC) exceeded 80%. The Cmax and Tmax values were determined by visual inspection of the plasma ME concentration-time profiles. The area under the concentration-time curve up to the last measurable concentration (AUC0-t) was obtained by the trapezoidal method. The total area under the curve (AUC0-∞) was calculated as AUC0-t plus the extrapolated portion, obtained by dividing the last measurable concentration by the terminal elimination rate constant (Ke). Ke was estimated by linear regression from the slope of the terminal exponential phase of the plasma ME concentration-time curve, and t1/2 was then calculated as 0.693/Ke. Bioequivalence with respect to AUC0-t, AUC0-∞, and Cmax was assessed by analysis of variance (ANOVA), and the standard 90% confidence intervals (90% CIs) of the test/reference ratios were calculated after logarithmic transformation of the data. Bioequivalence was concluded when the ratio of the averages of the log-transformed data was within 80–125% for AUC0-t, AUC0-∞, and Cmax [11, 12].
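A self-contained sketch of these noncompartmental calculations (trapezoidal AUC0-t, Ke from a log-linear fit of the terminal phase, t1/2 = 0.693/Ke, AUC0-∞ = AUC0-t + Clast/Ke, and a 90% CI on log-transformed ratios) is shown below; the concentration-time data and Cmax values are illustrative, not the study's, and SciPy is assumed:

```python
import numpy as np
from scipy import stats

# Illustrative concentration-time data (pg/mL), not the study's values.
t = np.array([0, 1, 2, 3, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 12, 24, 48, 72], float)
c = np.array([0, 800, 2500, 5600, 9000, 10500, 11800, 12900, 13600, 14100,
              14368, 14100, 13600, 11000, 7200, 3300, 1500], float)

# AUC0-t by the linear trapezoidal rule.
auc_0t = float(np.sum((t[1:] - t[:-1]) * (c[1:] + c[:-1]) / 2.0))

# Ke from a log-linear fit of the terminal phase (last four points here),
# then t1/2 = 0.693/Ke and AUC0-inf = AUC0-t + Clast/Ke.
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
ke = -slope
t_half = 0.693 / ke
auc_inf = auc_0t + c[-1] / ke
print(f"AUC0-t={auc_0t:.0f}, Ke={ke:.4f} 1/h, t1/2={t_half:.1f} h, AUC0-inf={auc_inf:.0f}")

# 90% CI for the test/reference ratio from paired, log-transformed Cmax values
# (hypothetical five-subject data); bioequivalence requires 80-125%.
ref = np.array([14400, 13100, 15800, 12900, 16500], float)
test = np.array([14300, 13500, 15200, 13400, 16100], float)
d = np.log(test) - np.log(ref)
ci_lo, ci_hi = stats.t.interval(0.90, d.size - 1, loc=d.mean(), scale=stats.sem(d))
print(f"90% CI for T/R: {100 * np.exp(ci_lo):.1f}%-{100 * np.exp(ci_hi):.1f}%")
```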
## 3. Results and Discussion
### 3.1. Method Development and Validation
Tuning the mass spectrometry parameters, fragmentation pattern, and ionization mode is the main task in obtaining the respective fragment ions and responses for ME and MED6, which are shown in Figures 2(a)–2(d). ESI-LC-MS/MS is a powerful technique for pharmacokinetic studies because it meets the sensitivity and selectivity requirements of such analytical methods. The MRM technique was chosen for assay development, and the MRM parameters were optimized to maximize the response for the analyte. Figure 2: (a) mass spectrum of the memantine parent ion (Q1); (b) mass spectrum of the memantine product ion (Q3); (c) mass spectrum of the memantine-D6 parent ion; (d) mass spectrum of the memantine-D6 product ion (Q3).
The instrumental parameters of the mass spectrometer were optimized. The source temperature was 600°C. The gas pressures of the nebulizer, heater, curtain, and CAD gases were 40, 30, 20, and 4 psi, respectively. The ion spray voltage, entrance potential, declustering potential, collision energy, and collision cell exit potential were optimized at 5500, 10, 50, 32, and 12 V, respectively. The dwell time was 400 milliseconds for both ME and MED6.

The product ion (Q3) mass spectra of ME and MED6 are shown in Figures 2(b) and 2(d). [M + H]+ was the predominant ion in the Q1 spectrum; the Q1 ions for ME and MED6 were at m/z 180.2 and 186.1, respectively, and were used as the precursor ions to obtain the product ion spectra. The collisionally activated dissociation (CAD) mass spectrum of ME shows characteristic product ions at m/z 161.8, 163.2, and 165.1. The major product ion at m/z 163.2 for ME is consistent with a loss of 17 Da (ammonia) from the protonated 1-amino-3,5-dimethyladamantane (memantine) precursor at m/z 180.2. The CAD mass spectrum of MED6 shows a characteristic product ion at m/z 169.2, which likewise arose from the protonated 3,5-dimethyl-d6-adamantan-1-amine precursor. The most sensitive mass transitions were m/z 180.2 → 163.2 for ME and m/z 186.1 → 169.2 for MED6 (proposed fragmentation: Figure 2(a) → Figure 2(b) and Figure 2(c) → Figure 2(d)). The inherent selectivity of MS-MS detection was also expected to be beneficial in developing a selective and sensitive method.

The chromatographic conditions, particularly the mobile phase composition and flow rate, the choice of column, the injection volume, the column oven and autosampler temperatures, and the splitting of the sample into the ion source, as well as a short run time, were optimized through several trials to achieve good resolution and symmetric peak shapes for ME and MED6. A mixture of 0.1% formic acid : acetonitrile (35 : 65 v/v) achieved this purpose and was finally adopted as the mobile phase. The formic acid was necessary to lower the pH and protonate the ME, delivering good peak shape; its percentage was optimized to maintain this peak shape while remaining consistent with good ionization and fragmentation in the mass spectrometer. The high proportion of organic solvent eluted both ME and MED6 at a retention time of 1.45 ± 0.2 min at a flow rate of 0.6 mL/min, produced good peak shapes, and permitted a run time of 3.5 min.

Liquid-liquid extraction (LLE) was used for sample preparation in this work. LLE helps to clean the samples, and clean samples are essential for minimizing ion suppression and matrix effects in LC-MS/MS analyses. Several organic solvents and their mixtures in different combinations and ratios were evaluated; diethyl ether/n-hexane (70 : 30) was found to be optimal, producing a clean chromatogram for a blank plasma sample and yielding the highest recovery of ME and MED6 from plasma. Memantine-D6 hydrochloride was used as the internal standard. Clean chromatograms were obtained, and no significant direct interferences were observed in the MRM channels at the relevant retention times.
### 3.2. Selectivity
The selectivity of the method was examined by analyzing blank human plasma extracts (n=6). A representative blank plasma chromatogram is shown in Figure 3(a); the other blanks studied were similar, with no significant direct interference from endogenous substances in drug-free human plasma at the retention time of the analyte. Figure 3: (a) MRM chromatogram of blank plasma; (b) chromatogram at the LOQ.
### 3.3. Limit of Quantification (LOQ)
The signal-to-noise (S/N) value found for 6 injections of ME at the LOQ concentration was 11.93. Figure 3(b) shows a representative ion chromatogram at the LOQ (50 pg/mL) with a 20 μL injection volume.
### 3.4. Linearity, Precision, and Accuracy
The ten-point calibration curve was constructed over the concentration range 50.00–50000.00 pg/mL. The calibration model was selected based on analysis of the data by quadratic regression with intercept and a 1/x² weighting factor. The best fit was achieved with the 1/x² weighting factor, giving a mean regression equation for the calibration curve of y = −9.427 × 10⁻¹¹x² + 9.194 × 10⁻⁵x + 2.989 × 10⁻⁴ (of the form y = ax² + bx + c), where y is the peak-area ratio of ME to MED6 and x is the concentration of ME in plasma (Table 1). In the between-batch experiments, precision ranged from 1.4 to 2.7% and accuracy from 95.7 to 99.1% (Table 2); in the within-batch experiments, precision ranged from 2.1 to 2.3% and accuracy from 95.6 to 99.8%.

Table 1
Concentration data from validation.
| Spiked plasma concentration (pg/mL) | Concentration measured (pg/mL), mean ± SD (n=5) | Precision (%CV) | Accuracy (%) |
|---|---|---|---|
| 50.00 | 49.92 ± 0.44 | 0.80 | 99.23 |
| 100.00 | 100.21 ± 1.91 | 1.90 | 98.14 |
| 500.00 | 502.73 ± 6.83 | 1.40 | 98.65 |
| 1000.00 | 1000.54 ± 11.53 | 1.10 | 98.92 |
| 5000.00 | 5005.06 ± 38.75 | 0.80 | 99.28 |
| 10000.00 | 9978.95 ± 160.56 | 1.60 | 98.45 |
| 20000.00 | 19871.04 ± 303.46 | 1.50 | 98.53 |
| 30000.00 | 29759.82 ± 508.47 | 1.70 | 98.37 |
| 40000.00 | 40310.88 ± 123.85 | 0.30 | 99.74 |
| 50000.00 | 50084.85 ± 266.72 | 0.50 | 99.55 |

Table 2
Precision and accuracy (analysis with spiked plasma samples at three different concentrations).
| Spiked plasma concentration (pg/mL) | Within-run (n=6): measured (pg/mL), mean ± SD | Precision (%CV) | Accuracy (%) | Between-run (n=30): measured (pg/mL), mean ± SD | Precision (%CV) | Accuracy (%) |
|---|---|---|---|---|---|---|
| 150.00 | 143.40 ± 3.20 | 2.20 | 95.60 | 143.50 ± 3.90 | 2.70 | 95.70 |
| 15000.00 | 14746.40 ± 338.40 | 2.30 | 98.30 | 14719.10 ± 248.30 | 1.70 | 98.10 |
| 35000.00 | 34935.20 ± 730.40 | 2.10 | 99.80 | 34699.20 ± 498.40 | 1.40 | 99.10 |
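Using the mean regression equation reported above, back-calculation inverts the quadratic to recover a concentration from a measured area ratio. A minimal sketch (the root selection below was verified against the 50–50000 pg/mL calibration branch; the example area ratios are illustrative):

```python
import math

# Mean calibration coefficients reported above: y = a*x^2 + b*x + c.
a, b, c = -9.427e-11, 9.194e-5, 2.989e-4

def back_calculate(y: float) -> float:
    """Concentration (pg/mL) from a peak-area ratio y via the quadratic formula."""
    disc = b * b - 4.0 * a * (c - y)
    # With a < 0, this root is the one lying on the 50-50000 pg/mL branch.
    return (-b + math.sqrt(disc)) / (2.0 * a)

# Area ratios roughly corresponding to the LLOQ, 1000 pg/mL, and the ULOQ.
for y in (0.0049, 0.0922, 4.3616):
    print(f"area ratio {y:.4f} -> {back_calculate(y):.0f} pg/mL")
```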
### 3.5. Recovery
The recoveries of ME at the low (150.00 pg/mL), medium (15000.00 pg/mL), and high (35000.00 pg/mL) plasma concentrations, with six replicate injections each, were 79.45 ± 6.20%, 91.25 ± 5.9%, and 87.52 ± 2.59%, respectively. The overall recovery of ME was 86.07% ± 6.87%. Similarly, the extraction recovery of MED6 (25.00 ng/mL) was 80.31% ± 5.70%. Recoveries of ME and MED6 were high, precise, and reproducible; the assay therefore proved robust for high-throughput bioanalysis.
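The recovery computation itself reduces to a ratio of mean peak areas; a minimal sketch with illustrative areas (not measured values):

```python
import numpy as np

extracted = np.array([8120, 7980, 8340, 8050, 8210, 7890], float)     # QC after LLE
aqueous   = np.array([10110, 9950, 10230, 10060, 9990, 10180], float) # unextracted

recovery = 100.0 * extracted.mean() / aqueous.mean()
print(f"recovery = {recovery:.2f}%")
```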
### 3.6. Stability Studies
Quantification of ME in plasma subjected to 3 freeze-thaw cycles (−30°C to room temperature) demonstrated the stability of the analyte, with concentrations ranging from 98.00 to 104.00% of nominal. No significant degradation was observed after a 79-hour storage period in the autosampler tray, with final ME concentrations between 100.00 and 105.00%. The room temperature stability of ME in QC samples after 26 hr was also evaluated; concentrations ranged between 99.00 and 102.00%. In addition, the long-term stability of low and high QC samples after 76 days of storage at −30°C was evaluated, and concentrations ranged from 98.00 to 103.00%. These results confirm the stability of ME in human plasma for at least 76 days at −30°C (Table 3).

Table 3
Stability of Memantine in human plasma samples.
| Spiked plasma concentration (pg/mL, n=6) | Concentration measured (pg/mL) | Precision (%CV) | Accuracy (%) |
|---|---|---|---|
| *Room temperature stability for 26 hr in plasma* | | | |
| 150.00 | 151.23 | 1.00 | 100.82 |
| 35000.00 | 34926.50 | 0.80 | 99.79 |
| *Three freeze-thaw cycles* | | | |
| 150.00 | 153.02 | 2.30 | 102.01 |
| 35000.00 | 34818.00 | 0.90 | 99.48 |
| *Autosampler stability for 79 hr* | | | |
| 150.00 | 154.59 | 1.40 | 103.06 |
| 35000.00 | 35213.50 | 0.90 | 100.61 |
| *Stability for 76 days at −30°C* | | | |
| 150.00 | 151.53 | 3.20 | 101.02 |
| 35000.00 | 35080.50 | 1.40 | 100.23 |
### 3.7. Application to Biological Samples
The proposed method was applied to the determination of ME in plasma samples for the purpose of establishing the bioequivalence of a single dose (10 mg tablet) in 20 healthy human volunteers. Typical plasma concentration versus time profiles are shown in Figure 4. Plasma concentrations of ME were within the standard curve range and remained above the LLOQ for the entire sampling period. The pharmacokinetic parameters of the test and reference products are shown in Tables 4 and 5. The mean ratio of AUC0-t/AUC0-∞ was higher than 90%, in accordance with the FDA bioequivalence guideline [11, 12]. The test/reference (T/R) ratios and 90% confidence intervals (90% CIs) for the overall analysis fell within the stipulated range (80–125%). Therefore, it can be concluded that the two ME formulations (reference and test) are bioequivalent in terms of rate and extent of absorption under fasting conditions.

Table 4
Mean pharmacokinetic parameters of Memantine in 20 healthy human volunteers after oral administration of 10 mg test and reference products.
Pharmacokinetic details of memantine in human plasma:

| Pharmacokinetic parameter | Reference (mean ± SD) | Test (mean ± SD) |
|---|---|---|
| Cmax (pg/mL) | 14368.57 ± 4044.16 | 14328 ± 4324.76 |
| AUC0-t (pg·hr/mL) | 654545.5 ± 70423.12 | 674564.4 ± 67858.99 |
| AUC0-∞ (pg·hr/mL) | 1053469.0 ± 77690.79 | 1136607 ± 74862.04 |
| Tmax (hr) | 7.0 | 7.5 |
| t1/2 (hr) | 49.29 | 53.35 |

AUC0-∞: area under the curve extrapolated to infinity. AUC0-t: area under the curve up to the last sampling time. Cmax: maximum plasma concentration. Tmax: time to reach peak concentration.

Table 5
Pharmacokinetic parameters of memantine after administration of 10 mg of test and reference products in 20 healthy human volunteers.
| Pharmacokinetic parameter | Cmax (T/R) | AUC0-t (T/R) | AUC0-∞ (T/R) |
|---|---|---|---|
| Test/Ref (%) | 99.72 | 103.06 | 107.89 |

Figure 4
Mean plasma concentration-time profiles of the test and reference products after a single oral 10 mg dose (one 10 mg tablet) in 20 healthy volunteers.
## 4. Conclusion
A simple, highly sensitive, specific, rugged, and reproducible LC-MS/MS method for the determination of memantine in human plasma was developed and validated per FDA guidelines. The method was successfully applied in a bioequivalence study to evaluate plasma concentrations of ME in healthy human volunteers.
---
*Source: 101249-2012-03-22.xml* | 2012 |
# Capecitabine Regulates HSP90AB1 Expression and Induces Apoptosis via Akt/SMARCC1/AP-1/ROS Axis in T Cells
**Authors:** Sai Zhang; Shunli Fan; Zhenglu Wang; Wen Hou; Tao Liu; Sei Yoshida; Shuang Yang; Hong Zheng; Zhongyang Shen
**Journal:** Oxidative Medicine and Cellular Longevity
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012509
---
## Abstract
Transplant oncology is a newly emerging discipline integrating oncology, transplant medicine, and surgery and has brought malignancy treatment into a new era via transplantation. In this context, obtaining a drug with both immunosuppressive and antitumor effects can take into account the dual needs of preventing both transplant rejection and tumor recurrence in liver transplantation patients with malignancies. Capecitabine (CAP), a classic antitumor drug, has been shown to induce reactive oxygen species (ROS) production and apoptosis in tumor cells. Meanwhile, we have demonstrated that CAP can induce ROS production and apoptosis in T cells to exert immunosuppressive effects, but its underlying molecular mechanism is still unclear. In this study, metronomic doses of CAP were administered to normal mice by gavage, and the spleen was selected for quantitative proteomic and phosphoproteomic analysis. The results showed that CAP significantly reduced the expression of HSP90AB1 and SMARCC1 in the spleen. It was subsequently confirmed that CAP also significantly reduced the expression of HSP90AB1 and SMARCC1 and increased ROS and apoptosis levels in T cells. The results of in vitro experiments showed that HSP90AB1 knockdown resulted in a significant decrease in p-Akt, SMARCC1, p-c-Fos, and p-c-Jun expression levels and a significant increase in ROS and apoptosis levels. HSP90AB1 overexpression significantly inhibited CAP-induced T cell apoptosis by increasing the p-Akt, SMARCC1, p-c-Fos, and p-c-Jun expression levels and reducing the ROS level. In conclusion, HSP90AB1 is a key target of CAP-induced T cell apoptosis via the Akt/SMARCC1/AP-1/ROS axis, which provides a novel understanding of CAP-induced T cell apoptosis and lays the experimental foundation for further exploring CAP as an immunosuppressant with antitumor effects to optimize the medication regimen for transplantation patients.
---
## Body
## 1. Introduction
As a prodrug of 5-fluorouracil (5-FU), CAP is converted into 5-FU sequentially by carboxylesterase (CES), cytidine deaminase (CDA), and thymidine phosphorylase (TP) to exert antitumor effects [1–3]. As a key enzyme in CAP transformation, TP is expressed in many tumor tissues, including colorectal cancer (CRC) and hepatocellular carcinoma (HCC), and it is more concentrated in tumor tissues than in adjacent tissues [4–7]. This distribution is the reason for the significant tumor-targeting capability of CAP. CAP is the first-line therapeutic drug for CRC, and clinical studies have also confirmed that CAP, especially at the dosage used in metronomic chemotherapy (a novel type of chemotherapy featuring low dosage and uninterrupted administration), has a good effect in treating HCC [8–11]. In addition, the expression of TP in T cells has been confirmed, which lays the pharmacodynamic foundation for the conversion of CAP to 5-FU in T cells [12]. Earlier experiments showed that CAP can induce apoptosis in T cells, which confirmed this view [12]. Therefore, CAP may be a potential immunosuppressant with an antitumor effect. Obtaining an immunosuppressant with an antitumor effect has great clinical application value: in the context of transplant oncology, although liver transplantation has become an important treatment for HCC and nonresectable colorectal liver metastases, exposure to postoperative immunosuppressive therapy contributes to increased tumor recurrence and poor outcomes [13–15]. Therefore, a drug with both immunosuppressive and antitumor effects can take into account the dual needs of preventing both transplant rejection and tumor recurrence.Apoptosis is a kind of programmed cell death, and its role in the antitumor effect of CAP has been confirmed [16, 17]. Induction of T cell apoptosis is one of the classic antirejection mechanisms of immunosuppressants [18–20]. As oxygen-containing chemically reactive molecules, ROS are closely related to T cell apoptosis and activation [21]. On this basis, we explored the immunosuppressive effect of CAP and recently showed that CAP can induce T cell ROS production, subsequently leading to apoptosis, but the molecular mechanism behind this remained unknown [12]. In recent years, the development of mass spectrometry technology has made it possible to identify proteins on a large scale, enabling the underlying mechanism of the immunosuppressive effect of CAP to be explored [22–24]. So far, there has been no proteomics research on the immunosuppressive effect of CAP. Therefore, in the present study, metronomic chemotherapy doses of CAP were administered to normal mice, and quantitative proteomics and phosphoproteomics were applied to search for the target proteins of CAP-induced T cell apoptosis. The results of proteomic analysis showed that CAP significantly reduced HSP90AB1 and SMARCC1 expression. HSP90AB1, one of the HSP90 subtypes, has a critical role in tumorigenesis and progression and is also closely related to T cell activity [25–28]. SMARCC1, a member of the SWI/SNF DNA chromatin remodeling complex family, is also associated with tumor progression and T cell activity [29, 30]. As previously mentioned, HSP90AB1 can regulate Akt expression, and Akt can regulate the expression of SMARCC1 [26, 31]. Jeong et al. showed that SMARCC1 can reduce the expression of AP-1 to regulate T cell activity [30].
AP-1 is a transcription factor consisting of a homodimer or heterodimer of the Jun and Fos families [32]. Previous studies have demonstrated that AP-1 can regulate GCLM and HO-1 expression, which in turn influences the production of ROS [33]. ROS production, as previously mentioned, is closely associated with apoptosis [12, 21, 34]. We hypothesized that CAP targets HSP90AB1, thereby inhibiting the Akt/SMARCC1/AP-1 axis, inducing ROS production, and leading to T cell apoptosis. In addition, considering the role of HSP90AB1 in tumorigenesis and progression, HSP90AB1 may also be a key target of CAP's antitumor effect. In this study, we focused on the underlying mechanism of T cell apoptosis, attempting to confirm that CAP induces T cell apoptosis via the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis.
## 2. Materials and Methods
### 2.1. Animals
All animals were obtained from China National Institutes for Food and Drug Control. Male Balb/c mice aged 6–8 weeks were gavaged with metronomic doses of CAP (100 mg/kg/day) (Solarbio, Beijing, China) [12, 35]. At days 0, 7, 14, and 21, mice (n=10) were sacrificed. All animal experiments followed the ARRIVE guidelines and were approved by the Ethics Committee of Nankai University.
### 2.2. Total Protein Extraction and Protein Quality Test
The spleen sample (n=4/group) was lysed with PASP lysis buffer. After the lysate was centrifuged, the supernatant was reduced with 10 mM DTT for 1 h and then alkylated with IAM for 1 h. The samples were mixed with acetone for 2 h, and the precipitate was collected after centrifugation [36]. The protein concentration of the sample was calculated using a Bradford protein quantification kit (Solarbio, Beijing, China).
### 2.3. TMT Labeling of Peptides and Separation of Fractions
DB dissolution buffer, trypsin, TEAB, and CaCl2 were added successively to the sample solution. The supernatant was collected and mixed with TMT labeling reagent [37]. The sample was fractionated, and the eluates were collected and combined into 10 fractions.
### 2.4. LC-MS/MS Analysis
Shotgun proteomics analyses were performed using a Q Exactive™ HF-X mass spectrometer coupled to an EASY-nLC™ 1200 UHPLC system, on which the peptides were separated and analyzed.
### 2.5. Data Analysis
Raw data were searched against the UniProt database. Compared with day 0, proteins on days 7, 14, and 21 whose quantitation differed significantly (P<0.05 and FC≥1.2 or FC≤0.83) were defined as differentially expressed proteins (DEPs). Next, databases including ProDom, Pfam, PRINTS, ProSite, SMART, and PANTHER were used to perform GO and IPR functional analyses. Protein family and pathway analyses were performed using the COG and KEGG databases, and the STRING-db server was used to analyze probable protein-protein interactions.
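A sketch of this DEP filter (the column names and values below are hypothetical, not the authors' pipeline):

```python
import pandas as pd

# Hypothetical day 7 vs. day 0 quantitation (protein names and values illustrative).
df = pd.DataFrame({
    "protein": ["HSP90AB1", "SMARCC1", "ACTB", "GAPDH"],
    "fold_change": [0.62, 0.74, 1.05, 1.28],
    "p_value": [0.003, 0.012, 0.610, 0.048],
})

# DEP criterion from the text: P < 0.05 with FC >= 1.2 (up) or FC <= 0.83 (down).
is_dep = (df["p_value"] < 0.05) & ((df["fold_change"] >= 1.2) | (df["fold_change"] <= 0.83))
print(df[is_dep])  # here HSP90AB1/SMARCC1 flagged as down, GAPDH as up
```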
### 2.6. Sorting Primary CD3+ T Cells
Anti-CD3 microbeads (Miltenyi Biotec, Germany) were used to sort CD3+ T cells from mouse spleen. Then, anti-CD3 antibody (2 μg/mL) and anti-CD28 antibody (1 μg/mL) (BioLegend, San Diego, CA, USA) were used to activate the T cells.
### 2.7. Immunohistochemistry (IHC) Assay
HSP90AB1 and SMARCC1 (Abcam, Cambridge, UK) expression levels in the spleen were determined by IHC assay. The images were acquired using a microscope (400x magnification).
### 2.8. Transfection of siRNA and Plasmid
The sequence of HSP90AB1 siRNA was 5′-GCCCUGGACAAGAUUCGAUTT-3′. The sequence of SMARCC1 siRNA was 5′-GCAGAUGCUCCUACCAAUATT-3′ (GenePharma, Shanghai, China). Lipofectamine 3000 Transfection Reagent (Thermo Fisher, Waltham, MA, USA) was used for siRNA transfection. The HSP90AB1 plasmid was purchased from Genechem Corporation (Shanghai, China), which was also transfected using Lipofectamine 3000 Transfection Reagent.
### 2.9. Apoptosis and ROS Measurement
Apoptosis and ROS detection were performed using an Apoptosis Kit (Solarbio, Beijing, China) and a Reactive Oxygen Species Assay Kit (Solarbio, Beijing, China) according to the manufacturers' instructions [12]. FITC-Annexin V and propidium iodide (PI) dyes were added successively to the samples, and the proportion of apoptotic cells was analyzed using flow cytometry. T cells were collected and stained with DCFH-DA, and the mean fluorescence intensity (MFI) of DCF was detected using flow cytometry.
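A minimal sketch of how such Annexin V/PI quadrant percentages and the DCF MFI are derived from per-cell intensities (synthetic data and arbitrary gate values, not the kit workflow):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
annexin = rng.lognormal(1.0, 1.0, n)  # FITC-Annexin V intensity per cell
pi = rng.lognormal(0.5, 1.0, n)       # PI intensity per cell
dcf = rng.lognormal(2.0, 0.5, n)      # DCF intensity per cell (ROS readout)

ANNEXIN_GATE, PI_GATE = 10.0, 8.0     # illustrative gates set from unstained controls
early = 100 * np.mean((annexin > ANNEXIN_GATE) & (pi <= PI_GATE))  # Annexin+/PI-
late = 100 * np.mean((annexin > ANNEXIN_GATE) & (pi > PI_GATE))    # Annexin+/PI+
print(f"early apoptosis {early:.1f}%, late apoptosis {late:.1f}%, DCF MFI {dcf.mean():.1f}")
```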
### 2.10. Cellular Reduced Glutathione (GSH) Measurement
GSH detection was performed using the GSH Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's instructions [38]. After 5-FU treatment, T cells were lysed by sonication and centrifuged, and the supernatant was collected. GSH standards (20 μmol/L) and working solution were then prepared, the sample was mixed with the working solution, and the OD value was measured at 405 nm.
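Assuming the common single-standard colorimetric formula (an assumption for illustration, not quoted from the kit manual), the conversion from OD to concentration is:

```python
# All OD values and the formula below are assumptions for illustration:
# GSH = (OD_sample - OD_blank) / (OD_standard - OD_blank) * standard concentration.
OD_BLANK, OD_STANDARD, STD_CONC_UM = 0.052, 0.418, 20.0  # standard = 20 umol/L

def gsh_umol_per_l(od_sample: float) -> float:
    return (od_sample - OD_BLANK) / (OD_STANDARD - OD_BLANK) * STD_CONC_UM

print(f"GSH = {gsh_umol_per_l(0.305):.1f} umol/L")
```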
### 2.11. Western Blotting
The expression levels of HSP90AB1, SMARCC1,p-Akt, BAX, BCL2, GCLM, GCLC, HO-1 (Abcam, Cambridge, UK), Caspase3 (CST, Massachusetts, USA), p-HSP90AB1, p-c-Fos, and p-c-Jun (ImmunoWay, Texas, USA) were detected. The ImageJ 7.0 software was used to analyze the bands.
### 2.12. Statistical Analysis
SPSS 13.0 (SPSS GmbH, Munich, Germany) and GraphPad 8.0 (GraphPad Software, La Jolla, CA, USA) were used to analyze the data. Data are expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was used to determine differences between groups, and P<0.05 was considered statistically significant.
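A minimal sketch of the one-way ANOVA comparison via SciPy (the group values below are illustrative, not study data):

```python
from scipy.stats import f_oneway

# Illustrative apoptosis percentages for three groups (not study data).
nc = [6.1, 5.8, 6.5, 6.0, 6.3]                # negative control
si_hsp90ab1 = [14.2, 15.1, 13.8, 14.9, 14.4]  # HSP90AB1 siRNA
fu = [18.5, 19.2, 17.8, 18.9, 18.1]           # 5-FU treated

f_stat, p_value = f_oneway(nc, si_hsp90ab1, fu)
print(f"F = {f_stat:.1f}, P = {p_value:.2e} (significant if P < 0.05)")
```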
## 3. Results
### 3.1. Quantitative Proteomic and Phosphoproteomic Analysis Revealed that CAP Inhibits HSP90AB1 and SMARCC1 Protein Expression In Vivo
Previous studies have confirmed that CAP can induce T cell apoptosis and has the potential to be an effective immunosuppressant, but the underlying molecular mechanism remains unknown [12]. In the present study, normal mice were gavaged with metronomic chemotherapy doses of CAP, and the spleen, an immune organ, was selected for analysis: T cell sorting may itself affect protein expression, and the sorted sample size cannot meet the needs of proteomic analyses, so direct proteomic testing of T cells is difficult. TMT-based quantitative proteomics and phosphoproteomics were used to quantitatively analyze 7,565 proteins and 3,398 phosphorylated proteins in the spleen. Quantitative proteomic results (Figure 1(a)) show that, compared with day 0, 187 proteins were downregulated and 175 upregulated on day 7; 257 downregulated and 244 upregulated on day 14; and 175 downregulated and 132 upregulated on day 21. As shown in the GO enrichment histogram in Supplementary Figure S1, in the biological process (BP) category, DEPs were enriched in immune responses, immune system processes, activation of immune response, and so on; in the cellular component (CC) category, in macromolecular complex, intracellular organelles, and so on; and in the molecular function (MF) category, in structural molecule activity, structural constituent of ribosome, DNA binding, and so on. Phosphoproteomic results (Figure 1(b)) show that, compared with day 0, 98 proteins were downregulated and 148 upregulated on day 7; 149 downregulated and 203 upregulated on day 14; and 63 downregulated and 76 upregulated on day 21. As shown in the GO enrichment histogram in Supplementary Figure S2, in BP, DEPs were mainly enriched in DNA-dependent DNA replication, apoptotic signaling pathway, single-organism transport, and so on; in CC, in intracellular nonmembrane-bounded organelles, integral components of membranes, intracellular organelle part, and so on; and in MF, in ATPase activity, protein serine/threonine kinase activity, enzyme binding, and so on. Because quantitative proteomic and phosphoproteomic analyses are both important methods for studying cell function, the information from the two was integrated. As shown in Supplementary Figure S3a, the quantitative proteomic and phosphoproteomic results were significantly correlated. As shown in Supplementary Figure S3b, compared with day 0, there were 23, 30, and 10 proteins on days 7, 14, and 21, respectively, with significant changes in both analyses. Specific DEPs are presented in Figures 1(c)–1(e): at each time point, HSP90AB1 and p-HSP90AB1 expression in the mouse spleen decreased significantly, and on days 14 and 21, both SMARCC1 and p-SMARCC1 expression decreased significantly. Protein interaction analysis showed that HSP90AB1 and SMARCC1 have the potential for interaction (Supplementary Figure S3c). Studies have confirmed that HSP90AB1 can regulate the expression of Akt, and Akt can regulate the expression of SMARCC1 [26, 31]. The interaction analysis also showed that HSP90AB1, SMARCC1, and apoptosis-related proteins have the potential for interaction (Supplementary Figure S3c).
Previous studies have confirmed that both HSP90AB1 and SMARCC1 are closely related to T cell activity [25, 30]. Hence, HSP90AB1 and SMARCC1 may be the target proteins we were searching for, with a possible association between the two.

Figure 1
Quantitative proteomic and phosphoproteomic analysis revealed that CAP inhibits HSP90AB1 and SMARCC1 protein expression in vivo. Normal mice were administered metronomic doses of CAP (100 mg/kg/d). On days 0, 7, 14, and 21, the spleen was collected for quantitative proteomic and phosphoproteomic analysis. (a) Volcano plot of differentially expressed proteins (DEPs) in quantitative proteomic analysis on days 7, 14, and 21 (compared with day 0). (b) Volcano plot of DEPs in phosphoproteomic analysis on days 7, 14, and 21 (compared with day 0). Subsequently, association analysis of the quantitative proteomic and phosphoproteomic results was performed. (c–e) Heat map of DEPs in both quantitative proteomic and phosphoproteomic analyses on days 7, 14, and 21.
### 3.2. CAP Reduces the Expression of HSP90AB1 and SMARCC1 in the Spleen
Next, we used western blot and IHC to verify the proteomics results. The IHC results showed that, compared with day 0, the number of HSP90AB1-positive and SMARCC1-positive cells in the mouse spleen was significantly reduced on days 7, 14, and 21 (Figure 2(a)). The western blot results likewise showed that, compared with day 0, HSP90AB1, p-HSP90AB1, and SMARCC1 expression levels were significantly reduced on days 7, 14, and 21 (Figure 2(b)). These results verify the reliability of the proteomics data and confirm that CAP reduces HSP90AB1 and SMARCC1 expression in the mouse spleen.

Figure 2
CAP can reduce the expression of HSP90AB1 and SMARCC1 in the spleen of mice. In order to verify the reliability of proteomic results, IHC and western blot were applied to detect HSP90AB1 and SMARCC1 expression in mouse spleen. (a) HSP90AB1 and SMARCC1 in the spleen were stained with IHC (400x). (b) The protein levels of HSP90AB1,p-HSP90AB1, and SMARCC1 in the spleen were evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05.
### 3.3. CAP Reduces the Expression of HSP90AB1 and SMARCC1 in T Cells
Because the spleen is an immune organ containing T cells, B cells, and other immune cell types [39, 40], protein changes in the whole spleen may not be consistent with those in T cells. We first confirmed that CAP significantly increased ROS production and the apoptosis rate of T cells, consistent with previous studies (Figures 3(a) and 3(b)) [12]. CD3+ T cells were then sorted from mouse spleen using magnetic beads (Figure 3(c)), and western blot analysis confirmed that CAP significantly reduces the expression of HSP90AB1, p-HSP90AB1, and SMARCC1 in T cells (Figures 3(d) and 3(e)).

Figure 3
CAP can reduce the expression of HSP90AB1 and SMARCC1 in T cells of mice. Normal mice were gavaged with metronomic doses of CAP (100 mg/kg/d). On days 0, 7, 14, and 21, (a) mononuclear cells extracted from mouse spleen were collected and gated by CD3, and then, the apoptosis rate of CD3+ T cells was detected using Annexin V and PI staining. (b) The ROS level of CD3+ T cells was evaluated using DCFH-DA staining. (c) CD3+ T cells were sorted from mouse spleen and identified by staining with PE-CD3 antibody. (d, e) HSP90AB1, p-HSP90AB1, and SMARCC1 expression in T cells was evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05.
### 3.4. Knocking Down HSP90AB1 Can Induce T Cell Apoptosis via Akt/SMARCC1/AP-1/ROS Axis
As previously mentioned, HSP90AB1 can regulate Akt expression, and Akt can regulate the expression of SMARCC1 [26, 31]. In addition, SMARCC1 can reduce the expression of AP-1 to regulate T cell activity [30]. Previous studies have demonstrated that AP-1 can regulate GCLM and HO-1 expression, which in turn influences the production of ROS [33], and it is well established that ROS production is closely associated with apoptosis [12, 21, 34]. As shown in Supplementary Figure S3c, protein interaction analysis also indicated that HSP90AB1, Akt, SMARCC1, AP-1, HO-1, GCLC, GCLM, and apoptosis-related proteins such as BCL2, BAX, and Caspase3 have the potential for interaction, but this had not been confirmed in T cells. Therefore, primary CD3+ T cells were obtained by magnetic bead sorting and activated with CD3 and CD28 antibodies. As shown in Figures 4(a) and 5(a), after HSP90AB1 knockdown in T cells, the expression of HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun (both p-c-Fos and p-c-Jun are subunits of AP-1), HO-1, GCLM, and GCLC was significantly reduced; after SMARCC1 knockdown, no significant changes in HSP90AB1, p-HSP90AB1, or p-Akt expression were observed, but SMARCC1, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM expression was significantly reduced. Reduced glutathione (GSH) levels (an important biomarker of antioxidant status, which can lower ROS levels) [33] decreased significantly (Figure 5(b)), and ROS levels were significantly elevated (Figure 5(c)) after knocking down HSP90AB1 or SMARCC1 in T cells. In turn, knocking down HSP90AB1 or SMARCC1 significantly induced T cell apoptosis (Figure 5(d)). These results confirm that the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis plays an important role in T cell apoptosis.

Figure 4
Knockdown of HSP90AB1 regulates the Akt/SMARCC1/AP-1 axis in T cells. Primary T cells were sorted and stimulated in vitro with anti-CD3/CD28 antibodies. Then, HSP90AB1 and SMARCC1 were knocked down in T cells by siRNA. (a, b) The protein levels of HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun were evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05. Abbreviations: si: siRNA; NC: negative control.
Figure 5
Knockdown of HSP90AB1 or SMARCC1 represses GCLC, GCLM, and HO-1 expression; reduces the GSH level; and increases ROS production and the apoptosis rate in T cells. (a) The protein levels of GCLM, GCLC, HO-1, BAX, BCL2, and Caspase3 were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the reduced GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown as mean±SD. ∗P<0.05.
### 3.5. Overexpression of HSP90AB1 Can Alleviate CAP-Induced Apoptosis in T Cells via Akt/SMARCC1/AP-1/ROS Axis
Next, T cells were cultured with 10 μM 5-FU (the active metabolite of CAP) for 48 h [12]. Compared with the NC group, the expression levels of HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM in the 5-FU group were significantly reduced (Figures 6(a) and 7(a)), and GSH levels were significantly decreased (Figure 7(b)), while the ROS level (Figure 7(c)) and apoptosis rate (Figure 7(d)) were significantly increased. Subsequently, compared with the 5-FU group, overexpression of HSP90AB1 in T cells significantly increased SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLM, and GCLC expression in the 5-FU+OE group (Figures 6(a) and 7(a)), increased the GSH level, and reduced the ROS level (Figures 7(b)–7(d)). These results confirm that CAP induces T cell apoptosis through the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis.

Figure 6
Overexpression of HSP90AB1 attenuated the inhibition of SMARCC1, Akt, and AP-1 expression by CAP in T cells. HSP90AB1 was overexpressed by transfection with HSP90AB1 overexpression plasmids in primary CD3+ T cells. T cells were exposed to 5-FU (the active metabolite of CAP) (0 μM or 10 μM) for 48 h. (a, b) HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun expression was evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05. Abbreviation: OE: overexpression.
Figure 7
Overexpression of HSP90AB1 attenuates the inhibitory effect of CAP on GCLC, GCLM, and HO-1 expression and reduces CAP-induced apoptosis in T cells. (a) The protein levels of GCLC, GCLM, HO-1, BCL2, Caspase3, and BAX were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown as mean±SD. ∗P<0.05.
As previously mentioned, HSP90AB1 can regulate Akt expression, and Akt can regulate the expression of SMARCC1 [26, 31]. In addition, SMARCC1 can reduce the expression of AP-1 to regulate T cell activity [30]. Previous studies have demonstrated that AP-1 can regulate GCLM and HO-1 expression, which in turn influences the production of ROS [33]. It has been well established that ROS production is closely associated with apoptosis [12, 21, 34]. As shown in Supplementary Figure S3c, the results of protein interaction analysis also showed that HSP90AB1, Akt, SMARCC1, AP-1, HO-1, GCLC, GCLM, and apoptosis-related proteins such as BCL2, BAX, and Caspase3 have the potential for interaction, but this was not confirmed in T cells. Therefore, primary CD3+ T cells were obtained by magnetic bead sorting and were activated with CD3 and CD28 antibodies. As shown in Figures 4(a) and 5(a), after knocking down HSP90AB1 in T cells, HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun (both p-c-Fos and p-c-Jun are subunits of AP-1), HO-1, GCLM, and GCLC expression was significantly reduced; after knocking down SMARCC1 in T cells, no significant changes in HSP90AB1, p-HSP90AB1, or p-Akt expression were observed, but SMARCC1, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM expression was significantly reduced. Reduced glutathione (GSH) levels (an important biomarker of antioxidant status, which can reduce ROS levels) [33] were lowered significantly (Figure 5(b)), and ROS levels were significantly elevated (Figure 5(c)) after knocking down HSP90AB1 and SMARCC1 in T cells. In turn, knocking down HSP90AB1 or SMARCC1 significantly induced T cell apoptosis (Figure 5(d)). The above results confirm that the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis plays an important role in T cell apoptosis.Figure 4
Knocking down of HSP90AB1 can regulate Akt/SMARCC1/AP-1 axis in T cells. Primary T cells were sorted and stimulated in vitro with anti-CD3/CD28 antibodies. Then, HSP90AB1 and SMARCC1 were reduced in T cells by siRNA knockdown. (a, b) The protein levels of HSP90AB1,p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun were evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05. Abbreviations: si: siRNA; NC: negative control.
(a)(b)Figure 5
Knocking down of HSP90AB1 and SMARCC1 can repress GCLC, GCLM, and HO-1 expression; reduce GSH level; and increase ROS production and apoptosis rate in T cells. (a) The protein levels of GCLM, GCLC, HO-1, BAX, BCL2, and Caspase3 were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the reduced GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown asmean±SD. ∗P<0.05.
(a)(b)(c)(d)
## 3.5. Overexpression of HSP90AB1 Can Alleviate CAP-Induced Apoptosis in T Cells via Akt/SMARCC1/AP-1/ROS Axis
Next, 5-FU (the active ingredient of CAP) (10μM) was chosen to be cultured with T cells for 48 h [12]. Compared with the NC group, HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM expression levels in the 5-FU group were significantly reduced (Figures 6(a) and 7(a)) and GSH levels were significantly decreased (Figure 7(b)). Compared with the NC group, ROS level (Figure 7(c)) and apoptosis rate in the 5-FU group were significantly increased (Figure 7(d)). Subsequently, compared with the 5-FU group, overexpression of HSP90AB1 in T cells significantly increased SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLM, and GCLC expression in the 5-FU+OE group (Figures 6(a) and 7(a)). Compared with the 5-FU group, overexpression of HSP90AB1 in T cells increased the GSH level and reduced the ROS level in the 5-FU+OE group (Figures 7(b)–7(d)). The above results confirm that CAP induces T cell apoptosis through the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis.Figure 6
Overexpression of HSP90AB1 attenuated the inhibition of SMARCC1, Akt, and AP-1 expression by CAP in T cells. HSP90AB1 was overexpressed by transfecting with HSP90AB1 overexpression plasmids in primary CD3+ T cells. T cells were exposed to 5-FU (the active ingredient of CAP) (0 μM or 10 μM) for 48 h. (a, b) HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun expression was evaluated using western blot assay. Data are shown as mean±SD. ∗P<0.05. Abbreviation: OE: overexpression.
(a)(b)Figure 7
Overexpression of HSP90AB1 can attenuate the inhibition of CAP on GCLC, GCLM, and HO-1 expression and CAP-induced apoptosis in T cells. (a) The protein levels of GCLC, GCLM, HO-1, BCL2, Caspase3, and BAX were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown asmean±SD. ∗P<0.05.
(a)(b)(c)(d)
## 4. Discussion
CAP may be a potential immunosuppressant with an antitumor effect. Studies have also confirmed that CAP can induce T cell apoptosis to exert an immunosuppressive effect [12], but its underlying mechanism is still unclear. For this reason, we chose the metronomic chemotherapy dosage of CAP to treat normal mice by gavage to further explore the mechanism of CAP’s immunosuppressive effect.First, we used quantitative proteomic and phosphoproteomic analyses to comprehensively explore the protein molecular network relevant to the immunosuppressive effect of CAP. Considering the objective difficulties of sorting mouse T cells for proteomic analysis, such as small sample size and many interference factors, we chose the peripheral immune organ—the spleen—for proteomic analysis. CAP can significantly reduce the expression of HSP90AB1 in the spleen of mice. Of course, the spleen houses numerous immune cells [39, 40]. Moreover, changes in the protein content of the spleen are not necessarily consistent with changes in the protein content of T cells. Subsequently, we also confirmed that CAP can significantly reduce HSP90AB1 expression in T cells. As a stress protein, HSP90AB1 is a member of the heat shock protein family [26, 41]. It is also closely related to many physiological functions of cells [28, 42]. Previous studies have confirmed that HSP90AB1 is closely related to T cell activity [25], and other studies have confirmed that HSP90AB1 can regulate the expression of Akt [26]. This study confirmed that knockdown or overexpression of HSP90AB1 in T cells can regulate Akt expression accordingly. Akt is widely involved in T cell activation and proliferation [43, 44]; so, reducing Akt expression in T cells is an important direction for inducing immunosuppressive effects. Sánchez et al. [45] confirmed that targeting the inhibition of Akt expression can further reduce T cell activation and prevent the development of graft-versus-host disease. Chaudhuri et al. [46] confirmed that activating Akt could inhibit T cell apoptosis. This prompted us to hypothesize that HSP90AB1 may be a key target protein for CAP to induce T cell apoptosis. In our study, knockdown of HSP90AB1 in T cells significantly inhibited Akt expression and induced ROS production and apoptosis, which confirmed this hypothesis. It is also worth noting that previous studies have shown that the elevation in intracellular ROS levels can both activate and inhibit Akt signaling [47–50]. Hence, the decrease of Akt expression and the increase of ROS in T cells caused by CAP deserve further study in the future. In addition to HSP90AB1, proteomics results showed that CAP can significantly reduce SMARCC1 expression in the spleen. We also confirmed that CAP can significantly reduce the expression of SMARCC1 in T cells. SMARCC1, which is part of the SWI/SNF complex in the nucleus, is the main complex of ATP-dependent chromatin remodeling factors [51, 52]. Jeong et al. [30] confirmed that in T cells, SMARCC1 is recruited to the promoter of the transcription factor AP-1 and increases AP-1 expression; so, knocking down SMARCC1 reduces AP-1 expression and further regulates T cell activity. Furthermore, previous studies have confirmed that SMARCC1 is the phosphorylation substrate of Akt [31], which led us to believe that HSP90AB1 may regulate SMARCC1 through Akt in T cells. When we knocked down SMARCC1 expression in T cells, there was no significant change in HSP90AB1 or Akt expression. 
However, when we knocked down the expression of HSP90AB1, the expression of Akt and SMARCC1 decreased significantly. GCLC and GCLM, which were subunits of mammalian glutamate cysteine ligase holoenzyme, can increase GSH level [33, 53]. GSH can regulate the metabolic activity of T cell; in turn, it could also affect T cell activity [53]. HO-1, which is considered to be an antioxidant, can regulate the production of ROS [54]. Several signaling molecules, such as AP-1 and PI3K/Akt, participate in the regulation of HO-1, GCLM, and GCLC expression [33]. In this experiment, as HSP90AB1 or SMARCC1 were knocked down in T cells, AP-1, HO-1, GCLC, and GCLM expression decreased, the GSH level decreased, and the ROS level and apoptosis rate increased. Therefore, we hypothesized that the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis may be the underlying mechanism of CAP-induced T cell apoptosis. T cells and 5-FU (the active component of CAP) were cultured in vitro, and the expression of HSP90AB1, Akt, SMARCC1, AP-1, HO-1, GCLC, and GCLM protein was significantly reduced, the GSH level decreased, and the ROS level and apoptosis rate increased. Overexpression of HSP90AB1 significantly increased the expression of Akt, SMARCC1, AP-1, HO-1, GCLC, and GCLM, increased the GSH level, and reduced the ROS level and apoptosis rate, which confirmed our hypothesis.
## 5. Conclusions
In summary, targeting HSP90AB1 is the key to CAP-induced T cell apoptosis, and the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis is the underlying mechanism (Figure8). Of course, there were some shortcomings in the present study; notably, the activities of antioxidant enzymes such as SOD, CAT, and GPx, which play an important role in eliminating ROS, were not evaluated, and this experiment was based on normal mice and not on organ transplant mice with acute rejection. Therefore, in the future, we will comprehensively explore the mechanism of ROS production and apoptosis induced by CAP in T cells and will not only use an acute rejection mouse model of organ transplantation but also attempt to establish a tumor-bearing animal model simultaneously bearing an organ allograft to confirm that CAP, which targets HSP90AB1 to induce apoptosis, has both immunosuppressive and anticancer effects. In brief, the results of this study provide novel understanding of CAP-induced T cell apoptosis and lay the experimental foundation for further exploring the immunosuppressive effect of CAP to enrich the treatment strategies of transplant oncology and fill the gap of the lack of pyrimidine immunosuppressive agents in the field of organ transplantation.Figure 8
Scheme summarizing the apoptosis of T cells induced by CAP via HSP90AB1/Akt/SMARCC1/AP-1/ROS axis. After oral administration, CAP is converted into 5′DFCR and 5′DFUR by CES and CDA in the liver. In T cells, which expressed TP, 5′DFUR can finally be converted into 5-FU; CAP reduces HSP90AB1, Akt, SMARCC1, c-Fos, c-Jun, GCLC, GCLM, and HO-1 expression, reduces the GSH level, increases the ROS level, and finally induces apoptosis.
---
*Source: 1012509-2022-03-24.xml*
# Capecitabine Regulates HSP90AB1 Expression and Induces Apoptosis via Akt/SMARCC1/AP-1/ROS Axis in T Cells
**Authors:** Sai Zhang; Shunli Fan; Zhenglu Wang; Wen Hou; Tao Liu; Sei Yoshida; Shuang Yang; Hong Zheng; Zhongyang Shen
**Journal:** Oxidative Medicine and Cellular Longevity
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012509
---
## Abstract
Transplant oncology is a newly emerging discipline integrating oncology, transplant medicine, and surgery and has brought malignancy treatment into a new era via transplantation. In this context, obtaining a drug with both immunosuppressive and antitumor effects can take into account the dual needs of preventing both transplant rejection and tumor recurrence in liver transplantation patients with malignancies. Capecitabine (CAP), a classic antitumor drug, has been shown to induce reactive oxygen species (ROS) production and apoptosis in tumor cells. Meanwhile, we have demonstrated that CAP can induce ROS production and apoptosis in T cells to exert immunosuppressive effects, but its underlying molecular mechanism is still unclear. In this study, metronomic doses of CAP were administered to normal mice by gavage, and the spleen was selected for quantitative proteomic and phosphoproteomic analysis. The results showed that CAP significantly reduced the expression of HSP90AB1 and SMARCC1 in the spleen. It was subsequently confirmed that CAP also significantly reduced the expression of HSP90AB1 and SMARCC1 and increased ROS and apoptosis levels in T cells. The results of in vitro experiments showed that HSP90AB1 knockdown resulted in a significant decrease in p-Akt, SMARCC1, p-c-Fos, and p-c-Jun expression levels and a significant increase in ROS and apoptosis levels. HSP90AB1 overexpression significantly inhibited CAP-induced T cell apoptosis by increasing the p-Akt, SMARCC1, p-c-Fos, and p-c-Jun expression levels and reducing the ROS level. In conclusion, HSP90AB1 is a key target of CAP-induced T cell apoptosis via the Akt/SMARCC1/AP-1/ROS axis, which provides a novel understanding of CAP-induced T cell apoptosis and lays the experimental foundation for further exploring CAP as an immunosuppressant with antitumor effects to optimize the medication regimen for transplantation patients.
---
## Body
## 1. Introduction
As a prodrug of 5-fluorouracil (5-FU), CAP is converted into 5-FU sequentially by carboxylesterase (CES), cytidine deaminase (CDA), and thymidine phosphorylase (TP) to exert its antitumor effects [1–3]. As a key enzyme in CAP conversion, TP is expressed in many tumor tissues, including colorectal cancer (CRC) and hepatocellular carcinoma (HCC), and is more concentrated in tumor tissues than in adjacent tissues [4–7]. This distribution underlies the significant tumor-targeting capability of CAP. CAP is a first-line therapeutic drug for CRC, and some clinical studies have also confirmed that CAP, especially at the dosage used in metronomic chemotherapy (a type of chemotherapy featuring low dosage and uninterrupted administration), is effective in treating HCC [8–11]. In addition, the expression of TP in T cells has been confirmed, which lays the pharmacodynamic foundation for the conversion of CAP to 5-FU in T cells [12]. Earlier experiments showed that CAP can induce apoptosis in T cells, which supports this view [12]. Therefore, CAP may be a potential immunosuppressant with an antitumor effect. Obtaining an immunosuppressant with an antitumor effect has great clinical value because, in the context of transplant oncology, although liver transplantation has become an important treatment for HCC and nonresectable colorectal liver metastases, exposure to postoperative immunosuppressive therapy contributes to increased tumor recurrence and poor outcomes [13–15]. A drug with both immunosuppressive and antitumor effects could therefore address the dual needs of preventing both transplant rejection and tumor recurrence.

Apoptosis is a form of programmed cell death, and its role in the antitumor effect of CAP has been confirmed [16, 17]. Induction of T cell apoptosis is one of the classic antirejection mechanisms of immunosuppressants [18–20]. As oxygen-containing, chemically reactive molecules, ROS are closely related to T cell apoptosis and activation [21]. On this basis, we explored the immunosuppressive effect of CAP and recently showed that CAP can induce T cell ROS production, subsequently leading to apoptosis, but the molecular mechanism behind this is still unknown [12]. In recent years, the development of mass spectrometry technology has made it possible to identify proteins on a large scale, enabling the mechanism underlying the immunosuppressive effect of CAP to be explored [22–24]. So far, there has been no proteomics research on the immunosuppressive effect of CAP. Therefore, in the present study, metronomic chemotherapy doses of CAP were administered to normal mice, and quantitative proteomics and phosphoproteomics were applied to search for the target proteins of CAP-induced T cell apoptosis. The results of the proteomic analysis showed that CAP significantly reduced HSP90AB1 and SMARCC1 expression. HSP90AB1, one of the HSP90 subtypes, has a critical role in tumorigenesis and progression and is also closely related to T cell activity [25–28]. SMARCC1, a member of the SWI/SNF DNA chromatin remodeling complex family, is also associated with tumor progression and T cell activity [29, 30]. HSP90AB1 can regulate Akt expression, and Akt can regulate the expression of SMARCC1 [26, 31]. Jeong et al. showed that SMARCC1 can reduce the expression of AP-1 to regulate T cell activity [30].
AP-1 is a transcription factor consisting of homodimers or heterodimers of Jun and Fos family proteins [32]. Previous studies have demonstrated that AP-1 can regulate GCLM and HO-1 expression, which in turn influences the production of ROS [33]. ROS production, as previously mentioned, is closely associated with apoptosis [12, 21, 34]. We therefore hypothesized that CAP inhibits HSP90AB1, thereby suppressing the Akt/SMARCC1/AP-1 axis, inducing ROS production, and leading to T cell apoptosis. In addition, considering the role of HSP90AB1 in tumorigenesis and progression, HSP90AB1 may also be a key target of CAP's antitumor effect. In this study, we focused on the underlying mechanism of T cell apoptosis, attempting to confirm that CAP induces T cell apoptosis via the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis.
## 2. Materials and Methods
### 2.1. Animals
All animals were obtained from the China National Institutes for Food and Drug Control. Male Balb/c mice aged 6–8 weeks were gavaged with metronomic doses of CAP (100 mg/kg/day) (Solarbio, Beijing, China) [12, 35]. On days 0, 7, 14, and 21, mice (n=10) were sacrificed. All animal experiments followed the ARRIVE guidelines and were approved by the Ethics Committee of Nankai University.
### 2.2. Total Protein Extraction and Protein Quality Test
Spleen samples (n=4/group) were lysed with PASP lysis buffer; after the lysate was centrifuged, the supernatant was reduced with 10 mM DTT for 1 h and then alkylated with IAM for 1 h. The samples were mixed with acetone for 2 h, and the precipitate was collected after centrifugation [36]. The protein concentration of each sample was determined using a Bradford protein quantification kit (Solarbio, Beijing, China).
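Bradford quantification of this kind reduces to fitting a BSA standard curve and interpolating each sample's absorbance. The sketch below shows that generic calculation in Python; the standard concentrations and A595 readings are made-up illustrations, not values from the kit used here.

```python
import numpy as np

# Hypothetical BSA standards (ug/mL) and their A595 readings
std_conc = np.array([0.0, 125.0, 250.0, 500.0, 750.0, 1000.0])
std_a595 = np.array([0.00, 0.08, 0.17, 0.33, 0.49, 0.65])

# Fit a linear standard curve: A595 = slope * conc + intercept
slope, intercept = np.polyfit(std_conc, std_a595, deg=1)

def protein_conc(a595: float) -> float:
    """Invert the standard curve to estimate protein concentration."""
    return (a595 - intercept) / slope

print(f"{protein_conc(0.40):.0f} ug/mL")  # toy sample reading
```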
### 2.3. TMT Labeling of Peptides and Separation of Fractions
DB dissolution buffer, trypsin, TEAB, and CaCl2 were added successively to the sample solution. The supernatant was collected and mixed with TMT labeling reagent [37]. The labeled sample was fractionated, and the eluates were collected and combined into 10 fractions.
### 2.4. LC-MS/MS Analysis
Shotgun proteomic analyses were performed on a Q Exactive™ HF-X mass spectrometer coupled to an EASY-nLC™ 1200 UHPLC system, on which peptides were separated and analyzed.
### 2.5. Data Analysis
Raw data were searched against the UniProt database. Compared with day 0, proteins on days 7, 14, and 21 whose quantitation differed significantly (P<0.05 and FC≥1.2 or FC≤0.83) were defined as differentially expressed proteins (DEPs). Next, databases including ProDom, Pfam, PRINTS, ProSite, SMART, and PANTHER were used to perform GO and IPR functional analyses. Protein family and pathway analyses were performed using the COG and KEGG databases, and the STRING-db server was used to analyze probable protein–protein interactions.
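The DEP criterion stated above (P < 0.05 with FC ≥ 1.2 or FC ≤ 0.83) maps directly onto a simple filter. Below is a minimal Python sketch, assuming a pandas DataFrame with hypothetical columns `protein`, `fold_change` (treatment day vs. day 0), and `p_value`; the column names and toy values are ours, not part of the original pipeline.

```python
import pandas as pd

def select_deps(df: pd.DataFrame,
                p_cut: float = 0.05,
                fc_up: float = 1.2,
                fc_down: float = 0.83) -> pd.DataFrame:
    """Flag DEPs using the Section 2.5 thresholds: P < 0.05 and
    FC >= 1.2 (up) or FC <= 0.83 (down) versus day 0."""
    significant = df["p_value"] < p_cut
    up = df["fold_change"] >= fc_up
    down = df["fold_change"] <= fc_down
    deps = df[significant & (up | down)].copy()
    deps["direction"] = ["up" if fc >= fc_up else "down"
                         for fc in deps["fold_change"]]
    return deps

# Toy example (values are illustrative only):
quant = pd.DataFrame({
    "protein": ["HSP90AB1", "SMARCC1", "ACTB"],
    "fold_change": [0.60, 0.75, 1.02],
    "p_value": [0.001, 0.004, 0.80],
})
print(select_deps(quant))  # HSP90AB1 and SMARCC1 flagged as "down"
```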
### 2.6. Sorting Primary CD3+ T Cells
Anti-CD3 microbeads (Miltenyi Biotec, Germany) were used to sort CD3+ T cells from mouse spleen. T cells were then activated with anti-CD3 antibody (2 μg/mL) and anti-CD28 antibody (1 μg/mL) (BioLegend, San Diego, CA, USA).
### 2.7. Immunohistochemistry (IHC) Assay
HSP90AB1 and SMARCC1 expression levels in the spleen were determined by IHC using the corresponding antibodies (Abcam, Cambridge, UK). Images were acquired under a microscope (400x magnification).
### 2.8. Transfection of siRNA and Plasmid
The sequence of the HSP90AB1 siRNA was 5′-GCCCUGGACAAGAUUCGAUTT-3′, and the sequence of the SMARCC1 siRNA was 5′-GCAGAUGCUCCUACCAAUATT-3′ (GenePharma, Shanghai, China). Lipofectamine 3000 Transfection Reagent (Thermo Fisher, Waltham, MA, USA) was used for siRNA transfection. The HSP90AB1 overexpression plasmid (Genechem Corporation, Shanghai, China) was also transfected using Lipofectamine 3000 Transfection Reagent.
### 2.9. Apoptosis and ROS Measurement
Apoptosis and ROS detection were performed using an Apoptosis Kit (Solarbio, Beijing, China) and a Reactive Oxygen Species Assay Kit (Solarbio, Beijing, China) according to the manufacturers' instructions [12]. FITC-Annexin V and propidium iodide (PI) dyes were added successively to the samples, and the proportion of apoptotic cells was analyzed using flow cytometry. T cells were collected and stained with DCFH-DA, and the mean fluorescence intensity (MFI) of DCF was measured using flow cytometry.
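As a rough illustration of how these flow cytometry readouts reduce to the reported statistics, the sketch below computes an apoptosis rate from Annexin V/PI positivity and the DCF MFI from per-cell intensities. The simulated arrays and fixed threshold gates are hypothetical stand-ins; real gates are set from unstained and single-stain controls on the instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cell fluorescence for gated CD3+ T cells
annexin_v = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)
pi = rng.lognormal(mean=0.5, sigma=0.8, size=10_000)
dcf = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)

ANNEXIN_GATE, PI_GATE = 5.0, 5.0  # illustrative thresholds only

early = (annexin_v > ANNEXIN_GATE) & (pi <= PI_GATE)  # Annexin V+/PI-
late = (annexin_v > ANNEXIN_GATE) & (pi > PI_GATE)    # Annexin V+/PI+
apoptosis_rate = 100.0 * (early | late).mean()

mfi_dcf = dcf.mean()  # mean fluorescence intensity of DCF (ROS proxy)
print(f"apoptosis: {apoptosis_rate:.1f}%, DCF MFI: {mfi_dcf:.1f}")
```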
### 2.10. Cellular Reduced Glutathione (GSH) Measurement
GSH detection was performed using the GSH Assay Kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's instructions [38]. After 5-FU treatment, T cells were disrupted by sonication and centrifuged, and the supernatant was collected. GSH standards (20 μmol/L) and working solution were then prepared. The sample was mixed with the working solution, and the OD value was measured at 405 nm.
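Single-point-standard colorimetric kits of this type typically convert absorbance to concentration by ratio against the 20 μmol/L standard. The function below is that generic calculation, offered as a sketch; the exact equation in the kit manual may differ slightly.

```python
def gsh_umol_per_l(od_sample: float,
                   od_blank: float,
                   od_standard: float,
                   standard_conc: float = 20.0) -> float:
    """Generic single-point-standard conversion at 405 nm: the
    20 umol/L GSH standard anchors a linear OD-to-concentration map."""
    return (od_sample - od_blank) / (od_standard - od_blank) * standard_conc

# Toy readings (illustrative numbers only):
print(f"{gsh_umol_per_l(0.42, 0.05, 0.55):.1f} umol/L")
```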
### 2.11. Western Blotting
The expression levels of HSP90AB1, SMARCC1, p-Akt, BAX, BCL2, GCLM, GCLC, and HO-1 (Abcam, Cambridge, UK), Caspase3 (CST, Massachusetts, USA), and p-HSP90AB1, p-c-Fos, and p-c-Jun (ImmunoWay, Texas, USA) were detected. ImageJ 7.0 software was used to analyze the bands.
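Band analysis in ImageJ usually ends in a normalization step: each target band is divided by its loading control and then expressed relative to the reference lane (day 0 or NC). The snippet below shows that arithmetic on hypothetical intensities; the loading-control choice and all numbers are illustrative, since the text does not state them.

```python
# Hypothetical raw band intensities from ImageJ (arbitrary units)
target = {"day0": 1850.0, "day7": 1220.0, "day14": 960.0, "day21": 1010.0}
loading = {"day0": 2000.0, "day7": 1980.0, "day14": 2010.0, "day21": 1995.0}

# Normalize each lane to its loading control, then to day 0
ratio = {d: target[d] / loading[d] for d in target}
relative = {d: ratio[d] / ratio["day0"] for d in ratio}

for day, val in relative.items():
    print(f"{day}: {val:.2f} x day0")  # values < 1 indicate reduced expression
```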
### 2.12. Statistical Analysis
SPSS 13.0 (SPSS GmbH, Munich, Germany) and GraphPad 8.0 (GraphPad Software, La Jolla, CA, USA) were used to analyze the data. Data were expressed as mean ± standard deviation (SD). One-way analysis of variance (ANOVA) was used to determine differences between groups, and P < 0.05 was considered statistically significant.
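The group comparison described here maps onto a standard one-way ANOVA call. A minimal sketch with toy replicate values follows; SciPy's `f_oneway` stands in for the SPSS/GraphPad procedure actually used.

```python
from scipy.stats import f_oneway

# Toy normalized expression values for four time points (illustrative only)
day0 = [1.00, 0.97, 1.05, 0.98]
day7 = [0.71, 0.66, 0.74, 0.69]
day14 = [0.52, 0.49, 0.55, 0.58]
day21 = [0.56, 0.60, 0.54, 0.57]

f_stat, p_value = f_oneway(day0, day7, day14, day21)
print(f"F = {f_stat:.2f}, P = {p_value:.4g}")  # P < 0.05 -> significant
```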
## 3. Results
### 3.1. Quantitative Proteomic and Phosphoproteomic Analysis Revealed that CAP Inhibits HSP90AB1 and SMARCC1 Protein Expression In Vivo
Previous studies have confirmed that CAP can induce T cell apoptosis and has the potential to be an effective immunosuppressant, but the underlying molecular mechanism remains unknown [12]. In the present study, normal mice were gavaged with metronomic chemotherapy doses of CAP, and the spleen, an immune organ, was selected for analysis because T cell sorting may itself affect protein expression and cannot yield sample amounts sufficient for proteomic analyses, making direct proteomic testing of T cells difficult. TMT-based quantitative proteomics and phosphoproteomics were used to quantify 7,565 proteins and 3,398 phosphorylated proteins, respectively, in the spleen.

The quantitative proteomic results (Figure 1(a)) show that, compared with day 0, 187 proteins were downregulated and 175 upregulated on day 7; 257 downregulated and 244 upregulated on day 14; and 175 downregulated and 132 upregulated on day 21. As shown in the GO enrichment histogram in Supplementary Figure S1, in the biological process (BP) category, DEPs were enriched in immune response, immune system process, activation of immune response, and related terms; in the cellular component (CC) category, in macromolecular complex, intracellular organelle, and related terms; and in the molecular function (MF) category, in structural molecule activity, structural constituent of ribosome, DNA binding, and related terms.

The phosphoproteomic results (Figure 1(b)) show that, compared with day 0, 98 proteins were downregulated and 148 upregulated on day 7; 149 downregulated and 203 upregulated on day 14; and 63 downregulated and 76 upregulated on day 21. As shown in the GO enrichment histogram in Supplementary Figure S2, in BP, DEPs were mainly enriched in DNA-dependent DNA replication, apoptotic signaling pathway, single-organism transport, and related terms; in CC, in intracellular nonmembrane-bounded organelle, integral component of membrane, intracellular organelle part, and related terms; and in MF, in ATPase activity, protein serine/threonine kinase activity, enzyme binding, and related terms.

Because quantitative proteomic and phosphoproteomic analyses are both important methods for studying cell function, the information from the two was integrated. As shown in Supplementary Figure S3a, the quantitative proteomic and phosphoproteomic results were significantly correlated. As shown in Supplementary Figure S3b, compared with day 0, there were 23, 30, and 10 proteins on days 7, 14, and 21, respectively, with significant changes in both analyses. Specific DEPs are presented in Figures 1(c)–1(e): at each time point, HSP90AB1 and p-HSP90AB1 expression in the spleen decreased significantly, and on days 14 and 21, both SMARCC1 and p-SMARCC1 expression decreased significantly. Protein interaction analysis showed that HSP90AB1 and SMARCC1 have the potential to interact (Supplementary Figure S3c). Studies have confirmed that HSP90AB1 can regulate the expression of Akt, and Akt can regulate the expression of SMARCC1 [26, 31]. The protein interaction analysis also showed that HSP90AB1, SMARCC1, and apoptosis-related proteins have the potential to interact (Supplementary Figure S3c).
Previous studies have confirmed that both HSP90AB1 and SMARCC1 are closely related to T cell activity [25, 30]. Hence, HSP90AB1 and SMARCC1 may be the target proteins we were searching for, and the two proteins may be functionally associated.
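For quick reference, the snippet below restates the DEP counts reported in this section as a small table; the numbers are copied verbatim from the text (nothing is recomputed), and the column names are ours.

```python
import pandas as pd

# DEP counts versus day 0, as reported in Section 3.1
dep_counts = pd.DataFrame({
    "day": [7, 14, 21],
    "proteome_down": [187, 257, 175],
    "proteome_up": [175, 244, 132],
    "phosphoproteome_down": [98, 149, 63],
    "phosphoproteome_up": [148, 203, 76],
})
print(dep_counts.to_string(index=False))
```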
Figure 1
Quantitative proteomic and phosphoproteomic analysis revealed that CAP inhibits HSP90AB1 and SMARCC1 protein expression in vivo. Normal mice were administered metronomic doses of CAP (100 mg/kg/d). On days 0, 7, 14, and 21, the spleen was collected for quantitative proteomic and phosphoproteomic analysis. (a) Volcano plots of differentially expressed proteins (DEPs) in the quantitative proteomic analysis on days 7, 14, and 21 (compared with day 0). (b) Volcano plots of DEPs in the phosphoproteomic analysis on days 7, 14, and 21 (compared with day 0). Subsequently, association analysis of the quantitative proteomic and phosphoproteomic results was performed. (c–e) Heat maps of DEPs shared by the quantitative proteomic and phosphoproteomic analyses on days 7, 14, and 21.
### 3.2. CAP Reduces the Expression of HSP90AB1 and SMARCC1 in the Spleen
Next, we used western blot and IHC to verify the proteomics results. The IHC results showed that, compared with day 0, the number of HSP90AB1-positive and SMARCC1-positive cells in the spleen of mice was significantly reduced on days 7, 14, and 21 (Figure 2(a)). The western blot results also showed that, compared with day 0, HSP90AB1, p-HSP90AB1, and SMARCC1 expression levels were significantly reduced on days 7, 14, and 21 (Figure 2(b)). These results verify the reliability of the proteomics results and confirm that CAP can reduce HSP90AB1 and SMARCC1 expression in the spleen of mice.
Figure 2
CAP can reduce the expression of HSP90AB1 and SMARCC1 in the spleen of mice. To verify the reliability of the proteomic results, IHC and western blot were applied to detect HSP90AB1 and SMARCC1 expression in mouse spleen. (a) HSP90AB1 and SMARCC1 in the spleen were stained by IHC (400x). (b) The protein levels of HSP90AB1, p-HSP90AB1, and SMARCC1 in the spleen were evaluated using western blot assay. Data are shown as mean ± SD. *P < 0.05.
### 3.3. CAP Reduces the Expression of HSP90AB1 and SMARCC1 in T Cells
Because the spleen is an immune organ containing T cells, B cells, and other types of immune cells [39, 40], protein changes in the spleen may not be fully consistent with protein changes in T cells. We first confirmed that CAP significantly increased ROS production and the apoptosis rate of T cells, consistent with the results of previous studies (Figures 3(a) and 3(b)) [12]. CD3+ T cells were then sorted from mouse spleen using magnetic beads (Figure 3(c)), and western blot analysis confirmed that CAP can significantly reduce the expression of HSP90AB1, p-HSP90AB1, and SMARCC1 in T cells (Figures 3(d) and 3(e)).
Figure 3
CAP can reduce the expression of HSP90AB1 and SMARCC1 in T cells of mice. Normal mice were gavaged with metronomic doses of CAP (100 mg/kg/d). On days 0, 7, 14, and 21, (a) mononuclear cells extracted from mouse spleen were collected and gated by CD3, and the apoptosis rate of CD3+ T cells was detected using Annexin V and PI staining. (b) The ROS level of CD3+ T cells was evaluated using DCFH-DA staining. (c) CD3+ T cells were sorted from mouse spleen and identified by staining with PE-CD3 antibody. (d, e) HSP90AB1, p-HSP90AB1, and SMARCC1 expression in T cells was evaluated using western blot assay. Data are shown as mean ± SD. *P < 0.05.
### 3.4. Knocking Down HSP90AB1 Can Induce T Cell Apoptosis via Akt/SMARCC1/AP-1/ROS Axis
As previously mentioned, HSP90AB1 can regulate Akt expression, and Akt can regulate the expression of SMARCC1 [26, 31]. In addition, SMARCC1 can reduce the expression of AP-1 to regulate T cell activity [30]. Previous studies have demonstrated that AP-1 can regulate GCLM and HO-1 expression, which in turn influences the production of ROS [33]. It is well established that ROS production is closely associated with apoptosis [12, 21, 34]. As shown in Supplementary Figure S3c, the protein interaction analysis also showed that HSP90AB1, Akt, SMARCC1, AP-1, HO-1, GCLC, GCLM, and apoptosis-related proteins such as BCL2, BAX, and Caspase3 have the potential to interact, but this had not been confirmed in T cells. Therefore, primary CD3+ T cells were obtained by magnetic bead sorting and activated with CD3 and CD28 antibodies. As shown in Figures 4(a) and 5(a), after knocking down HSP90AB1 in T cells, HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun (p-c-Fos and p-c-Jun are both subunits of AP-1), HO-1, GCLM, and GCLC expression was significantly reduced; after knocking down SMARCC1 in T cells, no significant changes in HSP90AB1, p-HSP90AB1, or p-Akt expression were observed, but SMARCC1, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM expression was significantly reduced. Levels of reduced glutathione (GSH), an important biomarker of antioxidant status that helps lower ROS levels [33], fell significantly (Figure 5(b)), and ROS levels were significantly elevated (Figure 5(c)) after knocking down either HSP90AB1 or SMARCC1 in T cells. In turn, knocking down HSP90AB1 or SMARCC1 significantly induced T cell apoptosis (Figure 5(d)). These results confirm that the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis plays an important role in T cell apoptosis.
Figure 4
Knockdown of HSP90AB1 regulates the Akt/SMARCC1/AP-1 axis in T cells. Primary T cells were sorted and stimulated in vitro with anti-CD3/CD28 antibodies. Then, HSP90AB1 and SMARCC1 were reduced in T cells by siRNA knockdown. (a, b) The protein levels of HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun were evaluated using western blot assay. Data are shown as mean ± SD. *P < 0.05. Abbreviations: si: siRNA; NC: negative control.
Figure 5
Knockdown of HSP90AB1 or SMARCC1 represses GCLC, GCLM, and HO-1 expression; reduces the GSH level; and increases ROS production and the apoptosis rate in T cells. (a) The protein levels of GCLM, GCLC, HO-1, BAX, BCL2, and Caspase3 were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the reduced GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown as mean ± SD. *P < 0.05.
### 3.5. Overexpression of HSP90AB1 Can Alleviate CAP-Induced Apoptosis in T Cells via Akt/SMARCC1/AP-1/ROS Axis
Next, T cells were cultured with 10 μM 5-FU (the active ingredient of CAP) for 48 h [12]. Compared with the NC group, HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLC, and GCLM expression levels in the 5-FU group were significantly reduced (Figures 6(a) and 7(a)), and GSH levels were significantly decreased (Figure 7(b)). Compared with the NC group, the ROS level (Figure 7(c)) and apoptosis rate in the 5-FU group were significantly increased (Figure 7(d)). Subsequently, compared with the 5-FU group, overexpression of HSP90AB1 in T cells significantly increased SMARCC1, p-Akt, p-c-Fos, p-c-Jun, HO-1, GCLM, and GCLC expression in the 5-FU+OE group (Figures 6(a) and 7(a)). Compared with the 5-FU group, overexpression of HSP90AB1 in T cells also increased the GSH level and reduced the ROS level in the 5-FU+OE group (Figures 7(b)–7(d)). These results confirm that CAP induces T cell apoptosis through the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis.
Figure 6
Overexpression of HSP90AB1 attenuated the inhibition of SMARCC1, Akt, and AP-1 expression by CAP in T cells. HSP90AB1 was overexpressed by transfection with an HSP90AB1 overexpression plasmid in primary CD3+ T cells. T cells were exposed to 5-FU (the active ingredient of CAP) (0 μM or 10 μM) for 48 h. (a, b) HSP90AB1, p-HSP90AB1, SMARCC1, p-Akt, p-c-Fos, and p-c-Jun expression was evaluated using western blot assay. Data are shown as mean ± SD. *P < 0.05. Abbreviation: OE: overexpression.
Figure 7
Overexpression of HSP90AB1 attenuates the inhibition by CAP of GCLC, GCLM, and HO-1 expression and attenuates CAP-induced apoptosis in T cells. (a) The protein levels of GCLC, GCLM, HO-1, BCL2, Caspase3, and BAX were evaluated using western blot assay. (b) The GSH level of T cells was evaluated using the GSH Assay Kit. (c) The ROS level of T cells was evaluated using DCFH-DA staining. (d) The apoptosis of T cells was evaluated using Annexin V/PI staining. Data are shown as mean ± SD. *P < 0.05.
## 4. Discussion
CAP may be a potential immunosuppressant with an antitumor effect. Studies have confirmed that CAP can induce T cell apoptosis to exert an immunosuppressive effect [12], but its underlying mechanism is still unclear. For this reason, we treated normal mice by gavage with the metronomic chemotherapy dosage of CAP to further explore the mechanism of CAP's immunosuppressive effect.

First, we used quantitative proteomic and phosphoproteomic analyses to comprehensively explore the protein molecular network relevant to the immunosuppressive effect of CAP. Considering the objective difficulties of sorting mouse T cells for proteomic analysis, such as small sample size and many interfering factors, we chose the peripheral immune organ, the spleen, for proteomic analysis. CAP significantly reduced the expression of HSP90AB1 in the spleen of mice. Of course, the spleen houses numerous immune cells [39, 40], and changes in the protein content of the spleen are not necessarily consistent with changes in the protein content of T cells. We therefore subsequently confirmed that CAP can significantly reduce HSP90AB1 expression in T cells. As a stress protein, HSP90AB1 is a member of the heat shock protein family [26, 41] and is closely related to many physiological functions of cells [28, 42]. Previous studies have confirmed that HSP90AB1 is closely related to T cell activity [25], and other studies have confirmed that HSP90AB1 can regulate the expression of Akt [26]. This study confirmed that knockdown or overexpression of HSP90AB1 in T cells regulates Akt expression accordingly. Akt is widely involved in T cell activation and proliferation [43, 44], so reducing Akt expression in T cells is an important route to inducing immunosuppressive effects. Sánchez et al. [45] confirmed that targeted inhibition of Akt expression can reduce T cell activation and prevent the development of graft-versus-host disease. Chaudhuri et al. [46] confirmed that activating Akt can inhibit T cell apoptosis. This prompted us to hypothesize that HSP90AB1 may be a key target protein by which CAP induces T cell apoptosis. In our study, knockdown of HSP90AB1 in T cells significantly inhibited Akt expression and induced ROS production and apoptosis, which confirmed this hypothesis. It is also worth noting that previous studies have shown that elevated intracellular ROS levels can both activate and inhibit Akt signaling [47–50]; hence, the decrease in Akt expression and the increase in ROS in T cells caused by CAP deserve further study.

In addition to HSP90AB1, the proteomics results showed that CAP can significantly reduce SMARCC1 expression in the spleen, and we confirmed that CAP can significantly reduce the expression of SMARCC1 in T cells. SMARCC1 is a core subunit of the nuclear SWI/SNF complex, the main ATP-dependent chromatin remodeling complex [51, 52]. Jeong et al. [30] confirmed that in T cells, SMARCC1 is recruited to the promoter of the transcription factor AP-1 and increases AP-1 expression, so knocking down SMARCC1 reduces AP-1 expression and thereby regulates T cell activity. Furthermore, previous studies have confirmed that SMARCC1 is a phosphorylation substrate of Akt [31], which led us to believe that HSP90AB1 may regulate SMARCC1 through Akt in T cells. When we knocked down SMARCC1 expression in T cells, there was no significant change in HSP90AB1 or Akt expression.
However, when we knocked down the expression of HSP90AB1, the expression of Akt and SMARCC1 decreased significantly. GCLC and GCLM, which are subunits of the mammalian glutamate cysteine ligase holoenzyme, can increase the GSH level [33, 53]. GSH regulates the metabolic activity of T cells and, in turn, affects T cell activity [53]. HO-1, which is considered an antioxidant, can regulate the production of ROS [54]. Several signaling molecules, such as AP-1 and PI3K/Akt, participate in the regulation of HO-1, GCLM, and GCLC expression [33]. In this experiment, when HSP90AB1 or SMARCC1 was knocked down in T cells, AP-1, HO-1, GCLC, and GCLM expression decreased, the GSH level decreased, and the ROS level and apoptosis rate increased. Therefore, we hypothesized that the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis may be the underlying mechanism of CAP-induced T cell apoptosis. When T cells were cultured in vitro with 5-FU (the active metabolite of CAP), the expression of HSP90AB1, Akt, SMARCC1, AP-1, HO-1, GCLC, and GCLM protein was significantly reduced, the GSH level decreased, and the ROS level and apoptosis rate increased. Overexpression of HSP90AB1 significantly increased the expression of Akt, SMARCC1, AP-1, HO-1, GCLC, and GCLM, increased the GSH level, and reduced the ROS level and apoptosis rate, which confirmed our hypothesis.
## 5. Conclusions
In summary, targeting HSP90AB1 is the key to CAP-induced T cell apoptosis, and the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis is the underlying mechanism (Figure 8). Of course, there were some shortcomings in the present study; notably, the activities of antioxidant enzymes such as SOD, CAT, and GPx, which play an important role in eliminating ROS, were not evaluated, and the experiments were based on normal mice rather than on organ transplant mice with acute rejection. Therefore, in the future, we will comprehensively explore the mechanism of ROS production and apoptosis induced by CAP in T cells. We will not only use an acute rejection mouse model of organ transplantation but also attempt to establish a tumor-bearing animal model simultaneously bearing an organ allograft, to confirm that CAP, which targets HSP90AB1 to induce apoptosis, has both immunosuppressive and anticancer effects. In brief, the results of this study provide a novel understanding of CAP-induced T cell apoptosis and lay the experimental foundation for further exploring the immunosuppressive effect of CAP, enriching the treatment strategies of transplant oncology and filling the gap left by the lack of pyrimidine-based immunosuppressive agents in the field of organ transplantation.
Figure 8
Scheme summarizing the apoptosis of T cells induced by CAP via the HSP90AB1/Akt/SMARCC1/AP-1/ROS axis. After oral administration, CAP is converted into 5′DFCR and 5′DFUR by CES and CDA in the liver. In T cells, which express TP, 5′DFUR is finally converted into 5-FU; CAP reduces HSP90AB1, Akt, SMARCC1, c-Fos, c-Jun, GCLC, GCLM, and HO-1 expression, reduces the GSH level, increases the ROS level, and finally induces apoptosis.
---
*Source: 1012509-2022-03-24.xml*
# Shake Table Study on the Efficiency of Seismic Base Isolation Using Natural Stone Pebbles
**Authors:** Ivan Banović; Jure Radnić; Nikola Grgić
**Journal:** Advances in Materials Science and Engineering
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1012527
---
## Abstract
The results of a shake table study of the efficiency of seismic base isolation using a layer of natural stone pebbles are presented. Models of stiff and medium-stiff buildings were tested. Case studies were conducted with the foundation of the model on a rigid base and on four different layers of pebbles (thin and thick layers with small and large pebbles). Four different horizontal accelerograms were applied, and the characteristic displacements, accelerations, and strains were measured. Strains/stresses of the tested models remained in the elastic range. It was concluded that the effectiveness of the stone pebble layer under the foundation, i.e., the reduction in the seismic forces and stresses in the structure compared to the classical foundation solution, significantly depends on the type of applied excitation and depends relatively little on the layer thickness and pebble fraction. The results of the study showed that a layer of pebbles can significantly reduce the peak acceleration and strains/stresses of the model, with acceptable displacements. Further research is expected to confirm the effectiveness of this low-cost and low-tech seismic base isolation and to pave the way to its practical application.
---
## Body
## 1. Introduction
Modern science in recent decades has explored numerous solutions to reduce the seismic forces on buildings, aiming to improve safety during earthquakes and to provide more rational solutions. Some of these aseismic solutions are quite simple and rational (e.g., different variants of elastomeric bearings) and have found applications in the construction of bridges and important buildings. Unfortunately, a large number of the devices for reducing the seismic forces on structures and for controlling their displacements in an earthquake remain complex and expensive, and their practical application remains rare. To enable widespread application of a solution for seismic isolation, especially in less-developed countries, it should be simple and based on low technology.
Solutions involving the application of a layer of appropriate materials under the foundation to reduce seismic forces on buildings, which are expected to be efficient and rational for use in so-called low-cost buildings, are the starting point of our study. Such a low-cost and low-technology method could be widely used in the seismic isolation of low-rise buildings around the world. The research results of one such seismic isolation method are presented in this paper.
There are indications that in ancient history, builders used layers of different materials to increase the seismic resistance of buildings. Contemporary researchers are exploring this ancient approach to find appropriate solutions that enable the replacement of sophisticated seismic isolation devices in many buildings with simple methods. Currently, to the authors' knowledge, there are very few studies related to the use of natural materials for seismic base isolation of buildings.
A concept of interposing an artificial soil layer between the superstructure and the foundation soil was examined by Doudoumis et al. [1]. Extended investigation of the utilization of a smooth synthetic liner placed within the soil deposit can be found in [2, 3]. Xiao et al. [4] tested five potential isolation materials to characterize their frictional features by both semidynamic and shake table experiments. The materials were sand, Lightning Ridge pebble, polypropylene sheet, PVC sheet, and polythene membrane. A series of numerical simulations and a parametric study on seismic base isolation using rubber-soil mixtures can be found in [5]. Radnić et al. [6, 7] found from shake table tests that a thin layer of plain sand under the foundation can reduce the seismic forces on a cantilever concrete column by over 10%. Xiong and Li [8] analyzed seismic base isolation using rubber-soil mixtures (RSMs) based on shake table tests and a parametric numerical study in [9]. The effectiveness of utilizing a rubber-sand mixture (RSM) in the foundation soil of different moment-resisting frame (MRF) typologies was assessed through numerical simulations in [10]. The results highlighted the beneficial effects of the use of RSM as a foundation layer on the structures' response under dynamic loading, particularly for mid- and high-rise buildings, leading to reductions in the base shear and maximum interstory drift of up to 40% and 30%, respectively, in comparison with the clean sand profile. Panjamani et al. [11] obtained similar results in terms of acceleration and interstory drift reduction; at different floor levels, with the use of RSM, the reduction can be approximately 40 to 50%. Bandyopadhyay et al.
[12] found from shake table tests that a composite consisting of sand and 50% shredded rubber tire placed under the foundation was the most promising low-cost base isolator. Patil et al. [13] found encouraging results regarding the efficiency of seismic base isolation using river sand, based on experimental and analytical work. Nanda et al. [14–17] conducted experimental studies based on shake table tests by providing geotextiles and a smooth marble frictional base isolation system at the plinth level of a brick masonry building. A 65% reduction in absolute response acceleration at the roof level was obtained in comparison with the response of the fixed-base structure. Further work on pure-friction base isolation systems can be found in [18, 19].
This paper presents the results of a shake table study regarding the efficiency of seismic base isolation using natural stone pebbles below the foundation for the reduction in seismic forces on structures, with the aim that such a solution finds practical application in the construction of low-cost buildings and smaller bridges in seismically active regions. Testing was performed on models of stiff and medium-stiff buildings. Four different accelerograms were applied, and stresses of the models remained in the elastic range. First, a model with the foundation directly on a rigid base (shake table) was tested, and then a model with a layer of stone pebbles under the foundation (with varied layer thickness and pebble fraction) was tested. Characteristic displacements, accelerations, and strains were measured. The study results are presented and discussed, and the main conclusions of the research are given at the end of the paper. However, further research on some important effects that were not considered in this study is required to achieve even more reliable conclusions regarding the efficiency and rationality of the considered concept of seismic isolation.
## 2. Layer of Natural Stone Pebbles below the Foundation
Stone pebbles are a natural material created from larger pieces of stone under the long-lasting action of rivers and the sea. In this process, the sharp parts of the stone are rounded and the weak parts fall off; as a result, only solid, smooth, rounded pieces of stone (stone pebbles) remain. In this study, stone pebbles from a riverbed were used. The pebbles are mainly limestone and partly granite. In the conducted tests, the following two fractions of pebbles were used (Figure 1): 4–8 mm (i.e., small pebbles) and 16–32 mm (i.e., large pebbles). The average compressive strength of the pebbles was approximately 80 MPa, and the moisture content was approximately 10%. It is assumed that a pebble layer thickness of approximately 0.3 to 1.0 m could effectively reduce the seismic forces on the building while remaining rational. A thicker layer is probably more efficient but requires deeper excavation and a taller embankment, i.e., higher costs. In the conducted tests, the following two layer thicknesses were used (Figure 2): d = 0.3 m (thin layer) and d = 0.6 m (thick layer). Layers were formed within a frame with a plan size of 2.5 m × 2.5 m, which was fixed to the shake table. The deformation conditions of the layer within the frame were intended to be similar to those that the layer would have under the foundation of a real building. Although a reduced model of the building was used, the layer thickness was used in real size because the reduced building model has the same dynamic characteristics (periods of free oscillations) as the target full-scale building. The layers were formed in sublayers with a thickness of 0.10 m, with static compaction and dynamic compaction using the shake table. The average compaction modulus at the top of the layer was approximately MS = 30 MPa.
Figure 1
Used fractions of pebbles. (a) 4–8 mm. (b) 16–32 mm.
Figure 2
Used thicknesses of pebble layer.
## 3. Adopted Building Models
Seismic forces on a structure significantly depend on its dynamic characteristics, i.e., on the structure's stiffness and weight. The dynamic characteristics of a building are well described by its periods and forms of free oscillations. According to [20], for a type 1 spectrum and ground type A, the spectral acceleration Se of an elastic single-degree-of-freedom (SDOF) system consisting of a cantilever column with a mass on its top is defined according to the fundamental period of free oscillation T (Figure 3). Real buildings have a wide stiffness range, from very stiff to very soft, i.e., a wide spectrum of T.
Figure 3
Seismic response spectra according to [20], for a type 1 spectrum and ground type A.
Instead of a small-scale model of a real building, which results in a series of problems and doubts, a model (a cantilever column with a mass on top—an SDOF system) that has the same fundamental period T as a real building is adopted in this study. Thus, this model well represents the dynamic characteristics of the real building. Two models of buildings shown in Figure 4 were tested: the MSB model with T = 0.05 s, which represents stiff buildings, and the MSSB model with T = 0.6 s, which represents medium-stiff buildings (Figure 3). The adopted models include a foundation because the behavior of real buildings in an earthquake depends significantly on their foundations, i.e., on the soil-structure interaction. The calculation of seismic forces based on an SDOF system starts from the assumption that there are no displacements or rotations of the column bottom, i.e., no displacement or rotation of the foundation. This study takes these effects into consideration.
Figure 4
Considered building models. (a) MSB (T = 0.05 s). (b) MSSB (T = 0.60 s).
The same foundation and mass on the top of the column were adopted in both models, with different column heights and cross-sectional dimensions. The foundation and the mass at the top of the column (m = 1000 kg) are made from concrete (cube strength of 46 MPa), and the column is a square steel tube with a uniaxial tensile strength of 355 MPa. The foundation is heavily reinforced and practically rigid. In the conducted experimental tests, relatively small plan dimensions of the foundation were adopted; however, they are the same for the foundation supported on the rigid base and on the pebble layers. In further research, it is planned to vary the plan dimensions of the foundation. In the adopted steel columns, stresses remained in the elastic range for all performed tests. Namely, the starting premise was that nonlinearity should not appear anywhere in the structure (column and foundation) in any test, i.e., that all nonlinearity and dissipation of seismic energy occur in the pebble layer and at the layer-foundation interface. Thus, the intention was to exclude the influence of nonlinearity in the construction material, i.e., the dissipation of seismic energy through plastification and damage of the construction material, on the results regarding the aseismic efficiency of the tested pebble layer.
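As a side note for reproducibility, the relation between a target fundamental period and the required SDOF column stiffness, together with the shape of the elastic response spectrum referred to above, can be sketched in a few lines. This is a minimal illustration, not the authors' code; the soil factor and corner periods (S = 1.0, TB = 0.15 s, TC = 0.4 s, TD = 2.0 s) are the standard Eurocode 8 values for a type 1 spectrum on ground type A and are assumptions here.

```python
import math

def required_stiffness(mass_kg: float, period_s: float) -> float:
    """Stiffness k [N/m] of an SDOF oscillator with fundamental
    period T = 2*pi*sqrt(m/k)."""
    return 4.0 * math.pi**2 * mass_kg / period_s**2

def ec8_type1_ground_a_se(T: float, ag: float, eta: float = 1.0) -> float:
    """Elastic spectral acceleration Se(T) for a Eurocode 8 type 1
    spectrum on ground type A (assumed S=1.0, TB=0.15 s, TC=0.4 s,
    TD=2.0 s; eta=1.0 corresponds to 5% damping)."""
    S, TB, TC, TD = 1.0, 0.15, 0.4, 2.0
    if T <= TB:
        return ag * S * (1.0 + T / TB * (eta * 2.5 - 1.0))
    if T <= TC:
        return ag * S * eta * 2.5
    if T <= TD:
        return ag * S * eta * 2.5 * TC / T
    return ag * S * eta * 2.5 * TC * TD / T**2

m = 1000.0  # mass at the column top [kg], as in the tested models
for T in (0.05, 0.60):  # MSB and MSSB fundamental periods [s]
    k = required_stiffness(m, T)
    se = ec8_type1_ground_a_se(T, ag=0.3 * 9.81)  # ag,max = 0.3 g (MSB)
    print(f"T = {T:.2f} s -> k ~ {k / 1e6:.2f} MN/m, Se ~ {se:.2f} m/s^2")
```

The stiff MSB model (T = 0.05 s) thus needs a column roughly 150 times stiffer than the medium-stiff MSSB model (T = 0.6 s) for the same 1000 kg top mass.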
## 4. Tested Samples
Ten different samples were experimentally tested (Figure 5) under four different types of dynamic excitation produced by the shake table. First, the MSB and MSSB models were tested supported on a rigid base Pr (Figure 5(a)). A concrete layer was placed and fixed on the top of the shake table to simulate the usual subconcrete under the foundation of a real building. This situation approximates real buildings with a classic foundation without seismic base isolation. The horizontal displacement of the foundation in relation to the base (shake table) is prevented, while rocking and uplifting of the foundation are allowed. Next, the MSB and MSSB models supported on different layers of pebbles (Pp1 to Pp4) were tested (Figure 5(b)). The layer thickness d (0.3 m and 0.6 m) and the pebble fraction Φ (4 to 8 mm and 16 to 32 mm) were varied. The pebble layer was returned to its initial condition after each test, i.e., recompacted to the required compaction modulus and leveled at the layer top. The same shake table acceleration was adopted for the model supported on the rigid base and on the pebble layers. It is assumed that the real earthquake acceleration at the top of natural solid ground is the same in both cases.
Figure 5
Tested samples. (a) Rigid base (Pr). (b) Pebble layers.
## 5. Dynamic Excitations
The models of buildings with the considered variants of foundation support (Figure 5) were exposed to horizontal accelerations of the shake table in the direction of the larger dimension of the foundation, using the accelerograms shown in Figure 6. The maximum acceleration ag,max of each accelerogram is scaled to 0.3 g and 0.2 g for the MSB and MSSB models, respectively. An artificial accelerogram (AA), shown in Figure 6(a), was created to match the elastic response spectra according to [20]. The horizontal N-S component of the Petrovac earthquake (Montenegro) [21] is shown in Figure 6(b) (AP), the horizontal N-S component of the Ston earthquake (Croatia) [21] is shown in Figure 6(c) (AS), and the horizontal N-S component of the Banja Luka earthquake (BiH) [21] is shown in Figure 6(d) (ABL). Elastic response spectra of the adopted accelerograms are shown in Figure 7. It is difficult to predict which applied accelerogram will be most unfavorable for each tested sample in Figure 5 because of the possible occurrence of nonlinearities in the system. The adopted accelerograms cover a wide spectrum of potential earthquake types. Namely, the artificial accelerogram (AA) is characterized by long-lasting action, a moderate predominant period, large spectral displacements, and high earthquake input energy into the structure. Compared to AA, the Petrovac accelerogram (AP) has similar characteristics, a slightly shorter duration, and a longer predominant period. The Ston accelerogram (AS) and the B. Luka accelerogram (ABL) are characterized by a short impact action with a short predominant period; namely, AS and ABL represent so-called impact earthquakes.
Figure 6
Applied horizontal base accelerations (ag,max scaled to 0.2 g for MSSB and 0.3 g for MSB). (a) Artificial accelerogram (AA). (b) N-S accelerogram of Petrovac earthquake (AP). (c) N-S accelerogram of Ston earthquake (AS). (d) N-S accelerogram of B. Luka earthquake (ABL).
Figure 7
Elastic response spectra for applied accelerograms. (a) Spectral acceleration. (b) Spectral velocity. (c) Spectral displacement.
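Elastic response spectra such as those in Figure 7 are obtained by integrating the equation of motion of a damped elastic SDOF oscillator over a range of periods. A minimal sketch of this computation is given below, using the standard Newmark average-acceleration method; the synthetic input record is a stand-in (in practice, the digitized N-S components of the Petrovac, Ston, and B. Luka records would be loaded), and only the scaling to the study's target PGA of 0.3 g (MSB) or 0.2 g (MSSB) reflects the procedure described above.

```python
import numpy as np

def pseudo_sa(ag, dt, T, zeta=0.05):
    """Pseudo-spectral acceleration Sa = w^2 * max|u| of an elastic SDOF
    oscillator (period T, damping ratio zeta) under base acceleration
    ag [m/s^2], integrated with the Newmark average-acceleration method."""
    w = 2.0 * np.pi / T
    m, c, k = 1.0, 2.0 * zeta * w, w * w          # per unit mass
    beta, gamma = 0.25, 0.5                        # average acceleration
    u, v, a = 0.0, 0.0, -ag[0]                     # m*a = p - c*v - k*u, p = -m*ag
    umax = 0.0
    keff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    for p in -ag[1:]:
        peff = (p
                + m * (u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (0.5 * gamma / beta - 1.0) * a))
        un = peff / keff
        vn = (gamma / (beta * dt) * (un - u) + (1.0 - gamma / beta) * v
              + dt * (1.0 - 0.5 * gamma / beta) * a)
        a = (un - u) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        u, v = un, vn
        umax = max(umax, abs(u))
    return w * w * umax

dt = 0.01
t = np.arange(0.0, 20.0, dt)
record = np.sin(2.0 * np.pi * 2.0 * t) * np.exp(-0.15 * t)   # synthetic stand-in
record *= 0.3 * 9.81 / np.max(np.abs(record))                # scale PGA to 0.3 g
for T in (0.05, 0.60):  # the two model periods
    print(f"T = {T:.2f} s -> Sa ~ {pseudo_sa(record, dt, T):.2f} m/s^2")
```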
## 6. Measured Values
The following values were measured on each tested sample (Figure 8): horizontal displacement of the mass center at the column top (u1), horizontal displacement at the foundation top (u2), vertical displacement at the right edge (v1) and at the left edge (v2) of the foundation, vertical strain on the bottom of the steel column at the right side (ε1) and at the left side (ε2), and horizontal acceleration of the mass center at the column top (a).
Figure 8
Measured values.
## 7. Testing and Measuring Equipment
Tests were performed using a shake table at the University of Split, Faculty of Civil Engineering, Architecture and Geodesy (Croatia). Data collection from all sensors was performed using the QuantumX MX840A system (HBM). The displacements were measured using analog displacement sensors, type PB-25-S10-N0S-10C (Uni Measure). The strains were measured using strain gauges, type 6/120 LY11 (HBM). The accelerations were measured by a piezoelectric low-frequency accelerometer, type 4610 (MS). Some photos of the experimental setup before testing are shown in Figure 9.
Figure 9
Photos of experimental setup before testing. (a) MSB on rigid base. (b) MSB on layerPp2.
## 8. Experimental Results
The test results are shown in graphic form to ensure that the presentation is concise and clear, even with the reduced size of the drawings. The results are shown separately for some of the measured values, for the MSB and MSSB models. Each drawing shows the measured values separately for each applied accelerogram, for all five considered substrate types: Pr—rigid base; Pp1—pebble layer (d = 0.3 m, Φ = 16 to 32 mm); Pp2—pebble layer (d = 0.6 m, Φ = 16 to 32 mm); Pp3—pebble layer (d = 0.3 m, Φ = 4 to 8 mm); and Pp4—pebble layer (d = 0.6 m, Φ = 4 to 8 mm); see Figure 5.
In order to investigate the impact of some possible negative factors on the conclusions of the study, preliminary research was carried out. Namely, to investigate the impact of subsequent earthquakes on the efficiency of the considered seismic base isolation, the tested structure was exposed to a set of six repeated base accelerations, without renewing the pebble layer. Testing was performed with AA and AS, for MSB on layer Pp1 (Figure 5) and MSSB on layer Pp4. Compared to the first excitation, repeated excitations produced up to 8.6% higher strain/stress at the bottom of the steel column and up to 196% larger irreversible horizontal displacement at the foundation top. This can be considered acceptable because it is unlikely that a building would be exposed to a large number of medium to severe earthquakes that would cause building displacements in the same direction. To prevent a possible similar scenario, the aseismic layer can simply be made sufficiently wider than the foundation.
Tests with repeated high base accelerations that could cause nonlinearities in the model were not performed. The pebble layer efficiency under repeated base accelerations is explained by the fact that a layer of stone pebbles of the same grain size is very difficult to compact. The influence of the compaction of the Pp1 and Pp4 layers was also tested with AA and AS; the average compaction moduli at the top of the layers were MS = 30 MPa and MS = 60 MPa, respectively. The maximum strain/stress at the bottom of the steel column for MS = 60 MPa was 4.9% higher than that for MS = 30 MPa. This can be considered acceptable.
The foregoing suggests that the proposed seismic base isolation can remain effective throughout the lifetime of the building and does not need to be renewed.
### 8.1. Model of Stiff Building MSB
The horizontal acceleration of the mass center at the column top (a) is shown in Figure 10. It is found that the rigid base produced the maximum acceleration for all considered accelerograms and that the maximum accelerations on the pebble layers were similar. Compared to the rigid base, the thin layer with large pebbles produced the lowest reduction in acceleration. For ag,max = 3.0 m·s−2, the highest acceleration on the rigid base was produced by AA (approx. 11.6 m·s−2), whereas the lowest was produced by ABL (approx. 5.8 m·s−2). The maximum acceleration with a pebble layer for AA and ABL was approx. 5.7 m·s−2 and approx. 4.1 m·s−2, respectively.
Figure 10
Horizontal acceleration of the mass center at the column top (a) for MSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The largest horizontal displacement of the mass center at the column top (u1) for all considered accelerograms was produced with the rigid base, and the maximum displacements on all pebble layers were similar (Figure 11). Compared to the rigid base, the slightest reduction in displacement was produced using the thin layer with large pebbles. For the rigid base, AA produced the largest displacement of approximately 150 mm, whereas ABL produced the smallest displacement of approximately 12 mm. The largest displacement on the pebble layer was produced by AP (approx. 80 mm), whereas the smallest was produced by ABL (approx. 3.5 mm).
Figure 11
Horizontal displacement of the mass center at the column top (u1). (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The vertical strain on the right bottom side of the steel column (ε1) is presented in Figure 12. Note that the model on the rigid base had the maximum strain for all considered accelerograms and that the maximum strains for the model on the pebble layers were similar. Compared to the rigid base, the slightest reduction in strain was again produced by the thin layer with large pebbles. The largest strain on the rigid base was caused by AP (approx. 0.059‰), whereas the smallest was caused by ABL (approx. 0.018‰). The largest strain on the pebble layer was caused by AP (approx. 0.028‰), whereas the smallest was caused by ABL (approx. 0.018‰). All strains (stresses) were within the elastic range of the steel (≤1.7‰).
Figure 12
Vertical strain on the right bottom side of the steel column (ε1) for MSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The horizontal displacement at the foundation top (u2) is prevented for the rigid base (Figure 13), i.e., the bottom of the foundation is fixed to the base (shake table). The largest displacement for the pebble layer was produced by AP (approx. 18.5 mm), whereas the smallest was produced by ABL (approx. 1.2 mm). Thicker layers resulted in larger horizontal displacements. The largest permanent displacement for the pebble layer was also produced by AP (approx. 6.0 mm), which is the result of the foundation slipping at the pebble layer top. Thus, the ratio of the largest permanent foundation displacement to the peak foundation displacement for AP is approximately 6 mm : 18.5 mm, or about 1 : 3.
Figure 13
Horizontal displacement at the foundation top (u2) for MSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The largest uplifts of the foundation (Figure 14) were produced for the models on the rigid base: approximately 64 mm for AA and approximately 4.4 mm for ABL. The largest uplift of the foundation on a pebble layer was produced by AP (approx. 35 mm), whereas the smallest was produced by ABL (approx. 1.8 mm). The largest permanent settlement at the left edge of the foundation, approximately 7 mm, was produced by AP (thin layer with large pebbles).
Figure 14
Vertical displacement at the left edge of the foundation (v2) for MSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
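As a quick sanity check on the quoted elastic limit, the yield strain of the S355 steel column follows from its yield strength and the elastic modulus of structural steel (E ≈ 210 GPa is an assumed textbook value, not stated in the paper):

$$\varepsilon_y = \frac{f_y}{E_s} = \frac{355\ \mathrm{MPa}}{210\,000\ \mathrm{MPa}} \approx 1.69\ \text{‰} \approx 1.7\ \text{‰},$$

so the strains reported here (at most approx. 0.059‰) and even those of the MSSB model below (up to approx. 0.87‰) remain well below yield.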
### 8.2. Model of Medium-Stiff Building MSSB
The horizontal acceleration of the mass center at the column top (a) is shown in Figure 15. It can be seen that the rigid base produced the maximum acceleration for all applied accelerograms and that the maximum accelerations on the pebble layers were similar (analogous to the MSB model). For ag,max = 2.0 m·s−2, the highest acceleration for the model on the rigid base was produced by AA and AP (approx. 7.5 m·s−2), whereas the lowest was produced by ABL (approx. 2.8 m·s−2). The maximum acceleration with a pebble layer for AA and ABL was approximately 4.4 m·s−2 and approximately 2.4 m·s−2, respectively.
Figure 15
Horizontal acceleration of the mass center at the column top (a) for MSSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The largest horizontal displacements of the mass center at the column top (u1) were also obtained for the rigid base case (Figure 16): AA produced the largest displacement of approximately 170 mm, whereas ABL produced the smallest of approximately 21.5 mm. The largest displacement for the model on the pebble layer was produced by AP (approx. 110 mm), whereas the smallest was produced by ABL (approx. 21.5 mm). The largest permanent displacement on the pebble layer was for AA (approx. 25 mm), which is the result of the foundation slipping at the pebble layer top and foundation rotation on the vertically deformable substrate.
Figure 16
Horizontal displacement of the mass center at the column top (u1) for MSSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The vertical strain on the right bottom side of the steel column (ε1) is presented in Figure 17. The maximum strain for the rigid base was approximately equal for AA and AP (approx. 0.82‰), i.e., within the elastic behavior of the steel. The minimum strain was for ABL (approx. 0.33‰). Compared to the MSB model, the MSSB model had significantly greater stresses/strains. For the pebble layer, AA produced a maximum strain of approximately 0.45‰.
Figure 17
Vertical strain on the right bottom side of the steel column (ε1) for MSSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The largest displacement at the foundation top (u2) for the pebble layer (Figure 18) was produced by AA (approx. 13 mm), whereas the smallest was produced by ABL (approx. 1.6 mm). The largest permanent displacement (u2) for the pebble layer was for AA (approx. 7 mm) with the thick layer of large pebbles, as a result of the foundation sliding on the pebble layer top.
Figure 18
Horizontal displacement at the foundation top (u2) for MSSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
The largest uplift at the left edge of the foundation (v2) for the rigid base (Figure 19) was produced by AA (approx. 37 mm), whereas the smallest was produced by ABL (approx. 1.4 mm). The largest uplift on the pebble layer was produced by AP and AA (approx. 14 mm). The largest permanent settlement at the left edge of the foundation for the pebble layer, approximately 5 mm, was for AA (thick layer with large pebbles). The consequence of the different permanent vertical settlements of the left and right edges of the foundation is the rotation of the model and the occurrence of an additional permanent horizontal displacement u1.
Figure 19
Foundation vertical displacement at the left edge (v2) for MSSB. (a) Artificial accelerogram (AA). (b) Accelerogram Petrovac (AP). (c) Accelerogram Ston (AS). (d) Accelerogram B. Luka (ABL).
### 8.3. Comparison of Experimental Results for Models MSB and MSSB
Table 1 presents the maximum values of some of the measured experimental results for the MSB and MSSB building models on the rigid base and on the pebble layer, as well as the ratios of these values. Note that the efficiency of the pebble layer depends on the stiffness of the building model and the type of accelerogram (earthquake characteristics). The values in Table 1 are also shown in Figures 20–23, which provide a better visual insight into the ratio of the measured maximum values on the rigid base and the pebble layer, i.e., a better insight into the effectiveness of the pebble layer compared to the rigid base.
Table 1
Maximum values of some measured experimental results and their ratios.
Columns: horizontal displacement of the block center (u1), vertical uplift of the foundation (v1, v2), acceleration of the block center (a), and strain at the bottom of the column (ε1, ε2).

| Applied excitation | Building model | u1 (mm) | u1* (mm) | u1*/u1 | v1, v2 (mm) | v1*, v2* (mm) | (v1*, v2*)/(v1, v2) | a (m·s−2) | a* (m·s−2) | a*/a | ε1, ε2 (‰) | ε1*, ε2* (‰) | (ε1*, ε2*)/(ε1, ε2) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Artificial accelerogram (AA) | MSB | 150 | 45 | 0.30 | 64 | 15 | 0.23 | 11.6 | 5.5 | 0.47 | 0.055 | 0.029 | 0.53 |
| Artificial accelerogram (AA) | MSSB | 173 | 107 | 0.62 | 39 | 17 | 0.44 | 7.6 | 4.4 | 0.58 | 0.850 | 0.460 | 0.53 |
| Accelerogram Petrovac (AP) | MSB | 120 | 85 | 0.71 | 51 | 35 | 0.69 | 12.1 | 5.7 | 0.47 | 0.058 | 0.027 | 0.47 |
| Accelerogram Petrovac (AP) | MSSB | 142 | 80 | 0.56 | 28 | 13.5 | 0.48 | 7.6 | 4.5 | 0.59 | 0.870 | 0.460 | 0.53 |
| Accelerogram Ston (AS) | MSB | 16.2 | 16.5 | 1.02 | 6.8 | 5.5 | 0.81 | 6.5 | 5.2 | 0.80 | 0.031 | 0.023 | 0.74 |
| Accelerogram Ston (AS) | MSSB | 33.6 | 32 | 0.95 | 3.2 | 3.7 | 1.16 | 3.7 | 3.7 | 1.00 | 0.415 | 0.380 | 0.92 |
| Accelerogram B. Luka (ABL) | MSB | 12 | 6.5 | 0.54 | 4.4 | 1.7 | 0.39 | 5.8 | 4.1 | 0.71 | 0.025 | 0.018 | 0.72 |
| Accelerogram B. Luka (ABL) | MSSB | 21 | 14.5 | 0.69 | 1.4 | 2.3 | 1.64 | 2.8 | 2.4 | 0.86 | 0.320 | 0.220 | 0.69 |

u1, v1, v2, a, ε1, and ε2 are the maximum values for the rigid base; u1*, v1*, v2*, a*, ε1*, and ε2* are the maximum values for the pebble layer.
Figure 20
Some maximum measured values for the artificial accelerogram (AA). (a) Model of stiff building (MSB). (b) Model of medium-stiff building (MSSB).
Figure 21
Some maximum measured values for the accelerogram Petrovac (AP). (a) Model of stiff building (MSB). (b) Model of medium-stiff building (MSSB).
Figure 22
Some maximum measured values for the accelerogram Ston (AS). (a) Model of stiff building (MSB). (b) Model of medium-stiff building (MSSB).
Figure 23
Some maximum measured values for the accelerogram B. Luka (ABL). (a) Model of stiff building (MSB). (b) Model of medium-stiff building (MSSB).
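The reduction percentages discussed in the subsections below follow directly from the ratios in Table 1 (reduction = 1 − ratio). A one-line helper makes the conversion explicit; this is an illustrative sketch, with the strain ratios transcribed from Table 1.

```python
# Percent reduction relative to the rigid base, from the Table 1 ratios
# (value on the pebble layer / value on the rigid base).
strain_ratios = {  # (excitation, model) -> epsilon*/epsilon, from Table 1
    ("AA", "MSB"): 0.53, ("AA", "MSSB"): 0.53,
    ("AP", "MSB"): 0.47, ("AP", "MSSB"): 0.53,
    ("AS", "MSB"): 0.74, ("AS", "MSSB"): 0.92,
    ("ABL", "MSB"): 0.72, ("ABL", "MSSB"): 0.69,
}

for (excitation, model), ratio in strain_ratios.items():
    reduction = (1.0 - ratio) * 100.0
    print(f"{excitation:>3} / {model:<4}: strain reduced by {reduction:.0f}%")
```

For example, the AA/MSB ratio of 0.53 corresponds to the 47% strain reduction quoted in Section 8.3.1.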
#### 8.3.1. Artificial Accelerogram (AA)
For the MSB model of the stiff building, compared to the rigid base case, the pebble layer reduced the horizontal displacement of the mass center at the column top by 70% and reduced the uplift of the foundation by 77%. The horizontal acceleration of the mass center at the column top (inertial forces) was reduced by 50%, and the strains/stresses at the bottom of the steel column were reduced by 47%. There is a remarkable similarity between the acceleration of the mass at the column top and the strains at the bottom of the steel column because the strains are the dominant consequence of the inertial force of the mass at the column top.
For the MSSB model of the medium-stiff building, compared to the rigid base case, the pebble layer reduced the horizontal displacement of the mass center at the column top by 38% and reduced the uplift of the foundation by 56%. The horizontal acceleration of the mass center at the column top (inertial forces) was reduced by 42%, and the strains/stresses at the bottom of the steel column were reduced by 47%.
The pebble layer efficiency in terms of strain reduction at the bottom of the steel column is similar for the MSB and MSSB models; in terms of displacement reduction, the MSB model is more favorable. The strains at the bottom of the steel column are several times higher for the MSSB model than for the MSB model. The large horizontal displacements of the mass center at the column top are a consequence of the adopted small dimensions of the foundation.
#### 8.3.2. Accelerogram Petrovac (AP)
For the MSB model of the stiff building, compared to the rigid base case, the pebble layer reduced the horizontal displacement of the mass center at the column top by 29%. The uplift of the foundation was reduced by 31%. The horizontal acceleration of the mass center at the column top and the strain at the bottom of the steel column were reduced by 53%.
For the MSSB model of the medium-stiff building, compared to the rigid base case, the pebble layer reduced the horizontal displacement of the mass center at the column top by 44% and the uplift of the foundation by 52%. The horizontal acceleration of the mass center at the column top (inertial forces) was reduced by 41%, and the strains/stresses at the bottom of the steel column were reduced by 47%.
In terms of strain/stress reduction at the bottom of the steel column, the efficiency of the pebble layer is similar for the MSB and MSSB models. Moreover, the strain reduction at the bottom of the steel column is similar for AA and AP.
#### 8.3.3. Accelerogram Ston (AS)
Compared to AP and AA, AS produced several times smaller horizontal displacements of the mass center at the column top. However, no such difference was found for the strains/stresses at the bottom of the steel column. AS produced low values of displacement and stress/strain reduction because this excitation did not produce strong oscillations of the pebble layer. For the MSB model, compared to the rigid base case, the pebble layer reduced the strains at the bottom of the steel column by 26%. For the MSSB model, the reduction was only 8%. Obviously, the pebble layer showed significantly lower efficiency for AS than for AA and AP; however, the strains/stresses generated in the models by AS were also significantly lower.
#### 8.3.4. Accelerogram B. Luka (ABL)
In general, the comments in Section 8.3.3 regarding AS are also valid here. Compared to AS, the efficiency of the pebble layer in terms of strain reduction at the bottom of the steel column is higher for ABL. Compared to the rigid base case, the pebble layer reduced the strain at the bottom of the steel column by 28% and 31% for the MSB and MSSB models, respectively.
## 9. Conclusions
Based on the experimental results for the behavior of two tested building models with fundamental periods T = 0.05 s (the so-called model of stiff building, MSB) and T = 0.6 s (the so-called model of medium-stiff building, MSSB), supported on a rigid base and on pebble layers with thicknesses of 0.3 m (the so-called thin layer) and 0.6 m (the so-called thick layer) and pebble fractions of 4–8 mm (the so-called small pebbles) and 16–32 mm (the so-called large pebbles), exposed to four different horizontal accelerograms (artificial accelerogram—AA, accelerogram Petrovac—AP, accelerogram Ston—AS, and accelerogram B. Luka—ABL), with model stresses in the elastic range, the following conclusions can be drawn:(i)
In relation to the behavior of the building models with the foundation on a rigid base, the use of a natural stone pebble layer under the foundation resulted in a much more favorable response to seismic base accelerations.(ii)
The strain/stress reduction in the column above the foundation for AA, AP, AS, and ABL was 47%, 53%, 26%, and 28% for the MSB model and 47%, 47%, 8%, and 31%, respectively, for the MSSB model. Note that all stresses were in the elastic area, without material nonlinearity of the structure.(iii)
The reduction in the horizontal displacement of the mass center at the column top for AA, AP, AS, and ABL was 70%, 29%, 0%, and 46% for MSB and 38%, 44%, 5%, and 31%, respectively, for MSSB.(iv)
The efficiency of the pebble layer for MSSB was almost equal to that for MSB.(v)
The pebble layer efficiency in the performed tests was relatively independent of the thickness (0.3 m and 0.6 m) and the pebble fraction (4–8 mm and 16–32 mm).(vi)
According to the test results, a small permanent horizontal displacement and vertical settlement (rotation) of the foundation of a real building on the considered pebble layer are to be expected.(vii)
Based on the results of the conducted experimental research, it can be expected that a stone pebble layer below the foundation of a real building is a sufficiently efficient low-technology seismic base isolation method, which is particularly useful for low-cost buildings in less-developed countries. However, firm conclusions require further research.(viii)
Although the above conclusions are based on the results of tests on small-scale models, we believe that they are also applicable to buildings in practice. This is explained by the fact that the small-scale models had the same fundamental free oscillation period as full-scale buildings and that only the relative effects of the considered parameters were tested on the small-scale models.(ix)
It should be noted that the proposed concept of seismic base isolation would not be efficient in earthquakes where the vertical acceleration component is dominant in relation to the horizontal component.
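For readers who wish to verify the percentages in conclusions (ii) and (iii), the short script below recomputes them from the maximum values reported in Table 1. It is a minimal sketch: the dictionary of measured maxima is transcribed from Table 1, and the variable and function names are introduced here for illustration only.

```python
# Recompute the reductions in conclusions (ii) and (iii) from Table 1.
# Each entry holds (rigid-base maximum, pebble-layer maximum) for the
# column-top displacement u1 [mm] and the column-bottom strain eps [permille].

table1 = {
    # (excitation, model): {"u1": (u1, u1*), "eps": (eps, eps*)}
    ("AA",  "MSB"):  {"u1": (150, 45),    "eps": (0.055, 0.029)},
    ("AA",  "MSSB"): {"u1": (173, 107),   "eps": (0.850, 0.460)},
    ("AP",  "MSB"):  {"u1": (120, 85),    "eps": (0.058, 0.027)},
    ("AP",  "MSSB"): {"u1": (142, 80),    "eps": (0.870, 0.460)},
    ("AS",  "MSB"):  {"u1": (16.2, 16.5), "eps": (0.031, 0.023)},
    ("AS",  "MSSB"): {"u1": (33.6, 32),   "eps": (0.415, 0.380)},
    ("ABL", "MSB"):  {"u1": (12, 6.5),    "eps": (0.025, 0.018)},
    ("ABL", "MSSB"): {"u1": (21, 14.5),   "eps": (0.320, 0.220)},
}

def reduction(rigid: float, pebble: float) -> float:
    """Percentage reduction of the pebble-layer maximum vs. the rigid base."""
    return (1.0 - pebble / rigid) * 100.0

for (excitation, model), values in table1.items():
    du1 = reduction(*values["u1"])
    deps = reduction(*values["eps"])
    print(f"{excitation:>3} {model:<4}  u1 reduction: {du1:5.1f}%   "
          f"strain reduction: {deps:5.1f}%")
```

Running it reproduces items (ii) and (iii) up to the rounding used in the paper; note that for AS on the MSB the displacement on the pebble layer was in fact marginally larger than on the rigid base (16.5 mm vs. 16.2 mm), which is reported above as a 0% reduction.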
---
*Source: 1012527-2018-12-20.xml*
# New Predictor of Organ Failure in Acute Pancreatitis: CD4+ T Lymphocytes and CD19+ B Lymphocytes
**Authors:** Chenyuan Shi; Chaoqun Hou; Xiaole Zhu; Yunpeng Peng; Feng Guo; Kai Zhang; Dongya Huang; Qiang Li; Yi Miao
**Journal:** BioMed Research International
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1012584
---
## Abstract
Objective. Lymphocytes are one of the main effector cells in the inflammatory response of acute pancreatitis (AP). The purpose of the study was to evaluate whether peripheral blood lymphocyte (PBL) subsets at admission change during AP based on clinical outcomes and to explore whether these changes vary by aetiology of AP. Hence, we performed a prospective study to find a predictor in lymphocyte subsets that might allow easier, earlier, and more accurate prediction of clinical outcomes.Methods. Patients with AP were enrolled from December 2017 to June 2018 at the First Affiliated Hospital of Nanjing Medical University. Age, sex, clinical and biochemical parameters, and aetiology of AP were obtained at admission. PBL counts were assessed within 24 hours after admission. Clinical outcomes were observed as endpoints. The areas under the curve (AUCs) of different predictors were calculated using the receiver operating characteristic (ROC) curve.Results. Overall, 133 patients were included. Patients (n=24) with organ failure (OF) had significantly lower CD4+ T lymphocyte levels than those (n=109) with no OF (NOF) (39.60 (33.94-46.13) vs. 32.41 (26.51-38.00), P=0.004). The OF group exhibited significantly higher CD19+ B lymphocyte levels than the NOF group (16.07 (10.67-21.06) vs. 23.78 (17.84-29.45), P=0.001). Of the AP cases, 68.8% were caused by gallstones; 10.1% were attributed to alcohol; 16.5% were due to hyperlipidaemia; and 4.6% had other causes. Across all aetiologies, a lower CD4+ T lymphocyte level was significantly related to OF (P<0.05). However, CD19+ B lymphocytes were significant only in gallstone pancreatitis (P<0.05). The ROC curve results showed that the AUC values of CD4+ T lymphocytes, CD19+ B lymphocytes, and the combination of CD4+ T and CD19+ B lymphocytes were similar to those of traditional scoring systems, such as APACHE II and Ranson.Conclusions. CD4+ T and CD19+ B lymphocytes during the early phase of AP can predict OF.
---
## Body
## 1. Introduction
Acute pancreatitis (AP) is one of the most common diseases of the digestive system. Outside of China, AP is mostly caused by excessive alcohol intake, while in China, many cases are caused by gallbladder or biliary stones [1]. Currently, with improving living standards, pancreatitis caused by hyperlipidaemia has also shown a clear upward trend. According to the 2012 Revised Atlanta classification, AP is divided into mild (MAP), moderately severe (MSAP), and severe (SAP) categories [2]. MAP is not often associated with organ failure (OF), so its mortality is often less than 1%. Moderately severe or severe pancreatitis is often associated with transient or persistent organ failure, resulting in an increase in mortality of up to 10-30% [3]. Because of the large clinical differences in AP, multiple severity scoring systems have been used to assess AP patients, such as the Acute Physiology and Chronic Health Evaluation II (APACHE II) score, the Bedside Index for Severity in Acute Pancreatitis (BISAP) score, the Ranson score, and the Glasgow-Imrie criteria [4]. However, these scoring systems usually involve many variables that are not readily available. For example, the APACHE II score includes 12 clinical or biochemical parameters, so it is more detailed and its calculation is more complex; the Ranson score requires 11 variables collected at admission and 48 hours after admission; and the Glasgow scoring system is derived from nine variables and requires 48 hours to complete [5, 6]. If the occurrence and development of OF in AP could be predicted early, therapy could be initiated and targeted as soon as possible to reduce complications. The development of OF in patients can be assessed with the modified Marshall scoring system [7].

The immune system performs immune surveillance, defence, and regulation. It consists of immune organs, immune cells, and immunologically active substances and is divided into innate immunity (also known as nonspecific immunity) and adaptive immunity (also known as specific immunity), which is further divided into humoural immunity and cellular immunity [8]. Evidence suggests that there is an important relationship between the innate immune component of the pathogenesis of AP and the severity of the disease [9–11]. Neutrophils and macrophages serve as the first line of defence of the immune system, and T and B lymphocytes also play a central role in the immune response. Many studies have reported the different inflammatory mediators produced in the early stage of AP and their effects on the body. However, the means by which the activation of lymphocyte subsets in the early stage of AP modulates the balance between proinflammatory and anti-inflammatory immune responses are still poorly understood. When immune function declines, the body is more prone to infectious complications and OF, and others have suggested that a reduction in CD4+ T lymphocytes is informative in a variety of inflammatory and immune diseases, such as abdominal compartment syndrome in AP patients [12]. However, these studies have some limitations; for example, the diagnosis of abdominal compartment syndrome in AP was retrospective.
Thus, we first observed whether peripheral blood lymphocyte subsets (i.e., CD3+ T lymphocytes, CD4+ T lymphocytes, CD8+ cytotoxic T lymphocytes, CD16+CD56+ natural killer cells, CD19+ B lymphocytes, and the CD4+/CD8+ T lymphocyte ratio) at admission changed in the early stage of AP, in order to study the occurrence of AP. Second, we hypothesized that the activation of lymphocyte subsets is associated with different outcomes in AP patients, in order to track the development of AP. Therefore, we conducted this prospective observational study.
## 2. Materials and Methods
### 2.1. Patients
We selected 133 AP patients who were admitted to the Pancreas Center at the First Affiliated Hospital of Nanjing Medical University from December 2017 to June 2018. We diagnosed AP according to the Revised Atlanta Classification 2012 using the following features: (1) acute episodes of abdominal pain that often radiated to the back; (2) serum amylase and lipase levels at least 3 times greater than normal levels; and (3) imaging findings consistent with AP. Patients who presented at least two of these features were included. Exclusion criteria included any of the following: (1) age less than 18 years or more than 80 years; (2) any surgery performed within 3 days after admission; (3) previous or long-term use of immunosuppressive therapy; (4) congenitally impaired immune function; (5) history of tumour or chronic lung, kidney, or cardiovascular disease; or (6) traumatic or chronic pancreatitis. The study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the First Affiliated Hospital of Nanjing Medical University.
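As an illustrative aside, the two-of-three inclusion rule above can be written as a simple predicate. This is only a sketch; the function and argument names are hypothetical and not taken from the study.

```python
# Hypothetical sketch of the "at least two of three" Revised Atlanta
# inclusion rule quoted above; not code from the study itself.
def meets_atlanta_criteria(typical_pain: bool,
                           enzymes_over_3x_normal: bool,
                           imaging_consistent: bool) -> bool:
    # Python booleans behave as 0/1, so a sum >= 2 means that at
    # least two of the three diagnostic features are present.
    return typical_pain + enzymes_over_3x_normal + imaging_consistent >= 2
```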
### 2.2. Data Collection
All patients with AP were divided into four groups according to aetiology: (1) gallstone pancreatitis, caused by gallbladder or bile duct stones; (2) alcoholic pancreatitis, observed in patients with a history of drinking or recent alcohol intake; (3) hyperlipidaemia pancreatitis, in which the blood triglyceride (TG) value exceeds 11.30 mmol/L or the blood TG is between 5.65 and 11.30 mmol/L with white, opaque serum; and (4) other pancreatitis, for which an aetiology could not be established by medical history, physical examination, laboratory studies, or imaging. Each patient's peripheral venous blood was collected within the first 24 hours after hospital admission. Peripheral blood lymphocyte subsets, including CD3+ T lymphocytes, CD4+ T lymphocytes, CD8+ cytotoxic T lymphocytes, CD16+CD56+ natural killer cells, and CD19+ B lymphocytes, were measured in the hospital laboratory. We recorded demographic data, clinical and biochemical parameters, and outcomes.
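To make the grouping rules concrete, the sketch below encodes them as a simple triage function. The precedence of the checks (stones, then alcohol, then triglycerides) and all names are assumptions for illustration only; the paper does not state how overlapping findings were resolved.

```python
# Hypothetical aetiology triage based on the definitions above.
# The precedence of the checks is an assumption, not the study's rule.
def classify_aetiology(has_biliary_stones: bool,
                       alcohol_history: bool,
                       tg_mmol_l: float,
                       milky_serum: bool) -> str:
    if has_biliary_stones:
        return "biliary"
    if alcohol_history:
        return "alcoholic"
    # Hyperlipidaemia: TG > 11.30 mmol/L, or TG between 5.65 and
    # 11.30 mmol/L together with white, opaque (lactescent) serum.
    if tg_mmol_l > 11.30 or (5.65 <= tg_mmol_l <= 11.30 and milky_serum):
        return "hyperlipidaemia"
    return "other"
```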
### 2.3. Endpoint of Study and Definition of Organ Failure
The primary observational endpoint of the study was OF, which was evaluated with a modified Marshall scoring system. Patients who presented at least one of the following features were classified as having OF: (1) renal failure, defined as a serum creatinine of at least 1.9 mg/dL; (2) cardiovascular failure, defined as a systolic blood pressure of less than 90 mmHg even after fluid replacement; and (3) respiratory failure, defined as a PaO2/FiO2 ratio of less than 300 mmHg [13].
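For clarity, this organ-failure screen can be restated as a single rule, sketched below. The thresholds come from the definitions above; the function shape is hypothetical, and the full modified Marshall score (which grades each organ system from 0 to 4) is deliberately omitted.

```python
# Hypothetical restatement of the OF endpoint from Section 2.3.
# Any one failing organ system is sufficient to classify OF here.
def has_organ_failure(creatinine_mg_dl: float,
                      systolic_bp_mmhg: float,
                      pao2_fio2_mmhg: float) -> bool:
    renal = creatinine_mg_dl >= 1.9            # renal failure
    cardiovascular = systolic_bp_mmhg < 90     # despite fluid replacement
    respiratory = pao2_fio2_mmhg < 300         # PaO2/FiO2 ratio
    return renal or cardiovascular or respiratory
```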
### 2.4. Treatments
According to the British and Chinese Medical Association guidelines for the treatment of AP, all patients received standard treatment, including nutritional support, early fluid resuscitation, target organ treatment, and prophylactic antibiotics [14, 15].
### 2.5. Statistical Analysis
Statistical analysis was performed with SPSS 23.0. Continuous variables with a normal distribution are shown as the mean ± standard deviation (SD), and Student’s t test was used to compare two groups. If variables were nonnormally distributed, data are presented as the median (25th-75th percentile), and a nonparametric Mann-Whitney U test was chosen. Categorical variables were compared using a chi-square test. P values less than 0.05 were considered to indicate significance. In addition, 95% confidence intervals (95% CIs) were obtained. A receiver operating characteristic (ROC) curve was constructed to predict organ failure, and the area under the curve (AUC) was used to analyse the ability of factors to predict OF.
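As a rough illustration of this pipeline, the sketch below reproduces its main steps (a normality-dependent choice between Student's t test and the Mann-Whitney U test, chi-square for categorical data, and AUC for discrimination) in Python, with SciPy and scikit-learn standing in for SPSS 23.0. All data here are synthetic placeholders, not study values.

```python
# Illustrative re-creation of the analysis plan with synthetic data.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
cd4 = rng.normal(38.0, 8.0, 133)          # placeholder CD4+ T cell %
of = rng.integers(0, 2, 133)              # placeholder outcome, 1 = OF

# Choose Student's t test or Mann-Whitney U depending on normality.
grp_of, grp_nof = cd4[of == 1], cd4[of == 0]
if stats.shapiro(cd4).pvalue > 0.05:      # roughly normal distribution
    stat, p = stats.ttest_ind(grp_of, grp_nof)
else:                                     # nonnormal distribution
    stat, p = stats.mannwhitneyu(grp_of, grp_nof)

# Categorical variables (e.g., sex vs. outcome) via chi-square.
contingency = np.array([[66, 43], [13, 11]])   # M/F by NOF/OF, as in Table 2
chi2, p_cat, dof, expected = stats.chi2_contingency(contingency)

# Discriminative ability of a single predictor as an AUC;
# the sign flip encodes that a LOWER CD4+ level predicts OF.
auc = roc_auc_score(of, -cd4)
print(f"p = {p:.3f}, categorical p = {p_cat:.3f}, AUC = {auc:.2f}")
```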
## 3. Results
### 3.1. Peripheral Blood Lymphocyte Subsets of AP Patients
A total of 133 patients were included in this study. No patients were lost to follow-up, and none had incomplete clinical data. The median CD3+ T lymphocyte proportion was 66.00 (56.69-73.19), the CD4+ T lymphocyte proportion was 38.20 (31.19-45.31), the CD8+ cytotoxic T lymphocyte proportion was 21.62 (17.27-26.39), the CD16+CD56+ natural killer cell proportion was 12.07 (8.57-17.78), the CD19+ B lymphocyte proportion was 16.80 (11.35-23.01), and the CD4+/CD8+ ratio was 1.79 (1.25-2.43). Except for CD19+ B lymphocytes, the medians of all peripheral blood lymphocyte subsets were within the normal range (Table 1).

Table 1. Peripheral blood lymphocyte subsets of AP patients.

| | All (n=133) | Normal range |
|---|---|---|
| CD3+ T lymphocytes (%) | 66.00 (56.69-73.19) | 64-76 |
| CD4+ T lymphocytes (%) | 38.20 (31.19-45.31) | 30-40 |
| CD8+ cytotoxic T lymphocytes (%) | 21.62 (17.27-26.39) | 20-30 |
| CD16+CD56+ natural killer cells (%) | 12.07 (8.57-17.78) | 10-20 |
| CD19+ B lymphocytes (%) | 16.80 (11.35-23.01) | 9-14 |
| CD4+/CD8+ | 1.79 (1.25-2.43) | 1-2.5 |

Data are presented as the median (25th-75th percentile).
### 3.2. Basic Characteristics and Peripheral Blood Lymphocyte Subsets in the OF and NOF Groups
Based on the 2012 Revised Atlanta classification, AP was divided into MAP and SAP, and SAP is usually accompanied by OF. Therefore, patients were divided into two subgroups (OF group and NOF group) according to the presence or absence of OF. Baseline patient characteristics, including demographic data, clinical laboratory values at admission, and outcomes, are presented in Table 2. Twenty-four (18%) patients presented pulmonary and/or circulatory and/or renal complications. However, after appropriate treatment, including multiple percutaneous, CT-guided external drainage procedures, no patient died in the hospital. The mean age was higher in the NOF group, but the difference was not significant. In the 24 patients with OF, biliary aetiology accounted for 66.7% (n=16), alcoholic aetiology for 8.3% (n=2), and hyperlipidaemia for 20.8% (n=5). Of the 109 NOF cases, 68.8% (n=75) were attributed to biliary aetiology, 10.1% (n=11) to alcoholic aetiology, and 16.5% (n=18) to hyperlipidaemia. No difference in the aetiology of AP was found between the OF and NOF groups. Furthermore, no deaths occurred in either group.

Table 2. Basic characteristics of AP patients.

| | All | NOF | OF | P value |
|---|---|---|---|---|
| No. | 133 | 109 | 24 | |
| Age, years | 56.62 ± 17.17 | 50.56 ± 15.45 | 44.33 ± 18.56 | 0.087 |
| Gender, M/F | 79/54 | 66/43 | 13/11 | 0.568 |
| Current smoker | 20 (15.1%) | 19 (17.4%) | 1 (4.2%) | 0.200 |
| Hypertension | 39 (29.3%) | 31 (28.4%) | 8 (33.3%) | 0.637 |
| Diabetes mellitus | 19 (9.8%) | 15 (13.8%) | 4 (16.7%) | 0.728 |
| Aetiology | | | | 0.971 |
| Biliary | 91 (68.4%) | 75 (68.8%) | 16 (66.7%) | |
| Alcoholic | 13 (9.8%) | 11 (10.1%) | 2 (8.3%) | |
| Hyperlipidaemia | 23 (17.3%) | 18 (16.5%) | 5 (20.8%) | |
| Idiopathic | 6 (4.5%) | 5 (4.6%) | 1 (4.2%) | |

Data are presented as means and standard deviations or as frequencies and percentages. Student's t test and the chi-square test were used.

CD3+ T lymphocytes (66.50 (57.45-73.70) vs. 61.31 (51.18-72.38), P=0.133), CD8+ cytotoxic T lymphocytes (21.62 (17.27-26.18) vs. 21.63 (17.33-26.94), P=0.847), CD16+CD56+ natural killer cells (12.14 (8.93-17.96) vs. 11.07 (6.09-16.48), P=0.343), and the CD4+/CD8+ ratio (1.82 (1.30-2.51) vs. 1.61 (1.11-2.10), P=0.180) were similar between the NOF and OF groups. However, the CD4+ T lymphocyte proportion was significantly decreased in the OF group compared with the NOF group (39.60 (33.94-46.13) vs. 32.41 (26.51-38.00), P=0.004), and CD19+ B lymphocytes (16.07 (10.67-21.06) vs. 23.78 (17.84-29.45), P=0.001) were significantly higher in the OF group (Table 3). Patients with OF typically spent more days in the hospital than those with NOF. Therefore, we speculated that CD4+ T lymphocytes and CD19+ B lymphocytes can be used as predictors of OF.

Table 3. Peripheral blood lymphocyte subsets in all patients with AP.

| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.50 (57.45-73.70) | 61.31 (51.18-72.38) | 0.133 |
| CD4+ T lymphocytes | 39.60 (33.94-46.13) | 32.41 (26.51-38.00) | 0.004 |
| CD8+ cytotoxic T lymphocytes | 21.62 (17.27-26.18) | 21.63 (17.33-26.94) | 0.847 |
| CD16+CD56+ natural killer cells | 12.14 (8.93-17.96) | 11.07 (6.09-16.48) | 0.343 |
| CD19+ B lymphocytes | 16.07 (10.67-21.06) | 23.78 (17.84-29.45) | 0.001 |
| CD4+/CD8+ | 1.82 (1.30-2.51) | 1.61 (1.11-2.10) | 0.180 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.
### 3.3. Peripheral Blood Lymphocyte Subsets in Different Aetiologies
CD4+ T lymphocytes and CD19+ B lymphocytes differed significantly in the overall cohort. We next investigated whether these differences persisted in patients with AP of different pathogeneses. Gallstones, alcohol misuse, and hyperlipidaemia are the main risk factors for AP, so we performed a subgroup analysis (Tables 4, 5, and 6). CD3+CD4+ T lymphocytes were significantly decreased across the different aetiologies. However, a similar pattern was detected for CD19+ B lymphocytes only in gallstone AP (Table 4); this difference was not significant in the alcoholic AP (Table 5) or hyperlipidaemia AP (Table 6) groups.

Table 4. Peripheral blood lymphocyte subsets in biliary pancreatitis.

| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.50 (56.82-72.40) | 61.31 (48.30-72.38) | 0.219 |
| CD4+ T lymphocytes | 38.44 (32.51-46.62) | 32.41 (28.50-38.84) | 0.040 |
| CD8+ cytotoxic T lymphocytes | 21.12 (16.96-25.43) | 20.76 (15.22-25.98) | 0.855 |
| CD16+CD56+ natural killer cells | 12.14 (8.79-17.84) | 11.07 (8.31-21.66) | 0.770 |
| CD19+ B lymphocytes | 16.07 (11.00-21.53) | 24.37 (17.84-28.95) | 0.015 |
| CD4+/CD8+ | 1.91 (1.28-2.62) | 1.84 (1.19-2.47) | 0.628 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.

Table 5. Peripheral blood lymphocyte subsets in alcoholic pancreatitis.

| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 72.50 (61.44-76.69) | 53.50 (/) | 0.076 |
| CD4+ T lymphocytes | 44.12 (38.31-51.81) | 27.96 (/) | 0.048 |
| CD8+ cytotoxic T lymphocytes | 22.06 (17.12-28.50) | 24.06 (/) | 0.693 |
| CD16+CD56+ natural killer cells | 14.40 (9.29-17.07) | 14.65 (/) | 0.693 |
| CD19+ B lymphocytes | 11.64 (8.86-18.53) | 26.76 (/) | 0.076 |
| CD4+/CD8+ | 2.34 (1.41-2.58) | 1.15 (/) | 0.076 |

Data are presented as the median (25th-75th percentile); (/) indicates that no interquartile range is reported (the alcoholic OF group contained only two patients). The Mann-Whitney U test was used.

Table 6. Peripheral blood lymphocyte subsets in hyperlipidaemia pancreatitis.

| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.38 (60.31-74.51) | 63.31 (54.88-72.72) | 0.412 |
| CD4+ T lymphocytes | 39.92 (29.94-44.82) | 27.88 (26.00-37.19) | 0.037 |
| CD8+ cytotoxic T lymphocytes | 22.89 (19.17-27.76) | 24.31 (18.41-27.28) | 0.881 |
| CD16+CD56+ natural killer cells | 11.34 (8.89-23.36) | 7.64 (5.25-17.30) | 0.264 |
| CD19+ B lymphocytes | 16.39 (11.15-19.70) | 21.40 (16.43-30.29) | 0.053 |
| CD4+/CD8+ | 1.57 (1.25-2.08) | 1.39 (1.05-1.69) | 0.280 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.
### 3.4. Predictive Value of CD4+ T Lymphocytes and CD19+ B Lymphocytes
ROC analysis was used to evaluate the diagnostic value of peripheral blood lymphocyte subsets for OF. For the whole cohort, the AUCs were as follows (Figure 1): compared with complex scoring systems such as the Ranson score (AUC 0.72) and the APACHE II score (AUC 0.78), CD4+ T lymphocytes had an AUC of 0.69 and CD19+ B lymphocytes an AUC of 0.72. To predict OF more accurately, the AUC was recalculated for CD4+ T and CD19+ B lymphocytes combined, yielding 0.73. To explore whether this predictive value persists across different aetiologies of AP, AUCs were also calculated for each type of AP. For biliary pancreatitis, CD4+ T lymphocytes had an AUC of 0.66, CD19+ B lymphocytes 0.70, and the combination 0.71, while the APACHE II and Ranson scores had AUCs of 0.83 and 0.80, respectively. In alcoholic pancreatitis, CD4+ T lymphocytes had an AUC of 0.96, CD19+ B lymphocytes 0.91, and the combination 0.91, while the APACHE II and Ranson scores had AUCs of 0.66 and 0.64, respectively. In hyperlipidaemia pancreatitis, CD4+ T lymphocytes had an AUC of 0.81, CD19+ B lymphocytes 0.79, and the combination 0.83, while the APACHE II and Ranson scores had AUCs of 0.60 and 0.54, respectively (Table 7). Overall, the ROC results showed that CD4+ T lymphocytes, CD19+ B lymphocytes, and their combination predicted OF with accuracies similar to those of the more complex Ranson and APACHE II scoring systems.

Table 7. ROC analysis for diagnosing OF (AUC with 95% CI).

| | APACHE II | Ranson | CD19+ B lymphocytes | CD4+ T lymphocytes | Combined CD4+ and CD19+ |
|---|---|---|---|---|---|
| Total pancreatitis | 0.78 (0.69-0.88) | 0.72 (0.62-0.82) | 0.72 (0.61-0.84) | 0.69 (0.57-0.81) | 0.73 (0.61-0.86) |
| Biliary pancreatitis | 0.83 (0.73-0.93) | 0.80 (0.70-0.90) | 0.70 (0.55-0.84) | 0.66 (0.52-0.81) | 0.71 (0.56-0.85) |
| Alcoholic pancreatitis | 0.66 (0.29-1.00) | 0.64 (0.34-1.00) | 0.91 (0.74-1.00) | 0.96 (0.83-1.00) | 0.91 (0.72-1.00) |
| Hyperlipidaemia pancreatitis | 0.60 (0.31-1.00) | 0.54 (0.26-0.82) | 0.79 (0.55-1.00) | 0.81 (0.64-0.99) | 0.83 (0.58-1.00) |

AUC: area under the curve; CI: confidence interval.

Figure 1. ROC curves for predicting organ failure: (a) total pancreatitis, (b) biliary pancreatitis, (c) alcoholic pancreatitis, and (d) hyperlipidaemia pancreatitis.
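The paper does not state how the two markers were combined for the "combined CD4+ and CD19+" column; a logistic regression whose predicted probability is then scored by ROC analysis is one standard way to do it, sketched below on synthetic placeholder data.

```python
# Hypothetical combination of two markers into one OF predictor;
# logistic regression is an assumed method, and all data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 133
cd4 = rng.normal(38.0, 8.0, n)            # placeholder CD4+ T cell %
cd19 = rng.normal(17.0, 6.0, n)           # placeholder CD19+ B cell %
of = rng.integers(0, 2, n)                # placeholder outcome, 1 = OF

X = np.column_stack([cd4, cd19])          # two markers per patient
model = LogisticRegression().fit(X, of)
risk = model.predict_proba(X)[:, 1]       # combined risk score for OF
print("combined AUC:", round(roc_auc_score(of, risk), 2))
```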
## 4. Discussion
A few studies have investigated the participation of the innate immune system (macrophages, neutrophils, etc.) and the acquired immune system (lymphocytes, etc.) in the immune response during the development of AP. Macrophages and neutrophils participate in the strong immune response of AP by secreting large quantities of inflammatory factors [16, 17]. Lymphocytes are white blood cells produced by lymphoid organs that participate in the body's immune response and possess immune recognition functions. According to their function and surface molecules, they can be divided into T lymphocytes (T cells), B lymphocytes (B cells), and natural killer (NK) cells; T lymphocytes and B lymphocytes mediate cellular and humoural immunity, respectively. In recent years, considerable direct and indirect evidence has confirmed that lymphocytes not only promote the immune response and eliminate pathogenic microorganisms but also have an immunoregulatory function that restrains an excessive immune response [18]. Although many studies have addressed how substantial numbers of inflammatory factors are produced in the early stage of AP, there are still many gaps and controversies about how the inflammatory response in AP becomes dysregulated. From this perspective, we designed our research. Our prospective study presents, probably for the first time, an analysis of the relationship between peripheral blood lymphocyte subsets and OF with reference to the different aetiologies of AP. CD3+ T lymphocytes, CD4+ T lymphocytes, CD8+ cytotoxic T lymphocytes, CD19+ B lymphocytes, and CD16+CD56+ NK cells were assessed on the first day after hospitalization, and clinical outcomes were followed. The principal findings were as follows: (1) except for CD19+ B lymphocytes, the median peripheral blood lymphocyte proportions were all within the normal range at the onset of AP; (2) as AP developed, CD4+ T and CD19+ B lymphocytes were significantly correlated with OF; the lower the proportion of CD4+ T lymphocytes and the higher the proportion of CD19+ B lymphocytes at admission, the more likely OF was to occur later, so these indicators can be used as predictors of OF in AP; (3) when different aetiologies of AP were considered, the association between CD4+ T lymphocytes and OF remained statistically significant, whereas CD19+ B lymphocytes showed a significant difference only in biliary pancreatitis; and (4) the AUC values of CD4+ T lymphocytes, CD19+ B lymphocytes, and combined CD4+ T and CD19+ B lymphocytes showed accuracies similar to those of more complex scoring systems such as the Ranson and APACHE II scores.

AP is a common inflammatory disease. Respiratory, circulatory, and renal failure are the most important causes of death in AP [19]. Although many scoring systems can predict prognosis, they have major drawbacks [7], and currently no single indicator can predict OF. The occurrence of AP is often accompanied by alterations of the immune system. The activation of T and B lymphocytes is a key factor regulating the inflammatory response in different diseases, including AP [20]. When the inflammatory reaction in AP occurs, T lymphocytes are transformed into lymphoblasts and then differentiate into sensitized T lymphocytes, which play an anti-infective role in cellular immunity [21].
Similarly, B lymphocytes are first transformed into plasmablasts and then differentiate into plasma cells [8]. These cells participate in humoural immunity by producing and secreting immunoglobulins (antibodies) [22]. The roles of different lymphocytes in AP have been partly reported previously, but the mechanisms are still poorly understood. In a previous study, Curley et al. found that the proportion of CD4+ T lymphocytes in severe pancreatitis was significantly reduced and that complications such as pseudocysts, local necrosis, and abscess formation occurred [23]. Liu et al. noted that, in the early stage of SAP, the reduction in CD4+ T lymphocytes was closely associated with abdominal compartment syndrome in AP [12], and it has also been found that knockout of CD4+ T lymphocytes in mice significantly reduced the severity of AP [24]. Therefore, a relationship between the activation of T lymphocytes and the progression of AP is believed to exist, although the function of peripheral blood CD4+ T lymphocytes and CD19+ B lymphocytes in AP is still unclear.

To our knowledge, activation of circulating lymphocytes, both CD4+ T lymphocytes and CD19+ B lymphocytes, is a normal response to inflammation and is more likely to enhance the system's resistance to infection. However, excessive or uncontrolled activation may release toxic mediators such as cytokines and oxygen free radicals [25]. CD4+ and CD8+ lymphocytes are the two major subsets of T lymphocytes, also known as T helper (Th) lymphocytes and cytotoxic T lymphocytes (CTLs). In our cohort, the proportion of CD4+ lymphocytes was significantly depleted in AP patients with OF, whereas the proportion of CD8+ lymphocytes was similar in the NOF and OF groups. In contrast, B lymphocytes were markedly increased in AP patients with OF compared with patients with NOF. Th lymphocytes bearing the CD4+ phenotype marker are critical to the immune response and secrete anti-inflammatory cytokines such as interleukin (IL)-10 and transforming growth factor (TGF)-β [26, 27]. We speculate that when AP is present, the proportion of CD4+ T lymphocytes decreases markedly, which may indicate immunosuppression. The reduction in this cell population may be related to increased apoptosis of lymphocytes and homing of intestinal lymphocytes after pancreatitis occurs [28, 29]. Previously, in a mouse model of pancreatitis, pancreatic oedema, amylase levels, and pathological scores of B-cell-deficient mice were found to be significantly increased, indicating that B lymphocytes can inhibit inflammation and reduce pancreatic damage in AP. B lymphocytes are generally believed to have immunomodulatory functions and to inhibit the activation and proliferation of other inflammatory cells by secreting anti-inflammatory factors or antibodies [30, 31] and presenting antigens [22]. Interestingly, in this investigation, the CD19+ B lymphocyte data may be of value as a reference for predicting the development of OF: the greater the number of activated CD19+ B lymphocytes, the more severe the inflammatory response and the more likely OF was to occur. When all AP cases were combined, CD4+ T lymphocytes, CD19+ B lymphocytes, and combined CD4+ and CD19+ lymphocytes had high value in predicting OF in AP, and this predictive effect was also evident when the aetiology of AP was considered. These predictors are easier to implement than complex scoring systems.
Whether the immunological alterations observed in B lymphocytes are related to the pathogenesis of the different causes of AP cannot be answered at present. Our findings suggest a fundamental difference in the pathophysiology of biliary AP and hyperlipidaemia AP. Biliary pancreatitis is caused by obstruction of the pancreatic duct by gallbladder or biliary stones, which blocks the outflow of pancreatic secretions [32]. Hyperlipidaemia pancreatitis is caused by high TG levels, the accumulation of oxidation products, calcium overload, and related factors [33, 34]. These factors may activate B lymphocytes and inhibit harmful inflammatory responses; however, the exact mechanism needs further clarification.

In summary, CD4+ T lymphocytes and CD19+ B lymphocytes are introduced as easily measurable parameters that can be used to assess OF in AP patients. However, our research has some limitations. We studied only Chinese patients, whose lymphocyte profiles may differ from those of other populations, and the number of patients was limited (n=133). Further, we did not compare the AP patients with healthy controls. Additionally, since immune function may change differently during the occurrence and development of AP, separate measurements may be needed at different stages of the disease; this study assessed immune function only at admission and did not dynamically track changes in peripheral blood lymphocyte subsets during hospitalization. Although we analysed peripheral blood lymphocyte subsets across different causes of AP, the subgroup sample sizes were small. Studies with larger sample sizes should be conducted to establish the true value of CD4+ T and CD19+ B lymphocytes in predicting OF in AP, and further study is needed to confirm these observations.
## 5. Conclusion
Excessive or uncontrolled activation of circulating lymphocytes may be important in the development of multiple organ failure. Patients with lower CD4+ T lymphocyte proportions and higher peripheral CD19+ B lymphocyte proportions at admission may have a higher risk of developing OF in AP, and these indicators appear to be novel predictors of OF in AP.
---
*Source: 1012584-2018-12-05.xml* | 1012584-2018-12-05_1012584-2018-12-05.md | 42,240 | New Predictor of Organ Failure in Acute Pancreatitis: CD4+ T Lymphocytes and CD19+ B Lymphocytes | Chenyuan Shi; Chaoqun Hou; Xiaole Zhu; Yunpeng Peng; Feng Guo; Kai Zhang; Dongya Huang; Qiang Li; Yi Miao | BioMed Research International
(2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1012584 | 1012584-2018-12-05.xml | ---
## Abstract
Objective. Lymphocytes are one of the main effector cells in the inflammatory response of acute pancreatitis (AP). The purpose of the study was to evaluate whether peripheral blood lymphocyte (PBL) subsets at admission change during AP based on clinical outcomes and to explore whether these changes vary by aetiology of AP. Hence, we performed a prospective study to find a predictor in lymphocyte subsets that might allow easier, earlier, and more accurate prediction of clinical outcomes.Methods. Patients with AP were enrolled from December 2017 to June 2018 at the First Affiliated Hospital of Nanjing Medical University. Age, sex, clinical and biochemical parameters, and aetiology of AP were obtained at admission. PBL counts were assessed within 24 hours after admission. Clinical outcomes were observed as endpoints. The areas under the curve (AUCs) of different predictors were calculated using the receiver operating characteristic (ROC) curve.Results. Overall, 133 patients were included. Patients (n=24) with organ failure (OF) had significantly lower CD4+ T lymphocyte levels than those (n=109) with No OF (NOF) (39.60 (33.94-46.13) vs. 32.41 (26.51-38.00), P=0.004). The OF group exhibited significantly higher CD19+ B lymphocytes than the NOF group (16.07 (10.67-21.06) vs. 23.78 (17.84-29.45), P=0.001). Of the AP cases, 68.8% were caused by gallstones; 10.1% were attributed to alcohol; 16.5% were due to hyperlipidaemia; and 4.6% had other causes. Across all aetiologies, a lower CD4+ T lymphocyte level was significantly related to OF (P<0.05). However, CD19+ B lymphocytes were significant only in gallstone pancreatitis (P<0.05). The ROC curve results showed that the AUC values of CD4+T lymphocytes, CD19+ B lymphocytes, and combined CD4+T lymphocytes and CD19+ B lymphocytes were similar to those of traditional scoring systems, such as APACHEII and Ranson.Conclusions. CD4+ T and CD19+ B lymphocytes during the early phase of AP can predict OF.
---
## Body
## 1. Introduction
Acute pancreatitis (AP) is one of the most common diseases of the digestive system. Outside of China, the cause of AP is mostly due to excessive alcohol intake, while in China, many cases are caused by gallbladder or biliary stones [1]. Currently, with improvements of living standards, pancreatitis caused by hyperlipidaemia has also shown a clear upward trend. According to the 2012 Revised Atlanta classification, AP is divided into mild (MAP), moderately severe (MSAP), and severe (SAP) categories [2]. MAP is not often associated with organ failure (OF), so the mortality is often less than 1%. Moderate or severe pancreatitis is often associated with transient or persistent organ failure, resulting in an increase in mortality of up to 10-30% [3]. Due to the large clinical differences in AP, multiple severity scoring systems have been used to assess AP patients, such as the acute physiological assessment and chronic health assessment II (APACHE II) score, acute pancreatitis severity bedside index (BISAP) score, Ranson score, and Glasgow-Imia criteria [4]. However, these scoring systems usually involve many variables that are not readily available. For example, the APACHE II score includes 12 clinical or biochemical parameters, so APACHE II scores are more detailed and the calculation is more complex; 11 variables need to be collected at admission and 48 hours after admission in the Ranson score. The Glasgow scoring system is derived from nine variables and requires 48 hours to complete [5, 6]. However, if the occurrence and development of OF in AP can be predicted early, early initiation and targeting of therapy can be undertaken as soon as possible to reduce complications. Prediction of the development of OF in patients can be performed with the modified Marshall scoring system [7].The immune system has the role of immune surveillance, defence, and regulation. This system consists of immune organs, immune cells, and immunologically active substances and is divided into innate immunity (also known as nonspecific immunity) and adaptive immunity (also known as specific immunity), which is further divided into humoural immunity and cellular immunity [8]. Evidence suggests that there is an important relationship between the innate immune component of the pathogenesis of AP and the severity of the disease [9–11]. Neutrophils and macrophages serve as the first line of defence for the immune system, and T and B lymphocytes also play a central role in the immune response of the body. A large number of studies have reported the different inflammatory mediators that are produced in the early stage of AP and their effects on the body. However, the means by which the activation of lymphocyte subsets in the early stage of AP modulates the balance between proinflammatory and anti-inflammatory immune responses are still poorly understood. When immune function declines, the body is more prone to infectious complications and OF, although others have suggested that a reduction in CD4+T lymphocytes is valuable in a variety of inflammatory and immune diseases such as abdominal syndrome in AP patients [12]. However, these studies have some limitations; for example, the diagnosis of abdominal syndrome in AP was retrospective. 
Thus, we first observed whether peripheral blood lymphocyte subsets (i.e., CD3+Tlymphocytes, CD4+Tlymphocytes, CD8+cytotoxic T lymphocytes, CD16+CD56+ natural killer cells, CD19+Blymphocytes, and CD4+/CD8+ T lymphocytes) at admission changed in the early stage of AP in order to research the occurrence of AP. Second, we hypothesized that the activation of lymphocyte subsets is associated with different outcomes in AP patients in order to detect the development of AP. Therefore, we conducted this prospective observational survey.
## 2. Materials and Methods
### 2.1. Patients
We selected 133 AP patients who were admitted to the Pancreas Center at the First Affiliated Hospital of Nanjing Medical University from December 2017 to June 2018. We diagnosed AP according to the Revised Atlanta Classification 2012 as follows: (1) acute episodes of abdominal pain that often radiated to the back; (2) levels of serum amylase and lipase upwards of 3 times greater than normal levels; (3) the imaging examination was consistent with AP. Patients who presented at least two of these features were included.Exclusion criteria included any of the following: (1) age less than 18 years old or more than 80 years old; (2) any surgery performed 3 days after admission; (3) previous or long-term use of immunosuppressive therapy; (4) innate impaired immune function; (5) history of tumour or chronic lung, kidney or cardiovascular disease; or (6) traumatic pancreatitis or chronic pancreatitis. The study was conducted in accordance with the principles of the Helsinki Declaration, and the study was approved by the First Affiliated Hospital of Nanjing Medical University.
### 2.2. Data Collection
All patients with AP were divided into four groups according to its aetiology. (1) gallstone pancreatitis, which is caused by gallstones or bile duct stones; (2) alcoholic pancreatitis, which is observed in patients with a history of drinking or recent alcohol intake; (3) hyperlipidaemia pancreatitis, in which the blood TG value exceeds 11.30 mmol/L or blood TG is between 5.65 and 11.30 mmol/L with white, opaqueserum; (4) other pancreatitis, which could not be diagnosed by medical history, physical examination, laboratory studies, or imaging methods.Each patient’s peripheral venous blood was collected within the first 24 hours after hospital admission. Peripheral blood lymphocyte subsets, including CD3+Tlymphocytes, CD4+Tlymphocytes, CD8+cytotoxicTlymphocytes, CD16+CD56+ natural killer cells, and CD19+ B lymphocytes of patients, were measured in the hospital laboratory. We recorded demographic data, clinical and biochemical parameters, and outcome.
### 2.3. Endpoint of Study and Definition of Organ Failure
The primary observational endpoint of the study was OF, which was evaluated by a modified Marshall scoring system. Patients who presented one or two of the following features were included: (1) definite renal failure, defined as serum creatinine of no more than 1.9 mg/dL; (2) cardiovascular failure, defined as a systolic blood pressure less than 90 mmHg, even after fluid replacement; and (3) respiratory failure, defined as a ratio of PaO2/FiO2 less than 300 mmHg [13].
### 2.4. Treatments
According to the British and Chinese Medical Association guidelines for the treatment of AP, all patients received standard treatment, including nutritional support, early fluid resuscitation, target organ treatment, and prophylactic antibiotics [14, 15].
### 2.5. Statistical Analysis
Statistical analysis was performed with SPSS 23.0. Continuous variables with a normal distribution are shown as the mean ± standard deviation (SD), and Student’s t test was used to compare two groups. If variables were nonnormally distributed, data are presented as the median (25th-75th percentile), and a nonparametric Mann-Whitney U test was chosen. Categorical variables were compared using a chi-square test. P values less than 0.05 were considered to indicate significance. In addition, 95% confidence intervals (95% CIs) were obtained. A receiver operating characteristic (ROC) curve was constructed to predict organ failure, and the area under the curve (AUC) was used to analyse the ability of factors to predict OF.
## 2.1. Patients
We selected 133 AP patients who were admitted to the Pancreas Center at the First Affiliated Hospital of Nanjing Medical University from December 2017 to June 2018. We diagnosed AP according to the Revised Atlanta Classification 2012 as follows: (1) acute episodes of abdominal pain that often radiated to the back; (2) levels of serum amylase and lipase upwards of 3 times greater than normal levels; (3) the imaging examination was consistent with AP. Patients who presented at least two of these features were included.Exclusion criteria included any of the following: (1) age less than 18 years old or more than 80 years old; (2) any surgery performed 3 days after admission; (3) previous or long-term use of immunosuppressive therapy; (4) innate impaired immune function; (5) history of tumour or chronic lung, kidney or cardiovascular disease; or (6) traumatic pancreatitis or chronic pancreatitis. The study was conducted in accordance with the principles of the Helsinki Declaration, and the study was approved by the First Affiliated Hospital of Nanjing Medical University.
## 2.2. Data Collection
All patients with AP were divided into four groups according to its aetiology. (1) gallstone pancreatitis, which is caused by gallstones or bile duct stones; (2) alcoholic pancreatitis, which is observed in patients with a history of drinking or recent alcohol intake; (3) hyperlipidaemia pancreatitis, in which the blood TG value exceeds 11.30 mmol/L or blood TG is between 5.65 and 11.30 mmol/L with white, opaqueserum; (4) other pancreatitis, which could not be diagnosed by medical history, physical examination, laboratory studies, or imaging methods.Each patient’s peripheral venous blood was collected within the first 24 hours after hospital admission. Peripheral blood lymphocyte subsets, including CD3+Tlymphocytes, CD4+Tlymphocytes, CD8+cytotoxicTlymphocytes, CD16+CD56+ natural killer cells, and CD19+ B lymphocytes of patients, were measured in the hospital laboratory. We recorded demographic data, clinical and biochemical parameters, and outcome.
## 2.3. Endpoint of Study and Definition of Organ Failure
The primary observational endpoint of the study was OF, which was evaluated by a modified Marshall scoring system. Patients who presented one or two of the following features were included: (1) definite renal failure, defined as serum creatinine of no more than 1.9 mg/dL; (2) cardiovascular failure, defined as a systolic blood pressure less than 90 mmHg, even after fluid replacement; and (3) respiratory failure, defined as a ratio of PaO2/FiO2 less than 300 mmHg [13].
## 2.4. Treatments
According to the British and Chinese Medical Association guidelines for the treatment of AP, all patients received standard treatment, including nutritional support, early fluid resuscitation, target organ treatment, and prophylactic antibiotics [14, 15].
## 2.5. Statistical Analysis
Statistical analysis was performed with SPSS 23.0. Continuous variables with a normal distribution are shown as the mean ± standard deviation (SD), and Student’s t test was used to compare two groups. If variables were nonnormally distributed, data are presented as the median (25th-75th percentile), and a nonparametric Mann-Whitney U test was chosen. Categorical variables were compared using a chi-square test. P values less than 0.05 were considered to indicate significance. In addition, 95% confidence intervals (95% CIs) were obtained. A receiver operating characteristic (ROC) curve was constructed to predict organ failure, and the area under the curve (AUC) was used to analyse the ability of factors to predict OF.
## 3. Result
### 3.1. Peripheral Blood Lymphocyte Subsets of AP Patients
A total of 133 patients were included in this study. There were no patients lost to follow-up, and none of the patients had incomplete clinical data. The CD3+ T lymphocyte count was 66.00 (56.69-73.19), the CD4+T lymphocyte count was 38.20 (31.19-45.31), the CD8+cytotoxic T lymphocyte count was 21.62 (17.27-26.39), the CD16+CD56+ natural killer cell count was 12.07 (8.57-17.78), the CD19+B lymphocyte count was 16.80 (11.35-23.01), and the CD4+/CD8+ lymphocyte count was 1.79 (1.25-2.43). Except for CD19+B lymphocyte, the median of all peripheral blood lymphocyte subsets was in the normal range (Table1).Table 1
Peripheral blood lymphocyte subsets of AP Patients.
ALL Normal Range No 133 CD3+ T lymphocytes (%) 66.00 (56.69-73.19) 64-76 CD4+ T lymphocytes (%) 38.20 (31.19-45.31) 30-40 CD8+ cytotoxic T lymphocytes (%) 21.62 (17.27-26.39) 20-30 CD16+CD56+ natural killer cells (%) 12.07 (8.57-17.78) 10-20 CD19+ B lymphocytes (%) 16.80 (11.35-23.01) 9-14 CD4+/CD8+ 1.79 (1.25-2.43) 1-2.5 Dates are presented as the median (25th-75th percentile).
### 3.2. Basic Characteristics and Peripheral Blood Lymphocyte Subsets in the OF and NOF Groups
Based on the 2012 Revised Atlanta classification, AP was divided into MAP and SAP. SAP was usually accompanied by OF. Therefore, patients were divided into two subgroups (OF group and NOF group) according to the clinical outcome of the presence or absence of OF. Baseline patient characteristics, including demographic data, clinical laboratory values at admission and different outcomes, are presented in Table2. Twenty-four (18%) patients presented pulmonary and/or circulatory and/or renal complications. However, after appropriate treatment including multiple percutaneous, CT-guided external drainage procedures, no patient died in the hospital. The mean age was higher in the NOF group, but there were no significant differences in age. In 24 patients with OF, the biliary aetiology accounted for 66.7% (n=16); the alcoholic aetiology accounted for 8.3% (n=2); and the hyperlipidaemia aetiology accounted for 20.8% (n=5). Of the 109 NOF cases, 68.8% (n=75) were attributed to biliary aetiology, 10.1% (n=11) were ascribed to alcoholic aetiology, and 16.5% (n=18) were due to hyperlipidaemia aetiology. No difference in the aetiology of AP was found between the OF and NOF groups. Furthermore, no death was observed in either group.Table 2
Basic characteristic of AP patients.
ALL NOF OF P-value No 133 109 24 Age, years 56.62 ± 17.17 50.56 ± 15.45 44.33 ± 18.56 0.087 Gender, M/F 79/54 66/43 13/11 0.568 Current smoker 20 (15.1%) 19 (17.4%) 1 (4.2%) 0.2 Hypertension 39 (29.3%) 31 (28.4%) 8 (33.3%) 0.637 Diabetes mellitus 19 (9.8%) 15 (13.8%) 4 (16.7%) 0.728 Etiology 0.971 Biliary 91 (68.4%) 75 (68.8%) 16 (66.7%) Alcoholic 13 (9.8%) 11 (10.1%) 2 (8.3%) Hyperlipidemia 23 (17.3%) 18 (16.5%) 5 (20.8%) Idiopathic 6 (4.5%) 5 (4.6%) 1 (4.2%) Dates are presented in either means and standard deviations or frequencies and percentages. Student’s t test and chi-square test are used.The CD3+ T lymphocytes (66.50 (57.45-73.70) vs. 61.31 (51.18-72.38), P=0.133), CD8+cytotoxic Tlymphocytes (21.62 (17.27-26.18) vs. 21.63 (17.33-26.94), P=0.847), CD16+CD56+ natural killer cells (12.14 (8.93-17.96) vs. 11.07 (6.09-16.48), P=0.343), and CD4+/CD8+ (1.82 (1.30-2.51) vs. 1.61 (1.11-2.10), P=0.180) were similar between the NOF and OF groups. However, the CD4+Tlymphocyte count was significantly decreased in the OF group compared with that of the NOF group (39.60 (33.94-46.13) vs. 32.41 (26.51-38.00), P=0.004), and CD19+ B lymphocytes (16.07 (10.67-21.06) vs. 23.78 (17.84-29.45), P=0.001) were significantly higher in the OF group. (Table3) The patients with OF typically spent more days in the hospital than did those with NOF. Therefore, we speculated that CD4+ T lymphocytes and CD19+ B lymphocytes can be used as predictors of OF.Table 3
Table 3. Peripheral blood lymphocyte subsets in all patients with AP.
| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.50 (57.45-73.70) | 61.31 (51.18-72.38) | 0.133 |
| CD4+ T lymphocytes | 39.60 (33.94-46.13) | 32.41 (26.51-38.00) | 0.004 |
| CD8+ cytotoxic T lymphocytes | 21.62 (17.27-26.18) | 21.63 (17.33-26.94) | 0.847 |
| CD16+CD56+ natural killer cells | 12.14 (8.93-17.96) | 11.07 (6.09-16.48) | 0.343 |
| CD19+ B lymphocytes | 16.07 (10.67-21.06) | 23.78 (17.84-29.45) | 0.001 |
| CD4+/CD8+ | 1.82 (1.30-2.51) | 1.61 (1.11-2.10) | 0.180 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.
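As an illustration of the group comparison above, the following minimal Python sketch (not the authors' code) applies the Mann-Whitney U test to each subset; the DataFrame `df` and its column names are hypothetical stand-ins for the study data.

```python
# Minimal sketch of the Table 3 comparison. `df` is a hypothetical DataFrame
# with a binary "OF" column (1 = organ failure) and one column per subset (%).
import pandas as pd
from scipy.stats import mannwhitneyu

SUBSETS = ["CD3", "CD4", "CD8", "NK", "CD19", "CD4_CD8_ratio"]  # assumed names

def compare_groups(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for col in SUBSETS:
        nof = df.loc[df["OF"] == 0, col]
        of = df.loc[df["OF"] == 1, col]
        u, p = mannwhitneyu(nof, of, alternative="two-sided")
        rows.append({"subset": col,
                     "NOF median": nof.median(),
                     "OF median": of.median(),
                     "U": u, "P value": p})
    return pd.DataFrame(rows)
```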
### 3.3. Peripheral Blood Lymphocyte Subsets in Different Aetiologies
CD4+ T lymphocytes and CD19+ B lymphocytes differed significantly in the cohort as a whole. We next investigated whether these differences persisted in patients with AP of different aetiologies. Gallstones, alcohol misuse, and hyperlipidaemia are the main risk factors for AP, so we performed subgroup analyses (Tables 4, 5, and 6). CD3+CD4+ T lymphocytes were significantly decreased across the different aetiologies. A similar pattern was detected for CD19+ B lymphocytes only in gallstone AP (Table 4), whereas it was not significant in the alcoholic AP (Table 5) or hyperlipidaemia AP groups (Table 6).
Table 4. Peripheral blood lymphocyte subsets in biliary pancreatitis.
| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.50 (56.82-72.40) | 61.31 (48.30-72.38) | 0.219 |
| CD4+ T lymphocytes | 38.44 (32.51-46.62) | 32.41 (28.50-38.84) | 0.040 |
| CD8+ cytotoxic T lymphocytes | 21.12 (16.96-25.43) | 20.76 (15.22-25.98) | 0.855 |
| CD16+CD56+ natural killer cells | 12.14 (8.79-17.84) | 11.07 (8.31-21.66) | 0.770 |
| CD19+ B lymphocytes | 16.07 (11.00-21.53) | 24.37 (17.84-28.95) | 0.015 |
| CD4+/CD8+ | 1.91 (1.28-2.62) | 1.84 (1.19-2.47) | 0.628 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.
Table 5. Peripheral blood lymphocyte subsets in alcoholic pancreatitis.
| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 72.50 (61.44-76.69) | 53.50 (/) | 0.076 |
| CD4+ T lymphocytes | 44.12 (38.31-51.81) | 27.96 (/) | 0.048 |
| CD8+ cytotoxic T lymphocytes | 22.06 (17.12-28.50) | 24.06 (/) | 0.693 |
| CD16+CD56+ natural killer cells | 14.40 (9.29-17.07) | 14.65 (/) | 0.693 |
| CD19+ B lymphocytes | 11.64 (8.86-18.53) | 26.76 (/) | 0.076 |
| CD4+/CD8+ | 2.34 (1.41-2.58) | 1.15 (/) | 0.076 |

Data are presented as the median (25th-75th percentile); quartiles are not given for the OF group (n = 2). The Mann-Whitney U test was used.
Table 6. Peripheral blood lymphocyte subsets in hyperlipidemia pancreatitis.
| | NOF (%) | OF (%) | P value |
|---|---|---|---|
| CD3+ T lymphocytes | 66.38 (60.31-74.51) | 63.31 (54.88-72.72) | 0.412 |
| CD4+ T lymphocytes | 39.92 (29.94-44.82) | 27.88 (26.00-37.19) | 0.037 |
| CD8+ cytotoxic T lymphocytes | 22.89 (19.17-27.76) | 24.31 (18.41-27.28) | 0.881 |
| CD16+CD56+ natural killer cells | 11.34 (8.89-23.36) | 7.64 (5.25-17.30) | 0.264 |
| CD19+ B lymphocytes | 16.39 (11.15-19.70) | 21.40 (16.43-30.29) | 0.053 |
| CD4+/CD8+ | 1.57 (1.25-2.08) | 1.39 (1.05-1.69) | 0.280 |

Data are presented as the median (25th-75th percentile). The Mann-Whitney U test was used.
### 3.4. Predictive Value of CD4+ T Lymphocytes and CD19+ B Lymphocytes
ROC analysis was used to evaluate the diagnostic value of the peripheral blood lymphocyte subsets for OF (Figure 1). Compared with complex scoring systems such as the Ranson score (AUC 0.72) and the APACHE II score (AUC 0.78), CD4+ T lymphocytes presented an AUC of 0.69 and CD19+ B lymphocytes an AUC of 0.72; combining CD4+ T and CD19+ B lymphocytes gave an AUC of 0.73. To explore whether the predictive value persists across the different aetiologies of AP, AUCs were also calculated for each type. For biliary pancreatitis, CD4+ T lymphocytes presented an AUC of 0.66, CD19+ B lymphocytes 0.70, and their combination 0.71, while the APACHE II and Ranson scores had AUCs of 0.83 and 0.80, respectively. In alcoholic pancreatitis, CD4+ T lymphocytes presented an AUC of 0.96, CD19+ B lymphocytes 0.91, and their combination 0.91, while the APACHE II and Ranson scores had AUCs of 0.66 and 0.64. In hyperlipidaemia pancreatitis, CD4+ T lymphocytes presented an AUC of 0.81, CD19+ B lymphocytes 0.79, and their combination 0.83, while the APACHE II and Ranson scores had AUCs of 0.60 and 0.54 (Table 7). Overall, the AUC values of CD4+ T lymphocytes, CD19+ B lymphocytes, and their combination showed accuracies similar to those of the more complex Ranson and APACHE II scoring systems.
Table 7. ROC analysis for diagnosing OF.
| | APACHE II | Ranson | CD19+ B lymphocytes | CD4+ T lymphocytes | Combined CD4+ and CD19+ |
|---|---|---|---|---|---|
| Total pancreatitis | 0.78 (0.69-0.88) | 0.72 (0.62-0.82) | 0.72 (0.61-0.84) | 0.69 (0.57-0.81) | 0.73 (0.61-0.86) |
| Biliary pancreatitis | 0.83 (0.73-0.93) | 0.80 (0.70-0.90) | 0.70 (0.55-0.84) | 0.66 (0.52-0.81) | 0.71 (0.56-0.85) |
| Alcoholic pancreatitis | 0.66 (0.29-1.00) | 0.64 (0.34-1.00) | 0.91 (0.74-1.00) | 0.96 (0.83-1.00) | 0.91 (0.72-1.00) |
| Hyperlipidemia pancreatitis | 0.60 (0.31-1.00) | 0.54 (0.26-0.82) | 0.79 (0.55-1.00) | 0.81 (0.64-0.99) | 0.83 (0.58-1.00) |

Values are AUCs with 95% CIs in parentheses. AUC: area under the curve; CI: confidence interval.
Figure 1. ROC curves to predict organ failure: (a) total pancreatitis, (b) biliary pancreatitis, (c) alcoholic pancreatitis, and (d) hyperlipidemia pancreatitis.
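The ROC comparison can be reproduced in outline as follows; again `df` and its columns are hypothetical, and pooling CD4+ and CD19+ through a logistic regression is one common choice for a combined score, since the paper does not state its exact combination method.

```python
# Hedged sketch of the Table 7 AUC computations on the hypothetical `df`.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

y = df["OF"].to_numpy()

# Single-marker AUCs. CD4 is lower in OF patients, so negate it so that
# larger scores indicate higher risk.
auc_cd4 = roc_auc_score(y, -df["CD4"])
auc_cd19 = roc_auc_score(y, df["CD19"])

# Combined CD4 + CD19 score via logistic regression fitted probabilities.
X = df[["CD4", "CD19"]].to_numpy()
clf = LogisticRegression().fit(X, y)
auc_combined = roc_auc_score(y, clf.predict_proba(X)[:, 1])

print(f"AUC CD4: {auc_cd4:.2f}, CD19: {auc_cd19:.2f}, combined: {auc_combined:.2f}")
```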
## 4. Discussion
Several studies have investigated the participation of the innate immune system (macrophages, neutrophils, etc.) and the adaptive immune system (lymphocytes, etc.) in the immune response during the development of AP. Macrophages and neutrophils participate in the strong immune response of AP by secreting large numbers of inflammatory factors [16, 17]. Lymphocytes are white blood cells produced by lymphoid organs; they participate in the body's immune response and carry immune recognition function. According to their function and surface molecules, they can be divided into T lymphocytes (T cells), B lymphocytes (B cells), and natural killer (NK) cells; T lymphocytes and B lymphocytes mediate cellular and humoural immunity, respectively. In recent years, considerable direct and indirect evidence has confirmed that lymphocytes not only promote the immune response and eliminate pathogenic microorganisms but also have an immunoregulatory function that inhibits an excessive immune response [18]. Although a large number of studies exist on how substantial quantities of inflammatory factors are produced in the early stage of AP, there are still many gaps and controversies about how the inflammatory response becomes dysregulated in AP. We designed our study from this perspective.

Our prospective study analyses, probably for the first time, the relationship between peripheral blood lymphocyte subsets and OF with reference to the different aetiologies of AP. CD3+ T lymphocytes, CD4+ T lymphocytes, CD8+ cytotoxic T lymphocytes, CD19+ B lymphocytes, and CD16+CD56+ NK cells were assessed on the first day after hospitalization, and clinical outcomes were followed. The principal findings were as follows: (1) except for CD19+ B lymphocytes, the medians of the peripheral blood lymphocyte subsets were all within the normal range at the onset of AP; (2) as AP developed, CD4+ T and CD19+ B lymphocytes were significantly associated with OF: the lower the proportion of CD4+ T lymphocytes and the higher the proportion of CD19+ B lymphocytes at admission, the more likely OF was to occur later, so these indicators can be used as predictors of OF in AP; (3) when considering the different aetiologies of AP, there was also a statistically significant association between CD4+ T lymphocytes and OF, whereas CD19+ B lymphocytes showed a significant difference only in biliary pancreatitis; (4) the AUC values of CD4+ T lymphocytes, CD19+ B lymphocytes, and the combination of the two showed accuracies similar to those of more complex scoring systems such as the Ranson and APACHE II scores.

AP is a common inflammatory disease, and respiratory, circulatory, and renal failure are the most important causes of death in AP [19]. Although many scoring systems can predict prognosis, they have major drawbacks [7], and currently no single indicator can predict OF. The occurrence of AP is often accompanied by alterations of the immune system, and the activation of T and B lymphocytes is a key factor regulating the inflammatory response in different diseases, including AP [20]. When the inflammatory reaction in AP occurs, T lymphocytes are transformed into lymphoblasts and then differentiate into sensitized T lymphocytes, which play an anti-infective role in cellular immunity [21].
Similarly, B lymphocytes are first transformed into plasmablasts and then differentiate into plasma cells [8], which participate in humoural immunity by producing and secreting immunoglobulins (antibodies) [22]. The role of different lymphocytes in AP has been partly reported previously, but the mechanisms are still poorly understood. Curley et al. found that the proportion of CD4+ T lymphocytes in severe pancreatitis was significantly reduced and that complications such as pseudocysts, local necrosis, and abscess formation occurred [23]. Liu et al. noted that in the early stage of SAP the reduction in CD4+ T lymphocytes was closely associated with abdominal syndrome in AP [12], and it has also been found that knockout of CD4+ T lymphocytes in mice significantly reduced the severity of their AP [24]. Therefore, a relationship between the activation of T lymphocytes and the progression of AP is believed to exist, although the function of peripheral blood CD4+ T lymphocytes and CD19+ B lymphocytes in AP is still unclear.

To our knowledge, activation of circulating lymphocytes, both CD4+ T and CD19+ B lymphocytes, is a normal response to inflammation and is more likely to enhance the system's resistance to infection. However, excessive or uncontrolled activation may release toxic mediators, such as cytokines and oxygen free radicals [25]. CD4+ and CD8+ lymphocytes are the two major subsets of T lymphocytes, also known as T helper (Th) lymphocytes and cytotoxic T lymphocytes (CTLs). In our cohort, the proportion of CD4+ lymphocytes was significantly depleted in AP patients with OF, whereas the proportion of CD8+ lymphocytes was similar in the NOF and OF groups; B lymphocytes, by contrast, were markedly increased in AP patients with OF. CD4+ Th lymphocytes are critical to the adaptive immune response, and some subsets secrete anti-inflammatory cytokines such as interleukin (IL)-10 and transforming growth factor (TGF)-β [26, 27]. We speculate that the pronounced decrease in the proportion of CD4+ T lymphocytes when AP is present may indicate immunosuppression. The cause of this reduction may be related to increased apoptosis of lymphocytes and homing of intestinal lymphocytes after pancreatitis occurs [28, 29].

Previously, in a mouse model of pancreatitis, pancreatic oedema, amylase, and pathological scores of B-cell-deficient mice were found to be significantly increased, indicating that B lymphocytes can inhibit inflammation and reduce pancreatic damage in AP. B lymphocytes are generally believed to have immunomodulatory functions, inhibiting the activation and proliferation of other inflammatory cells by secreting anti-inflammatory factors or antibodies [30, 31] and presenting antigens [22]. Interestingly, in this investigation the CD19+ B lymphocyte data may be of value as a reference for predicting the development of OF: the greater the number of activated CD19+ B lymphocytes, the more severe the inflammatory response and the more likely OF was to occur. When all AP cases were combined, CD4+ T lymphocytes, CD19+ B lymphocytes, and the combination of the two had good value in predicting OF in AP, and the predictive effect remained evident after considering the aetiology of AP. These predictors are also easier to implement than complex scoring systems.
Whether the immunological alterations observed in B lymphocytes are related to the pathogenesis of the different causes of AP cannot be answered at present. Our findings suggest a fundamental difference in the pathophysiology of biliary AP and hyperlipidaemia AP. Biliary pancreatitis is caused by obstruction of the pancreatic duct by gallbladder or bile duct stones, which blocks pancreatic secretion upstream [32], whereas hyperlipidaemia pancreatitis is caused by high TG levels, the accumulation of oxidation products, calcium overload, and related factors [33, 34]. These factors may activate B lymphocytes and inhibit harmful inflammatory responses; however, the exact mechanism needs further clarification.

In summary, CD4+ T lymphocytes and CD19+ B lymphocytes are introduced as easily measurable parameters that can be used to assess OF in AP patients. However, our research has some limitations. We studied only Chinese patients, whose lymphocytes may differ from those of other populations, and the number of patients was limited (n = 133). Further, we did not compare the AP patients with healthy controls. Additionally, since immune function may change differently during the occurrence and development of AP, separate tests may need to be performed at different stages of the disease; this study investigated only immune function at admission and did not dynamically track changes in peripheral blood lymphocyte subsets during hospitalization. Finally, although we analysed peripheral blood lymphocyte subsets across the different causes of AP, the subgroup sample sizes were small. Studies with larger sample sizes should be conducted to establish the true value of CD4+ T and CD19+ B lymphocytes in predicting OF in AP.
## 5. Conclusion
Excessive or uncontrolled activation of circulating lymphocytes may be important in the development of multiple organ failure. Patients with lower CD4+ T lymphocyte proportions and increased peripheral CD19+ B lymphocyte levels at admission may have a higher risk of developing OF in AP, and these indicators appear to be novel predictors of OF in AP.
---
*Source: 1012584-2018-12-05.xml* | 2018 |
# A Poisson-Gamma Model for Zero Inflated Rainfall Data
**Authors:** Nelson Christopher Dzupire; Philip Ngare; Leo Odongo
**Journal:** Journal of Probability and Statistics
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1012647
---
## Abstract
Rainfall modeling is significant for prediction and forecasting purposes in agriculture, weather derivatives, hydrology, and risk and disaster preparedness. Normally two models are used for the rainfall process as a chain-dependent process, one representing the occurrence and one the intensity of rainfall. Such paired models help in understanding the physical features and dynamics of the rainfall process. However, rainfall data are zero inflated and exhibit overdispersion, which such models tend to underestimate. In this study we model the two processes simultaneously as a compound Poisson process: the rainfall events are modeled as a Poisson process, while the intensity of each rainfall event is Gamma distributed. We minimize overdispersion by introducing a dispersion parameter in the model, implemented through Tweedie distributions. Rainfall data simulated from the model resemble the actual rainfall data in terms of seasonal variation, mean, variance, and magnitude. The model also captures small but important properties of the rainfall process. The model developed can be used in forecasting and predicting rainfall amounts and occurrences, which is important in weather derivatives, agriculture, hydrology, and the prediction of drought and flood occurrences.
---
## Body
## 1. Introduction
Climate variables, in particular rainfall occurrence and intensity, hugely impact the human and physical environment. Knowledge of the frequency of occurrence and the intensity of rainfall events is essential for the planning, design, and management of various water resource systems [1]. Rain-fed agriculture in particular is sensitive to weather, and crop production depends directly on the amount of rainfall and its occurrence. Rainfall modeling therefore has a great impact on crop growth, weather derivatives, hydrological systems, drought and flood management, and crop simulation studies.

Rainfall modeling is also important in the pricing of weather derivatives, financial instruments used as a risk management tool to reduce the risk associated with adverse or unexpected weather conditions.

Further, as climate change greatly affects the environment, there is an urgent need to predict the variability of rainfall for future periods under different climate change scenarios in order to provide the necessary information for high quality climate-related impact studies [1].

However, modeling precipitation poses several challenges. The first is accurate measurement: rainfall data consist of sequences of values which are either zero or some positive number (intensity), depending on the depth of accumulation over discrete intervals, and factors like wind can affect collection accuracy. Second, rainfall is localized, unlike temperature, which is highly correlated across regions; the holder of a rainfall-based derivative may therefore suffer geographical basis risk. The final challenge is the choice of a proper probability distribution function to describe precipitation data: the statistical properties of precipitation are far more complex than those of other variables, and a more sophisticated distribution is required [2].

Rainfall has been modeled as a chain-dependent process, in which a two-state Markov chain represents the occurrence of rainfall and the intensity is modeled by fitting a suitable distribution such as the Gamma [3], exponential, or mixed exponential [1, 4]. These models are easy to understand and interpret and use maximum likelihood to find the parameters. However, they involve many parameters to fully describe the dynamics of rainfall, as well as several assumptions about the process.

Wilks [5] proposed a multisite model for daily precipitation using a combination of a two-state Markov process (for the rainfall occurrence) and a mixed exponential distribution (for the precipitation amount). He found that the mixture of exponential distributions offered a much better fit than the commonly used Gamma distribution.

In the study of Leobacher and Ngare [3], precipitation is modeled on a monthly basis by constructing a suitable Markov-Gamma process to take into account seasonal changes of precipitation. It is assumed that rainfall data for different years of the same month are independent and identically distributed, and that precipitation can be forecast with sufficient accuracy for a month.

Another approach to modeling rainfall is based on Poisson cluster models; two of the most recognized cluster-based models in the stochastic modeling of rainfall are the Neyman-Scott Rectangular Pulses model and the Bartlett-Lewis Rectangular Pulse model. These models represent rainfall sequences in time and rainfall fields in space, where both the occurrence and depth processes are combined.
The difficulty with Poisson cluster models, as observed by Onof et al. [6], is deciding how many features to address while keeping the model mathematically tractable. In addition, these models are best fitted by the method of moments, which requires matching analytic expressions for statistical properties such as the mean and variance.

Carmona and Diko [7] developed a time-homogeneous jump Markov process to describe rainfall dynamics. The rainfall process was assumed to take the form of storms, themselves consisting of cells. At a cell arrival time the rainfall process jumps up by a random amount, and at an extinction time it jumps down by a random amount, both arrivals modeled as Poisson processes. Each time the rain intensity changes, an exponential increase occurs either upwards or downwards. To preserve nonnegative intensity, the downward jump size is truncated to the current intensity. The Markov jump process also allows a jump directly to zero, corresponding to the state of no rain [8].

In this study the rainfall process is modeled as a single model in which the occurrence and intensity of rainfall are modeled simultaneously: the Poisson process models the daily occurrence of rainfall, while the intensity is modeled using the Gamma distribution for the magnitudes of the jumps of the Poisson process. Together these form a compound Poisson process, the Poisson-Gamma model. The contribution of this study is twofold: a Poisson-Gamma model that describes the rainfall occurrence and intensity at once, and a suitable model for zero inflated data which reduces overdispersion.

This paper is structured as follows. In Section 2 the Poisson-Gamma model is described and formulated mathematically, while Section 3 presents methods for estimating the parameters of the model. In Section 4 the model is fitted to the data, the goodness of fit is evaluated by the mean deviance, and quantile residuals provide diagnostic checks. Simulation and forecasting are carried out in Section 5, and the study concludes in Section 6.
## 2. Model Formulation
### 2.1. Model Description
Rainfall comprises discrete and continuous components: if it does not rain, the amount of rainfall is the discrete value zero, whereas if it rains the amount is continuous. In most research works [3, 4, 9] the rainfall process is represented by two separate models: one for the occurrence and, conditioned on the occurrence, another for the amount of rainfall. Rainfall occurrence is typically modeled as a first- or higher-order Markov chain, and conditioned on this process a distribution is fitted to the precipitation amount; commonly used distributions are the Gamma, exponential, mixture of exponentials, and Weibull. These models rely on several assumptions and include several parameters to capture the observed temporal dependence of the rainfall process. However, rainfall data exhibit overdispersion [10], caused by factors such as clustering, unaccounted temporal correlation, or the fact that the data are a product of Bernoulli trials with unequal probabilities of events. Stochastic models developed in this way underestimate the overdispersion of rainfall data, which may result in underestimating the risk of low or high seasonal rainfall.

Our interest in this research is to model the occurrence and intensity of rainfall simultaneously in one model, using a Poisson-Gamma probability distribution that is flexible enough to model the exact zeros and the amounts of rainfall together.

Rainfall is modeled as a compound Poisson process, a Lévy process with Gamma distributed jumps. This is motivated by the sudden changes of rainfall amount from zero to a large positive value following each rainfall event, which are modeled as pure jumps of the compound Poisson process. We assume rainfall arrives in the form of storms following a Poisson process, and at each arrival time the current intensity increases by a random, Gamma distributed amount. The jumps of the driving process represent the arrivals of storm events, each generating a jump of random size. Each storm comprises cells that also arrive following another Poisson process.

Poisson cluster processes are an appropriate tool here, as rainfall data indicate the presence of clusters of rainfall cells. As observed by Onof et al. [6], the use of Gamma distributed variables for cell depth improves the reproduction of extreme values.

Lord [11] used the Poisson-Gamma compound process to model motor vehicle crashes, examining the effects of low sample mean values and small sample sizes on the estimation of the fixed dispersion parameter. Wang [12] proposed a Poisson-Gamma compound approach for species richness estimation.
### 2.2. Mathematical Formulation
Let $N_t$ be the total number of rainfall events per day, following a Poisson process, so that

$$P(N_t=n)=\frac{e^{-\lambda}\lambda^{n}}{n!},\quad \forall n\in\mathbb{N},\qquad N_t=\sum_{i\geq 1}\mathbf{1}_{[t_i,\infty)}(t), \tag{1}$$

where the $t_i$ are the arrival times. The amount of rainfall is the total sum of the jumps of the rainfall events, say $(y_i)_{i\geq 1}$, assumed independent and identically Gamma distributed and independent of the times of occurrence of rainfall:

$$L_t=\begin{cases}\sum_{i=1}^{N_t}y_i, & N_t=1,2,3,\ldots,\\ 0, & N_t=0,\end{cases} \tag{2}$$

such that $y_i\sim\mathrm{Gamma}(\alpha,P)$ with probability density function

$$f(y)=\begin{cases}\dfrac{\alpha^{P}y^{P-1}e^{-\alpha y}}{\Gamma(P)}, & y>0,\\ 0, & y\leq 0.\end{cases} \tag{3}$$
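The following short simulation, under illustrative (not fitted) parameter values, shows how (1)-(3) generate daily rainfall with exact zeros on dry days:

```python
# Minimal simulation of the compound Poisson-Gamma model (1)-(3): the daily
# number of events N ~ Poisson(lam) and each event adds an independent
# Gamma(alpha, P) jump (alpha = rate, P = shape, matching the pdf in (3)).
import numpy as np

rng = np.random.default_rng(0)

def daily_rainfall(lam: float, alpha: float, P: float, n_days: int) -> np.ndarray:
    n_events = rng.poisson(lam, size=n_days)  # N_t for each day
    # Sum of N_t iid Gamma(shape=P, scale=1/alpha) jumps; an empty sum gives
    # exactly 0, producing the dry days of the zero inflated process.
    return np.array([rng.gamma(P, 1.0 / alpha, size=n).sum() for n in n_events])

rain = daily_rainfall(lam=0.8, alpha=0.5, P=0.9, n_days=365)
print(f"dry days: {(rain == 0).mean():.0%}, mean: {rain.mean():.2f} mm")
```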
Lemma 1. The compound Poisson process (2) has cumulant function

$$\psi(s,t,x)=\lambda t\left(M_Y(x)-1\right) \tag{4}$$

for $0\leq s<t$ and $x\in\mathbb{R}$, where $M_Y(x)$ is the moment generating function of the Gamma distribution.
Proof. The moment generating function $M_L(s)$ of $L_t$ is

$$M_L(s)=E\left[e^{sL_t}\right]=\sum_{j=0}^{\infty}E\left[e^{s(y_1+\cdots+y_j)}\,\middle|\,N_t=j\right]P(N_t=j)=\sum_{j=0}^{\infty}M_Y(s)^{j}\,\frac{e^{-\lambda t}(\lambda t)^{j}}{j!}=e^{-\lambda t+M_Y(s)\lambda t}, \tag{5}$$

where the conditioning drops because of the independence of the jumps and $N_t$. So the cumulant of $L$ is

$$\ln M_L(s)=\lambda t\left(M_Y(s)-1\right)=\lambda t\left[\left(1-\frac{s}{\alpha}\right)^{-P}-1\right]. \tag{6}$$

If we observe the occurrence of rainfall for $n$ periods, then we have the sequence $\{L_i\}_{i=1}^{n}$, which is independent and identically distributed. If no rainfall occurred on a particular day, then

$$P(L=0)=e^{-\lambda}\frac{\lambda^{0}}{0!}=e^{-\lambda}=p_0. \tag{7}$$

Therefore the process has a point mass at $0$, which implies that $L$ is not an entirely continuous random variable.
Lemma 2. The probability density function of $L$ in (2) is

$$f_\theta(L)=e^{-\lambda}\,\delta_0(L)+e^{-\lambda-\alpha L}\,L^{-1}\,r_P\!\left(vL^{P}\right), \tag{8}$$

where $\delta_0(L)$ is the Dirac function at zero and $v$ and $r_P$ are defined in the proof below.
Proof. Let $q_0=1-p_0$ be the probability that it rained. For $L>0$ we have

$$f_\theta^{+}(L)=\sum_{i=1}^{\infty}\frac{p_i}{q_0}\,\frac{\alpha^{iP}L^{iP-1}e^{-\alpha L}}{\Gamma(iP)},\qquad p_i=\frac{e^{-\lambda}\lambda^{i}}{i!}, \tag{9}$$

so that

$$f_\theta^{+}(L)=\frac{e^{-\lambda}}{q_0}\,e^{-\alpha L}L^{-1}\sum_{i=1}^{\infty}\frac{\left(\lambda\alpha^{P}L^{P}\right)^{i}}{i!\,\Gamma(iP)}=L^{-1}e^{-\alpha L}\left(e^{\lambda}-1\right)^{-1}\sum_{i=1}^{\infty}\frac{\left(\lambda\alpha^{P}L^{P}\right)^{i}}{i!\,\Gamma(iP)}.$$

If we let $v=\lambda\alpha^{P}$ and $r_P(vL^{P})=\sum_{i=1}^{\infty}\frac{(vL^{P})^{i}}{i!\,\Gamma(iP)}$, then

$$f_\theta^{+}(L)=L^{-1}e^{-\alpha L}\left(e^{\lambda}-1\right)^{-1}r_P\!\left(vL^{P}\right). \tag{10}$$
We can express the probability density function $f_\theta(L)$ in terms of a Dirac function as

$$f_\theta(L)=p_0\,\delta_0(L)+q_0\,f_\theta^{+}(L)=e^{-\lambda}\,\delta_0(L)+e^{-\lambda-\alpha L}\,L^{-1}\,r_P\!\left(vL^{P}\right). \tag{11}$$

Consider a random sample of size $n$ of $L_i$ with the probability density function (11). If we assume that there are $m$ positive values $L_1,L_2,\ldots,L_m$, then there are $M=n-m$ zeros, where $m>0$. We observe that $m\sim\mathrm{Bi}(n,1-e^{-\lambda})$ and $P(m=0)=e^{-n\lambda}$; hence the likelihood function is

$$\mathcal{L}=\binom{n}{m}\,p_0^{\,n-m}\,q_0^{\,m}\prod_{i=1}^{m}f_\theta^{+}(L_i) \tag{13}$$

and the log-likelihood for $\theta=(\lambda,\alpha,P)$ is

$$\log\mathcal{L}(\theta;L_1,\ldots,L_n)=\log\binom{n}{m}+\lambda(m-n)+m\log\left(1-e^{-\lambda}\right)+\sum_{i=1}^{m}\left[-\lambda-\alpha L_i-\log L_i\right]+\sum_{i=1}^{m}\log\sum_{j=1}^{\infty}\frac{\left(\lambda\alpha^{P}L_i^{P}\right)^{j}}{j!\,\Gamma(jP)}. \tag{14}$$

Setting $\partial\log\mathcal{L}/\partial\lambda=0$ gives

$$(m-n)+\frac{m\,e^{-\lambda}}{1-e^{-\lambda}}-m+\frac{1}{\lambda}\sum_{i=1}^{m}\frac{\sum_{j=1}^{\infty}j\,W_{ij}}{\sum_{j=1}^{\infty}W_{ij}}=0,\qquad W_{ij}=\frac{\left(\lambda\alpha^{P}L_i^{P}\right)^{j}}{j!\,\Gamma(jP)}, \tag{15}$$

in which $\lambda$ appears both inside and outside the infinite series, so $\lambda$ cannot be expressed in closed form; a similar derivation shows the same for $\alpha$. Therefore $\lambda$ and $\alpha$ can only be estimated numerically. Withers and Nadarajah [13] also observed that the probability density function cannot be expressed in closed form, which makes it difficult to find analytic forms of the estimators. We therefore express the probability density function in terms of exponential dispersion models, as described below.
Definition 3 (see [14]). A probability density function of the form

$$f(y;\theta,\Theta)=a(y,\Theta)\exp\left(\frac{1}{\Theta}\left[y\theta-k(\theta)\right]\right) \tag{16}$$

for suitable functions $k(\cdot)$ and $a(\cdot)$ is called an exponential dispersion model. Here $\Theta>0$ is the dispersion parameter, and $k(\theta)$ is the cumulant function of the exponential dispersion model; when $\Theta=1$, the derivatives $k'(\cdot),k''(\cdot),\ldots$ are the successive cumulants of the distribution [15]. Exponential dispersion models were first introduced by Fisher in 1922.

If we let $\ell_i=\log f(y_i;\theta_i,\Theta)$ be the contribution of $y_i$ to the log-likelihood $\ell=\sum_i\ell_i$, then

$$\ell_i=\frac{1}{\Theta}\left[y_i\theta_i-k(\theta_i)\right]+\log a(y_i,\Theta),\qquad \frac{\partial\ell_i}{\partial\theta_i}=\frac{1}{\Theta}\left[y_i-k'(\theta_i)\right],\qquad \frac{\partial^{2}\ell_i}{\partial\theta_i^{2}}=-\frac{1}{\Theta}k''(\theta_i). \tag{17}$$

Since $E[\partial\ell_i/\partial\theta_i]=0$,

$$\frac{1}{\Theta}\left(E[y_i]-k'(\theta_i)\right)=0\ \Longrightarrow\ E[y_i]=k'(\theta_i), \tag{18}$$

and since $-E[\partial^{2}\ell_i/\partial\theta_i^{2}]=E[(\partial\ell_i/\partial\theta_i)^{2}]$,

$$\frac{k''(\theta_i)}{\Theta}=\frac{\mathrm{Var}(y_i)}{\Theta^{2}}\ \Longrightarrow\ \mathrm{Var}(y_i)=\Theta\,k''(\theta_i). \tag{19}$$

Therefore the mean of the distribution is $E[Y]=\mu=dk(\theta)/d\theta$ and the variance is $\mathrm{Var}(Y)=\Theta\,d^{2}k(\theta)/d\theta^{2}$. The relationship $\mu=dk(\theta)/d\theta$ is invertible, so $\theta$ can be expressed as a function of $\mu$; as such we have $\mathrm{Var}(Y)=\Theta V(\mu)$, where $V(\mu)$ is called the variance function.
Definition 4. The family of exponential dispersion models whose variance functions are of the form $V(\mu)=\mu^{p}$ for $p\in(-\infty,0]\cup[1,\infty)$ are called Tweedie family distributions.

For example, $p=0$ gives the normal distribution; $p=1$ with $\Theta=1$ gives the Poisson distribution; $p=2$ gives the Gamma distribution; and $p=3$ gives the inverse Gaussian distribution. Tweedie densities cannot in general be expressed in closed form (apart from these examples) but can instead be identified by their cumulant generating functions.

From $\mathrm{Var}(Y)=\Theta\,d^{2}k(\theta)/d\theta^{2}$, for the Tweedie family we have

$$\mathrm{Var}(Y)=\Theta\frac{d^{2}k(\theta)}{d\theta^{2}}=\Theta V(\mu)=\Theta\mu^{p}. \tag{20}$$

Hence we can solve for $\mu$ and $k(\theta)$:

$$\mu=\frac{dk(\theta)}{d\theta},\qquad \frac{d\mu}{d\theta}=\mu^{p}\ \Longrightarrow\ \int\frac{d\mu}{\mu^{p}}=\int d\theta\ \Longrightarrow\ \theta=\begin{cases}\dfrac{\mu^{1-p}}{1-p}, & p\neq 1,\\ \log\mu, & p=1,\end{cases} \tag{21}$$

equating the constants of integration to zero. For $p\neq 1$ we have $\mu=\left[(1-p)\theta\right]^{1/(1-p)}$, so that

$$k(\theta)=\int\left[(1-p)\theta\right]^{1/(1-p)}d\theta=\frac{\left[(1-p)\theta\right]^{(2-p)/(1-p)}}{2-p}=\frac{\mu^{2-p}}{2-p},\qquad p\neq 2. \tag{22}$$
Proposition 5. The cumulant generating function of a Tweedie distribution for $1<p<2$ is

$$\log M_Y(t)=\frac{1}{\Theta}\,\frac{\mu^{2-p}}{2-p}\left[\left(1+t\Theta(1-p)\mu^{p-1}\right)^{(2-p)/(1-p)}-1\right]. \tag{23}$$
Proof. From (16) the moment generating function is

$$M_Y(t)=\int e^{ty}\,a(y,\Theta)\exp\left(\frac{y\theta-k(\theta)}{\Theta}\right)dy=\exp\left(\frac{k(\theta+t\Theta)-k(\theta)}{\Theta}\right)\int a(y,\Theta)\exp\left(\frac{y(\theta+t\Theta)-k(\theta+t\Theta)}{\Theta}\right)dy, \tag{24}$$

and the remaining integrand is the density (16) with canonical parameter $\theta+t\Theta$, so it integrates to one. Hence the cumulant generating function is

$$\log M_Y(t)=\frac{1}{\Theta}\left[k(\theta+t\Theta)-k(\theta)\right]. \tag{25}$$
For $1<p<2$ we substitute $\theta$ and $k(\theta)$ to obtain

$$\log M_Y(t)=\frac{1}{\Theta}\,\frac{\mu^{2-p}}{2-p}\left[\left(1+t\Theta(1-p)\mu^{p-1}\right)^{(2-p)/(1-p)}-1\right]. \tag{26}$$

By comparing the cumulant generating functions in Lemma 1 and Proposition 5, the compound Poisson process can be identified as a Tweedie distribution with parameters $(\lambda,\alpha,P)$ expressed as

$$\lambda=\frac{\mu^{2-p}}{\Theta(2-p)},\qquad \alpha=\Theta(p-1)\mu^{p-1},\qquad P=\frac{2-p}{p-1}. \tag{27}$$

The requirement that the Gamma shape parameter $P$ be positive implies that only Tweedie distributions with $1<p<2$ can represent the Poisson-Gamma compound process. In addition, $\lambda>0$ and $\alpha>0$ imply $\mu>0$ and $\Theta>0$.
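As a small sketch, the parameter map (27) and the dry-day probability of Proposition 6 below can be transcribed directly; note that the paper's $\alpha$ enters the Gamma density (3) as a rate, while Lemma 7 later works with $\tau=1/\alpha$:

```python
# Direct transcription of (27) plus the Proposition 6 dry-day probability,
# following the paper's notation as printed.
import math

def tweedie_to_compound_poisson(mu: float, theta: float, p: float):
    assert 1.0 < p < 2.0 and mu > 0 and theta > 0
    lam = mu ** (2.0 - p) / (theta * (2.0 - p))  # Poisson rate
    alpha = theta * (p - 1.0) * mu ** (p - 1.0)  # Gamma parameter as printed
    P = (2.0 - p) / (p - 1.0)                    # Gamma shape
    return lam, alpha, P

lam, alpha, P = tweedie_to_compound_poisson(mu=3.0, theta=14.8, p=1.53)
print(f"P(L=0) = {math.exp(-lam):.3f}")  # = exp(-mu^(2-p) / (Theta * (2-p)))
```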
Proposition 6. Based on the Tweedie distribution, the probability of receiving no rainfall at all is

$$P(L=0)=\exp\left(-\frac{\mu^{2-p}}{\Theta(2-p)}\right), \tag{28}$$

and the probability of a rainfall event is

$$P(L>0)=W(\lambda,\alpha,L,P)\,\exp\left[\frac{1}{\Theta}\left(\frac{L\,\mu^{1-p}}{1-p}-\frac{\mu^{2-p}}{2-p}\right)\right], \tag{29}$$

where

$$W(\lambda,\alpha,L,P)=\sum_{j=1}^{\infty}\frac{\lambda^{j}\left(\alpha L\right)^{jP}e^{-\lambda}}{j!\,\Gamma(jP)}. \tag{30}$$
Proof. This follows by directly substituting the values of $\lambda$, $\theta$, and $k(\theta)$ into (16).

The function $W(\lambda,\alpha,L,P)$ is an example of Wright's generalized Bessel function; however, it cannot be expressed in terms of the more common Bessel functions. To evaluate it, the value of $j$ at which the summand $W_j$ attains its maximum is determined [15].
## 3. Parameter Estimation
We approximate the function

$$W(\lambda,\alpha,L,P)=\sum_{j=1}^{\infty}\frac{\lambda^{j}(\alpha L)^{jP}e^{-\lambda}}{j!\,\Gamma(jP)}=\sum_{j=1}^{\infty}W_j$$

following the procedure of [15], in which the value of $j$ at which $W_j$ attains its maximum is found. We treat $j$ as continuous, differentiate $\log W_j$ with respect to $j$, and set the derivative to zero. For $L>0$ we then have the following.
Lemma 7 (see [15]). The log maximum approximation of $W_j$ is

$$\log W_{\max}=j_{\max}\left[\log\frac{L^{P}}{(p-1)^{P}\,\Theta^{1+P}\,(2-p)}+1+P-P\log P-(1+P)\log j_{\max}\right]-\log(2\pi)-\frac{1}{2}\log P-\log j_{\max}, \tag{31}$$

where $j_{\max}=\dfrac{L^{2-p}}{(2-p)\Theta}$.
Proof. Writing $\tau=1/\alpha$,

$$W(\lambda,\alpha,L,P)=\sum_{j=1}^{\infty}\frac{\lambda^{j}(\alpha L)^{jP}e^{-\lambda}}{j!\,\Gamma(jP)}=e^{-\lambda}\sum_{j=1}^{\infty}\frac{\lambda^{j}L^{jP}}{j!\,\tau^{jP}\,\Gamma(jP)}. \tag{32}$$

Substituting the values of $\lambda$ and $\alpha$ from (27), the powers of $\mu$ cancel, and the $j$-dependence of the summand is captured by the purely $L$-dependent series

$$W(L,\Theta,P)=\sum_{j=1}^{\infty}\frac{z^{j}}{j!\,\Gamma(jP)}=\sum_{j=1}^{\infty}W_j,\qquad z=\frac{L^{P}}{(p-1)^{P}\,\Theta^{1+P}\,(2-p)}. \tag{34}$$

Considering $W_j$, we have

$$\log W_j=j\log z-\log\Gamma(1+j)-\log\Gamma(Pj). \tag{35}$$

Using Stirling's approximation of the Gamma functions,

$$\log\Gamma(1+j)\approx(1+j)\log(1+j)-(1+j)+\frac{1}{2}\log\left(2\pi(1+j)\right),\qquad \log\Gamma(Pj)\approx Pj\log(Pj)-Pj+\frac{1}{2}\log(2\pi Pj), \tag{36}$$

and hence

$$\log W_j\approx j\left[\log z+1+P-P\log P-(1+P)\log j\right]-\log(2\pi)-\frac{1}{2}\log P-\log j. \tag{37}$$

For $1<p<2$ we have $P=(2-p)/(p-1)>0$, so the logarithms have positive arguments. Differentiating with respect to $j$ gives

$$\frac{\partial\log W_j}{\partial j}\approx\log z-\frac{1}{j}-\log j-P\log(Pj)\approx\log z-\log j-P\log(Pj), \tag{38}$$

where $1/j$ is ignored for large $j$. Solving $\partial\log W_j/\partial j=0$ yields

$$j_{\max}=\frac{L^{2-p}}{(2-p)\Theta}. \tag{39}$$

Substituting $j_{\max}$ into (37) gives (31), and the result follows.

It can be observed that $\partial\log W_j/\partial j$ is monotonically decreasing, so $\log W_j$ is strictly concave as a function of $j$. Therefore $W_j$ decays faster than geometrically on either side of $j_{\max}$ [15]. Hence, if we estimate $W(L,\Theta,P)$ by the partial sum $\hat{W}(L,\Theta,P)=\sum_{j=j_d}^{j_u}W_j$, the approximation error is bounded by geometric sums:

$$W(L,\Theta,P)-\hat{W}(L,\Theta,P)<W_{j_d-1}\,\frac{1}{1-r_l}+W_{j_u+1}\,\frac{1}{1-r_u},\qquad r_l=\exp\left(\left.\frac{\partial\log W_j}{\partial j}\right|_{j=j_d-1}\right),\quad r_u=\exp\left(\left.\frac{\partial\log W_j}{\partial j}\right|_{j=j_u+1}\right). \tag{41}$$

For quick and accurate evaluation of $W(\lambda,\alpha,L,P)$, the series is therefore summed only over those terms that contribute significantly to the sum; a numerical sketch is given below.

Generalized linear models (GLMs) extend standard linear regression models to incorporate nonnormal response distributions and possibly nonlinear functions of the mean. The advantage of GLMs is that the fitting process maximizes the likelihood for the chosen response distribution, and the choice is not restricted to normality, unlike linear regression [16]. The exponential dispersion models are the response distributions for generalized linear models, and Tweedie distributions are members of this class; consequently, fitting a Tweedie distribution follows the framework of fitting a generalized linear model.
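A minimal numerical sketch of this strategy, assuming $1<p<2$ and an observation $L>0$: it evaluates $\log W(L,\Theta,P)$ of (34) by summing the exact log-scale terms outward from $j_{\max}$ (the factors $e^{-\lambda}$, $e^{-\alpha L}$, and $L^{-1}$ belong to the density kernel and are reinstated separately).

```python
# Sketch of the Lemma 7 series evaluation, not a published implementation.
import math

def log_w(L: float, theta: float, p: float, rel_tol: float = 1e-12) -> float:
    """log W(L, Theta, P) of eq. (34), for L > 0 and 1 < p < 2."""
    P = (2.0 - p) / (p - 1.0)
    log_z = (P * math.log(L) - P * math.log(p - 1.0)
             - (1.0 + P) * math.log(theta) - math.log(2.0 - p))

    def log_wj(j: int) -> float:
        # Exact log of the j-th summand: j*log(z) - log(j!) - log(Gamma(jP)).
        return j * log_z - math.lgamma(j + 1.0) - math.lgamma(j * P)

    j_max = max(1, round(L ** (2.0 - p) / ((2.0 - p) * theta)))  # eq. (39)
    peak = log_wj(j_max)
    total = 1.0  # the j_max term, rescaled by exp(-peak) for stability
    # Sum outward from j_max; terms decay faster than geometrically (eq. 41).
    for step in (1, -1):
        j = j_max + step
        while j >= 1:
            term = math.exp(log_wj(j) - peak)
            total += term
            if term < rel_tol:
                break
            j += step
    return peak + math.log(total)
```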
Lemma 8. In the case of a canonical link function, the sufficient statistics for $\{\beta_j\}$ are $\sum_{i=1}^{n}y_i x_{ij}$.

Proof. For $n$ independent observations $y_i$ from the exponential dispersion model (16), the log-likelihood function is

$$\ell(\beta)=\sum_{i=1}^{n}\ell_i=\sum_{i=1}^{n}\log f(y_i;\theta_i,\Theta)=\sum_{i=1}^{n}\frac{y_i\theta_i-k(\theta_i)}{\Theta}+\sum_{i=1}^{n}\log a(y_i,\Theta). \tag{42}$$

With the canonical link, $\theta_i=\sum_{j=1}^{p}\beta_j x_{ij}$; hence

$$\sum_{i=1}^{n}y_i\theta_i=\sum_{i=1}^{n}y_i\sum_{j=1}^{p}\beta_j x_{ij}=\sum_{j=1}^{p}\beta_j\sum_{i=1}^{n}y_i x_{ij}, \tag{43}$$

so the data enter this part of the likelihood only through $\sum_{i=1}^{n}y_i x_{ij}$.
Proposition 9. Given that $y_i$ is distributed as (16), its GLM fit depends only on its first two moments, $\mu_i$ and $\mathrm{Var}(y_i)$.

Proof. Let $g(\mu_i)$ be the link function of the GLM, such that $\eta_i=\sum_{j=1}^{p}\beta_j x_{ij}=g(\mu_i)$. The likelihood equations are

$$\frac{\partial\ell(\beta)}{\partial\beta_j}=\sum_{i=1}^{n}\frac{\partial\ell_i}{\partial\beta_j}=0\quad\text{for all }j. \tag{44}$$

Using the chain rule,

$$\frac{\partial\ell_i}{\partial\beta_j}=\frac{\partial\ell_i}{\partial\theta_i}\,\frac{\partial\theta_i}{\partial\mu_i}\,\frac{\partial\mu_i}{\partial\eta_i}\,\frac{\partial\eta_i}{\partial\beta_j}=\frac{y_i-\mu_i}{\mathrm{Var}(y_i)}\,x_{ij}\,\frac{\partial\mu_i}{\partial\eta_i}. \tag{45}$$

Hence

$$\frac{\partial\ell(\beta)}{\partial\beta_j}=\sum_{i=1}^{n}\frac{y_i-\mu_i}{\mathrm{Var}(y_i)}\,x_{ij}\,\frac{\partial\mu_i}{\partial\eta_i}=\sum_{i=1}^{n}\frac{y_i-\mu_i}{\Theta\mu_i^{p}}\,x_{ij}\,\frac{\partial\mu_i}{\partial\eta_i}. \tag{46}$$

Since $\mathrm{Var}(y_i)=\Theta V(\mu_i)$, the relationship between the mean and the variance characterizes the fit.

Clearly a GLM only requires the first two moments of the response $y_i$; hence, despite the difficulty of a full likelihood analysis of the Tweedie distribution, whose density cannot be expressed in closed form for $1<p<2$, we can still fit a Tweedie GLM. The likelihood itself is only required to estimate $p$ and $\Theta$ and for diagnostic checks of the model.
Proposition 10. Under the standard regularity conditions, for large $n$ the maximum likelihood estimator $\hat\beta$ of $\beta$ in a generalized linear model is efficient and approximately normally distributed.

Proof. From the log-likelihood, the covariance matrix of the estimator is the inverse of the information matrix $J=E\left[-\partial^{2}\ell(\beta)/\partial\beta_h\,\partial\beta_j\right]$. Now

$$E\left[-\frac{\partial^{2}\ell(\beta)}{\partial\beta_h\,\partial\beta_j}\right]=E\left[\frac{\partial\ell_i}{\partial\beta_h}\,\frac{\partial\ell_i}{\partial\beta_j}\right]=E\left[\frac{(y_i-\mu_i)^{2}}{\mathrm{Var}(y_i)^{2}}\,x_{ih}x_{ij}\left(\frac{\partial\mu_i}{\partial\eta_i}\right)^{2}\right]=\frac{x_{ih}x_{ij}}{\mathrm{Var}(y_i)}\left(\frac{\partial\mu_i}{\partial\eta_i}\right)^{2}, \tag{47}$$

and summing over the observations,

$$J=\sum_{i=1}^{n}\frac{x_{ih}x_{ij}}{\mathrm{Var}(y_i)}\left(\frac{\partial\mu_i}{\partial\eta_i}\right)^{2}=X^{T}WX,\qquad W=\mathrm{diag}\left[\frac{1}{\mathrm{Var}(y_i)}\left(\frac{\partial\mu_i}{\partial\eta_i}\right)^{2}\right]. \tag{48}$$

Therefore $\hat\beta$ is approximately $N\left(\beta,\left(X^{T}WX\right)^{-1}\right)$, with $\mathrm{Var}(\hat\beta)=\left(X^{T}\hat{W}X\right)^{-1}$, where $\hat{W}$ is evaluated at $\hat\beta$.

To compute $\hat\beta$ we use the iteratively reweighted least squares algorithm of Dobson and Barnett [17], in which the iterations use the working weights

$$w_i=\frac{1}{V(\mu_i)\,\dot{g}(\mu_i)^{2}},\qquad V(\mu_i)=\mu_i^{p}. \tag{49}$$

However, estimating $p$ is more difficult than estimating $\beta$ and $\Theta$, so much so that most researchers working with Tweedie densities fix $p$ a priori. In this study we use the procedure in [15], where the maximum likelihood estimate of $p$ is obtained by directly maximizing the profile likelihood function: for any given value of $p$ we find the maximum likelihood estimates of $\beta$ and $\Theta$ and compute the log-likelihood, repeating this over a range of values until we find the $p$ that maximizes the log-likelihood.

Given the estimated values of $p$ and $\beta$, an unbiased estimator of $\Theta$ is

$$\hat\Theta=\frac{1}{n}\sum_{i=1}^{n}\frac{\left(L_i-\mu_i(\hat\beta)\right)^{2}}{\mu_i(\hat\beta)^{\hat p}}. \tag{50}$$

Since for $1<p<2$ the Tweedie density cannot be expressed in closed form, it is recommended that the maximum likelihood estimate of $\Theta$ be computed iteratively from the full data [15].
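Putting the pieces together, a hedged sketch of the profile-likelihood search for $p$: for each trial $p$ the mean model is fit as a GLM (here via statsmodels' Tweedie family), $\Theta$ is estimated by a Pearson-type version of (50), and the log-likelihood is assembled from the `log_w` helper above plus the point mass at zero. The names `y` (daily rainfall with exact zeros) and `X` (intercept, sine, and cosine columns) are assumptions.

```python
# Profile-likelihood sketch for p, assuming log_w from the previous listing.
import numpy as np
import statsmodels.api as sm

def tweedie_loglik(y, mu, theta, p):
    ll = 0.0
    for yi, mi in zip(y, mu):
        lam = mi ** (2 - p) / (theta * (2 - p))          # eq. (27)
        if yi == 0:
            ll += -lam                                   # log P(L = 0), eq. (28)
        else:
            ll += (log_w(yi, theta, p) - np.log(yi)
                   + (yi * mi ** (1 - p) / (1 - p)
                      - mi ** (2 - p) / (2 - p)) / theta)
    return ll

def profile_p(y, X, grid=np.linspace(1.1, 1.9, 17)):
    best = None
    for p in grid:
        fam = sm.families.Tweedie(var_power=p)           # log link by default
        res = sm.GLM(y, X, family=fam).fit()
        mu = res.fittedvalues
        # Pearson-type dispersion estimate, cf. (50).
        theta = np.sum((y - mu) ** 2 / mu ** p) / (len(y) - X.shape[1])
        ll = tweedie_loglik(y, mu, theta, p)
        if best is None or ll > best[0]:
            best = (ll, p, res, theta)
    return best  # (log-likelihood, p-hat, GLM results, Theta-hat)
```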
## 4. Data and Model Fitting
### 4.1. Data Analysis
Daily rainfall data for Balaka district in Malawi, covering the period 1995-2015, are used. The data were obtained from the Meteorological Surveys of Malawi. Figure 1 shows a plot of the data.
Figure 1. Daily rainfall amount for Balaka district.

In summary, the minimum value is 0 mm, indicating days on which no rainfall occurred, whereas the maximum amount is 123.7 mm. The mean rainfall for the whole period is 3.167 mm.

We investigated the relationship between the variance and the mean of the data by plotting $\log(\text{variance})$ against $\log(\text{mean})$, as shown in Figure 2. From the figure we observe a linear relationship,

$$\log(\text{variance})=\alpha+\beta\log(\text{mean}), \tag{51}$$

so that

$$\text{variance}=A\cdot\text{mean}^{\beta},\qquad A\in\mathbb{R}. \tag{52}$$

Hence the variance can be expressed as some power $\beta\in\mathbb{R}$ of the mean, agreeing with the Tweedie variance function requirement.
Figure 2: Variance-mean relationship.
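This check is easy to reproduce. A small sketch, assuming the daily series is grouped into month-length blocks (the block length is our choice, not the paper's), estimates the slope $\beta$ of the log-variance on log-mean line:

```python
import numpy as np

def variance_mean_slope(rain, block=30):
    """Fit log(variance) = alpha + beta * log(mean) over consecutive
    blocks of the daily series (block length is an assumption here)."""
    n = rain.size // block * block
    groups = rain[:n].reshape(-1, block)
    m, v = groups.mean(axis=1), groups.var(axis=1, ddof=1)
    keep = (m > 0) & (v > 0)                 # drop fully dry blocks
    beta, alpha = np.polyfit(np.log(m[keep]), np.log(v[keep]), 1)
    return alpha, beta

rng = np.random.default_rng(1)
rain = rng.gamma(0.8, 5.0, 730) * rng.binomial(1, 0.4, 730)  # stand-in data
alpha, beta = variance_mean_slope(rain)
print(f"log-variance = {alpha:.2f} + {beta:.2f} * log-mean")  # beta ~ power p
```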
### 4.2. Fitted Model
To model the daily rainfall data we use sine and cosine terms as predictors, owing to the cyclic and seasonal nature of rainfall. For uniformity across years, we assume that February always ends on the 28th. The canonical link function is given by

$$\log\mu_i=a_0+a_1\sin\left(\frac{2\pi i}{365}\right)+a_2\cos\left(\frac{2\pi i}{365}\right), \tag{53}$$

where $i=1,2,\ldots,365$ corresponds to the day of the year and $a_0,a_1,a_2$ are the regression coefficients.

We first estimate $\hat p$ by maximizing the profile log-likelihood function. Figure 3 shows the graph of the profile log-likelihood; the value of $p$ that maximizes it is 1.5306.
Figure 3: Profile likelihood.

From the results obtained after fitting the model, both the cyclic cosine and sine terms are significant predictors of daily rainfall (Table 1). These covariates capture the seasonal variation in the stochastic model.
Table 1: Estimated parameter values.

| Parameter | Estimate | Std. error | t value | Pr(>\|t\|) |
| --- | --- | --- | --- | --- |
| $\hat a_0$ | 0.1653 | 0.0473 | 3.4930 | 0.0005*** |
| $\hat a_1$ | 0.9049 | 0.0572 | 15.8110 | <2e−16*** |
| $\hat a_2$ | 2.0326 | 0.0622 | 32.6720 | <2e−16*** |
| $\hat\Theta$ | 14.8057 | - | - | - |

Significance code: 0 '***'.

The predicted $\hat\mu_i$, $\hat p$, $\hat\Theta$ for each day depend only on that day's conditions, so for each day $i$ we have

$$\hat\mu_i=\exp\left[0.1653+0.9049\sin\left(\frac{2\pi i}{365}\right)+2.0326\cos\left(\frac{2\pi i}{365}\right)\right],\qquad \hat p=1.5306,\qquad \hat\Theta=14.8057. \tag{54}$$

From these estimated values we can calculate the parameters $(\hat\lambda_i,\hat\alpha_i,\hat P)$ from the corresponding formulas above as

$$\hat\lambda_i=\frac{1}{6.5716}\,\hat\mu_i^{\,0.4694},\qquad \hat\alpha_i=7.4284\,\hat\mu_i^{\,0.5306},\qquad \hat P=0.8847, \tag{55}$$

with $\hat\mu_i$ as in (54). Comparing the actual and predicted means, for 2 July we have $\hat\mu=0.3820$ whereas $\mu=0.4333$; similarly, for 31 December we have $\hat\mu=9.0065$ and $\mu=10.6952$. Figure 4 shows the estimated mean and the actual mean; the model generally behaves well.
Figure 4: Actual versus predicted mean.
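The fitted mean curve (54) and the conversion (55) to the compound Poisson-Gamma parameters can be reproduced in a few lines. The sketch below uses the published estimates and writes the Gamma jump in shape/scale form; treating the $\alpha$ of (27) as the Gamma scale is our reading of the notation.

```python
import numpy as np

a0, a1, a2 = 0.1653, 0.9049, 2.0326     # Table 1
p_hat, theta_hat = 1.5306, 14.8057

def mu_hat(i):
    """Fitted mean daily rainfall for day-of-year i, eq. (54)."""
    return np.exp(a0 + a1 * np.sin(2 * np.pi * i / 365)
                     + a2 * np.cos(2 * np.pi * i / 365))

def pg_params(mu, p=p_hat, theta=theta_hat):
    """Map (mu, p, Theta) to the daily Poisson-Gamma parameters."""
    lam = mu ** (2 - p) / (theta * (2 - p))  # Poisson rate of rain events
    shape = (2 - p) / (p - 1)                # Gamma shape P (= 0.8847 here)
    scale = theta * (p - 1) * mu ** (p - 1)  # Gamma scale of each jump
    return lam, shape, scale

i = np.arange(1, 366)
lam, shape, scale = pg_params(mu_hat(i))
print(mu_hat(365))   # 31 December: ~9.0065, matching the value in the text
```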
### 4.3. Goodness of Fit of the Model
Let the maximum likelihood estimate of $\theta_i$ be $\hat\theta_i$ for all $i$, with $\hat\mu$ the model's mean estimate, and let $\tilde\theta_i$ denote the estimate of $\theta_i$ for the saturated model, with corresponding $\tilde\mu_i=y_i$.

The goodness of fit is determined by the deviance, which is defined as

$$-2\log\frac{\text{maximum likelihood of the fitted model}}{\text{maximum likelihood of the saturated model}}=-2\left[L(\hat\mu;y)-L(y;y)\right]=\frac{2\sum_{i=1}^{n}\left[y_i(\tilde\theta_i-\hat\theta_i)-k(\tilde\theta_i)+k(\hat\theta_i)\right]}{\Theta}=\frac{\operatorname{Dev}(y,\hat\mu)}{\Theta}. \tag{56}$$

$\operatorname{Dev}(y,\hat\mu)$ is called the deviance of the model; the greater the deviance, the poorer the fitted model, since maximizing the likelihood corresponds to minimizing the deviance. In terms of Tweedie distributions with $1<p<2$, the deviance is

$$\operatorname{Dev}_p=2\sum_{i=1}^{n}\frac{y_i^{2-p}-(2-p)\,y_i\mu_i^{1-p}+(1-p)\,\mu_i^{2-p}}{(1-p)(2-p)}. \tag{57}$$

Based on the results from fitting the model, the residual deviance of 43144 is less than the null deviance of 62955, which implies that the fitted model explains the data better than a null model.
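For reference, the Tweedie deviance (57) is a one-liner; for $1<p<2$ the $y_i=0$ observations contribute the finite terms $2\mu_i^{2-p}/(2-p)$, so no special-casing is needed. A small sketch with made-up numbers:

```python
import numpy as np

def tweedie_deviance(y, mu, p=1.5306):
    """Model deviance of eq. (57); valid for 1 < p < 2, zeros included."""
    unit = (y ** (2 - p) - (2 - p) * y * mu ** (1 - p)
            + (1 - p) * mu ** (2 - p)) / ((1 - p) * (2 - p))
    return 2.0 * np.sum(unit)

y = np.array([0.0, 0.0, 12.3, 4.1])
print(tweedie_deviance(y, mu=np.full_like(y, 4.0)))
```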
### 4.4. Diagnostic Check
Model diagnostics are carried out through residual analysis. The fitted model is difficult to assess for days with no rainfall at all, as these produce spurious results and distracting patterns, as also observed by [15]. Since this is a nonnormal regression, the residuals are far from normally distributed and far from having equal variances, unlike in normal linear regression. Here the residuals lie parallel to distinct values, so it is difficult to draw any meaningful conclusion about the fitted model (Figure 5).
Figure 5: Residuals of the model.

We therefore assess the model using quantile residuals, which remove the pattern in discrete data by adding the smallest amount of randomization necessary on the cumulative probability scale. The quantile residuals are obtained by inverting the distribution function for each response and finding the equivalent standard normal quantile.

Mathematically, let $a_i=\lim_{y\uparrow y_i}F(y;\hat\mu_i,\hat\Theta)$ and $b_i=F(y_i;\hat\mu_i,\hat\Theta)$, where $F$ is the cumulative distribution function corresponding to the probability density function $f(y;\mu,\Theta)$; then the randomized quantile residual for $y_i$ is

$$r_{q,i}=\Phi^{-1}(u_i), \tag{58}$$

with $u_i$ a uniform random variable on $(a_i,b_i]$. The randomized quantile residuals are normally distributed apart from the sampling variability in $\hat\mu$ and $\hat\Theta$.

Figure 6 shows the normal Q-Q plot; there are no large deviations from the straight line, only small deviations at the tail. The linearity observed indicates an acceptable fitted model.
Figure 6: Q-Q plot of the quantile residuals.
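The randomized quantile residuals of (58) can be computed by evaluating the model CDF as a Poisson-weighted sum of Gamma CDFs (the $j=0$ term is the point mass at zero). The sketch below conveys the mechanics; the truncation limit and the example parameters are our assumptions.

```python
import numpy as np
from scipy.stats import gamma, norm, poisson

def pg_cdf(y, lam, shape, scale, jmax=200):
    """F(y) for the compound Poisson-Gamma model, y >= 0."""
    j = np.arange(1, jmax + 1)
    return np.exp(-lam) + np.sum(poisson.pmf(j, lam)
                                 * gamma.cdf(y, a=j * shape, scale=scale))

def quantile_residuals(y, lam, shape, scale, rng):
    r = np.empty(y.size)
    for k, yk in enumerate(y):
        b = pg_cdf(yk, lam[k], shape, scale[k])
        a = 0.0 if yk == 0 else b     # the point mass sits only at zero
        u = rng.uniform(a, b)         # u_k ~ Uniform(a_k, b_k]
        r[k] = norm.ppf(u)
    return r

rng = np.random.default_rng(2)
y = np.array([0.0, 0.0, 3.5, 11.2])
lam = np.array([0.2, 0.2, 0.6, 0.9])
scale = np.array([8.0, 8.0, 9.0, 10.0])
print(quantile_residuals(y, lam, shape=0.8847, scale=scale, rng=rng))
```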
## 5. Simulation
The model is simulated to test whether it produces data with characteristics similar to the actual observed rainfall. The simulation covers a period of two years: one is the last year of the data (2015), and the other (2016) is a future prediction. A comparison with the 2015 data is shown in Figure 7.
Figure 7: Simulated rainfall and observed rainfall.

Summary statistics of the simulated data and the actual data are shown in Table 2 for comparison.
Table 2: Data statistics.

| | Min | 1st Qu. | Median | Mean | 3rd Qu. | Max |
| --- | --- | --- | --- | --- | --- | --- |
| Predicted data | 0.00 | 0.00 | 0.00 | 3.314 | 0.00 | 116.5 |
| Actual data [10 yrs] | 0.00 | 0.00 | 0.00 | 3.183 | 0.300 | 123.7 |
| Actual data [2015] | 0.00 | 0.00 | 0.00 | 3.328 | 0.00 | 84.5 |

The main objective of the simulation is to demonstrate that the Poisson-Gamma model can be used to predict and forecast rainfall occurrence and intensity simultaneously. Based on the results above (Figure 8), the model works well in predicting rainfall intensity and hence can be used in agriculture, actuarial science, hydrology, and so on.
Figure 8: Probability of rainfall occurrence.

However, the model performed poorly in predicting the probability of rainfall occurrence, which it underestimated. We suggest that using a truncated Fourier series may improve this estimate relative to the single sinusoid. The model performed better in predicting the probability of no rainfall on days with little or no rainfall, as indicated in Figure 8.

It can also be observed that the model produces synthetic precipitation that agrees with the four characteristics of a stochastic precipitation model suggested by [4]: the probability of rainfall occurrence obeys a seasonal pattern (Figure 8); in addition, the probability of rain on a given day is higher if the previous day was wet, which is the basis of precipitation models that involve a Markov process; and from Figure 7 we can also observe variation of rainfall intensity with the time of season. Furthermore, the model allows exact zeros in the data to be modeled and can simultaneously predict the probability of a no-rainfall event.
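Simulating the fitted model day by day needs only a Poisson count of events and one Gamma draw for the daily total, since a sum of $N$ independent Gamma($P$, $\tau$) jumps is Gamma($NP$, $\tau$). A minimal sketch under the estimates of Section 4:

```python
import numpy as np

def simulate_year(rng, p=1.5306, theta=14.8057, a=(0.1653, 0.9049, 2.0326)):
    """One year of synthetic daily rainfall from the fitted model."""
    i = np.arange(1, 366)
    mu = np.exp(a[0] + a[1] * np.sin(2 * np.pi * i / 365)
                     + a[2] * np.cos(2 * np.pi * i / 365))
    lam = mu ** (2 - p) / (theta * (2 - p))   # daily event rate
    shape = (2 - p) / (p - 1)                 # Gamma shape per event
    scale = theta * (p - 1) * mu ** (p - 1)   # Gamma scale per event
    n = rng.poisson(lam)                      # number of rain events each day
    # Daily total: Gamma(n * shape, scale) when n > 0, exactly zero otherwise.
    return np.where(n > 0, rng.gamma(np.maximum(n * shape, 1e-12), scale), 0.0)

rng = np.random.default_rng(3)
sample = simulate_year(rng)
print(sample.mean(), (sample == 0).mean())    # mean rainfall and dry-day share
```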
## 6. Conclusion
A daily stochastic rainfall model was developed based on a compound Poisson process, where rainfall events follow a Poisson process and the intensity of each event, independent of the occurrence process, follows a Gamma distribution. Unlike much previous work on precipitation modeling, in which two separate models are developed for occurrence and intensity, the model proposed here describes both processes simultaneously. The proposed model also handles exact zeros, the event of no rainfall, which is not the case for the other models. This precipitation model is an important tool for studying the impact of weather on a variety of systems, including ecosystems, risk assessment, drought prediction, and weather derivatives, since it allows synthetic rainfall data to be simulated. The model provides mechanisms for understanding fine-scale structure such as the number and mean size of rainfall events, the mean daily rainfall, and the probability of rainfall occurrence. This is applicable to agricultural activities, disaster preparedness, and water cycle systems.

The model can readily be used for forecasting future events, and in terms of weather derivatives the weather index can be derived by simulating a sample path and summing daily precipitation over the relevant accumulation period. Rather than developing a weather index that is not flexible enough to forecast future events, we can use this model directly in pricing weather derivatives.

Rainfall data is generally zero inflated: the amount of rainfall received on a day is zero with positive probability but continuously distributed otherwise. This makes it difficult to transform the data to normality by power transforms or to model it directly with a continuous distribution. The Poisson-Gamma distribution has a complicated probability density function whose parameters are difficult to estimate, so expressing it as a Tweedie distribution makes parameter estimation much easier. In addition, Tweedie distributions belong to the exponential family of distributions upon which generalized linear models are based; hence an existing framework is already in place for fitting and diagnostic testing of the model.

The model developed allows the information in both zero and positive observations to contribute to the estimation of all parts of the model, unlike the models of [3, 4, 9], which condition rainfall intensity on the probability of occurrence. In addition, the introduction of the dispersion parameter helps reduce the underestimation of overdispersion in the data, which is also common in the aforementioned models.
---
*Source: 1012647-2018-04-04.xml*

**Title:** A Poisson-Gamma Model for Zero Inflated Rainfall Data
**Authors:** Nelson Christopher Dzupire; Philip Ngare; Leo Odongo
**Journal:** Journal of Probability and Statistics (2018)
**Category:** Mathematical Sciences
**Publisher:** Hindawi
**License:** CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
**DOI:** 10.1155/2018/1012647
Forn independent observations yi of the exponential dispersion model (16) the log-likelihood function is (42)Lβ=∑i=1nLi=∑inlogfyi,θi,Θ=∑i=1nyiθi-kθiΘ+∑inlogayi,Θ.
Butθi=∑jpβjxij; hence (43)∑inyiθi=∑i=1nyi∑jpβjxij=∑jpβj∑i=1nyixij.Proposition 9.
Given thatyi is distributed as (16) then its distribution depends only on its first two moments, namely, μi and Var(yi).Proof.
Letg(μi) be the link function of the GLM such that ηi=∑j=1pβjxij=g(μi). The likelihood equations are (44)∂Lβ∂β=∑i=1n∂Li∂βj∀j.Using chain rule we have (45)∂Li∂βj=∂Li∂θi∂θi∂μi∂μi∂ηi∂ηi∂βj=yi-μiVaryixij∂μi∂ηi.
Hence(46)∂Lβ∂β=yi-μiVaryixij∂μi∂ηi=yi-μiΘμipxij∂μi∂ηi.Since Var(yi)=V(μi), the relationship between the mean and variance characterizes the distribution.Clearly a GLM only requires the first two moments of the responseyi; hence despite the difficulty of full likelihood analysis of Tweedie distribution as it can not be expressed in closed form for 1<p<2 we can still fit a Tweedie distribution family. The likelihood is only required to estimate p and Θ as well as diagnostic check of the model.Proposition 10.
Under the standard regularity conditions, for largen, the maximum likelihood estimator β^ of β for generalized linear model is efficient and has an approximate normal distribution.Proof.
From the log-likelihood, the covariance matrix of the distribution is the inverse of the information matrixJ=E-∂2L(β)/∂βh∂βj.
So(47)J=E-∂2Lβ∂βh∂βj=E∂2Li∂βh∂2Li∂βj=yi-μiVaryixih∂μi∂ηiyi-μiVaryixij∂μi∂ηi=xihxijVaryi∂μi∂ηi2.Hence (48)E-∂2Lβ∂βh∂βj=∑inxihxijVaryi∂μi∂ηi2=XTWX,where W=diag1/Varyi∂μi/∂ηi2.
Thereforeβ^ has an approximate N[β,XTWX-1] with Var(β^)=XTW^X-1, where W^ is evaluated at β^.To computeβ^ we use the iteratively reweighted least square algorithm proposed by Dobson and Barnett [17] where the iterations use the working weights wi: (49)wiVμig˙μi2,where V(μi)=μip.However estimatingp is more difficult than estimating β and Θ such that most researchers working with Tweedie densities have p a priori. In this study we use the procedure in [15] where the maximum likelihood estimator of p is obtained by directly maximizing the profile likelihood function. For any given value of p we find the maximum likelihood estimate of β,Θ and compute the log-likelihood function. This is repeated several times until we have a value of p which maximizes the log-likelihood function.Given the estimated values ofp and β, then the unbiased estimator of Θ is given by (50)Θ^=∑i=1nLi-μiβ^2μiβ^p^.Since for 1<p<2 the Tweedie density can not be expressed in closed form, it is recommended that the maximum likelihood estimate of Θ must be computed iteratively from full data [15].
## 4. Data and Model Fitting
### 4.1. Data Analysis
Daily rainfall data of Balaka district in Malawi covering the period 1995–2015 is used. The data was obtained from Meteorological Surveys of Malawi. Figure1 shows a plot of the data.Figure 1
Daily rainfall amount for Balaka district.In summary the minimum value is 0 mm which indicates that there were no rainfall on particular days, whereas the maximum amount is 123.7 mm. The mean rainfall for the whole period is 3.167 mm.We investigated the relationship between the variance and the mean of the data by plotting thelog(variance) against log(mean) as shown in Figure 2. From the figure we can observe a linear relationship between the variance and the mean which can be expressed as (51)logVariance=α+βlogmean(52)Variace=A∗meanβ,A∈R.Hence the variance can be expressed as some power β∈R of the mean agreeing with the Tweedie variance function requirement.Figure 2
Variance mean relationship.
### 4.2. Fitted Model
To model the daily rainfall data we usesin and cos as predictors due to the cyclic nature and seasonality of rainfall. We have assumed that February ends on 28th for all the years to be uniform in our modeling.The canonical link function is given by(53)logμi=a0+a1sin2πi365+a2cos2πi365,where i=1,2,…,365 corresponds to days of the year and a0,a1,a2 are the coefficients of regression.In the first place we estimatep^ by maximizing the profile log-likelihood function. Figure 3 shows the graph of the profile log-likelihood function. As can be observed the value of p that maximizes the function is 1.5306.Figure 3
Profile likelihood.From the results obtained after fitting the model, both the cycliccosine and sine terms are important characteristics for daily rainfall Table 1. The covariates were determined to take into account the seasonal variations in the stochastic model.Table 1
Estimated parameter values.
Parameter Estimate Std. error t value Pr(>|t|) a^0 0.1653 0.0473 3.4930 0.0005 ∗ ∗ ∗ a^1 0.9049 0.0572 15.81100 <2e − 16∗∗∗ a^2 2.0326 0.0622 32.6720 <2e − 16∗∗∗ Θ^ 14.8057 - - - Withsignif code: 0 ∗∗∗.The predictedμ^i,p^,Θ^ for each day only depends on the day’s conditions so that for each day i we have (54)μ^i=exp0.1653+0.9049sin2πi365+2.0326cos2πi365,p^=1.5306,Θ^=14.8057.From these estimated values we can calculate the parameter (λ^i,α^i,P^) from the corresponding formulas above as (55)λ^i=16.5716exp0.1653+0.9049sin2πi365+2.03263cos2πi3650.4694,α^=7.4284exp0.1653+0.9049sin2πi365+2.0326cos2πi3650.5306,P^=0.8847.Comparing the actual means and the predicted means for 2 July we have μ^=0.3820, whereas μ=0.4333; similarly for 31 December we have μ^=9.0065 and μ=10.6952, respectively. Figure 4 shows the estimated mean and actual mean where the model behaves well generally.Figure 4
Actual versus predicted mean.
### 4.3. Goodness of Fit of the Model
Let the maximum likelihood estimate ofθi be θ^i for all i and μ^ as the model’s mean estimate. Let θ~i denote the estimate of θi for the saturated model with corresponding μ~=yi.The goodness of fit is determined by deviance which is defined as(56)-2maximum likelihood of the fitted modelMaximum likelihood of the saturated model=-2Lμ^;y-Ly,y=2∑i=1nyiθ~i-kθ~iΘ-2∑i=1nyiθ^i-kθ^iΘ=2∑i=1nyiθ~i-θ^i-kθ~i+kθ^iΘ=Devy,μ^Θ.Dev(y,μ^) is called the deviance of the model and the greater the deviance, the poorer the fitted model as maximizing the likelihood corresponds to minimizing the deviance.In terms of Tweedie distributions with1<p<2, the deviance is (57)Devp=2∑i=1nyi2-p-2-pyiμi1-p+1-pμi2-p1-p2-p.Based on results from fitting the model, the residual deviance is 43144 less than the null deviance 62955 which implies that the fitted model explains the data better than a null model.
### 4.4. Diagnostic Check
The model diagnostic is considered as a way of residual analysis. The fitted model faces challenges to be assessed especially for days with no rainfall at all as they produce spurious results and distracting patterns similarly as observed by [15]. Since this is a nonnormal regression, residuals are far from being normally distributed and having equal variances unlike in a normal linear regression. Here the residuals lie parallel to distinct values; hence it is difficult to make any meaningful decision about the fitted model (Figure 5).Figure 5
Residuals of the model.So we assess the model based on quantile residuals which remove the pattern in discrete data by adding the smallest amount of randomization necessary on the cumulative probability scale.The quantile residuals are obtained by inverting the distribution function for each response and finding the equivalent standard normal quantile.Mathematically, letai=limy↑yiF(y;μ^i,Θ^) and bi=F(yi;μ^i,Θ^), where F is the cumulative function of the probability density function f(y;μ,Θ); then the randomized quantile residuals for yi are (58)rq,i=Φ-1uiwith ui being the uniform random variable on (ai,bi]. The randomized quantile residuals are distributed normally barring the variability in μ^ and Θ^.Figure6 shows the normalized Q-Q plot and as can be observed there are no large deviations from the straight line, only small deviations at the tail. The linearity observed indicates an acceptable fitted model.Figure 6
Q-Q plot of the quantile residuals.
## 4.1. Data Analysis
Daily rainfall data of Balaka district in Malawi covering the period 1995–2015 is used. The data was obtained from Meteorological Surveys of Malawi. Figure1 shows a plot of the data.Figure 1
Daily rainfall amount for Balaka district.In summary the minimum value is 0 mm which indicates that there were no rainfall on particular days, whereas the maximum amount is 123.7 mm. The mean rainfall for the whole period is 3.167 mm.We investigated the relationship between the variance and the mean of the data by plotting thelog(variance) against log(mean) as shown in Figure 2. From the figure we can observe a linear relationship between the variance and the mean which can be expressed as (51)logVariance=α+βlogmean(52)Variace=A∗meanβ,A∈R.Hence the variance can be expressed as some power β∈R of the mean agreeing with the Tweedie variance function requirement.Figure 2
Variance mean relationship.
## 4.2. Fitted Model
To model the daily rainfall data we usesin and cos as predictors due to the cyclic nature and seasonality of rainfall. We have assumed that February ends on 28th for all the years to be uniform in our modeling.The canonical link function is given by(53)logμi=a0+a1sin2πi365+a2cos2πi365,where i=1,2,…,365 corresponds to days of the year and a0,a1,a2 are the coefficients of regression.In the first place we estimatep^ by maximizing the profile log-likelihood function. Figure 3 shows the graph of the profile log-likelihood function. As can be observed the value of p that maximizes the function is 1.5306.Figure 3
Profile likelihood.From the results obtained after fitting the model, both the cycliccosine and sine terms are important characteristics for daily rainfall Table 1. The covariates were determined to take into account the seasonal variations in the stochastic model.Table 1
Estimated parameter values.
Parameter Estimate Std. error t value Pr(>|t|) a^0 0.1653 0.0473 3.4930 0.0005 ∗ ∗ ∗ a^1 0.9049 0.0572 15.81100 <2e − 16∗∗∗ a^2 2.0326 0.0622 32.6720 <2e − 16∗∗∗ Θ^ 14.8057 - - - Withsignif code: 0 ∗∗∗.The predictedμ^i,p^,Θ^ for each day only depends on the day’s conditions so that for each day i we have (54)μ^i=exp0.1653+0.9049sin2πi365+2.0326cos2πi365,p^=1.5306,Θ^=14.8057.From these estimated values we can calculate the parameter (λ^i,α^i,P^) from the corresponding formulas above as (55)λ^i=16.5716exp0.1653+0.9049sin2πi365+2.03263cos2πi3650.4694,α^=7.4284exp0.1653+0.9049sin2πi365+2.0326cos2πi3650.5306,P^=0.8847.Comparing the actual means and the predicted means for 2 July we have μ^=0.3820, whereas μ=0.4333; similarly for 31 December we have μ^=9.0065 and μ=10.6952, respectively. Figure 4 shows the estimated mean and actual mean where the model behaves well generally.Figure 4
Actual versus predicted mean.
## 4.3. Goodness of Fit of the Model
Let the maximum likelihood estimate ofθi be θ^i for all i and μ^ as the model’s mean estimate. Let θ~i denote the estimate of θi for the saturated model with corresponding μ~=yi.The goodness of fit is determined by deviance which is defined as(56)-2maximum likelihood of the fitted modelMaximum likelihood of the saturated model=-2Lμ^;y-Ly,y=2∑i=1nyiθ~i-kθ~iΘ-2∑i=1nyiθ^i-kθ^iΘ=2∑i=1nyiθ~i-θ^i-kθ~i+kθ^iΘ=Devy,μ^Θ.Dev(y,μ^) is called the deviance of the model and the greater the deviance, the poorer the fitted model as maximizing the likelihood corresponds to minimizing the deviance.In terms of Tweedie distributions with1<p<2, the deviance is (57)Devp=2∑i=1nyi2-p-2-pyiμi1-p+1-pμi2-p1-p2-p.Based on results from fitting the model, the residual deviance is 43144 less than the null deviance 62955 which implies that the fitted model explains the data better than a null model.
## 4.4. Diagnostic Check
The model diagnostic is considered as a way of residual analysis. The fitted model faces challenges to be assessed especially for days with no rainfall at all as they produce spurious results and distracting patterns similarly as observed by [15]. Since this is a nonnormal regression, residuals are far from being normally distributed and having equal variances unlike in a normal linear regression. Here the residuals lie parallel to distinct values; hence it is difficult to make any meaningful decision about the fitted model (Figure 5).Figure 5
Residuals of the model.So we assess the model based on quantile residuals which remove the pattern in discrete data by adding the smallest amount of randomization necessary on the cumulative probability scale.The quantile residuals are obtained by inverting the distribution function for each response and finding the equivalent standard normal quantile.Mathematically, letai=limy↑yiF(y;μ^i,Θ^) and bi=F(yi;μ^i,Θ^), where F is the cumulative function of the probability density function f(y;μ,Θ); then the randomized quantile residuals for yi are (58)rq,i=Φ-1uiwith ui being the uniform random variable on (ai,bi]. The randomized quantile residuals are distributed normally barring the variability in μ^ and Θ^.Figure6 shows the normalized Q-Q plot and as can be observed there are no large deviations from the straight line, only small deviations at the tail. The linearity observed indicates an acceptable fitted model.Figure 6
Q-Q plot of the quantile residuals.
## 5. Simulation
The model is simulated to test whether it produces data with similar characteristics to the actual observed rainfall. The simulation is done for a period of two years where one was the last year of the data (2015) and the other year (2016) was a future prediction. Then comparison was done with a graph for 2015 data as shown in Figure7.Figure 7
Simulated rainfall and observed rainfall.Summary statistics of the simulated and actual data are shown in Table 2 for comparison.Table 2
Data statistics.
| | Min | 1st Qu. | Median | Mean | 3rd Qu. | Max |
|---|---|---|---|---|---|---|
| Predicted data | 0.00 | 0.00 | 0.00 | 3.314 | 0.00 | 116.5 |
| Actual data [10 yrs] | 0.00 | 0.00 | 0.00 | 3.183 | 0.300 | 123.7 |
| Actual data [2015] | 0.00 | 0.00 | 0.00 | 3.328 | 0.00 | 84.5 |

The main objective of the simulation is to demonstrate that the Poisson-Gamma model can be used to predict and forecast rainfall occurrence and intensity simultaneously. Based on the results above (Figure 8), the model works well in predicting rainfall intensity and hence can be used in agriculture, actuarial science, hydrology, and so on.Figure 8
Probability of rainfall occurrence.However, the model performed poorly in predicting the probability of rainfall occurrence, which it underestimated. Using a truncated Fourier series instead of a single sinusoid might improve this estimate. The model did, however, perform better in predicting the probability of no rainfall on days with little or no rainfall, as indicated in Figure 8.It can also be observed that the model produces synthetic precipitation that agrees with the four characteristics of a stochastic precipitation model suggested by [4]. The probability of rainfall occurrence obeys a seasonal pattern (Figure 8); in addition, the probability of rain on a given day is higher if the previous day was wet, which is the basis of precipitation models that involve the Markov process. From Figure 7 we can also observe variation of rainfall intensity with the time of season. In addition, the model allows exact zeros in the data to be modeled and can simultaneously predict the probability of a no-rainfall event.
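For concreteness, one day of synthetic rainfall under the compound Poisson-Gamma mechanism can be drawn as below (Python; an illustrative sketch in which the Gamma shape/scale parametrization and the parameter names are assumptions, with the day-specific values coming from equation (55)):

```python
import numpy as np

rng = np.random.default_rng(2015)

def simulate_day(lam, shape, scale):
    """One day of rainfall: N ~ Poisson(lam) events, each event depth
    Gamma(shape, scale); the daily total is their sum and is exactly
    zero whenever N = 0, reproducing the model's atom at zero."""
    n = rng.poisson(lam)
    return rng.gamma(shape, scale, size=n).sum() if n else 0.0

# A year's synthetic series, given per-day arrays lam, shape, scale
# computed from equation (55):
# series = [simulate_day(lam[i], shape[i], scale[i]) for i in range(365)]
```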
## 6. Conclusion
A daily stochastic rainfall model was developed based on a compound Poisson process, in which rainfall events follow a Poisson distribution and the intensities, independent of the number of events, follow a Gamma distribution. Unlike much of the research carried out on precipitation modeling, in which separate models are developed for occurrence and intensity, the model proposed here captures both processes simultaneously. The proposed model is also able to represent exact zeros, the event of no rainfall, which is not the case with the other models. This precipitation model is an important tool for studying the impact of weather on a variety of systems, including ecosystems, risk assessment, drought prediction, and weather derivatives, since it can simulate synthetic rainfall data. The model provides mechanisms for understanding fine-scale structure such as the number and mean of rainfall events, the mean daily rainfall, and the probability of rainfall occurrence, which is applicable to agricultural activities, disaster preparedness, and water cycle systems.The model can easily be used for forecasting future events; in terms of weather derivatives, a weather index can be derived by simulating a sample path and summing daily precipitation over the relevant accumulation period, so rather than relying on a weather index that is not flexible enough to forecast future events, this model can be used in pricing weather derivatives.Rainfall data are generally zero-inflated, in that the amount of rainfall received on a day can be zero with positive probability but is continuously distributed otherwise. This makes it difficult to transform the data to normality by power transforms or to model them directly using a continuous distribution. The Poisson-Gamma distribution has a complicated probability density function whose parameters are difficult to estimate, and expressing it as a Tweedie distribution makes estimating the parameters easy. In addition, Tweedie distributions belong to the exponential family of distributions on which generalized linear models are based, so an existing framework is already in place for fitting and diagnostic testing of the model.The model developed allows the information in both zero and positive observations to contribute to the estimation of all parts of the model, unlike the other models [3, 4, 9], which condition rainfall intensity on the probability of occurrence. In addition, the introduction of the dispersion parameter in the model helps reduce the underestimation of overdispersion of the data, which is also common in the aforementioned models.
---
*Source: 1012647-2018-04-04.xml* | 2018 |
# Immunology and Cell Biology of Parasitic Diseases 2013
**Authors:** Luis I. Terrazas; Abhay R. Satoskar; Miriam Rodriguez-Sosa; Jorge Morales-Montor
**Journal:** BioMed Research International
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101268
---
## Body
---
*Source: 101268-2013-06-17.xml* | 2013 |
# ML-Based Texture and Wavelet Features Extraction Technique to Predict Gastric Mesothelioma Cancer
**Authors:** Neeraj Garg; Divyanshu Sinha; Babita Yadav; Bhoomi Gupta; Sachin Gupta; Shahajan Miah
**Journal:** BioMed Research International
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012684
---
## Abstract
Microsatellites are small, repetitive sequences found all across the human genome. Microsatellite instability (MSI) is the phenomenon of variation in the length of microsatellites induced by the insertion or deletion of repeat units in tumor tissue. MSI-type stomach malignancy has distinct genetic phenotypes and clinicopathological characteristics, and the stability of microsatellites influences whether or not patients with gastric mesothelioma respond to immunotherapy. As a result, determining MSI status prior to surgery is critical for developing treatment options for individuals with gastric cancer. Traditional MSI detection approaches need immunohistochemistry and genetic analysis, which adds to the expense and makes them difficult to apply to every patient in clinical practice. In this study, to predict the MSI status of gastric cancer patients, researchers used image feature extraction technology and a machine learning algorithm to evaluate high-resolution histopathology images of patients. 279 cases of raw data were obtained from the TCGA database, 442 samples were obtained after preprocessing and upsampling, and 445 quantitative image features, including first-order statistics, texture features, and wavelet features, were extracted from the histopathological images of each sample. To filter the features and provide a prediction label (risk score) for the MSI status of gastric cancer, Lasso regression was utilized. The predictive label's classification performance was evaluated using a logistic classification model, which was then coupled with the clinical data of each patient to create a customized nomogram for MSI status prediction using multivariate analysis.
---
## Body
## 1. Introduction
Gastric cancer is one of the most common malignant tumors in the world. There were 1,033,701 new cases, accounting for 5.7% of new cancer cases worldwide, and 782,685 deaths, accounting for 8.2% of global cancer deaths; it ranks fifth in cancer incidence and third in mortality, and the incidence rate shows no decreasing trend [1]. The heterogeneity of cancer, the appearance of gastric cancer, and the complex and diverse cancer types make diagnosis and treatment more difficult. Microsatellite instability results from impaired DNA mismatch repair and characterizes a specific cancer phenotype marked by hypervariability of short repeats in the genome, arising from DNA polymerase slippage and an increased frequency of single-nucleotide variants (SNVs) across the microsatellite repeats [2]. Polymorphism studies have shown that MSI-type gastric cancer accounts for about 15% of gastric cancer patients, and these patients are more likely to benefit from immunotherapy [3]. MSI-type gastric cancer patients have unique clinical features: the cancer tissue genome is less stable, the disease site is often distal, and the tumors are mostly type 3; MSI-type patients usually have a good overall long-term prognosis, and their survival rate is high compared with contemporary MSS-type gastric cancer patients [4]. From the precancerous stage to onset, MSI gradually accumulates and increases; MSI detection is therefore of great significance for early diagnosis and screening of gastric cancer [5], for the prognosis of gastric cancer patients, and for clinical decision-making on adjuvant treatment. There are two main methods of MSI detection: immunohistochemistry (IHC) and polymerase chain reaction (PCR). IHC detects MSI through the expression state of mismatch repair genes; PCR performs genetic analysis by tagging specific single-nucleotide sites. However, both IHC and PCR require the capacity of a large tertiary medical center and carry high economic and time costs, so they are difficult to extend to every patient in clinical practice [6]. As a result, neither provides timely screening for the large number of patients potentially sensitive to immune checkpoint inhibitor therapy, who thereby lose the chance to control the disease [7].Histopathology is an essential tool for cancer diagnosis and prediction, since tissue type reflects the combined effects of molecular changes on cancer cell behavior and provides a direct visualization tool for assessing disease progression. Histopathologists can assess cell density, tissue structure, and histological features such as cleft status to classify lesions. Along with advances in microscopy, imaging technology, and computing, auxiliary diagnostic models based on pathological images are developing rapidly. Among them, image texture analysis is used in pathology for cancer grading, classification, and prediction. For example, the authors of [8] used the gray-level co-occurrence matrix (GLCM) and gray-level run-length matrix (GRLM) of the image, together with the Euler number and other texture features, and a linear discriminant analysis (LDA) classifier to separate malignant from non-malignant breast histopathology images, with classification accuracies of 80% and 100%, respectively. Another study extracted three sets of texture features of soft tissue sarcoma (the gray-level cooccurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and local binary pattern (LBP) texture analysis) to predict metastasis and death from soft tissue sarcoma [9]. A deep convolutional neural network has been trained to accurately distinguish two subtypes of lung cancer, lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), from histopathological images, and to predict the mutation status of six genes associated with lung cancer. In gastric cancer, lymph node malignancy is a predictor with consequences for the extent of lymph dissection; numerous nodal stations are involved, each with a variable risk of malignancy, and one study constructed a deep network system for predicting lymph node malignancy at numerous nodal sites in gastric cancer patients using preoperative CT data, with ML techniques employed to examine these CT scans for changes, predict the ailments, and recommend precautions for better curability [10]. The focus of another study was to see whether radiomic evaluation employing spectroscopic micro-CT-enhanced nanoparticle contrast may help distinguish tumors depending on the amount of malignant cell lymphocytes [11]. To improve survival prognosis, a combined multitask system with multilayer characteristics has been proposed that predicts clinical tumor and metastasis stages simultaneously to detect gastric cancer [12]. A statistical model fusing multiple residual networks can accurately predict, from standard hematoxylin-and-eosin-stained histopathological images of prostate cancer patients, the mutation status of the speckle-type POZ gene [13].This paper proposes an MSI prediction method for gastric cancer based on the texture features of histopathological images, targeting tumor heterogeneity in gastric cancer histopathology. The researchers used image feature extraction technology and a machine learning algorithm to evaluate high-resolution histopathology images of the patients. 279 cases of raw data were obtained from the TCGA database, from which 442 samples were acquired after preprocessing and upsampling, and 445 quantitative image features, including first-order statistics, texture features, and wavelet features, were extracted from the histopathological images of each sample. To filter the features and provide a prediction label (risk score) for the MSI status of gastric cancer, Lasso regression was employed. The predictive label's classification performance was evaluated using a logistic classification model, which was then coupled with the clinical data of each patient to create a customized nomogram for MSI status prediction using multivariate analysis.
### 1.1. Organization
The paper is organized into several sections: the introduction is followed by a second section discussing the data and methods employed in the study; the third section presents the analysis of experimental results; the penultimate section covers discussion and findings; and the final section concludes the study.As depicted in Figure 1, quantitative image features are first extracted from the images; Lasso regression is then used to construct a predictive signature; the predictive signature is combined, as an independent predictor, with the patient's clinical features; multivariate analysis by logistic regression is used to build a predictive model; and finally a personalized nomogram is drawn as the prediction tool, providing a powerful instrument for MSI prediction in gastric cancer patients. The workflow is shown in the figure below.Figure 1
Construction process of MSI prediction model for gastric cancer.
## 2. Data and Methods
### 2.1. Patient Data
This paper’s histopathological images of gastric cancer are from the TCGA data library. In addition, the MSI status of gastric cancer patients was analyzed to use the obtained data effectively. This study established three inclusion criteria for the collected data: (1) Pathological images showing uniform staining, precise imaging and no tissue adhesion; (2) uniformly complete personal basic information and clinical characteristics; (3) have clear MSI status information. After screening, 277 case samples were eligible for the inclusion standard.
### 2.2. Data Preprocessing
To ensure the validity of the experiment and obtain valuable results, the problem of sample imbalance must be solved, so the minority class is augmented by upsampling: for MSI-type cases, multiple ROIs are selected from each patient's histopathological images, and each ROI is treated as an independent sample. The resulting dataset contains 442 samples, randomly divided into a training set and a validation set: the training set has 313 samples, of which 156 are MSI type and 157 are MSS type; the validation set has 129 samples, of which 64 are MSI type and 65 are MSS type.
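A sketch of this split (Python/scikit-learn; the stratification and random seed are assumptions, as the paper does not state them, and the arrays are synthetic stand-ins):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: one row per ROI sample after upsampling (442 total,
# 445 features each), with labels 1 = MSI (220) and 0 = MSS (222).
rng = np.random.default_rng(0)
X = rng.normal(size=(442, 445))
y = np.repeat([1, 0], [220, 222])

# 313 training / 129 validation samples, matching the counts above.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=129, stratify=y, random_state=0)
print(X_train.shape, X_val.shape)   # (313, 445) (129, 445)
```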
### 2.3. Image Segmentation
The histopathological image must be segmented before feature extraction to ensure the accuracy of the resulting image features and to reduce computational complexity. To obtain the most representative lesion area, the tumor area was annotated under the guidance of a chief physician experienced in histopathological image reading, and the marked lesion area was checked by a second expert. Finally, the ROI of every histopathological image was obtained by segmentation.
### 2.4. Feature Extraction
In this study, a total of 445 image features are extracted from the original ROI image obtained by segmentation and from its wavelet-filtered versions. The features fall into six groups: first-order statistics, gray-level cooccurrence matrix (GLCM), gray-level size zone matrix (GLSZM), gray-level run-length matrix (GLRLM), neighboring gray tone difference matrix (NGTDM), and gray-level dependence matrix (GLDM).First-order statistics describe the pixel intensity distribution within the region of interest through common statistical indicators. The GLCM is a second-order joint probability function describing the grayscale of an image and its spatial correlation characteristics; partial eigenvalues of the matrix are used to represent the texture features of the image, giving comprehensive information about the direction, adjacent interval, and amplitude of change of the image grayscale [14]. The GLSZM quantifies gray-level zones in the image, where a gray-level zone is defined as the number of connected pixels sharing the same gray-level intensity. The GLRLM quantifies gray-level runs, defined as the length of consecutive pixels with the same gray value. The NGTDM reflects, through sums of absolute differences, the difference between the gray value of a pixel and the average gray value of its neighbors. The GLDM quantifies gray-level dependence in the image, defined as the number of connected pixels within a distance δ that depend on the center pixel [15].This study extracted 18 features from first-order statistics, mainly including entropy, total energy, mean absolute deviation, and skewness; 22 features from the GLCM, mainly including autocorrelation, joint average, cluster shade, and cluster tendency; 16 features from the GLSZM, mainly including gray-level nonuniformity normalized, size-zone nonuniformity, and zone percentage; 16 features from the GLRLM, including run entropy, run variance, gray-level variance, and run-length nonuniformity normalized; 5 features from the NGTDM, mainly including coarseness, contrast, complexity, and strength; and 14 features from the GLDM, mainly including dependence entropy, dependence nonuniformity, dependence nonuniformity normalized, and dependence variance.
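The paper does not name its extraction toolchain, but the six feature classes and the feature names appearing later in Table 4 (e.g. `original_firstorder_Median`, `wavelet-LL_glcm_ClusterShade`) follow PyRadiomics conventions, so a hedged sketch using PyRadiomics is given below; the image and mask paths are placeholders.

```python
from radiomics import featureextractor  # pip install pyradiomics

# Enable the original image plus its wavelet decompositions, and the
# six feature classes described above (an empty list enables all
# features within a class).
params = {
    "imageType": {"Original": {}, "Wavelet": {}},
    "featureClass": {
        "firstorder": [], "glcm": [], "glszm": [],
        "glrlm": [], "ngtdm": [], "gldm": [],
    },
}
extractor = featureextractor.RadiomicsFeatureExtractor(params)

# Segmented ROI image and its mask (placeholder file names).
features = extractor.execute("roi_image.nrrd", "roi_mask.nrrd")
for name, value in features.items():
    if not name.startswith("diagnostics_"):
        print(name, value)   # e.g. wavelet-LL_glcm_ClusterShade ...
```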
### 2.5. Feature Selection
To reduce model complexity and prevent overfitting, features are selected before modeling using the Lasso method [16]. Lasso improves on traditional linear regression by adding an L1 penalty term to the objective; the resulting regression parameters are sparse, the model generalizes well, and the selected features are those most relevant to the prediction label. For given sample feature vectors $x_i$, $i=1,2,\dots,N$, with $x_i\in\mathbb{R}^n$, the objective function of Lasso regression is

$$L(\alpha,\gamma)=\sum_{i=1}^{N}\Big(y_i-\sum_{j}x_{ij}a_j\Big)^2+\gamma\sum_{j=1}^{p}|a_j|,\tag{1}$$

where $y$ is the label of the sample and $\alpha=(a_j)$ is the vector of regression parameters. To obtain the optimal regression parameters, the minimization of the objective function is transformed into the subproblems

$$a_j^{(k+1)}=\arg\min_a\ \frac{L}{2}\left\|a-z_j\right\|_2^2+\gamma\left\|a\right\|_1,\tag{2}$$

in which

$$z_j=a_j^{(k)}-\frac{1}{L}\nabla f\big(a_j^{(k)}\big),\tag{3}$$

$$\nabla f\big(a_j^{(k)}\big)=2\sum_{i=1}^{N}x_{ij}\Big(\sum_{s=1}^{P}x_{is}a_s^{(k)}-y_i\Big).\tag{4}$$

Using proximal gradient descent [17], the algorithm iteratively solves Equation (3), and the soft-thresholding function solves Equation (2); the final solution is

$$a_j^{(k+1)}=\begin{cases}z_j-\gamma/L, & z_j>\gamma/L,\\ 0, & |z_j|\le\gamma/L,\\ z_j+\gamma/L, & z_j<-\gamma/L.\end{cases}\tag{5}$$

Through the above algorithm, a sparse feature matrix is finally obtained and used to build the classification model.
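A compact sketch of Equations (2)-(5) (Python/NumPy; illustrative rather than the authors' implementation, with the step constant `L` taken as twice the largest eigenvalue of $X^{T}X$, a standard Lipschitz bound for the gradient in Equation (4)):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the closed-form solution (5)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, gam, n_iter=1000):
    """Proximal gradient descent (ISTA) for the Lasso objective (1)."""
    n, p = X.shape
    a = np.zeros(p)
    L = 2 * np.linalg.eigvalsh(X.T @ X).max()  # Lipschitz constant
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ a - y)    # gradient, equation (4)
        z = a - grad / L                # gradient step, equation (3)
        a = soft_threshold(z, gam / L)  # prox step, equations (2), (5)
    return a                            # sparse coefficient vector
```

In practice an off-the-shelf solver such as scikit-learn's `Lasso` (or `LogisticRegression` with an L1 penalty for a binary label) would typically be used; the loop above just makes the update rule explicit.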
### 2.6. Predictive Label Construction
In this study, the sparse features and their regression coefficients were used to construct a predictive label for each sample. Table 1 shows the risk score of the proposed model over the number of features and log γ. The predictive label of this study is computed as

$$\text{Risk score}=\sum_{i=1}^{n}\text{Feature}_i\times\alpha_i.\tag{6}$$

Table 1
Risk score of the proposed model over the number of features and log variance.
| Log γ | Binomial deviance | Number of features |
|---|---|---|
| -2 | 1.4 | 2 |
| -4 | 1.35 | 5 |
| -6 | 1.3 | 5 |
| -8 | 1.25 | 9 |
| -10 | 1.2 | 12 |

Here Feature$_i$ is the $i$th feature value of the sample feature vector, and $\alpha_i$ is the regression coefficient corresponding to that feature. Table 2 shows the risk coefficient of the proposed model over the number of features and log γ.Table 2
Risk coefficient of the proposed model over number of feature and log variance.
| Log γ | Coefficients | Number of features |
|---|---|---|
| -2 | 10 | 12 |
| -4 | 5 | 11 |
| -6 | 3 | 8 |
| -8 | 2.5 | 11 |
| -10 | 5 | 12 |

Using the risk score as an independent predictor, combined with the clinical features of the samples, logistic regression models are built and a personalized nomogram is drawn; the predictive performance of the model is evaluated through the C-index, AUC value, calibration curve, and decision curve [18].
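A sketch of this step (Python/scikit-learn; hypothetical variable names, with `coef` standing for the nonzero Lasso coefficients reported later in Table 4):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def risk_score(X_img, coef):
    """Equation (6): weighted sum of the screened image features."""
    return X_img @ coef

def fit_joint_model(X_img, coef, X_clin, y):
    """Multivariate logistic model behind the nomogram: the risk score
    plus clinical covariates (gender, age, TNM stage)."""
    X = np.column_stack([risk_score(X_img, coef), X_clin])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    return model, auc
```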
## 3. Analysis of Experimental Results
### 3.1. Clinical Features
The histopathological images used in this study came from 277 gastric cancer patients: 55 with MSI-type and 222 with MSS-type gastric cancer. There were 188 male and 89 female patients, with a median age of 67.64 years (range 33-90), and the prevalence of MSI was 19.85% (55/277). Patients were divided into two groups by MSI status; there are differences in gender, age, and TNM stage between MSI and MSS patients. The clinical characteristics of the patients are shown in Table 3.Table 3
Clinical characteristics of the patients.
| Feature item | Classification | MSI (n=55) | MSS (n=222) | P value |
|---|---|---|---|---|
| Age | Mean | 70.91 | 63.74 | <0.001*** |
| | Range | 46~90 | 36~90 | |
| Gender | Male | 35 (54.5%) | 157 (71.2%) | <0.001*** |
| | Female | 25 (45.5%) | 64 (28.8%) | |
| TNM stage | I | 12 (21.8%) | 25 (11.3%) | <0.001*** |
| | II | 20 (36.4%) | 67 (30.1%) | |
| | III | 19 (34.5%) | 106 (47.7%) | |
| | IV | 4 (7.3%) | 24 (10.8%) | |
### 3.2. Image Feature Screening and Predicted Label Construction
Based on the MSI status, Lasso regression is applied on the training set to filter the features. Figure 2(a) shows the binomial deviance as a function of log γ, where the point of least binomial deviance indicates the best number of retained features for the model. Based on the minimum criterion and the 1-standard-error criterion, with 10-fold cross-validation, dashed vertical lines are drawn at the best values of γ. Figure 2(b) shows the Lasso coefficient paths of the image features [19].Figure 2
Lasso regression process.
(a) Parameter γ tuning process. (b) Regression coefficient compression process.Figure 3
ROC curves of training set and test set.The results of the Lasso regression are shown in Table 4: nine features with nonzero coefficients were finally selected, comprising four image features based on the original image and five based on the wavelet-filtered images. Each sample's risk score is calculated by Formula (6). Univariate correlation and variance analysis of the nine image features against MSI status shows that all P values were less than 0.001, indicating that the screened features are closely and significantly correlated with the MSI status of gastric cancer patients.Table 4
Lasso regression results.
| Feature name | Regression coefficient | P value |
|---|---|---|
| original_firstorder_10Percentile | 0.212204 | <0.001*** |
| original_firstorder_90Percentile | 0.404922 | <0.001*** |
| original_firstorder_Median | 6.118815 | <0.001*** |
| original_firstorder_Skewness | -0.817240 | <0.001*** |
| wavelet-HL_glcm_Imc2 | -0.650800 | <0.001*** |
| wavelet-LL_firstorder_10Percentile | 0.490395 | <0.001*** |
| wavelet-LL_firstorder_Median | -5.750580 | <0.001*** |
| wavelet-LL_glcm_ClusterShade | 1.133542 | <0.001*** |
| wavelet-LL_glrlm_GrayLevelEmphasis | -0.254150 | <0.001*** |
### 3.3. Prediction Accuracy Verification
Based on the selected image texture features, a logistic regression classification model for MSI was trained. As shown in Figure 3, ROC curve analysis on the training set gives an AUC value of 0.75. The model applied to the validation set also predicts MSI status effectively, with an AUC value of 0.74 in ROC curve analysis. The nine features constituting the model therefore link gastric cancer histopathological image features to patients' MSI status. Table 5 gives each evaluation index of the classification model [20].Table 5
Each evaluation index of the classification model.
| Log γ | True positive rate | False positive rate | Validation (AUC: 0.74) | Training (AUC: 0.75) |
|---|---|---|---|---|
| -2 | 0 | 0 | 0.1 | 0.1 |
| -4 | 0.2 | 0.2 | 0.16 | 0.18 |
| -6 | 0.45 | 0.45 | 0.21 | 0.25 |
| -8 | 0.65 | 0.65 | 0.45 | 0.48 |
| -10 | 0.85 | 0.85 | 0.71 | 0.75 |
### 3.4. Construction and Evaluation of the Nomogram
To reflect the clinical value of the predictive model, this study used all of the datasets. Table 6 and Figure 4 show the model evaluation results.Table 6
Model evaluation results.
| Dataset | Precision | Recall | F1 value | AUC value |
|---|---|---|---|---|
| Training set | 0.68 | 0.73 | 0.72 | 0.75 |
| Validation set | 0.65 | 0.67 | 0.67 | 0.74 |

Figure 4
Model evaluation results.Nomograms were constructed from the clinical characteristics without and with the risk score; the latter nomogram was used to predict the MSI status of gastric cancer patients, as shown in Tables 7 and 8.Table 7
Evaluation results of model (before joining risk score).
| Item | Range |
|---|---|
| Points | 0-100 |
| Gender | 0 or 1 |
| Age | 30-90 |
| TNM stage | 1-4 |
| Total points | 0-180 |
| Linear predictor | [-2.5, 2.5] |
| Risk of MSI | 0.1-0.9 |

Table 8
Evaluation results of model (after adding risk score).
| Item | Range |
|---|---|
| Points | 0-100 |
| Gender | 0 or 1 |
| Age | 30-90 |
| TNM stage | 1-4 |
| Risk score | [-3.5, 3.0] |
| Total points | 0-130 |
| Linear predictor | [-4, 3] |
| Risk of MSI | 0.1-0.9 |

The nomogram includes gender, age, TNM stage, and risk score, allowing users to obtain the predicted probability of MSI corresponding to a patient's combination of covariates. For example, locate the patient's TNM stage on its axis and draw a vertical line to determine the points corresponding to that stage; repeat this process for each variable, then sum the points across all covariates and read off the predicted probability of MSI corresponding to the total score.The concordance index (C-index), AUC, and calibration curve are applied to evaluate the predictive performance of the nomogram. The AUC values before and after adding the risk score were 0.696 and 0.802; the concordance index is shown in Table 9, where the C-index improves from 0.70 to 0.80 after adding the risk score. The calibration curves are shown in Figure 5, in which the dotted line represents the ideal prediction. The results show that the calibration curve fits better after adding the prediction label constructed in this study. Table 10 shows the calibration curve comparisons.Table 9
C-index evaluation of prediction model.
| Predictive model | C-index | 95% CI |
|---|---|---|
| Before joining risk score | 0.7 | 0.64~0.74 |
| After joining risk score | 0.8 | 0.76~0.84 |

Figure 5
Calibration curve comparisons.
(a) Before joining risk score. (b) After joining risk score.Table 10
Calibration curve comparisons.
(a) Before joining risk score

| Actual probability | Predicted probability | Apparent | Bias-corrected | Ideal |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0.2 | 0.2 | 0.1 | 0.09 | 0.2 |
| 0.4 | 0.4 | 0.43 | 0.42 | 0.4 |
| 0.6 | 0.6 | 0.58 | 0.55 | 0.6 |
| 0.8 | 0.8 | 0.78 | 0.75 | 0.8 |

(b) After joining risk score

| Actual probability | Predicted probability | Apparent | Bias-corrected | Ideal |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0.2 | 0.2 | 0.2 | 0.19 | 0.2 |
| 0.4 | 0.4 | 0.38 | 0.35 | 0.4 |
| 0.6 | 0.6 | 0.7 | 0.68 | 0.6 |
| 0.8 | 0.8 | 0.79 | 0.78 | 0.8 |

To further validate the clinical utility of the predictive model, decision curve analysis was performed to quantify the net benefit of the nomogram based on the texture features of pathological images. As shown in Figure 6, over the entire range of risk thresholds, the predictive model with the risk score added achieved the larger net benefit. Table 11 shows the decision curve comparison.Figure 6
Decision curve comparison.Table 11
Decision curve comparison.
| Net benefit | High-risk threshold | Clinical feature | Risk score + clinical feature | ALL |
|---|---|---|---|---|
| -0.05 | 0 | 0.3 | 0.3 | 0.3 |
| 0 | 0 | 0.26 | 0.28 | 0.3 |
| 0.05 | 0.2 | 0.2 | 0.25 | 0.15 |
| 0.1 | 0.4 | 0.16 | 0.19 | 0 |
| 0.15 | 0.6 | 0.1 | 0.15 | 0 |
| 0.2 | 0.8 | -0.01 | -0.01 | 0 |
| 0.25 | 0.9 | 0.03 | 0.1 | 0 |
| 0.3 | 1 | 0 | 0 | 0 |

This result shows that the nomogram with the risk score added has greater potential for clinical application.
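The net benefit plotted in a decision curve is a standard quantity, computed for each high-risk threshold $t$ as $\mathrm{NB}(t)=\mathrm{TP}/n-(\mathrm{FP}/n)\,t/(1-t)$; a sketch follows (Python; illustrative, since the paper does not spell out its computation):

```python
import numpy as np

def net_benefit(y_true, p_pred, thresholds):
    """Net benefit of a prediction model across high-risk thresholds."""
    y_true, p_pred = np.asarray(y_true), np.asarray(p_pred)
    n = len(y_true)
    out = []
    for t in thresholds:
        high_risk = p_pred >= t
        tp = np.sum(high_risk & (y_true == 1))   # true positives
        fp = np.sum(high_risk & (y_true == 0))   # false positives
        out.append(tp / n - fp / n * t / (1 - t))
    return np.array(out)
```

Evaluating this for the clinical-only and risk-score-plus-clinical models over a grid of thresholds yields curves like those summarized in Table 11.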
### 3.5. Comparison with Other MSI Prediction Studies
To further verify the performance of the model, it was compared against other MSI prediction studies; the comparison results are shown in Table 12. The study in [21] developed three prediction models for MSI by extracting morphology, texture, Gabor wavelet, and other radiomic features from CT images, using Lasso feature selection and naive Bayes classifiers: one model using clinical features alone, one using radiomic features alone, and one combining both with clinical features. The AUC value of the model using clinical features alone was 0.598, the AUC value of the model using radiomic features alone was 0.688, and the AUC value of the model combining radiomic and clinical features was 0.752, leaving a large gap to the classification performance of the MSI prediction model proposed here.Table 12
Performance comparison of MSI prediction models.
| Method | Type of data | Image features | Clinical features | Joint model |
|---|---|---|---|---|
| Win | Histopathological images | 0.74 | - | - |
| Nano | CT image | 0.68 | 0.599 | 0.755 |
| Proposed model | Histopathological images | 0.75 | 0.697 | 0.801 |

Win trained a ResNet-18 network on slices of histopathological images to obtain the likelihood distribution of each patient's MSI status, generated patch likelihood histogram features, and used an XGBoost classifier to predict the patient's MSI status [22]. That model has an AUC value of 0.93 on the training set but only 0.73 on the test set, indicating obvious overfitting.
## 4. Discussion and Findings
This paper proposes an MSI prediction method based on the texture features of gastric cancer histopathological images. Texture features such as GLCM, GLSZM, and GLRLM features were extracted from the original and wavelet-transformed images, Lasso regression was employed for feature selection, and an MSI prediction label for gastric cancer was constructed from the texture features most relevant to MSI status. The classification performance of the predictive label was verified on the training and validation sets, with AUC values of 0.75 and 0.74, respectively. The results show that the proposed predictive signature captures the MSI status of gastric cancer patients well. Compared with the traditional MSI detection methods, machine learning predicts MSI directly from readily available histopathological images, without additional laboratory genetic testing or immunohistochemical analysis, so MSI status can be predicted at lower cost. This method also outperforms computer-aided MSI prediction methods based on CT images, because radiomic features are less reproducible across different scanners and imaging protocols than features derived from H&E-stained histopathological images. This investigation therefore proposes and confirms a strategy for predicting MSI in gastric cancer from histopathological pictures that can accurately predict the MSI status of patients with gastric cancer, allowing universal MSI screening and benefiting many more gastric cancer patients.
## 5. Conclusion
This study proposes and validates a method for predicting MSI in gastric cancer based on histopathological images, which can effectively predict the MSI status of patients with gastric cancer, providing a possibility for universal MSI screening and promising benefit to more gastric cancer patients. MSI prediction models for gastric cancer were constructed by combining the clinical features with the predictive labels; compared with a prediction model based on clinical characteristics alone, adding the predicted labels proposed in this paper improved the AUC value of the model from 0.696 to 0.802. To further verify the validity and clinical value of the predicted labels, the prediction models before and after adding them were evaluated with calibration curves, C-index values, and decision curves. The results show that after adding the predicted labels proposed in this paper, the C-index value and the calibration of the curves improve significantly, and decision curve analysis also demonstrates a greater net benefit.
---
*Source: 1012684-2022-07-04.xml* | 1012684-2022-07-04_1012684-2022-07-04.md | 45,137 | ML-Based Texture and Wavelet Features Extraction Technique to Predict Gastric Mesothelioma Cancer | Neeraj Garg; Divyanshu Sinha; Babita Yadav; Bhoomi Gupta; Sachin Gupta; Shahajan Miah | BioMed Research International
(2022) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1012684 | 1012684-2022-07-04.xml | ---
## Abstract
Microsatellites are small, repetitive sequences found all across the human genome. Microsatellite instability is the phenomenon of variations in the length of microsatellites induced by the insertion or deletion of repeat units in tumor tissue (MSI). MSI-type stomach malignancy has distinct genetic phenotypes and clinic pathological characteristics, and the stability of microsatellites influences whether or not patients with gastric mesothelioma react to immunotherapy. As a result, determining MSI status prior to surgery is critical for developing treatment options for individuals with gastric cancer. Traditional MSI detection approaches need immunological histochemistry and genetic analysis, which adds to the expense and makes it difficult to apply to every patient in clinical practice. In this study, to predict the MSI status of gastric cancer patients, researchers used image feature extraction technology and a machine learning algorithm to evaluate high-resolution histopathology pictures of patients. 279 cases of raw data were obtained from the TCGA database, 442 samples were obtained after preprocessing and upsampling, and 445 quantitative image features, including first-order statistics of impressions, texture features, and wavelet features, were extracted from the histopathological images of each sample. To filter the characteristics and provide a prediction label (risk score) for MSI status of gastric cancer, Lasso regression was utilized. The predictive label’s classification performance was evaluated using a logistic classification model, which was then coupled with the clinical data of each patient to create a customized nomogram for MSI status prediction using multivariate analysis.
---
## Body
## 1. Introduction
Gastric cancer is one of the most common malignant tumors in the world. There were 1,033,701 new cancer cases, accounting for 5.7% of the global new cancer cases, and 782, 685 deaths, accounting for 8.2% of global cancer deaths. It ranks fifth in cancer incidence and third in mortality, and there is no decreasing trend in the incidence rate [1]. The heterogeneity of cancer, the appearance of gastric cancer, and the complex and diverse cancer types make the diagnosis and treatment of cancer more difficult. Microsatellite instability results from an impaired DNA mismatch repair, and a specific cancer phenotype is characterized by hypervariability of short repeats in the genome, a form characterized by DNA polymerase slippage and single nucleotides [2]. Extensive lengths of the microsatellite repeats are due to increased frequency of variants (SNVs). Polymorphism studies have shown that MSI-type gastric cancer accounts for about 15% of gastric cancer patients; these patients are more likely to benefit from immunotherapy [3]. MSI-type gastric cancer patients have their unique clinical features, such as the diffuse cancer tissue genome which is less stable, the disease site which is often distal to the tumor tissue, and the tumor types which are mostly type 3; MSI-type gastric cancer patients usually have a good overall long-term prognosis, compared with the contemporary MSS-type gastric cancer patients; for MSI-type gastric cancer, the survival rate of patients is high [4]. From precancer to onset, MSI gradually accumulates and increases, and therefore, MSI detection for early diagnosis and screening of gastric cancer is prolonged [5]. The prognosis of gastric cancer patients and the clinical decision-making of adjuvant gastric cancer treatment are of great significance. There are two main methods of MSI detection: immunohistochemistry (Immunohistochemistry, IHC) and polymerase chain reaction (PCR). IHC responds to MSI by detecting the expression of mismatch repair gene state; PCR is carried out through a specific single-nucleotide site gene tagging genetic analysis; however, both IHC and PCR testing methods need to be large-capacity tertiary medical center and require high economic and time cost; it is difficult to extend to every patient in clinical practice [6]. Therefore, none provides timely immune screening for a large number of potential immunotherapy-sensitive patients with point inhibitor therapy, thereby losing the chance to control the disease [7].Histopathology is an essential tool for cancer diagnosis and prediction, and its type reflects the combined effects of molecular changes on cancer cell behavior. Assessing disease progression provides a direct visualization tool. A group of histopathologists can assess cell density, tissue structure, and histological filamentous features such as cleft status which were used to classify lesions. Along with advances in microscopy, imaging technology, and computer technology based on pathological pictures, auxiliary diagnostic models are developing rapidly. Among them, image texture analysis is used for pathology. Image texture feature extraction for cancer grading, Classification and predict for example, the author [8]. For extracting tissue disease from breast cancer patients, the grayscale co-occurrence matrix (GLCM) and the graph run-length matrix (GRLM) of the image is used. 
Euler number and other texture features, using Linear Discriminant Classifier (LDA) are used to map histological images, malignant and non-malignant histopathology Image, and the classification accuracy was 80% and 100%, respectively. The researcher in this study has done extracting three sets of texture features of soft tissue sarcoma: gray level cooccurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and local binary modulus texture analysis using the LBP method to achieve the metastases and lesions of soft tissue sarcoma’s death prediction [9]. Author has trained a deep convolutional neural network; two subtypes of lung cancer can be accurately distinguished from histopathological images: lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). Mutation status of six genes is associated with lung cancer. In this study, tumors, malignancy of the lymph node is a predictor that has consequences for the degree of lymph dissection. Numerous nodal units are engaged in the capillary permeability of the stomach, each with a variable risk of malignancy. This study aimed to construct a deep network system for predicting lymph cancer in numerous nodal sites in individuals with gastric cancer using preoperative CT data. ML techniques are employed for the examination of these CT scans for the investigation of any changes if occurred to predict the ailments and recommend precautions for better curability [10]. The focus of this research was to see if radiomic evaluation employing spectroscopic micro-CT-enhanced nanoparticle contrast enhancing may help distinguish tumors dependent on the amount of malignant cell lymphocytes [11]. In this research to improve survival prognosis, we offer a unique combined multitask system with multilayer characteristics that predicts clinical tumor and metastasis stages simultaneously to detect gastric cancer [12]. This paper can establish to fuse the statistical model of multiple residual networks; it can be obtained from a standard hematoxylin and accurate prediction of prostate cancer patients in histopathological images after eosin staining the mutation status of the speckle-type POZ gene [13].This paper proposes gastric cancer based on the texture features of histopathological images. Authors in this research have forecasted MSI prediction method that targets tumor heterogeneity in gastric cancer histopathology, where researchers have used image feature extraction technology and a machine learning algorithm to evaluate high-resolution histopathology pictures of the patients. 279 cases of raw data were obtained from the TCGA database, out of which 442 samples were acquired after preprocessing and upsampling, and 445 quantitative image features, including first-order statistics of impressions, texture features, and wavelet features, were extracted from the histopathological images of each sample. To filter the characteristics and provide a prediction label (risk score) for MSI status of gastric cancer, Lasso regression was employed. Furthermore, the predictive label’s classification performance was evaluated using a logistic classification model, which was then coupled with the clinical data of each patient to create a customized nomogram for MSI status prediction using multivariate analysis as an achievement of the research.
### 1.1. Organization
The remainder of the paper is organized as follows: Section 2 describes the data and methods employed in the study; Section 3 presents the analysis of the experimental results; Section 4 states the discussion and findings; and Section 5 concludes the study.

As depicted in Figure 1, quantitative image features are first extracted from the histopathological images; Lasso regression is then used to construct the targeted predictive signature; the predictive signature, as an independent predictor, is combined with the patient's clinical features; multivariate analysis by logistic regression is applied to build the predictive model; and finally a personalized nomogram is drawn as the prediction tool, providing a powerful instrument for MSI prediction in gastric cancer patients. The workflow is shown in Figure 1.

Figure 1
Construction process of MSI prediction model for gastric cancer.
## 2. Data and Methods
### 2.1. Patient Data
The histopathological images of gastric cancer used in this paper come from the TCGA database, together with the MSI status of the corresponding gastric cancer patients. Three inclusion criteria were established for the collected data: (1) pathological images with uniform staining, sharp imaging, and no tissue adhesion; (2) complete basic personal information and clinical characteristics; and (3) clear MSI status information. After screening, 277 case samples met the inclusion criteria.
### 2.2. Data Preprocessing
To ensure the validity of the experiment and obtain meaningful results, the problem of sample imbalance must be addressed, so the minority class was augmented by upsampling: for each MSI-type case, the patient's histopathological images were considered, multiple ROIs were selected, and each ROI was treated as an independent sample. The resulting dataset contains 442 samples in total, randomly divided into a training set and a validation set: the training set holds 313 samples, of which 156 are MSI type and 157 are MSS type; the validation set holds 129 samples, of which 64 are MSI type and 65 are MSS type.
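As an illustration of this balancing-and-splitting step, the sketch below builds one sample per ROI and then performs a stratified split. It is a minimal sketch, not the authors' code: the synthetic `patients` structure, the four ROIs per MSI patient, and the random seed are assumptions chosen only to reproduce the 442/313/129 sample counts; `train_test_split` is scikit-learn's standard utility.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 55 MSI "patients" with 4 ROIs each and 222 MSS
# "patients" with 1 ROI each, giving 55*4 + 222 = 442 samples in total.
patients = (
    [{"label": 1, "rois": rng.normal(size=(4, 445))} for _ in range(55)]
    + [{"label": 0, "rois": rng.normal(size=(1, 445))} for _ in range(222)]
)

def build_dataset(patients):
    """Treat every ROI as an independent sample, which upsamples the
    minority (MSI) class because MSI patients contribute several ROIs."""
    X, y = [], []
    for p in patients:
        for roi_features in p["rois"]:
            X.append(roi_features)
            y.append(p["label"])
    return np.asarray(X), np.asarray(y)

X, y = build_dataset(patients)

# A stratified split keeps the MSI/MSS ratio comparable across the two sets,
# mirroring the paper's 313-sample training / 129-sample validation division.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=129, stratify=y, random_state=0)
```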
### 2.3. Image Segmentation
The histopathological images must be processed before image feature extraction, both to ensure the accuracy of the resulting image features and to reduce computational complexity. To obtain the most representative lesion area, the tumor region was annotated under the guidance of a chief physician experienced in histopathological image reading, and the marked lesion area was then reviewed by a second expert. Finally, the ROI of every histopathological image was obtained by segmentation.
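Once the lesion mask has been confirmed, extracting the ROI reduces to a cropping operation. The sketch below is a minimal illustration under the assumption that the annotation is available as a binary mask aligned with the image; it is not the authors' pipeline.

```python
import numpy as np

def crop_roi(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Crop the bounding box of the expert-annotated lesion mask.

    image: 2-D grayscale histopathology tile; mask: boolean array of the
    same shape, True inside the annotated tumor region.
    """
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0 : r1 + 1, c0 : c1 + 1]

# Tiny demo with a synthetic tile and a rectangular annotation.
img = np.arange(100).reshape(10, 10)
msk = np.zeros((10, 10), dtype=bool)
msk[2:6, 3:8] = True
roi = crop_roi(img, msk)  # shape (4, 5): rows 2..5, columns 3..7
```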
### 2.4. Feature Extraction
In this study, features were extracted from the original ROI image obtained by segmentation and from its wavelet-filtered versions. A total of 445 image features were extracted, which can be divided into six groups: first-order statistics, the gray-level co-occurrence matrix (GLCM), the gray-level size zone matrix (GLSZM), the gray-level run-length matrix (GLRLM), the neighboring gray tone difference matrix (NGTDM), and the gray-level dependence matrix (GLDM).

First-order statistics describe the pixel intensity distribution within the region of interest through common statistical indicators. The GLCM describes the second-order joint probability function of the grayscale and spatial correlation characteristics of an image; texture features computed from this matrix give comprehensive information about the direction, adjacent interval, and changing amplitude of the image grayscale [14]. The GLSZM quantifies gray-level zones in the image, where a gray-level zone is defined as the number of connected pixels that share the same gray-level intensity. The GLRLM quantifies gray-level runs, defined as the lengths of consecutive pixels with the same gray value. The NGTDM reflects, through sums of absolute differences, how the average gray values of neighboring pixels differ. The GLDM quantifies gray-level dependencies in the image, where a gray-level dependency is defined as the number of connected pixels within a distance δ that depend on the center pixel [15].

This study extracted 18 features from first-order statistics, mainly including entropy, total energy, mean absolute deviation, and skewness; 22 features from the GLCM, mainly including autocorrelation, joint average, cluster shade, and cluster tendency; 16 features from the GLSZM, mainly including gray-level nonuniformity normalization, zone size nonuniformity, zone percentage, and zone size nonuniformity normalization; 16 features from the GLRLM, including run entropy, run variance, gray-level variance, and run-length nonuniformity normalization; 5 features from the NGTDM, mainly including coarseness, contrast, complexity, and strength; and 14 features from the GLDM, mainly including dependence entropy, dependence nonuniformity, dependence nonuniformity normalization, and dependence variance.
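To make one of these feature groups concrete, the sketch below computes a symmetric, normalized GLCM for a single pixel offset together with two of the GLCM statistics named above (contrast and autocorrelation), and shows a one-level 2-D wavelet decomposition of the kind used to produce the wavelet-filtered images. It is a from-scratch illustration, not the extraction code used in the paper: the 8-level quantization, the (0, 1) offset, and the 'haar' wavelet (via the PyWavelets package) are assumptions, since the paper does not state its exact settings.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the wavelet filtering step

def glcm(img: np.ndarray, levels: int = 8, offset=(0, 1)) -> np.ndarray:
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    g = img.astype(float)
    q = (g / g.max() * (levels - 1)).astype(int)   # quantize to `levels` bins
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[q[r, c], q[r + dr, c + dc]] += 1     # count co-occurring pairs
    P += P.T                                       # make the matrix symmetric
    return P / P.sum()                             # second-order joint probability

def glcm_contrast(P: np.ndarray) -> float:
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def glcm_autocorrelation(P: np.ndarray) -> float:
    i, j = np.indices(P.shape)
    return float(np.sum(P * i * j))

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
P = glcm(img)
features = {"contrast": glcm_contrast(P), "autocorrelation": glcm_autocorrelation(P)}

# One-level 2-D wavelet decomposition: the LL/LH/HL/HH sub-bands are the
# "wavelet-filtered" images from which the same texture features are
# re-extracted (feature names such as wavelet-LL_... refer to these sub-bands).
LL, (LH, HL, HH) = pywt.dwt2(img.astype(float), "haar")
P_LL = glcm(LL - LL.min() + 1e-9)   # shift to positive values before quantizing
```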
### 2.5. Feature Selection
To reduce the complexity of the model and prevent overfitting, features are selected with the Lasso method before modeling [16]. Lasso improves on the traditional linear regression method and provides a new perspective on the general linear regression algorithm: by adding an L1 penalty term, the regression parameters of the resulting model become sparse, the model has good predictive ability, and the selected features are those most relevant to the prediction label. For the feature vectors $x_i$ ($i = 1, 2, \dots, N$) of the given samples, with $x_i \in \mathbb{R}^p$, the objective function of Lasso regression is
$$L(\alpha, \gamma) = \sum_{i=1}^{N} \left( y_i - \sum_{j} x_{ij} a_j \right)^2 + \gamma \sum_{j=1}^{p} \lvert a_j \rvert, \tag{1}$$

where $y_i$ is the label of sample $i$ and $\alpha = (a_j)$ is the vector of regression parameters. To obtain the optimal regression parameters, the minimization of the objective function is transformed into the following subproblems:
$$a_j^{(k+1)} = \arg\min_{a} \frac{L}{2} \left\lVert a - z_j \right\rVert_2^2 + \gamma \lVert a \rVert_1, \tag{2}$$

in which

$$z_j = a_j^{(k)} - \frac{1}{L} \nabla f\!\left(a_j^{(k)}\right), \tag{3}$$

$$\nabla f\!\left(a_j^{(k)}\right) = 2 \sum_{i=1}^{N} x_{ij} \left( \sum_{s=1}^{p} x_{is} a_s^{(k)} - y_i \right). \tag{4}$$

Using proximal gradient descent [17], the algorithm iteratively evaluates Equation (3) and solves Equation (2) with the soft-thresholding function; the final solution is
$$a_j^{(k+1)} = \begin{cases} z_j - \gamma/L, & z_j > \gamma/L, \\ 0, & \lvert z_j \rvert \le \gamma/L, \\ z_j + \gamma/L, & z_j < -\gamma/L. \end{cases} \tag{5}$$

Through the above algorithm, a sparse feature matrix is finally obtained and used to build the classification model.
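Equations (1)–(5) translate directly into the iterative soft-thresholding (ISTA) form of proximal gradient descent, sketched below on synthetic data. This is a from-scratch illustration under the stated objective, not the authors' implementation; in particular, taking the constant L as twice the squared spectral norm of X (the Lipschitz constant of the gradient in Equation (4)) is a standard choice that the paper does not spell out.

```python
import numpy as np

def soft_threshold(z: np.ndarray, t: float) -> np.ndarray:
    """Componentwise soft-thresholding operator of Equation (5)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X: np.ndarray, y: np.ndarray, gam: float, n_iter: int = 500):
    """Minimize ||y - X a||_2^2 + gam * ||a||_1 by proximal gradient descent."""
    L = 2.0 * np.linalg.norm(X, ord=2) ** 2        # Lipschitz constant of grad f
    a = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ a - y)             # Equation (4)
        z = a - grad / L                           # Equation (3)
        a = soft_threshold(z, gam / L)             # Equations (2) and (5)
    return a

# Demo: recover a sparse coefficient vector from noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
a_true = np.zeros(30)
a_true[[2, 7, 19]] = [1.5, -2.0, 0.8]
y = X @ a_true + 0.1 * rng.normal(size=100)
a_hat = lasso_ista(X, y, gam=5.0)
# Most entries of a_hat are driven exactly to zero, giving the sparse
# feature selection described in the text.
```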
### 2.6. Predictive Label Construction
In this study, the sparse feature values and their regression coefficients were used to construct a predictive label (risk score) for each sample. Table 1 shows the binomial deviance of the proposed model against the number of features and log γ. The predicted label of this paper is computed by the formula
$$\text{Risk score} = \sum_{i=1}^{n} \text{Feature}_i \times \alpha_i. \tag{6}$$

Table 1
Binomial deviance of the proposed model versus the number of features and log γ.
| Log γ | Binomial deviance | Number of features |
| --- | --- | --- |
| -2 | 1.4 | 2 |
| -4 | 1.35 | 5 |
| -6 | 1.3 | 5 |
| -8 | 1.25 | 9 |
| -10 | 1.2 | 12 |

Here, $\text{Feature}_i$ is the $i$th feature value of the sample feature vector, and $\alpha_i$ is the regression coefficient corresponding to that feature. Table 2 shows the coefficients of the proposed model over the number of features and log γ.

Table 2
Coefficients of the proposed model over the number of features and log γ.
| Log γ | Coefficients | Number of features |
| --- | --- | --- |
| -2 | 10 | 12 |
| -4 | 5 | 11 |
| -6 | 3 | 8 |
| -8 | 2.5 | 11 |
| -10 | 5 | 12 |

Using the risk score as an independent predictor together with the clinical features of the samples, logistic regression models are built, a personalized nomogram is drawn, and the predictive performance of the model is evaluated through the C-index, the AUC value, the calibration curve, and the decision curve [18].
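To make this modeling step concrete, the sketch below combines a (precomputed) risk score with clinical covariates in a logistic regression and converts the fitted coefficients into nomogram-style points, rescaling the covariate with the largest possible contribution to a 0–100 point axis. The point-scaling rule, the covariate codings, and all data here are common-convention assumptions for illustration, not the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 277

# Hypothetical covariates: risk score from Equation (6), age, gender (0/1),
# and TNM stage (1-4); y is a synthetic stand-in for MSI status.
risk = rng.normal(size=n)
age = rng.integers(30, 91, size=n).astype(float)
gender = rng.integers(0, 2, size=n).astype(float)
stage = rng.integers(1, 5, size=n).astype(float)
X = np.column_stack([risk, age, gender, stage])
y = (risk + 0.02 * (age - 65) + 0.3 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
beta = model.coef_[0]

# Nomogram convention: the covariate with the largest possible contribution
# |beta_j| * range_j spans 0-100 points; the others are scaled proportionally.
mins, maxs = X.min(axis=0), X.max(axis=0)
span = np.abs(beta) * (maxs - mins)

def covariate_points(j: int, xj: float) -> float:
    """Points for covariate j at value xj, zero at its least-risk end."""
    ref = mins[j] if beta[j] >= 0 else maxs[j]
    return 100.0 * abs(beta[j]) * abs(xj - ref) / span.max()

def total_points(x: np.ndarray) -> float:
    return sum(covariate_points(j, x[j]) for j in range(len(x)))

# The total-points axis maps monotonically onto the predicted MSI probability.
pts = total_points(X[0])
prob = model.predict_proba(X[:1])[0, 1]
```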
## 3. Analysis of Experimental Results
### 3.1. Clinical Features
The histopathological images used in this study were obtained from 277 gastric cancer patients, including 55 patients with MSI-type and 222 patients with MSS-type gastric cancer. There were 188 male and 89 female patients, with a median age of 67.64 years (range 33-90 years), and the prevalence of MSI was 19.85% (55/277). Patients were divided into two groups by MSI status; the MSI and MSS groups differ in gender, age, and TNM stage. The clinical characteristics of the patients are shown in Table 3.

Table 3
Clinical characteristics of the patients.
| Feature item | Classification | MSI (n=55) | MSS (n=222) | P value |
| --- | --- | --- | --- | --- |
| Age | Mean | 70.91 | 63.74 | <0.001*** |
| | Range | 46–90 | 36–90 | |
| Gender | Male | 35 (54.5%) | 157 (71.2%) | <0.001*** |
| | Female | 25 (45.5%) | 64 (28.8%) | |
| TNM stage | I | 12 (21.8%) | 25 (11.3%) | <0.001*** |
| | II | 20 (36.4%) | 67 (30.1%) | |
| | III | 19 (34.5%) | 106 (47.7%) | |
| | IV | 4 (7.3%) | 24 (10.8%) | |
### 3.2. Image Feature Screening and Predicted Label Construction
Based on the MSI status, Lasso regression was applied on the training set to filter the features. Figure 2(a) shows the binomial misclassification error against log γ, where the point of least binomial error indicates the optimal number of features to retain in the model. Based on the minimum criterion and the 1-standard-error criterion, with 10-fold cross-validation, dashed vertical lines are drawn at the best γ values. Figure 2(b) shows the Lasso coefficient curves of the image features [19].

Figure 2
Lasso regression process.
(a) Parameter γ tuning process; (b) regression coefficient compression process.

Figure 3
ROC curves of the training set and test set.

The results of the Lasso regression are shown in Table 4: nine features with nonzero coefficients were finally retained, comprising four image features computed from the original image and five computed from the wavelet-filtered images. Each sample's risk score was then calculated with Formula (6). Univariate analysis of the correlation between the nine image features and MSI status showed that all P values were below 0.001, indicating that the selected features are closely and significantly associated with the MSI status of gastric cancer patients.

Table 4
Lasso regression results.
| Feature name | Regression coefficient | P value |
| --- | --- | --- |
| original_firstorder_10Percentile | 0.212204 | <0.001*** |
| original_firstorder_90Percentile | 0.404922 | <0.001*** |
| original_firstorder_Median | 6.118815 | <0.001*** |
| original_firstorder_Skewness | -0.817240 | <0.001*** |
| wavelet-HL_glcm_Imc2 | -0.650800 | <0.001*** |
| wavelet-LL_firstorder_10Percentile | 0.490395 | <0.001*** |
| wavelet-LL_firstorder_Median | -5.750580 | <0.001*** |
| wavelet-LL_glcm_ClusterShade | 1.133542 | <0.001*** |
| wavelet-LL_glrlm_GrayLevelEmphasis | -0.254150 | <0.001*** |
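Given the nine retained features, the risk score of Equation (6) is simply their coefficient-weighted sum. The sketch below copies the coefficients as reconstructed in Table 4; the feature values of the sample are placeholders, since real values come from the extraction step in Section 2.4.

```python
# Coefficients of the nine retained features (as listed in Table 4).
coef = {
    "original_firstorder_10Percentile": 0.212204,
    "original_firstorder_90Percentile": 0.404922,
    "original_firstorder_Median": 6.118815,
    "original_firstorder_Skewness": -0.817240,
    "wavelet-HL_glcm_Imc2": -0.650800,
    "wavelet-LL_firstorder_10Percentile": 0.490395,
    "wavelet-LL_firstorder_Median": -5.750580,
    "wavelet-LL_glcm_ClusterShade": 1.133542,
    "wavelet-LL_glrlm_GrayLevelEmphasis": -0.254150,
}

def risk_score(features: dict) -> float:
    """Equation (6): weighted sum of the selected feature values."""
    return sum(coef[name] * features[name] for name in coef)

# Placeholder feature values for one (normalized) sample.
sample = {name: 0.5 for name in coef}
score = risk_score(sample)
```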
### 3.3. Prediction Accuracy Verification
Based on the selected image texture features, a logistic regression classification model for MSI was trained. As shown in Figure 3, ROC curve analysis on the training set gave an AUC of 0.75. The model was then applied to the validation set, where it predicted MSI status effectively, with an AUC of 0.74 in ROC curve analysis. The nine features constituting the model therefore capture histopathological image characteristics associated with patients' MSI status. Table 5 gives each evaluation index of the classification model [20].

Table 5
Each evaluation index of the classification model.
| Log γ | True positive rate | False positive rate | Train AUC: 0.74 | Train AUC: 0.75 |
| --- | --- | --- | --- | --- |
| -2 | 0 | 0 | 0.1 | 0.1 |
| -4 | 0.2 | 0.2 | 0.16 | 0.18 |
| -6 | 0.45 | 0.45 | 0.21 | 0.25 |
| -8 | 0.65 | 0.65 | 0.45 | 0.48 |
| -10 | 0.85 | 0.85 | 0.71 | 0.75 |
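The verification in this subsection is standard ROC analysis. A minimal scikit-learn sketch follows, with synthetic scores standing in for the model's predicted probabilities; the class counts match the validation set (64 MSI, 65 MSS), but the scores themselves are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 129 validation samples: MSI cases (label 1)
# receive higher predicted probabilities on average.
y_true = np.array([1] * 64 + [0] * 65)
y_score = np.clip(0.5 + 0.2 * y_true + 0.2 * rng.normal(size=129), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
auc = roc_auc_score(y_true, y_score)               # the paper reports 0.75
                                                   # (training) and 0.74 (validation)
```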
### 3.4. Construction and Evaluation of the Nomogram
To reflect the clinical value of the predictive model, the full dataset was used for this analysis. Table 6 and Figure 4 show the model evaluation results.

Table 6
Model evaluation results.
| Dataset | Precision | Recall | F1 value | AUC value |
| --- | --- | --- | --- | --- |
| Training set | 0.68 | 0.73 | 0.72 | 0.75 |
| Validation set | 0.65 | 0.67 | 0.67 | 0.74 |

Figure 4
Model evaluation results.

Nomograms were constructed from the clinical characteristics alone and from the clinical characteristics plus the risk score; the latter nomogram was used to predict the MSI status of gastric cancer patients, as shown in Tables 7 and 8.

Table 7
Evaluation results of the model (before adding risk score).
| Axis | Range |
| --- | --- |
| Points | 0–100 |
| Gender | 0 or 1 |
| Age | 30–90 |
| TNM stage | 1–4 |
| Total points | 0–180 |
| Linear predictor | [-2.5, 2.5] |
| Risk of MSI | 0.1–0.9 |

Table 8
Evaluation results of the model (after adding risk score).
| Axis | Range |
| --- | --- |
| Points | 0–100 |
| Gender | 0 or 1 |
| Age | 30–90 |
| TNM stage | 1–4 |
| Risk score | [-3.5, 3.0] |
| Total points | 0–130 |
| Linear predictor | [-4, 3] |
| Risk of MSI | 0.1–0.9 |

The nomogram includes gender, age, TNM stage, and risk score, allowing users to read off the predicted MSI probability for a patient's combination of covariates. For example, locate the patient's TNM stage on its axis and draw a vertical line from it to the points axis to determine the score corresponding to that stage; repeat this process for each variable, then sum the scores of all covariates and map the total score to the predicted probability of MSI for that gastric cancer patient.

The concordance index (C-index), AUC, and calibration curve were applied to evaluate the predictive performance of the nomogram. The AUC values before and after adding the risk score were 0.696 and 0.802, respectively. The consistency indices are shown in Table 9: after adding the risk score, the C-index improved from 0.7 to 0.8. The calibration curves are shown in Figure 5, where the dotted line represents the ideal prediction. The results show that the calibration curve fits better after adding the prediction label constructed in this study. Table 10 shows the calibration curve comparisons.

Table 9
C-index evaluation of prediction model.
| Predictive model | C-index | 95% CI |
| --- | --- | --- |
| Before adding risk score | 0.7 | 0.64–0.74 |
| After adding risk score | 0.8 | 0.76–0.84 |

Figure 5
Calibration curve comparisons.
(a) Before adding risk score; (b) after adding risk score.

Table 10
Calibration curve comparisons.
(a) Before adding risk score

| Actual probability | Predicted probability | Apparent | Bias-corrected | Ideal |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 |
| 0.2 | 0.2 | 0.1 | 0.09 | 0.2 |
| 0.4 | 0.4 | 0.43 | 0.42 | 0.4 |
| 0.6 | 0.6 | 0.58 | 0.55 | 0.6 |
| 0.8 | 0.8 | 0.78 | 0.75 | 0.8 |

(b) After adding risk score

| Actual probability | Predicted probability | Apparent | Bias-corrected | Ideal |
| --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 0 | 0 |
| 0.2 | 0.2 | 0.2 | 0.19 | 0.2 |
| 0.4 | 0.4 | 0.38 | 0.35 | 0.4 |
| 0.6 | 0.6 | 0.7 | 0.68 | 0.6 |
| 0.8 | 0.8 | 0.79 | 0.78 | 0.8 |

To further validate the clinical utility of the predictive model, decision curve analysis was performed to quantify the net benefit of the nomogram built on the texture features of pathological images. As shown in Figure 6, over the entire range of risk thresholds, the predictive model with the risk score added achieved a larger net benefit. Table 11 shows the decision curve comparison.

Figure 6
Decision curve comparison.

Table 11
Decision curve comparison.
| Net benefit | High-risk threshold | Clinical feature | Risk score + clinical feature | ALL |
| --- | --- | --- | --- | --- |
| -0.05 | 0 | 0.3 | 0.3 | 0.3 |
| 0 | 0 | 0.26 | 0.28 | 0.3 |
| 0.05 | 0.2 | 0.2 | 0.25 | 0.15 |
| 0.1 | 0.4 | 0.16 | 0.19 | 0 |
| 0.15 | 0.6 | 0.1 | 0.15 | 0 |
| 0.2 | 0.8 | -0.01 | -0.01 | 0 |
| 0.25 | 0.9 | 0.03 | 0.1 | 0 |
| 0.3 | 1 | 0 | 0 | 0 |

This result shows that the nomogram with the risk score added has greater clinical application potential.
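For reference, the net benefit plotted in a decision curve is computed, at each high-risk threshold p_t, as NB = TP/n - (FP/n) * p_t / (1 - p_t), the standard decision curve analysis formula. The sketch below applies it to synthetic predictions; the "treat all" and "treat none" strategies are the two reference curves in Figure 6.

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, pt: float) -> float:
    """Net benefit at high-risk threshold pt: TP/n - (FP/n) * pt / (1 - pt)."""
    n = len(y_true)
    treat = y_prob >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * pt / (1.0 - pt)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=277)                       # synthetic MSI labels
p = np.clip(0.3 + 0.5 * y + 0.2 * rng.normal(size=277), 0.01, 0.99)

thresholds = np.linspace(0.05, 0.95, 19)
nb_model = [net_benefit(y, p, t) for t in thresholds]  # the model's curve
nb_all = [net_benefit(y, np.ones_like(p), t) for t in thresholds]  # treat all
nb_none = [0.0 for _ in thresholds]                    # treat none: always 0
```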
### 3.5. Comparison with Other MSI Prediction Studies
To further verify the performance of the model, it was compared with other studies on MSI prediction; the comparison results are shown in Table 12. The study in [21] developed three prediction models for MSI by extracting morphology, texture, Gabor wavelet, and other radiomic features from CT images, using Lasso feature selection and Naive Bayes classifiers: one model using clinical features alone, one using radiomic features alone, and one combining radiomic and clinical features. The AUC value of the model using clinical features alone was 0.598, the AUC value of the model using radiomic features alone was 0.688, and the AUC value of the model combining radiomic and clinical features was 0.752, leaving a large gap to the classification performance of the MSI prediction model proposed here.

Table 12
Performance comparison of MSI prediction models.
| Method | Type of data | Image features | Clinical features | Joint model |
| --- | --- | --- | --- | --- |
| Win | Histopathological images | 0.74 | n/a | n/a |
| Nano | CT images | 0.68 | 0.599 | 0.755 |
| Proposed model | Histopathological images | 0.75 | 0.697 | 0.801 |

Win trained a ResNet-18 network on slices of histopathological images to obtain the likelihood distribution of the patient's MSI status, generated patch likelihood histogram features, and used an XGBoost classifier to predict the patient's MSI status [22]. That model has an AUC value of 0.93 on the training set but only 0.73 on the test set, indicating obvious overfitting.
## 4. Discussion and Findings
This paper proposes an MSI prediction method based on the texture features of histopathological images of gastric cancer. Texture features such as GLCM, GLSZM, and GLRLM features were extracted from the original and wavelet-transformed images, and Lasso regression was employed for feature selection; the texture features most relevant to the patient's MSI status were then used to construct the MSI prediction labels for gastric cancer. The classification performance of the prediction labels was verified on the training and validation sets, with AUC values of 0.75 and 0.74, respectively. The results show that the proposed predictive signature performs well for determining the MSI status of gastric cancer patients compared with the traditional MSI detection methods initially used: by applying machine learning technology to predict MSI directly from readily available histopathological images, without the need for additional laboratory genetic testing or immunohistochemical analysis, the prediction of MSI status can be achieved at a lower cost. This method also outperforms computer-aided MSI prediction methods based on CT images, because the reproducibility of radiological features across different scanners and imaging protocols is limited, whereas the formation of H&E-stained histopathological images is comparatively stable, favoring the performance of the MSI prediction model proposed in this paper. Therefore, this investigation proposes and confirms a strategy for predicting MSI in gastric cancer based on histopathological pictures that can accurately predict the MSI status of patients with gastric cancer, allowing for universal MSI screening and benefiting more gastric cancer patients.
## 5. Conclusion
This study proposes and validates a method for predicting MSI in gastric cancer based on histopathological images, which can effectively predict the MSI status of patients with gastric cancer, providing a possibility for universal screening of MSI and potentially helping more gastric cancer patients benefit from immunotherapy. By combining the clinical features with the predictive labels proposed in this paper, gastric cancer MSI prediction models were constructed; compared with the prediction model based on clinical characteristics alone, adding the predicted labels improved the AUC value of the model from 0.696 to 0.802. To further verify the validity of the predicted labels and the clinical value of the predictive models, the models before and after adding the predictive labels were evaluated with calibration curves, C-index values, and decision curves. The results show that after adding the predicted labels proposed in this paper, the C-index value and the calibration performance of the curves are significantly improved, and the decision curve analysis also demonstrates a greater net benefit.
---
*Source: 1012684-2022-07-04.xml* | 2022 |
# A Rare Presentation of Orbital Castleman’s Disease
**Authors:** Ruchi Goel; Akash Raut; Ayushi Agarwal; Shweta Raghav; Sumit Kumar; Simmy Chaudhary; Priyanka Golhait; Sushil Kumar; Ravindra Saran
**Journal:** Case Reports in Ophthalmological Medicine
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1012759
---
## Abstract
Castleman’s disease (CD) is an uncommon group of atypical lymphoproliferative disorders. Extranodal involvement such as the orbit is extremely rare. We aim to report a case of a 62-year-old male who presented with left painless proptosis for the past three years. Examination revealed a firm, lobulated mass in the left superotemporal orbit, displacing the globe inferomedially. A well-defined extraconal orbital lesion encasing the left lateral rectus muscle with intraconal extension was seen on Magnetic Resonance Imaging (MRI) that led to the provisional diagnosis of left solitary encapsulated venous malformation. Excision of the mass via lateral orbitotomy was performed. However, on histopathology, the features were consistent with a mixed-cell variant of Castleman’s disease. A detailed systemic workup was unremarkable. Proptosis resolved after surgery and no recurrence was noted in the three-year follow-up. To the best of our knowledge, this is the first case report of a mixed-cell variant of unicentric orbital CD without any systemic features. This case highlights the importance of including CD in the differential diagnosis of well-defined orbital lesions so as to enable its early detection and timely management.
---
## Body
## 1. Introduction
Castleman’s disease (CD), a rare entity first described in 1956 by Dr. Benjamin Castleman, is a group of atypical lymphoproliferative disorders [1, 2]. Also known as angiofollicular lymphoid hyperplasia, it can present as either a unicentric (one-site involvement) or a multicentric disease (more than one site involved) [3]. On the basis of histopathology, four subtypes are found, namely, the hyaline-vascular variant, the plasma cell variant, the mixed-cell type, and the plasmablastic type (plasmablasts expressing Human Herpes Virus- (HHV-) 8 antigen) [4]. Extranodal involvement such as the orbit is extremely uncommon, and so far, very few cases of orbital CD have been reported in the literature [5]. The most common histopathological type of orbital CD is the hyaline-vascular variant (approximately 90% of cases), which is usually unicentric, clinically presenting as a gradually progressive orbital mass. We describe a rare case of a unicentric mixed-cell variant of orbital CD that was initially misdiagnosed as a solitary encapsulated venous malformation.
## 2. Case
A 62-year-old man presented with protrusion of the left eye for the past three years. It was insidious in onset, painless, and gradually progressive. There was no history of trauma, systemic illness, previous surgery, or diplopia. The best corrected visual acuity (BCVA) was 20/200 in the right eye and 20/50 in the left eye. On examination, a nontender, nonpulsatile, lobulated, firm mass was palpated in the left superotemporal region, not associated with a change in size on Valsalva maneuver. A horizontal dystopia of 2 mm and a vertical dystopia of 4 mm were observed, displacing the eyeball inferomedially (Figure 1). The extraocular movement of the left eye was limited in levoelevation. Pupillary reaction and fundoscopy were unremarkable. No associated ocular involvement was detected. The systemic examination was within normal limits, with no lymphadenopathy or organomegaly. A complete blood workup, including an immunological profile, showed a relatively increased erythrocyte sedimentation rate (ESR) of 25 mm in the first hour; the rest of the parameters were within normal limits. Immunocompromised status was ruled out. An ultrasound B-scan of the left orbit revealed a mass lesion of low internal reflectivity in the lacrimal gland region.

Figure 1
Clinical photograph showing left superior orbital fullness displacing the eyeball inferomedially.

Contrast-Enhanced Magnetic Resonance Imaging (CE-MRI) of the orbit revealed a well-defined soft tissue lesion predominantly in the superolateral aspect of the extraconal compartment of the left orbit with intraconal extension, abutting the left lateral rectus muscle (Figure 2). The lesion measured 3.2 cm (anteroposterior) × 1.6 cm (transverse) × 2.4 cm (craniocaudal). It appeared isointense on T1 and hyperintense on T2-Weighted Images (T2WI), with tiny hypointense foci on T1 as well as T2WI. Dynamic contrast-enhanced scans revealed progressive accumulation of contrast with persistence on delayed sequences, with the lesion showing diffuse restriction. The lesion reached up to the lateral orbital wall laterally and the roof of the orbit superiorly. Since a provisional diagnosis of left solitary encapsulated venous malformation with bilateral immature senile cataract (right > left) was made, the patient was planned for mass excision via lateral orbitotomy.

Figure 2
CE-MRI (Contrast-Enhanced Magnetic Resonance Imaging) showing a left well-defined orbital lesion in the extraconal compartment abutting the left lateral rectus muscle, with intraconal extension.

Macroscopically, a 3.5 × 3 × 2 cm well-defined, greyish-red, lobulated, firm mass was excised (Figure 3). Histopathological examination revealed sheets of mature-looking lymphoid tissue with attempted pseudofollicle formation and interfollicular hyalinized thick-walled blood vessels. Focal eccentric layering of the mantle zone (“onion skinning”) was present. There was an increased reticulin framework in the interfollicular area. On immunohistochemistry, an intermixed population of CD5 and CD20 cells with numerous plasma cells highlighted by CD138 was noted. Polyclonal expression of both kappa and lambda was seen (Figure 4). These findings were consistent with extranodal Castleman’s disease of the mixed-cell variant. Systemic involvement was ruled out, as serum Interleukin-6 levels, gamma globulins, and Computed Tomography (CT) of the chest, abdomen, brain, and pelvis were unremarkable. A confirmatory diagnosis of unicentric, mixed-cell type orbital CD was made. Surgical excision resulted in complete resolution of proptosis.

Figure 3
Gross examination showing a 3.5 × 3 × 2 cm well-defined, greyish-red, lobulated, and firm mass.

Figure 4
Histopathology suggestive of extranodal CD revealed (a) extranodal tissue showing a dilated blood vessel (black arrow) with multiple sheaths of lymphoid cells with attempted ill-formed follicles (white arrow) (HES2X), (b) mature looking lymphocytes (white arrow) (HES40X), (c) follicular dendritic cells positive for CD23 (black arrow) with garlanding (“onion skinning”) of lymphocytes around it (white arrow), (d) polyclonal expression of kappa and (e) lambda, and (f) CD138-positive for plasma cells (g) CD5-positive T cells and (h) cytotoxic CD8 cells.
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
## 3. Discussion
Orbital CD, an extremely rare entity, most commonly presents as a progressive painless swelling resulting in proptosis or, in the presence of systemic involvement, as B symptoms, i.e., weight loss, fever, and night sweats. The aetiology is multifactorial: viral, neoplastic, and inflammatory mechanisms play an important role in the pathogenesis of unicentric CD (UCD). Of all these, the primary driving force is believed to be neoplastic follicular dendritic cells (FDCs). Aberrant production of interleukin-6 (IL-6) has also been implicated in unicentric CD [6]. In immunocompromised conditions, HHV-8 (Human Herpes Virus/Kaposi sarcoma virus), a lymphotropic virus, escapes the host immune response and replicates in the tissues, resulting in uncontrolled cytokine production, mainly seen in multicentric CD [6].

Associated posterior segment or intracranial involvement must always be ruled out [2]. CD-like clinicopathological findings can be seen in lymphomas, IgG4-related disease, autoimmune disorders (rheumatoid arthritis, systemic lupus erythematosus), Idiopathic Orbital Inflammatory Disease (IOID), and metastatic lesions. Histopathological examination with IHC remains the gold standard for a confirmatory diagnosis. Various management options are available depending upon the location (Table 1). Complete surgical excision in this case was curative, with no signs of recurrence seen during the follow-up of three years.

Table 1
Management of CD: an overview.
| | Treatment | Prognosis |
| --- | --- | --- |
| Unicentric (orbital) CD | Surgery (cornerstone) ± neoadjuvant chemotherapy; alternative: radiotherapy. (i) If associated ocular involvement: trial of steroids (unless contraindicated) followed by radiotherapy | Surgery: excellent prognosis, with a 10-year survival rate >95%. Radiotherapy alone: 2-year survival rate of approximately 80% [7] |
| Multicentric CD | (i) Cytotoxic chemotherapy (CHOP/etoposide/CVAD/COP); (ii) monoclonal antibody: rituximab (+CHOP); (iii) corticosteroids (limited role); (iv) emergent therapies: IL-6 receptor antagonists (tocilizumab, siltuximab), anakinra (IL-1 receptor antagonist), and autologous stem cell transplantation; (v) antiviral agents (in combination/maintenance therapy) | Multicentric CD has a poor long-term prognosis (worst in idiopathic MCD). Emergent therapies including rituximab show a better survival rate, but larger multicenter studies are required |

CHOP: cyclophosphamide, doxorubicin, vincristine, and prednisolone; CVAD: cyclophosphamide, vincristine, doxorubicin, dexamethasone; COP: cyclophosphamide, vincristine, prednisolone.

To conclude, CD must be included as a differential diagnosis of a well-defined orbital lesion. To the best of our knowledge, this is the first case report of a unicentric, mixed-cell variant of orbital CD in the absence of systemic features. A step-wise multidisciplinary approach is crucial for an early diagnosis and appropriate treatment.
---
*Source: 1012759-2020-01-04.xml* | 2020 |
# Age-Based Differences in Care Setting Transitions over the Last Year of Life
**Authors:** Donna M. Wilson; Jessica A. Hewitt; Roger Thomas; Deepthi Mohankumar; Katharina Kovacs Burns
**Journal:** Current Gerontology and Geriatrics Research
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101276
---
## Abstract
Context. Little is known about the number and types of moves made in the last year of life to obtain healthcare and end-of-life support, with older adults more vulnerable to care setting transition issues. Research Objective. Compare care setting transitions across older (65+ years) and younger individuals. Design. Secondary analyses of provincial hospital and ambulatory database data. Every individual who lived in the province for one year prior to death from April 1, 2005 through March 31, 2007 was retained (N=19,397). Results. Transitions averaged 3.5, with 3.9 and 3.4 for younger and older persons, respectively. Older persons also had fewer ER and ambulatory visits, fewer procedures performed in the last year of life, but longer inpatient stays (42.7 days versus 36.2 for younger persons). Conclusion. Younger and older persons differ somewhat in the number and type of end-of-life care setting transitions, a matter for continuing research and healthcare policy.
---
## Body
## 1. Introduction
Rapid population aging is occurring now in most developed and developing countries, leading to an increased interest in palliative and end-of-life care [1, 2]. Dying people are often older, age 65 or more [3], and for terminally ill individuals, and especially older persons, smooth transitions from acute cure-oriented care to palliative care and from one care setting to another are essential for the quality of life remaining [4–7]. Care setting relocations are typically considered the moves that an individual makes from one place to another to obtain healthcare and other supports needed to address their end-of-life care needs, with the person's home often the main end-of-life care setting [8]. Each move, however, involves a number of transitions, such as in care providers, aims of care, and technologies available for use, as well as other less tangible factors such as fatigue, pain, and frustration with having to move and/or relief with moving to a care setting where current care needs can be met. End-of-life care setting transitions are therefore more than just a physical move from one care setting to another; they are also the physical, psychological, emotional, and spiritual changes and impacts that occur as a result of the temporary or permanent moves made in the year before death to obtain healthcare and other needed supports. To date, few studies have focused on the number and types of moves that a person makes in the last year of life to obtain healthcare and other end-of-life support, a notable gap, as this information would assist in planning appropriate end-of-life health policy and care and help to avoid unnecessary or difficult care setting transitions that are traumatizing for the individual and their family [9].
## 2. Background
As indicated, few studies have focused on how often people move during the last year of life, although many studies have indicated that hospital utilization tends to be high in the last year of life [10]. Few of these studies have considered the impact of having to move to receive healthcare and other needed end-of-life services in the last year of life. Some difficulties associated with care setting transitions near death have been studied, such as burden to caregivers and cost of hospital transfers [11–14]. Hospital readmission has also been a focus of some research, with evidence now that 12–25% of hospital discharges in the last year of life are followed by a hospital readmission, and with almost 50% of these readmissions through an emergency room or ER [15, 16].

Older frail and older terminally ill individuals are particularly vulnerable to difficult care setting transitions. Older individuals have a higher prevalence of chronic and terminal illnesses that require general and specialist medical attention, and so they tend to frequently visit a wide range of healthcare practitioners [17, 18]. Healthcare services today are most often provided on a day surgery or outpatient basis, instead of in hospital after admission there, with older persons thus at risk of needing to make frequent same-day trips to obtain health care and then return home. Healthcare technologies and high-tech hospital services have also become centralized in larger cities, with all persons not living in larger cities needing to travel to access these services [19]. Traveling long distances when ill is understandably difficult, and this difficulty is increased when the person is terminally ill. In Canada, specialized palliative care services have remained centralized in larger cities and in larger hospitals, so accessing palliative care services may also involve considerable travel time and additional complexity associated with moving terminally ill people from one place to another [20, 21].

Each trip to obtain needed healthcare services or other supports means a change in care setting. Moves or transfers from one care setting to another often result in care gaps or issues, such as discontinuity in care planning. A recent study found hospital discharge summaries were available for only 12–34% of repeat office or hospital visits, with this gap identified as leading to poor quality of care in 25% of all cases [22]. Other issues, such as increased risk of medical error, are also of concern with care setting changes. Around 50% of medication errors are thought to occur during care setting transitions [23]. For older adults, errors in clinical plans and medications are often more harmful as they are less resilient and more vulnerable to serious illnesses than younger persons [24–26]. The importance of minimizing the number of care setting transitions when terminally ill to reduce any negative impacts or effects of moving cannot be emphasized enough.

Care setting transitions of any kind can be a psychological burden for older adults, in large part because of the stress of leaving a familiar environment and familiar people, often their own home and family members or friends [27–29]. For a terminally ill person, every departure from home to hospital or another care setting could be considered a major emotional risk, as they must realize they may never return home again. 
Travelling long distances to see specialists or to have diagnostic tests performed for progressive disease poses additional risks and burdens, with older persons potentially much more affected than younger persons. Elderly individuals are more likely to have complicated, difficult, and lengthy hospitalizations prior to being discharged home or to a nursing home for continuing care [30]. In short, although care setting transitions may be necessary, they can cause psychological, economic, physical, and social burdens, and these burdens fall more heavily on older adults. It is therefore important to determine the number and types of end-of-life care setting transitions across older and younger persons so as to gain evidence for health policy and healthcare services planning.
## 3. Methods
The paucity of research on the number and types of end-of-life care setting transitions and concern for older terminally ill persons who are more at risk from care setting transitions provided the impetus for a research study. This study involved secondary analyses of complete population-level hospital and ambulatory care data to examine care setting transitions in the last year of life and determine if there were differences in the numbers and types of care setting transitions for older versus younger individuals who had lived one full year in Alberta, a Canadian province, at any time from April 1, 2005 through March 31, 2007.
### 3.1. Data and Participants
Complete individual anonymous data for two recent years were obtained from Alberta Health and Wellness upon request. The data received were individual anonymous data on all persons in the province's healthcare registry (sociodemographic data), inpatient hospital, and ambulatory care (ER, outpatient clinic, and day surgery clinic) databases. Alberta Health and Wellness is a government agency that collects and then supplies healthcare data to researchers. Research ethics approval is required prior to data delivery, with the University of Alberta's Health Research Ethics Board supplying this approval. A total of 19,397 persons who had died in Alberta in the 2006-07 year had one full year of data before death available for analysis. In total, 3,216,624 care episodes were attributed to these 19,397 individuals.

Data cleaning and manipulation using ACCESS were first required to ensure that all data for analysis were error-free (such as 999 recorded as an age instead of as missing data) and that the data reflected only those individuals who had lived for 365 days in Alberta prior to death in Alberta. In addition, care was taken to ensure that all data linkages across the three databases were correct for each subject and that the compiled data were comparable across subjects, with a composite database constructed for this purpose. The composite database data were then analyzed using the SPSS computer program (version 18). As indicated, analyses were restricted to individuals who had at least one complete year of information, which excluded children under the age of 1, any persons who died shortly after moving to the province, and any persons who died out of province. Each care setting transition was defined as any move made in the last 365 days of life, as identified and tabulated from the data contained in the original databases. The composite database thus contained information on every care setting transition, which could be a move from home to hospital for inpatient admission or to visit an ER or ambulatory care clinic, a move required by a discharge home from hospital or an ER or ambulatory care clinic, a transfer from one hospital to another, a transfer from hospital to nursing home, or nursing home to hospital. In addition, the composite database contained data for many other variables per subject, including their total number of inpatient hospital days accumulated in the last year of life, with this total number of stay days calculated by adding all days for each hospitalization episode. Understandably, not all subjects were hospitalized for inpatient care in the last year of life, and not all subjects visited ERs or ambulatory care clinics in the last year of life.
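The cleaning and derivation steps described above can be sketched compactly in code. The snippet below is a minimal, hypothetical illustration in Python/pandas (the study itself used ACCESS and SPSS); every file and column name (`care_episodes.csv`, `person_id`, `episode_days`, and so on) is an assumption for illustration, not the actual Alberta Health and Wellness schema.

```python
import pandas as pd

# Hypothetical inputs: one row per care episode, linked to a registry by person ID.
episodes = pd.read_csv("care_episodes.csv", parse_dates=["admit_date", "discharge_date"])
registry = pd.read_csv("registry.csv", parse_dates=["death_date", "registry_start"])

# Treat sentinel codes such as 999 as missing ages rather than real values.
registry["age"] = registry["age"].where(registry["age"] < 999)

# Keep only decedents with a full 365 days of in-province data before death.
eligible = registry[(registry["death_date"] - registry["registry_start"]).dt.days >= 365]

# Restrict episodes to each person's last 365 days of life.
merged = episodes.merge(eligible[["person_id", "death_date"]], on="person_id")
last_year = merged[(merged["death_date"] - merged["admit_date"]).dt.days.between(0, 365)]

# Total inpatient days = sum of stay days across all hospitalization episodes.
inpatient = last_year[last_year["setting"] == "inpatient"]
total_stay_days = inpatient.groupby("person_id")["episode_days"].sum()

# Each recorded move (admission, discharge, transfer) counts as one transition.
transitions = last_year.groupby("person_id").size().rename("n_transitions")
```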
### 3.2. Data Analyses
The main focus of analyses was to determine if there were differences between younger persons and older adults (age 65+) in the number and type of care setting transitions. To meet this goal, two sets of analyses were conducted. First, descriptive and exploratory analyses were conducted to determine the number of care setting transitions, length of each hospital stay, total inpatient hospital stay days, number of visits to all types (provincial, regional, or local) of hospitals, number of surgical and other procedures performed in the last year of life, number of palliative care visits or admissions, number of cancer care visits or admissions, and also the number of visits to ambulatory care clinics (i.e., outpatient and/or day surgery clinics combined). Sociodemographic variables, such as gender and rural/urban status, were also examined and compared across younger/older subjects. Counts, percentages, chi-square tests, and t-tests were computed to describe the above variables for all subjects collectively and then for younger persons and older adults separately. t-tests for independent samples were used to determine if there were significant age-based differences in the number of care setting transitions in the last year of life.

Logistic regression analysis was then performed to assess care setting transition and healthcare utilization differences between younger and older subjects, with complete information on 10,897 subjects available for this analysis. Gender and urban/rural status were initially included as covariates but were removed because they did not improve model fit. The variables ultimately included in the logistic regression model were entered in four sets: first, total inpatient stay days; second, number of diagnoses, number of all procedures received in the hospital setting, and number of surgical procedures specifically; third, the number of care setting transitions; and finally, the number of outpatient and day surgery visits, the number of outpatient and day surgery procedures, the number of ER visits, and the number of ER procedures.
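The two analytic stages lend themselves to a compact illustration. Below is a minimal, hypothetical sketch in Python (scipy/statsmodels) of an independent-samples t-test and a blockwise logistic regression of the kind described; the study itself used SPSS (version 18), and all data frame and column names (`composite.csv`, `older`, `n_transitions`, and so on) are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical composite dataset; 'older' is 1 for age 65+, 0 otherwise.
df = pd.read_csv("composite.csv")

# Independent-samples t-test for an age-based difference in transitions.
young = df.loc[df["older"] == 0, "n_transitions"]
old = df.loc[df["older"] == 1, "n_transitions"]
t_stat, p_val = stats.ttest_ind(young, old)

# Blockwise logistic regression: covariate sets are entered in ordered blocks,
# mirroring the four sets described above, refitting after each block.
blocks = [
    ["total_stay_days"],
    ["n_diagnoses", "n_procedures", "n_surgical_procedures"],
    ["n_transitions"],
    ["n_outpatient_visits", "n_outpatient_procedures", "n_er_visits", "n_er_procedures"],
]
predictors = []
for block in blocks:
    predictors += block
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["older"], X).fit(disp=False)

# Exponentiated coefficients are the odds ratios reported in Table 2.
odds_ratios = np.exp(model.params)
```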
## 4. Results
### 4.1. Initial Sociodemographic and Care Setting Transition Findings
Nearly three-quarters (73%) of the 19,397 subjects were 65 years of age or older (n=14,168), with a slight preponderance of all subjects being male (n=10,008; 51.6%). More of the younger subjects were male (60.1%, n=3,145), while slightly more of the older subjects were female (51.6%, n=7,306). The majority (81.6%) were urban dwellers (including 79.6% of those <65 and 82.6% of those ≥65).

The 19,397 subjects averaged 3.5 care setting transitions in the last year of life (range of 1–41, standard deviation = 3). A large proportion (81.0%) had between 1 and 5 care setting transitions, while only 3.3% (n=454) had more than 10 transitions. Total inpatient days averaged 41 for all subjects combined. Two-thirds (68.0%) had 1–5 inpatient hospital separations; most of the remainder had no hospital separations, but a small minority (2.1%) had 6 or more admissions. Individual hospitalizations were greater than 10 days for 72.8% of the subjects, while 15.2% had stays of only 1–5 days. The length of stay for the 9,270 persons who were admitted to a very large provincial hospital was typically only 1–5 days, with only 0.1% staying more than 10 days. A majority (84.3%) had no admission to any of the medium-sized regional hospitals in mid-sized cities or small (local) hospitals in towns or small cities. Only 15.4% of all subjects were admitted to a regional hospital, with stays there almost always 1–5 days in length.

The 19,397 subjects had an average of 2.4 major diagnostic or treatment procedures performed on them during their last year of life. Almost half (49.1%) had 1 to 5 procedures performed, with the remainder almost equally split between those who had none and those who had more than 5 procedures. In addition, 43.7% (n=6,063) had undergone one or more surgical procedures in the last year of life. Many of these procedures were performed in ambulatory care settings. Total visits to outpatient and day surgery clinics ranged from 1 to 5 for nearly half of all subjects (47.0%, n=9,112), and 66.4% of all subjects (n=12,876) were admitted to an ER 1 to 5 times. Only 27.4% of all subjects (n=5,314) had one or more palliative care hospital admissions or ambulatory care visits, with 21.6% having accessed hospitals or ambulatory care settings to receive cancer care.
### 4.2. Comparisons
As indicated above, care setting transitions averaged 3.5 across all subjects, but with 3.9 and 3.4 transitions for younger and older persons, respectively. As shown in Table 1, this difference was significant [t=8.3, P<.05]. In contrast, younger and older subjects did not differ significantly in the number of inpatient hospital separations [t=0.48, P>.05; X̅=1.6 for both younger and older subjects]. However, some differences were present, as only 62.0% of younger subjects, compared to 70.2% of those aged ≥65, had 1 to 5 inpatient hospital separations. Younger and older subjects differed significantly in the total number of inpatient days of care accumulated over the year, with older persons hospitalized more days on average [t=9.9, P<.05; X̅=36.2 for younger subjects versus X̅=42.7 for older subjects]. Regardless, the majority of older subjects (75.4%) and of younger subjects (65.2%) had individual hospital stays over 10 days in length. Older subjects had longer stays in large provincial hospitals than younger subjects (X̅=32.4 younger versus X̅=39.8 older), a nonsignificant difference. Older subjects also had longer stays in regional and local hospitals than younger subjects (means of 28.4 and 34.5 versus 24.9 and 24.9 for older and younger subjects, resp.), another nonsignificant difference.

Table 1
Comparisons across younger and older subjects (N=19,397).

| Variable | Younger adults (means) | Older adults (means) | P |
|---|---|---|---|
| Total care setting transitions | 3.9 | 3.4 | .00 |
| Inpatient hospital discharges (count) | 1.6 | 1.6 | .63 |
| Total inpatient days | 36.2 | 42.7 | .00 |
| Provincial hospital stays | 1.6 | 1.2 | .00 |
| Provincial hospital length of stays | 32.4 | 39.9 | .00 |
| Regional hospital stays | .3 | .3 | .08 |
| Regional hospital length of stays | 24.9 | 28.4 | .04 |
| Local hospitalizations | .6 | .8 | .00 |
| Local hospital stays | 24.9 | 34.5 | .00 |
| Number of procedures | 3.6 | 2.0 | .00 |
| Number of surgical procedures only | 2.6 | 1.3 | .00 |
| Palliative care visits | 1.3 | 1.2 | .00 |
| Cancer care visits | 1.9 | 1.6 | .00 |
| Outpatient visits | 6.2 | 4.6 | .00 |
| Outpatient procedures | 12.0 | 8.7 | .00 |
| ER visits | 3.2 | 2.7 | .00 |
| ER procedures | 4.1 | 3.6 | .00 |
| Day surgery visits | 2.9 | 2.4 | .11 |
| Day surgery procedures | 17.5 | 17.4 | .96 |

Mean differences were tested using the t-test for independent samples.

In contrast, younger subjects had a greater number of procedures (total and surgical only) performed on them in comparison to older subjects (means of 3.6 and 2.6 for younger subjects versus 2.0 and 1.3 for older subjects, resp.). These differences were statistically significant (t=22.2, P<.05 for total procedures and t=23.3, P<.05 for surgical procedures only). In addition, younger subjects had a significantly higher average number of visits to outpatient clinics (X̅=6.2 younger versus X̅=4.6 older) and to ERs (X̅=3.2 younger versus X̅=2.7 older) (t=19.6, P<.05 for outpatient clinics and t=16.2, P<.05 for ERs). The number of procedures performed at outpatient clinics (X̅=12.0 younger versus X̅=8.7 older) and in ERs (X̅=4.1 younger versus X̅=3.6 older) was also higher for younger subjects (t=27.8, P<.05 for outpatient procedures and t=8.8, P<.05 for ER procedures, resp.). However, younger and older subjects had a similar number of day surgery visits (X̅=2.9 younger versus X̅=2.4 older) and day surgery procedures (X̅=17.5 younger versus X̅=17.4 older); both were nonsignificant differences. In addition, younger and older subjects had a similar (nonsignificant) number of visits for palliative care (X̅=1.3 younger versus X̅=1.1 older) and for cancer care (X̅=1.9 younger versus X̅=1.6 older).
### 4.3. Logistic Regression Findings
The differences identified above between older and younger subjects were underscored by the findings of the logistic regression analysis. Model fit was assessed by requiring the overall model to remain significant as each set of variables was entered, indicating that the model with the variables of interest fits the data better than the null model. The model with the total number of inpatient stay days added was significantly different from the constant-only model (see the Table 2 model summary). Older subjects had 1.01 times the odds of longer inpatient stays compared to younger subjects (χ2(1)=10.9, P<.05). The overall model remained significant with the addition of the number of diagnoses, number of procedures, and number of surgical procedures (χ2(4)=521.1, P<.05). The addition of the number of care setting transitions also did not change model fit, with the odds ratio of .881 indicating a small but still significant difference between younger and older subjects (χ2(5)=669.3, P<.05). The final model, including the number of visits to outpatient clinics, number of procedures in outpatient clinics, number of ER visits, and number of procedures in emergency rooms, remained significant (χ2(9)=785.9, P<.05). In the final model, the odds ratio of 1.01 for total inpatient stay days indicated slightly higher odds of longer inpatient stays for older subjects, while the odds ratios of .92 for the total number of procedures and .91 for the number of care setting transitions indicated lower odds of each for older subjects relative to younger subjects.

Table 2
Summary of logistic regression findings for younger and older subjects (n=10,897; reference category—older subjects).
| Predictors | B | S.E. | Wald | df | P | OR* |
|---|---|---|---|---|---|---|
| 1. Total stay days | .01 | .00 | 67.27 | 1 | .00 | 1.01 |
| 2. Number of diagnoses | .03 | .00 | 78.31 | 1 | .00 | 1.03 |
| 3. Number of procedures | −.08 | .02 | 24.52 | 1 | .00 | .92 |
| 4. Number of surgical procedures | −.09 | .02 | 17.76 | 1 | .00 | .92 |
| 5. Number of care setting transitions | −.09 | .01 | 69.01 | 1 | .00 | .91 |
| 6. Number of outpatient clinic visits | −.02 | .00 | 34.46 | 1 | .00 | .98 |
| 7. Number of outpatient clinic procedures | .00 | .00 | 2.69 | 1 | .10 | 1.00 |
| 8. Number of emergency room visits | −.02 | .01 | 15.61 | 1 | .00 | .98 |
| 9. Number of emergency room procedures | −.03 | .01 | 18.07 | 1 | .00 | .98 |
| Constant | 1.55 | .04 | | | | |

*OR stands for odds ratios.
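One practical reading note for Table 2: in logistic regression the odds ratio is the exponentiated coefficient, OR = e^B, so negative B values correspond to odds ratios below 1. The published ORs can be checked directly, as in the minimal sketch below; the B values are copied from the table as rounded to two decimals, which explains the small mismatches in some rows (e.g., e^−.09 = .91 where the table reports .92 for surgical procedures, and .97 versus the reported .98 for ER procedures).

```python
import numpy as np

# B coefficients from Table 2, rows 1-9, as published (two-decimal rounding).
b = np.array([0.01, 0.03, -0.08, -0.09, -0.09, -0.02, 0.00, -0.02, -0.03])

# Odds ratios are the exponentiated coefficients: OR = e^B.
print(np.round(np.exp(b), 2))
# approx. 1.01, 1.03, 0.92, 0.91, 0.91, 0.98, 1.00, 0.98, 0.97
```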
## 5. Discussion
The subjects in this end-of-life care setting transitions study were mainly older adults, aged 65 years or older, a finding that is consistent with previous age-based findings in other end-of-life studies [3, 9]. Although younger subjects were more often male, there was a slightly larger number of women among the older subjects, with this finding expected as females tend to live longer than males [31]. The average number of care setting transitions was only 3.5 for all subjects, which is not a large number and could simply indicate two trips to a hospital, a hospital ER, or another ambulatory care setting. A small proportion (4%) had more than 10 care setting transitions in their last year of life. Contrary to expectations, persons under the age of 65 had a significantly higher average number of care setting transitions in the last year of life. This is a major finding, as older people are typically considered high users of healthcare services as they near death [10]. It is possible that death is a more expected outcome of illnesses occurring in old age as compared to illnesses occurring among persons who are less than 65 years of age, with visits to healthcare facilities for diagnostic and treatment efforts thus understandably differing. It is also possible that the illnesses suffered by younger people and older people differ in type and severity, such that younger people are more in need of healthcare and other supports over the last year of life.

Although the average number of transitions for all subjects (3.5) and across older and younger subjects (3.4 versus 3.9, resp.) was relatively low, it is also important to note that 4% had 10 or more care setting transitions, with 41 the highest number recorded. Some persons clearly travelled more often to access healthcare and other end-of-life supports, and each of these trips could involve many hours of travel. Although these persons and all of the others would likely have benefitted overall from this travelling to access healthcare and other end-of-life supports, it could also be argued that any and all care setting transitions occurring in the last year of life represent a large number of risks and other considerations or adjustments to be made by the individual and their family. In addition, healthcare workers must adapt to a patient whose care needs could vary considerably from one time to another, as care needs typically vary over the course of terminal illnesses. While 3.5 moves or care setting changes may not appear burdensome, each care setting transition should be managed so that high-quality care is obtained upon arrival and the move from one place to another is eased as much as possible. For instance, long stays in ERs prior to admission to hospital could and should be minimized for persons designated as terminally ill. In some cases, care should be taken to reduce the number of care setting transitions, as each poses risks and burdens regardless of the potential benefit.

It is also remarkable that over the last year of life, people under the age of 65 had a higher average number of total procedures performed, a higher average number of surgical procedures performed, a higher average number of procedures performed in the ER, and a higher average number of procedures performed in outpatient clinics as compared to persons 65 years of age or older. There may be some important reasons for these age-based differences. 
One reason could be the tendency to provide cure-oriented care for younger individuals and the corresponding tendency to more often provide noncurative or palliative care for older individuals who are less likely to survive aggressive curative treatments such as major surgery and chemotherapy. Although younger persons may benefit from aggressive curative treatment by surviving, younger irrevocably dying individuals could be subject to more futile care in the last year of life, a major concern. This concern brings to attention the importance of advance care planning for people of all ages [6, 32]. In addition, with improved diagnostic tests, it is becoming increasingly obvious when an illness is incurable and when dying is imminent. Both younger and older persons should be able to benefit from these prognostication advancements.

Ageism is another possible concern with the higher procedure rates among younger versus older persons. Age-based discrimination could be actively or passively occurring, and this is highly problematic if older people who could potentially benefit from diagnostic and therapeutic procedures are not offered them. The finding that older subjects had a higher number of total days in hospital in the last year of life could simply be an outcome of their having received less diagnostic or treatment-oriented healthcare previously or their having less timely access to needed healthcare services. Prolonged stays in hospital could also reflect the greater difficulty of moving older persons from one place to another. Older subjects with rural residences in particular would have long travelling distances to access healthcare and other end-of-life services, as rural areas typically have minimal healthcare services overall [33, 34]. Travelling from rural areas to urban areas and from one urban area to another could also be highly problematic for older people if family members and friends are not able or available to assist with these moves. Difficulties in travel could mean that older people are at risk of refusing tests and treatments that could be beneficial to them. The longer inpatient hospital stays for older subjects are also explained by a higher incidence of chronic illnesses and disabilities with aging, health conditions that often necessitate longer hospital stays as recovery is more complicated [17, 30].

Although longer hospital stays may be indicated for older terminally ill persons with both acute and additional underlying health conditions, the impact of long hospital stays on terminally ill or dying individuals of all ages and their families must be considered. The majority of individuals in both age groups had hospital stays of 10 days or more. Respite for family caregivers could be a welcome benefit to both the family caregiver and the care recipient, but separations from home and family can have serious consequences. One of the greatest concerns is that death can suddenly take place in hospital, with family and friends not present. Sudden death and other sudden health crises, in the absence of a living will, could also result in life support being initiated even when this intensive care is expected to have a negative or nominal outcome.

This study also showed that only about 1 in 5 subjects received specialized palliative care. 
It is important to note that in Alberta few regional and local hospitals have palliative care specialists and specialist palliative care services, with some trips to large provincial hospitals thus likely made specifically for specialist palliative care. This gap in basic hospital services is highly problematic, as most individuals nearing the end of life could benefit from specialized palliative care. This is not the first study to identify gaps in palliative care services [26]. In Alberta, as there are only two cities with large provincial hospitals, most terminally ill persons could have considerable travelling distances to access specialist palliative care services. This travelling is typically by private car, with family members doing the driving. With seasonal weather, these trips to and from hospitals could be very burdensome for both the terminally ill person and their family. Expanding specialist palliative care services to regional and local hospitals would have the advantage of ensuring that more people would be able to access palliative care, and access it more easily as well.
## 6. Conclusion
Healthcare age disparities have been a concern for some time, with older people more often assumed to be high users of hospitals and other healthcare services in the last year of life. The findings of this study revealed that younger people are more often admitted to ERs and outpatient clinics, and thus they have a significantly higher number of care setting transitions in the last year of life as compared to older persons. Some additional health services utilization differences were apparent, such as a higher total number of inpatient care days for older persons. The nature of these care setting transitions and their impact on dying individuals and their families need to be further examined for quality of life and quality of care considerations. One concern is that older dying persons may not be able to return home to familiar surroundings and familiar family caregivers but instead are retained in hospital. All end-of-life care setting transitions, particularly if they are well above the average number, raise a number of risks and considerations for future research and practice planning. Focusing on the provision of more accessible and equitable palliative care for all persons, irrespective of age, must be the goal of future palliative care research and policy action.
---
*Source: 101276-2011-08-07.xml* | 2011 |
## Abstract
Context. Little is known about the number and types of moves made in the last year of life to obtain healthcare and end-of-life support, with older adults more vulnerable to care setting transition issues. Research Objective. Compare care setting transitions across older (65+ years) and younger individuals. Design. Secondary analyses of provincial hospital and ambulatory database data. Every individual who lived in the province for one year prior to death from April 1, 2005 through March 31, 2007 was retained (N=19,397). Results. Transitions averaged 3.5, with 3.9 and 3.4 for younger and older persons, respectively. Older persons also had fewer ER and ambulatory visits, fewer procedures performed in the last year of life, but longer inpatient stays (42.7 days versus 36.2 for younger persons). Conclusion. Younger and older persons differ somewhat in the number and type of end-of-life care setting transitions, a matter for continuing research and healthcare policy.
---
## Body
## 1. Introduction
Rapid population aging is occurring now in most developed and developing countries, leading to an increased interest in palliative and end-of-life care [1, 2]. Dying people are often older, age 65 or more [3], and for terminally ill individuals and especially older persons, smooth transitions from acute cure-oriented care to palliative care and from one care setting to another are essential for quality of life remaining [4–7]. Care setting relocations are typically considered the moves that an individual makes from one place to another to obtain healthcare and other supports needed to address their end-of-life care needs, with the person's home often the main end-of-life care setting [8]. Each move, however, requires a number of transitions, such as in care providers, aims of care, technologies available for use, and other less tangible factors such as fatigue, pain, and frustration with having to move and/or relief with moving to a care setting where current care needs can be met. End-of-life care setting transitions are therefore more than just a physical move from one care setting to another; they are also the physical, psychological, emotional, and spiritual changes and impacts that occur as a result of the temporary or permanent moves made in the year before death to obtain healthcare and other needed supports. To date, few studies have focused on the number and types of moves that a person makes in the last year of life to obtain healthcare and other end-of-life support, an issue as this information would assist planning appropriate end-of-life health policy and care and help to avoid unnecessary or difficult care setting transitions that are traumatizing for the individual and their family [9].
## 2. Background
As indicated, few studies have focused on how often people move during the last year of life, although many studies have indicated that hospital utilization tends to be high in the last year of life [10]. Few of these studies have considered the impact of having to move to receive healthcare and other needed end-of-life services in the last year of life. Some difficulties associated with care setting transitions near death have been studied, such as burden to caregivers and cost of hospital transfers [11–14]. Hospital readmission has also been a focus of some research, with evidence now that 12–25% of hospital discharges in the last year of life are followed by a hospital readmission, and with almost 50% of these readmissions through an emergency room or ER [15, 16].Older frail and older terminally ill individuals are particularly vulnerable to difficult care setting transitions. Older individuals have a higher prevalence of chronic and terminal illnesses that require general and specialist medical attention, and so they tend to frequently visit a wide range of healthcare practitioners [17, 18]. Healthcare services today are most often provided on a day surgery or outpatient basis, instead of in hospital after admission there, with older persons thus at risk of needing to make frequent same-day trips to obtain health care and then return home. Healthcare technologies and high-tech hospital services have also become centralized in larger cities, with all persons not living in larger cities needing to travel to access these services [19]. Traveling long distances when ill is understandably difficult, and this difficulty is increased when the person is terminally ill. In Canada, specialized palliative care services have remained centralized in larger cities and in larger hospitals, so accessing palliative care services may also involve considerable travel time and additional complexity associated with moving terminally ill people from one place to another [20, 21].Each trip to obtain needed healthcare services or other supports means a change in care setting. Moves or transfers from one care setting to another often result in care gaps or issues, such as discontinuity in care planning. A recent study found hospital discharge summaries were available for only 12–34% of repeat office or hospital visits, with this gap identified as leading to poor quality of care in 25% of all cases [22]. Other issues, such as increased risk of medical error, are also of concern with care setting changes. Around 50% of medication errors are thought to occur during care setting transitions [23]. For older adults, errors in clinical plans and medications are often more harmful as they are less resilient and more vulnerable to serious illnesses than younger persons [24–26]. The importance of minimizing the number of care setting transitions when terminally ill to reduce any negative impacts or effects of moving cannot be emphasized enough.Care setting transitions of any kind can be a psychological burden for older adults, in large part because of the stress of leaving a familiar environment and familiar people, often their own home and family members or friends [27–29]. For a terminally ill person, every departure from home to hospital or another care setting could be considered a major emotional risk, as they must realize they may never return home again. 
Travelling long distances to see specialists or have diagnostic tests performed to diagnose progressive disease poses additional risks and burdens, with older persons potentially much more impacted than younger persons. Elderly individuals are more likely to have complicated or difficult and lengthy hospitalizations prior to being discharged home or to a nursing home for continuing care [30]. In short, although care setting transitions may be necessary, they can cause psychological, economic, physical, and social burdens; burdens that are more commonly impactful on older adults. It is therefore important to determine the number and types of end-of-life care setting transitions across older and younger persons so as to gain evidence for health policy and healthcare services planning.
## 3. Methods
The paucity of research on the number and types of end-of-life care setting transitions and concern for older terminally ill persons who are more at risk from care setting transitions provided the impetus for a research study. This study involved secondary analyses of complete population-level hospital and ambulatory care data to examine care setting transitions in the last year of life and determine if there were differences in the numbers and types of care setting transitions for older versus younger individuals who had lived one full year in Alberta, a Canadian province, at any time from April 1, 2005 through March 31, 2007.
### 3.1. Data and Participants
Complete individual anonymous data for two recent years were obtained from Alberta Health and Wellness upon request. The data received were individual anonymous data on all persons in the province's healthcare registry (sociodemographic data), inpatient hospital, and ambulatory care (ER, outpatient clinic, and day surgery clinic) databases. Alberta Health and Wellness is a government agency that collects and then supplies healthcare data to researchers. Research ethics approval is required prior to data delivery, with the University of Alberta's Health Research Ethics Board supplying this approval. A total of 19,397 persons who had died in Alberta in the 2006-07 year had one full year of data before death available for analysis. In total, 3,216,624 care episodes were attributed to these 19,397 individuals.Data cleaning and manipulation using ACCESS were first required to ensure that all data for analysis were error free (such as 999 recorded as an age instead of as missing data) and that the data reflected only those individuals who had lived for 365 days in Alberta prior to death in Alberta. In addition, care was taken to ensure that all data linkages across the three databases were correct for each subject and that the compiled data were comparable across subjects, with a composite database constructed for this purpose. The composite database data were then analyzed using the SPSS computer program (version 18). As indicated, analyses were restricted to individuals who had at least one complete year of information, which excluded children under the age of 1, any persons who died shortly after moving to the province, and any persons who died out of province. Each care setting transition was defined as any move made in the last 365 days of life, as identified and tabulated from the data contained in the original databases. The composite database thus contained information on every care setting transition, which could be a move from home to hospital for inpatient admission or to visit an ER or ambulatory care clinic, a move required by a discharge home from hospital or an ER or ambulatory care clinic, a transfer from one hospital to another, a transfer from hospital to nursing home, or nursing home to hospital. In addition, the composite database contained data for many other variables per subject, including their total number of inpatient hospital days accumulated in the last year of life, with this total number of stay days calculated by adding all days for each hospitalization episode. Understandably, not all subjects were hospitalized for inpatient care in the last year of life, and not all subjects visited ERs or ambulatory care clinics in the last year of life.
### 3.2. Data Analyses
The main focus of analyses was to determine if there were differences between younger persons and older adults (age 65+) in the number and type of care setting transitions. To meet this goal, two sets of analyses were conducted. First, descriptive and exploratory analyses were conducted to determine the number of care setting transitions, length of each hospital stay, total inpatient hospital stay days, number of visits to all types (provincial, regional, or local) of hospitals, number of surgical and other procedures performed in the last year of life, number of palliative care visits or admissions, number of cancer care visits or admissions, and also the number of visits to ambulatory care clinics (i.e., outpatient and/or day surgery clinics combined). Sociodemographic variables, such as gender and rural/urban status, were also examined and compared across younger/older subjects. Counts, percentages, chi square, andt-tests were computed to describe the above variables for all subjects collectively and then younger persons and older adults separately. t-tests for independent samples were used to determine if there were significant age-based differences in the number of care setting transitions in the last year of life.Logistic regression analysis was then performed to assess care setting transition and healthcare utilization differences between younger and older subjects, with complete information on 10,897 subjects available for this analysis. Gender and urban/rural status were initially included as covariates but were removed because they did not improve model fit. The variables included in the logistic regression model ultimately were total inpatient stay days; then as a second set of covariates, number of diagnoses, number of all procedures received in the hospital setting, number of surgical procedures specifically; then as a third set the number of care setting transitions, and as the final set the number of outpatient visits and day surgery visits, number of outpatient and day surgery procedures, number of ER visits, and number of ER procedures.
## 3.1. Data and Participants
Complete individual anonymous data for two recent years were obtained from Alberta Health and Wellness upon request. The data received were individual anonymous data on all persons in the province's healthcare registry (sociodemographic data), inpatient hospital, and ambulatory care (ER, outpatient clinic, and day surgery clinic) databases. Alberta Health and Wellness is a government agency that collects and then supplies healthcare data to researchers. Research ethics approval is required prior to data delivery, with the University of Alberta's Health Research Ethics Board supplying this approval. A total of 19,397 persons who had died in Alberta in the 2006-07 year had one full year of data before death available for analysis. In total, 3,216,624 care episodes were attributed to these 19,397 individuals.Data cleaning and manipulation using ACCESS were first required to ensure that all data for analysis were error free (such as 999 recorded as an age instead of as missing data) and that the data reflected only those individuals who had lived for 365 days in Alberta prior to death in Alberta. In addition, care was taken to ensure that all data linkages across the three databases were correct for each subject and that the compiled data were comparable across subjects, with a composite database constructed for this purpose. The composite database data were then analyzed using the SPSS computer program (version 18). As indicated, analyses were restricted to individuals who had at least one complete year of information, which excluded children under the age of 1, any persons who died shortly after moving to the province, and any persons who died out of province. Each care setting transition was defined as any move made in the last 365 days of life, as identified and tabulated from the data contained in the original databases. The composite database thus contained information on every care setting transition, which could be a move from home to hospital for inpatient admission or to visit an ER or ambulatory care clinic, a move required by a discharge home from hospital or an ER or ambulatory care clinic, a transfer from one hospital to another, a transfer from hospital to nursing home, or nursing home to hospital. In addition, the composite database contained data for many other variables per subject, including their total number of inpatient hospital days accumulated in the last year of life, with this total number of stay days calculated by adding all days for each hospitalization episode. Understandably, not all subjects were hospitalized for inpatient care in the last year of life, and not all subjects visited ERs or ambulatory care clinics in the last year of life.
## 3.2. Data Analyses
The main focus of analyses was to determine if there were differences between younger persons and older adults (age 65+) in the number and type of care setting transitions. To meet this goal, two sets of analyses were conducted. First, descriptive and exploratory analyses were conducted to determine the number of care setting transitions, the length of each hospital stay, total inpatient hospital stay days, the number of visits to all types (provincial, regional, or local) of hospitals, the number of surgical and other procedures performed in the last year of life, the number of palliative care visits or admissions, the number of cancer care visits or admissions, and the number of visits to ambulatory care clinics (i.e., outpatient and/or day surgery clinics combined). Sociodemographic variables, such as gender and rural/urban status, were also examined and compared across younger/older subjects. Counts, percentages, chi-square tests, and t-tests were computed to describe the above variables for all subjects collectively and then for younger persons and older adults separately. t-tests for independent samples were used to determine if there were significant age-based differences in the number of care setting transitions in the last year of life.

Logistic regression analysis was then performed to assess care setting transition and healthcare utilization differences between younger and older subjects, with complete information on 10,897 subjects available for this analysis. Gender and urban/rural status were initially included as covariates but were removed because they did not improve model fit. The variables ultimately included in the logistic regression model were entered in four sets: first, total inpatient stay days; second, the number of diagnoses, the number of all procedures received in the hospital setting, and the number of surgical procedures specifically; third, the number of care setting transitions; and fourth, the number of outpatient and day surgery visits, the number of outpatient and day surgery procedures, the number of ER visits, and the number of ER procedures.
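A brief sketch of the two analysis stages may help clarify the blockwise design. The study used SPSS (version 18); the version below uses Python with scipy and statsmodels only as an illustration, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("composite.csv")          # hypothetical composite database
df["older"] = (df["age"] >= 65).astype(int)

# Independent-samples t-test: care setting transitions, younger vs. older.
young = df.loc[df["older"] == 0, "transitions"]
old = df.loc[df["older"] == 1, "transitions"]
t, p = stats.ttest_ind(young, old)
print(f"t = {t:.1f}, p = {p:.3f}")

# Logistic regression with covariates entered in four cumulative blocks,
# mirroring the four covariate sets described above.
blocks = [
    ["total_stay_days"],
    ["n_diagnoses", "n_procedures", "n_surgical_procedures"],
    ["n_transitions"],
    ["n_outpatient_visits", "n_outpatient_procedures",
     "n_er_visits", "n_er_procedures"],
]
predictors = []
for block in blocks:
    predictors += block
    X = sm.add_constant(df[predictors])
    model = sm.Logit(df["older"], X).fit(disp=0)
    # Likelihood-ratio test of the current model against the null model.
    print(predictors, "LR chi2 =", round(model.llr, 1),
          "p =", round(model.llr_pvalue, 4))

# Odds ratios are the exponentiated coefficients of the final model.
print(np.exp(model.params))
```

Entering the covariate sets cumulatively, as in the loop above, mirrors the sequential model-fit checks reported later in the logistic regression findings.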
## 4. Results
### 4.1. Initial Sociodemographic and Care Setting Transition Findings
Nearly three-quarters (73%) of the 19,397 subjects were 65 years of age or older (n=14,168), with a slight preponderance of all subjects being male (n=10,008, 51.6%). More of the younger subjects were male (60.1%, n=3,145), while slightly more of the older subjects were female (51.6%, n=7,306). The majority (81.6%) were urban dwellers (including 79.6% of those <65 and 82.6% of those ≥65).

The 19,397 subjects averaged 3.5 care setting transitions in the last year of life (range of 1–41, standard deviation = 3). A large proportion (81.0%) had between 1 and 5 care setting transitions, while only 3.3% (n=454) had more than 10 transitions. Total inpatient days averaged 41 for all subjects combined. Two-thirds (68.0%) had 1–5 inpatient hospital separations, with most of the remainder having no hospital separations; a small minority (2.1%) had 6 or more admissions. Individual hospitalizations were greater than 10 days in length for 72.8% of the subjects, while 15.2% had stays of only 1–5 days. The length of stay for the 9,270 persons who were admitted to a very large provincial hospital was typically only 1–5 days, with only 0.1% staying more than 10 days. A majority (84.3%) had no admission to any of the medium-sized regional hospitals in mid-sized cities or small (local) hospitals in towns or small cities. Only 15.4% of all subjects were admitted to a regional hospital, with stays there almost always 1–5 days in length.

The 19,397 subjects had an average of 2.4 major diagnostic or treatment procedures performed on them during their last year of life. Almost half (49.1%) had 1 to 5 procedures performed, with the remainder almost equally split between those who had none and those who had more than 5 procedures performed. In addition, 43.7% (n=6,063) had undergone one or more surgical procedures in the last year of life. Many of these procedures were performed in ambulatory care settings. Total visits to outpatient and day surgery clinics ranged from 1 to 5 for nearly half of all subjects (47.0%, n=9,112), and 66.4% of all subjects (n=12,876) were admitted to an ER 1 to 5 times. Only 27.4% of all subjects (n=5,314) had one or more palliative care hospital admissions or ambulatory care visits, with 21.6% having accessed hospitals or ambulatory care settings to receive cancer care.
### 4.2. Comparisons
As indicated above, care setting transitions averaged 3.5 across all subjects, with 3.9 and 3.4 transitions for younger and older persons, respectively. As shown in Table 1, this difference was significant [t=8.3, P<.05]. In contrast, younger and older subjects did not differ significantly in the number of inpatient hospital separations [t=0.48, P>.05; X̅=1.6 for both younger and older subjects]. However, some differences were present, as only 62.0% of younger subjects, compared to 70.2% of those aged ≥65, had 1 to 5 inpatient hospital separations. Younger and older subjects differed significantly in the total number of inpatient days of care accumulated over the year, with older persons hospitalized more days on average [t=9.9, P<.05; X̅=36.2 for younger subjects versus X̅=42.7 for older subjects]. Regardless, the majority of older subjects (75.4%) and of younger subjects (65.2%) had individual hospital stays over 10 days in length. Older subjects had longer stays in large provincial hospitals than younger subjects (X̅=32.4 younger versus X̅=39.8 older), a nonsignificant difference. Older subjects also had longer stays in regional and local hospitals than younger subjects (means of 28.4 and 34.5 for older subjects versus 24.9 and 24.9 for younger subjects, resp.), another nonsignificant difference.

Table 1
Comparisons across younger and older subjects (N=19,397).
| Variable | Younger adults (mean) | Older adults (mean) | P |
| --- | --- | --- | --- |
| Total care setting transitions | 3.9 | 3.4 | .00 |
| Inpatient hospital discharges (count) | 1.6 | 1.6 | .63 |
| Total inpatient days | 36.2 | 42.7 | .00 |
| Provincial hospital stays | 1.6 | 1.2 | .00 |
| Provincial hospital length of stays | 32.4 | 39.9 | .00 |
| Regional hospital stays | .3 | .3 | .08 |
| Regional hospital length of stays | 24.9 | 28.4 | .04 |
| Local hospitalizations | .6 | .8 | .00 |
| Local hospital stays | 24.9 | 34.5 | .00 |
| Number of procedures | 3.6 | 2.0 | .00 |
| Number of surgical procedures only | 2.6 | 1.3 | .00 |
| Palliative care visits | 1.3 | 1.2 | .00 |
| Cancer care visits | 1.9 | 1.6 | .00 |
| Outpatient visits | 6.2 | 4.6 | .00 |
| Outpatient procedures | 12.0 | 8.7 | .00 |
| ER visits | 3.2 | 2.7 | .00 |
| ER procedures | 4.1 | 3.6 | .00 |
| Day surgery visits | 2.9 | 2.4 | .11 |
| Day surgery procedures | 17.5 | 17.4 | .96 |

Mean differences were tested using the t-test for independent samples.

In contrast, younger subjects had a greater number of procedures (total and surgical only) performed on them in comparison to older subjects (means of 3.6 and 2.6 for younger subjects versus 2.0 and 1.3 for older subjects, resp.). These differences were statistically significant (t=22.2, P<.05 for total procedures and t=23.3, P<.05 for surgical procedures only). In addition, younger subjects had a significantly higher average number of visits to outpatient clinics (X̅=6.2 younger versus X̅=4.6 older) and a significantly higher average number of visits to ERs (X̅=3.2 younger versus X̅=2.7 older) (t=19.6, P<.05 for outpatient clinics and t=16.2, P<.05 for ERs). The number of procedures performed at outpatient clinics (X̅=12.0 younger versus X̅=8.7 older) and in ERs (X̅=4.1 younger versus X̅=3.6 older) was also higher for younger subjects (t=27.8, P<.05 for outpatient procedures and t=8.8, P<.05 for ER procedures, resp.). However, younger and older subjects had a similar number of day surgery visits (X̅=2.9 younger versus X̅=2.4 older) and day surgery procedures (X̅=17.5 younger versus X̅=17.4 older); both were nonsignificant differences. In addition, younger and older subjects had a similar (nonsignificant) number of visits for palliative care (X̅=1.3 younger versus X̅=1.1 older) and a similar (nonsignificant) number of visits for cancer care (X̅=1.9 younger versus X̅=1.6 older).
### 4.3. Logistic Regression Findings
The differences identified above between older and younger subjects were underscored by the findings of the logistic regression analysis. Model fit was required to remain significant as variables were entered, indicating that the model with the variables of interest fit the data better than the null model. The model with total number of inpatient stay days added was significantly different from the constant-only model (see the Table 2 model summary). Older subjects had 1.01 greater odds of longer inpatient stays compared to younger subjects (χ2(1)=10.9, P<.05). The overall model remained significant with the addition of the number of diagnoses, number of procedures, and number of surgical procedures (χ2(4)=521.1, P<.05). The addition of the number of care setting transitions also did not change model fit, with the odds ratio of .881 indicating a small but still significant difference between younger and older subjects (χ2(5)=669.3, P<.05). The final model, including the number of visits to outpatient clinics, number of procedures in outpatient clinics, number of ER visits, and number of procedures in emergency rooms, remained significant (χ2(9)=785.9, P<.05). In the final model, older subjects had 1.01 higher odds of longer inpatient stays compared to younger subjects. Older subjects also had .92 lesser odds of a greater total number of procedures and .91 lesser odds of more care setting transitions than younger subjects.

Table 2
Summary of logistic regression findings for younger and older subjects (n=10,897; reference category—older subjects).
| # | Predictor | B | S.E. | Wald | df | P | OR* |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Total stay days | .01 | .00 | 67.27 | 1 | .00 | 1.01 |
| 2 | Number of diagnoses | .03 | .00 | 78.31 | 1 | .00 | 1.03 |
| 3 | Number of procedures | −.08 | .02 | 24.52 | 1 | .00 | .92 |
| 4 | Number of surgical procedures | −.09 | .02 | 17.76 | 1 | .00 | .92 |
| 5 | Number of care setting transitions | −.09 | .01 | 69.01 | 1 | .00 | .91 |
| 6 | Number of outpatient clinic visits | −.02 | .00 | 34.46 | 1 | .00 | .98 |
| 7 | Number of outpatient clinic procedures | .00 | .00 | 2.69 | 1 | .10 | 1.00 |
| 8 | Number of emergency room visits | −.02 | .01 | 15.61 | 1 | .00 | .98 |
| 9 | Number of emergency room procedures | −.03 | .01 | 18.07 | 1 | .00 | .98 |
|   | Constant | 1.55 | .04 | | | | |

*OR stands for Odds Ratios.
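As a reading aid for Table 2, each odds ratio is simply the exponentiated logistic regression coefficient (OR = e^B). A quick check in Python, using two coefficients copied from the table, reproduces the reported ORs:

```python
# Odds ratios are exponentiated logistic regression coefficients: OR = e^B.
# The B values below are taken from Table 2.
import math

for predictor, b in [("Total stay days", 0.01),
                     ("Number of care setting transitions", -0.09)]:
    print(f"{predictor}: OR = exp({b:+.2f}) = {math.exp(b):.2f}")

# Output:
# Total stay days: OR = exp(+0.01) = 1.01
# Number of care setting transitions: OR = exp(-0.09) = 0.91
```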
## 5. Discussion
The subjects in this end-of-life care setting transitions study were mainly older adults, aged 65 years or older, a finding that is consistent with previous age-based findings in other end-of-life studies [3, 9]. Although younger subjects were more often male, there was a slightly larger number of women among the older subjects, an expected finding as females tend to live longer than males [31]. The average number of care setting transitions was only 3.5 for all subjects, which is not a large number and could simply indicate two trips to a hospital, a hospital ER, or another ambulatory care setting. A small proportion (4%) had more than 10 care setting transitions in their last year of life. Contrary to expectations, persons under the age of 65 had a significantly higher average number of care setting transitions in the last year of life. This is a major finding, as older people are typically considered high users of healthcare services as they near death [10]. It is possible that death is a more expected outcome of illnesses occurring in old age as compared to illnesses occurring among persons who are less than 65 years of age, with visits to healthcare facilities for diagnostic and treatment efforts thus understandably differing. It is also possible that the illnesses suffered by younger people and older people differ in type and severity, such that younger people are more in need of healthcare and other supports over the last year of life.

Although the average number of transitions for all subjects (3.5) and across older and younger subjects (3.4 versus 3.9, resp.) was relatively low, it is also important to note that 4% had 10 or more care setting transitions, with 41 being the highest number recorded. Some persons clearly travelled more often to access healthcare and other end-of-life supports, and each of these trips could involve many hours of travel. Although these persons and all of the others would likely have benefitted overall from this travelling to access healthcare and other end-of-life supports, it could also be argued that any and all care setting transitions occurring in the last year of life represent a large number of risks and other considerations or adjustments to be made by the individual and their family. In addition, healthcare workers must adapt to a patient whose care needs could vary considerably from one time to another, as care needs typically vary over the course of terminal illnesses. While 3.5 moves or care setting changes may not appear burdensome, each care setting transition should be optimized so that high-quality care is obtained upon arrival and the move from one place to another is eased as much as possible. For instance, long stays in ERs prior to admission to hospital could and should be minimized for persons designated as terminally ill. In some cases, care should be taken to reduce the number of care setting transitions, as each poses risks and burdens regardless of the potential benefit.

It is also remarkable that over the last year of life, people under the age of 65 had a higher average number of total procedures performed, a higher average number of surgical procedures performed, a higher average number of procedures performed in the ER, and a higher average number of procedures performed in outpatient clinics as compared to persons 65 years of age or older. There may be some important reasons for these age-based differences.
One reason could be the tendency to provide cure-oriented care for younger individuals and the corresponding tendency to more often provide noncurative or palliative care for older individuals, who are less likely to survive aggressive curative treatments such as major surgery and chemotherapy. Although younger persons may benefit from aggressive curative treatment by surviving, younger irrevocably dying individuals could be subject to more futile care in the last year of life, a major concern. This concern highlights the importance of advance care planning for people of all ages [6, 32]. In addition, with improved diagnostic tests, it is becoming more obvious when an illness is incurable and when dying is imminent. Both younger and older persons should be able to benefit from these prognostication advancements.

Ageism is another possible concern with the higher procedure rates among younger versus older persons. Age-based discrimination could be actively or passively occurring, and this is highly problematic if older people who could potentially benefit from diagnostic and therapeutic procedures are not offered them. The finding that older subjects had a higher number of total days in hospital in the last year of life could simply be an outcome of their having received fewer diagnostic or treatment-oriented healthcare services previously, or of their having less timely access to needed healthcare services. Prolonged stays in hospital could also reflect the greater difficulty of moving older persons from one place to another. Older subjects with rural residences in particular would have long travelling distances to access healthcare and other end-of-life services, as rural areas typically have minimal healthcare services overall [33, 34]. Travelling from rural areas to urban areas, and from one urban area to another, could also be highly problematic for older people if family members and friends are not able or available to assist with these moves. Difficulties in travel could mean that older people are at risk of declining tests and treatments that could be beneficial to them. The longer inpatient hospital stays for older subjects are also explained by the higher incidence of chronic illnesses and disabilities with aging, health conditions that often necessitate longer hospital stays as recovery is more complicated [17, 30].

Although longer hospital stays may be indicated for older terminally ill persons with both acute and additional underlying health conditions, the impact of long hospital stays on terminally ill or dying individuals of all ages and their families must be considered. The majority of individuals in both age groups had hospital stays of 10 days or more. Respite for family caregivers could be a welcome benefit to both the family caregiver and the care recipient, but separations from home and family can have serious consequences. One of the greatest concerns is that death can suddenly take place in hospital, with family and friends not present. Sudden death and other sudden health crises, in the absence of a living will, could also result in life support being initiated even when this intensive care is expected to have a negative or nominal outcome.

This study also showed that only about 1 in 5 subjects received specialized palliative care.
It is important to note that in Alberta few regional and local hospitals have palliative care specialists and specialist palliative care services, so some trips to large provincial hospitals were likely made specifically for specialist palliative care. This gap in basic hospital services is highly problematic, as most individuals nearing the end of life could benefit from specialized palliative care. This is not the first study to have identified gaps in palliative care services [26]. As there are only two cities in Alberta with large provincial hospitals, most terminally ill persons could face considerable travelling distances to access specialist palliative care services. This travelling is typically done by private car, with family members driving. Given seasonal weather, these trips to and from hospitals could be very burdensome for both the terminally ill person and their family. Expanding specialist palliative care services to regional and local hospitals would have the advantage of ensuring that more people overall would be able to access palliative care, and access it more easily as well.
## 6. Conclusion
Healthcare age disparities have been a concern for some time, with older people more often assumed to be high users of hospitals and other healthcare services in the last year of life. The findings of this study revealed that younger people are more often admitted to ERs and outpatient clinics, and thus have a significantly higher number of care setting transitions in the last year of life compared to older persons. Some additional health services utilization differences were apparent, such as higher total inpatient care days for older persons. The nature of these care setting transitions and their impact on dying individuals and their families need to be further examined for quality of life and quality of care considerations. One concern is that older dying persons are not able to return home to familiar surroundings and familiar family caregivers but instead are retained in hospital. All end-of-life care setting transitions, particularly when well above the average number, raise risks and considerations for future research and practice planning. Focusing on the provision of more accessible and equitable palliative care for all persons, irrespective of age, must be the goal of future palliative care research and policy action.
---
*Source: 101276-2011-08-07.xml* | 2011 |
# Origin of the Autophagosome Membrane in Mammals
**Authors:** Yun Wei; Meixia Liu; Xianxiao Li; Jiangang Liu; Hao Li
**Journal:** BioMed Research International
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1012789
---
## Abstract
Autophagy begins with the nucleation of phagophores, which then expand to give rise to the double-membrane autophagosomes. Autophagosomes ultimately fuse with lysosomes, where the cytosolic cargoes are degraded. Accumulation of autophagosomes is a hallmark of autophagy and of neurodegenerative disorders including Alzheimer’s and Huntington’s disease. In recent years, the sources of the autophagosome membrane have attracted a great deal of interest; even so, the membrane donors for autophagosomes are still under debate. In this review, we describe the probable sources of the autophagosome membrane.
---
## Body
## 1. Introduction
Macroautophagy (henceforth referred to as autophagy) is a nonselective “self-eating” process that maintains cellular homeostasis, manages stress responses, and controls the quality of large proteins and cytoplasmic components by eliminating defective or superfluous molecules and structures such as misfolded proteins, damaged mitochondria, excessive peroxisomes, ribosomes, and invading pathogens. Autophagy, which can be induced by exogenous stimulation such as nutrient starvation, endoplasmic reticulum (ER) stress, rapamycin, vitamin D3, and IFN-γ treatment, provides a source of nutrition and energy during periods of stress to promote healthy cell homeostasis and synaptic function. Dysfunction of autophagy, which impairs the clearance of protein aggregates, intracellular pathogens, and aging organelles, is implicated in various neurodegenerative diseases (including Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis), different types of cancer, autoimmune diseases, and uncontrolled infections.

The function of autophagy relies on the formation of double- or multimembrane vesicles named autophagosomes (APs), which play a key role in cell homeostasis by engulfing cargo including damaged mitochondria (mitophagy) and protein aggregates. The activity of autophagy is modulated primarily by the size and number of APs. The accumulation of APs that fail to fuse with lysosomes directly induces cellular toxicity under various stress conditions, such as oxidation and toxic protein aggregation, and this process may be implicated in the pathogenesis of neurodegenerative diseases, tumorigenesis, and infections, among others. The acidic pH and the enzymatic action of hydrolases within the lysosome lead to the breakdown of the internal membranes of APs as well as the APs’ contents. So far, however, the origins of the autophagosomal membrane and the molecular mechanisms of AP formation remain unknown.

Here, we organize and summarize the literature according to four focus areas: the morphology, formation, function, and membrane sources of the AP.
## 2. Morphology of Autophagosome
As a double- or multimembrane organelle, the AP in the cytoplasm is a hallmark and the key initial event of autophagy. Sometimes a double-membrane structure that contains part of the cytoplasmic components can also be judged to be an AP. The size and number of APs may be separately regulated by different subgroups of autophagy-related gene (ATG) proteins and by members of the ATG8/light chain 3 (LC3) protein family. This makes sense considering that modulating AP size may primarily affect cargo selectivity, while regulating the number of APs would mostly regulate autophagy flux. In mammals, AP formation takes 5-10 min [1]. The time course of starvation determines the extent of AP expansion. The amount of ATG9 protein correlates with the number of autophagic bodies; that is, ATG9 levels determine the number of APs. Nitrogen starvation induces APs ranging from 300 to 900 nm in diameter, larger than most other vesicles in the cell. During nonselective autophagy, premature closure of the phagophore results in smaller APs.

Another study has found that the size of an AP may also depend on the specific cargo [2], which can range from proteins to intracellular bacteria (0.06 to 0.2 μm) [3]. The size and formation of APs are regulated by the steady-state level of the microtubule-associated protein LC3 [4], but the regulatory mechanism of this protein is not understood. The size of APs is likely determined at distinct autophagic steps. First, APs can expand by the addition of membrane to the isolation membrane during the early stage of autophagy. Second, APs may grow by fusion with endosomes and lysosomes at the late stage of autophagy, though endosomal/lysosomal fusion is not sufficient for proper autophagosomal growth. Besides, the extent of AP expansion depends mainly on the time course of the induction condition, such as starvation. Reduction of the cellular level of ATG8, which is anchored to the surface of APs, results in smaller APs compared with a wild-type strain, but the number of APs is the same as that of the wild type. Meanwhile, the AP is a highly dynamic organelle whose proteome differs between starvation and basal macroautophagy when degradation is blocked with concanamycin A.
## 3. Formation of Autophagosome
AP formation is a complex series of discrete events that is mediated and controlled by a large number of proteins, but the process is poorly understood. Essentially, APs are formed through induction, expansion (the phagophore/isolation membrane and the omegasome), vesicle completion (the AP), fusion (the amphisome), and degradation (the autolysosome/lysosome) [5], as shown in Figure 1. In the brain, AP biogenesis occurs distally at the neurite tip, far away from the nucleus, in a constitutive process that depends on microtubules and the dynein-dynactin motor complex. Various stress conditions, such as starvation, oxidation, and toxic protein aggregation, can accelerate the biogenesis of APs and lysosomal degradation. A mature AP is generated when the isolation membrane closes; the mature AP then fuses with the vacuole/lysosome, where the contents are degraded and the products are recycled to the cytosol for reuse. Nearly 40 ATGs have been identified, and most of them are conserved across higher eukaryotes, but only some ATGs are directly associated with mammalian AP biogenesis, as shown in Table 1. The homotypic fusion of ATG16-positive, LC3-negative AP precursors is a critical regulatory step in AP biogenesis. As the conjugated form of LC3, LC3-II is associated with the outer autophagosomal membrane following completion and remains with the AP until fusion with the lysosome [6]. As an aside, elevated levels of LC3-II generally correlate with the accumulation of APs in the cell but do not indicate an increase in AP biogenesis.

Table 1
The function of ATG proteins in AP.
| Mammals | Yeast | Features | Function in AP |
| --- | --- | --- | --- |
| ULK1/2 [7] | ATG1 | Serine/threonine kinase; forms a complex with mATG13, FIP200, and ATG101; phosphorylated by mTORC1 and AMPK kinases | Late stage of AP biogenesis |
| ATG2A/B [8] | ATG2 | Interacts with ATG18; associates with autophagosomal membranes through lipid binding, independently of ATG9 | Closure of the AP membrane; late stage of AP biogenesis |
| ATG3 [9] | ATG3 | E2-like enzyme | Facilitates LC3/GABARAP lipidation in highly curved membranes; curvature and maturation of AP biogenesis |
| ATG4A-D [1] | ATG4 | Cysteine protease; phosphorylated by ATG1 | Initial stage of phagophore formation |
| ATG5 [10] | ATG5 | Conjugated by ATG12 | Elongation of the isolation membranes; the AP-formation marker |
| Beclin1 | ATG6/vacuolar protein sorting (Vps)-30 | Conjugated by PI3KC3 and ULK | Intervenes at every major step in autophagic pathways, from AP formation to AP/endosome maturation |
| ATG7 | ATG7 | Autophagy-related E1-like enzyme | Elongation of the AP membranes |
| LC3A/B/C, GABARAP, GATE-16, GABARAPL1/2/3 [4] | ATG8 | Ubiquitin-like protein; conjugates to phosphatidylethanolamine (PE) | Determines the size of APs; induces membrane tethering and fusion; expansion and closure of phagophores |
| ATG9L1 [11] | ATG9 | Transmembrane autophagy-related protein | Initial stage of AP formation; generates the isolation membrane |
| ATG10 | ATG10 | E2-like enzyme; catalyzes or facilitates ATG5-ATG12 conjugation | Promotes autophagolysosome formation |
| — | ATG11 | Scaffold protein | Regulates autophagosome-vacuole fusion |
| ATG12 [12] | ATG12 | Ubiquitin-like molecule; conjugates to ATG5 | Elongation and maturation of the phagophore membrane |
| KIAA0652 [13] | ATG13 | Phosphorylated by (m)TORC1 | Later stage of AP maturation |
| ATG14(L)/Barkor [14] | ATG14 | Autophagy-specific subunit | Fusion of APs with endolysosomes; regulates AP nucleation; the preautophagosome/autophagosome marker |
| ATG16L1/2 [15] | ATG16 | Conjugated by ATG12 and ATG5; E3-ubiquitin-ligase-like enzyme | Elongation of the AP membrane |
| WIPI1/2/3/4 [16] | ATG18 | PtdIns(3)P-binding protein | Recycling of membrane proteins from the vacuole to the late endosome |
| ATG19 [17] | ATG19 | Contains multiple ATG8 binding sites | Serves as a cargo receptor and directly interacts with ATG8 on the isolation membrane |
| — | ATG20 [18] | Sorting nexin | Required for efficient autophagy and membrane tubulation |
| — | ATG21 | PtdIns(3)P-binding protein, only detected at endosomes | Facilitates the recruitment of Atg8-PE to the site of AP formation |
| — | ATG23 | Peripheral membrane protein | Facilitates Atg9 trafficking |
| SNX4 | ATG24 [19] | A member of the BAR domain family of proteins | Inhibits the number of APs |
| — | ATG27 [20] | Transmembrane protein | Retrieval of Atg9 from the vacuole |
| RB1CC1/FIP200 | ATG17 | PI3P-binding effector | — |
| — | ATG29 | Ternary complex with Atg17 and Atg31 | Atg29-Atg31-Atg17 complex [21]; forms a dimer with two crescents for fusion into the expanding phagophore |
| — | ATG31 | Ternary complex with Atg17 and Atg29 | — |
| — | ATG32 [22] | Outer mitochondrial membrane protein | Essential for the initiation of mitophagy; facilitates mitochondrial capture in phagophores |
| ATG101 | — | Interacts with Atg13; maintains ULK1 basal phosphorylation | Interacts with the ULK1 complex via direct binding to ATG13 to induce the formation of APs |

Figure 1
Overview of the autophagy process.

The ULK1/ATG1-ATG13-FIP200-ATG101 protein kinase complex, the VPS (vacuolar protein sorting) 34 (VPS34) complex, the ATG9 trafficking system, the ATG5-ATG12-ATG16L1 complex, and the two ubiquitin-like proteins ATG12 and ATG8/LC3 together with their conjugation systems have all been reported to drive the formation and expansion of the phagophore, which eventually seals to form the complete AP. Meanwhile, AP formation is highly inducible. Amino acid availability regulates autophagy, and the protein kinase complex TORC1/mTORC1 suppresses AP formation in nutrient-rich conditions. Actin is also necessary for starvation-mediated autophagy, acting through the Arp2/3 complex and WHAMM, and actin depolymerization participates in the formation of APs at a very early stage rather than in the maturation steps. In addition, the deconjugation of ATG8-phosphatidylethanolamine (PE) is required for efficient AP biogenesis and optimal phagophore expansion.
## 4. Function of Autophagosome
Autophagy is an evolutionarily conserved cellular process that maintains energy homeostasis. As the marker of autophagy, APs retain robust dynamics and functions in the mouse model of neurodegenerative disease, but AP flux is not increased even as protein accumulates along the axon [23]. Currently, stimulation of AP synthesis is often used to enhance autophagy so as to alleviate the aggregation toxicity of proteins in neurodegeneration and aging. Retrograde transport of APs might play a role in neuronal signaling processes, promoting neuronal morphological complexity and preventing neurodegeneration. Autophagy can promote infection by picornaviruses, such as poliovirus and coxsackieviruses, precisely because APs provide sites for replication. As APs are double-membrane vesicles, the compositions of the outer and inner AP membranes appear to be quite different [24]. The two membranes accordingly play different roles: the inner autophagosomal membrane is in charge of cargo sequestration, while the outer autophagosomal membrane is in charge of fusion with the lysosomal membrane.

Meanwhile, two characteristics make APs a unique type of cellular transport carrier. First, two lipid bilayers surround the cargo; second, these giant vesicles generally have an average diameter of approximately 700 nm, which can further expand to accommodate large structures such as cellular organelles and bacteria [25]. However, accumulation of APs causes cytotoxicity; especially under certain stress conditions, excessive APs that remain unfused to lysosomes directly induce cellular toxicity independent of apoptosis and necroptosis, which may explain why AP accumulation is a hallmark of neurodegenerative disorders including Alzheimer’s and Huntington’s disease and amyotrophic lateral sclerosis [26, 27].
## 5. Source of Autophagosome Membrane
Because autophagy is an unselective bulk degradation pathway, the specific membrane origin of APs remains obscure, though the morphological features of APs are largely common to conventional and alternative autophagy. Unlike yeast and plant cells, mammalian cells have no preautophagosomal structure (PAS). The endoplasmic reticulum exit sites (ERES), mitochondria, ER-mitochondria contact sites, ER-Golgi intermediate compartment (ERGIC), Golgi apparatus, and plasma membrane (PM) have all been suggested to supply lipids to the growing isolation membrane in mammalian cells, but the exact mechanism mediating this process remains ambiguous. It is possible that different membrane sources are used depending on the cell type, growth conditions, physiological conditions, and so on, as shown in Table 2.

Table 2
The source of autophagosome membrane.
| Origin | Probable donor region | Induction condition | Stage of autophagosome formation | Contribution |
| --- | --- | --- | --- | --- |
| Mitochondria | Outer mitochondrial membrane [32] | Serum, or serum and amino acid deprivation [32] | Phagophore expansion | Expansion of the isolation membrane (also called the phagophore) [1] |
| Endoplasmic reticulum | The rough endoplasmic reticulum [33]; subdomain of the ER termed the omegasome | Amino acid starvation [34]; fasted animals | Early stages of autophagosome formation [35] | Phagophore expansion; elongation of isolation membranes [36] |
| Golgi | Trans-Golgi network (TGN) | Nitrogen starvation or fasted animals | Early stages of autophagosome formation [37] | Phagophore formation [38] and expansion; maturation of autophagosomes [39] |
| Plasma membrane | ATG16L vesicles; lipids of the plasma membrane | Amino acid and serum starvation, or nitrogen starvation [40] | Early stages of autophagosome formation [41] | Formation of early autophagic precursors [42] |
| ER-mitochondria contact site | The mitochondria-associated ER membrane (MAM) | Rapamycin and Torin 1 [43] | Uncertain | Phagophore expansion |
| ERGIC | ERGIC-enriched membrane | Nutrient starvation [43] | Generates an early autophagosomal membrane precursor [44] | Triggers phagophore formation |
| Recycling endosomes | Membrane lipids | Nutrient starvation or rich medium | Early stages of autophagosome formation | Supply membrane lipids for autophagosome formation |
## 6. ER-Mitochondria Contact Sites [28]
ER-mitochondria contact sites have gained much attention recently. In 1952, Wilhelm Bernhard first reported ER-mitochondria contact sites in rat liver using electron micrographs [29]. These contact sites are essential for several key cellular processes, such as calcium homeostasis, lipid metabolism, autophagy regulation, mitochondrial dynamics [30], cell survival, energy metabolism, and protein folding [31]. Correct maintenance of the ER-mitochondria interface is a critical part of the autophagic process; moreover, the emergence of phagophores correlates strikingly with the sites of contact between the ER and mitochondria. Upon starvation, components of the autophagy-specific class III PI3K complex (ATG14L, Vps34, Beclin1, and Vps15) all accumulate in the mitochondria-associated membranes (MAMs) fraction and are probably recruited to ER-mitochondria contact sites upon autophagy induction, and the autophagosome-formation marker ATG5 also localizes at the ER-mitochondria contact site until AP formation is complete. AP formation is significantly suppressed in mitofusin 2-knockout cells, in which the ER-mitochondria contact sites are disrupted [32].
## 7. Endoplasmic Reticulum Exit Sites
As the largest organelle in the cell, the ER is in close proximity to other endomembrane compartments because it synthesizes membrane lipids and proteins and transports them outward, establishing membrane-membrane contact sites (MCS) that facilitate signaling events, modulate dynamic organelle processes, and exchange lipids and ions. ERES, the specialized regions of the ER where COPII transport vesicles are generated, are thought to be spatially, physically, and functionally linked to APs. Early autophagic structures associate tightly with the ER membrane owing to the presence of an omegasome subdomain positive for double FYVE-containing protein 1 (DFCP1), a phosphatidylinositol-3-phosphate (PI3P) binding protein; the omegasome, the isolation membrane, and the AP phagophore are probably all derived from the ER [34]. Sandra Maday [45] likewise proposed that APs are generated at DFCP1-positive subdomains of the ER at the distal end of the axon, distinct from ER exit sites, in primary neurons. Studies have found [46] that lysosome membrane-associated protein-2 (LAMP-2), a heavily glycosylated type-1 membrane protein, is critical for translocating syntaxin-17 (STX17) to autophagosomal membranes. Sanchez-Wandelmer et al. [47] found by electron tomography that ERES are core elements in the formation of isolation membranes and that the ER associates with extending isolation membranes or phagophores in mammalian cells. However, contradictory data have emerged indicating that only 30% of all APs are associated with the ER or its specialized regions [36].
## 8. Mitochondria
The mitochondria are among the most important organelles in determining fundamental metabolic activities, iron and calcium homeostasis, and the signal transduction of various cellular pathways. Mitochondrial dysfunction and dysregulation may lead to many human maladies, including cardiovascular diseases, neurodegenerative disease, and cancer. A number of reports suggest [32] that there is a connection between the lipids and proteins of the outer mitochondrial membrane and AP formation. It has been suggested that the outer mitochondrial membrane donates a flow of lipids and membrane proteins to the AP. The early autophagy protein ATG5 and the autophagosomal marker LC3 translocate to puncta localized on the outer membrane of mitochondria following starvation, suggesting that mitochondria play a central role in starvation-induced autophagy. Meanwhile, studies have found that mitochondria donate membrane to AP formation in both basal (in the presence of serum and vehicle) and drug-induced autophagy in a human breast cancer cell line, rather than merely being engulfed by the forming AP [48]. Other studies also show that the connection between the ER and mitochondria is crucial, because in its absence, starvation-induced APs are not formed [32]. The counterargument is that mitochondria contribute nothing to the source of AP membranes, because mammalian ATG9 has been found to localize only to the trans-Golgi network (TGN) and late endosomes, not to mitochondria [49], while ATG9-containing compartments are a source of membranes for the formation and/or expansion of APs.
## 9. Golgi Apparatus
The Golgi apparatus is a major glycosylation site involved in protein and lipid synthesis, modification, and secretion. The Golgi apparatus has been proposed as a pivotal membrane source for mammalian AP formation. This was first observed in developing invertebrate fat body cells by Locke and Collins in 1965. Further studies have found that the Golgi complex contributes to an early stage of autophagy [50]. At cell telophase, Golgi structures within APs are again observed to be distributed at the cell periphery, when the Golgi apparatus is known to reassemble. On this basis, it was proposed that the Golgi apparatus is a membrane source for autophagosomal growth. As the only transmembrane ATG protein, ATG9 has been associated with the Golgi apparatus and may be involved in providing membrane for AP formation [51]. However, others have argued that mammalian ATG9 (mATG9 and ATG9L1) associates with many other compartments, including recycling endosomes, early endosomes, and late endosomes [37]. It is possible that these organelles all participate in AP formation.
## 10. Plasma Membrane
The PM, which forms the barrier between the cytoplasm and the environment [52], plays critical roles in promoting virulence by mediating the secretion of virulence factors, endocytosis, cell wall synthesis, and invasive hyphal morphogenesis. Meanwhile, proteins in the PM also mediate nutrient transport and sense pH, osmolarity, nutrients, and other factors in the extracellular environment. The ability of the PM to contribute to AP formation may be particularly important at times of increased autophagy in mammalian cells. The PM’s large surface area might act as a massive membrane store that allows cells to undergo cycles of AP synthesis at much higher rates than under basal conditions without compromising other processes. ATGs and membranes that are necessary for AP formation originate from the PM [47]. Claudia Puri et al. found that ATG16L1 associates with clathrin-coated pits; after internalization and uncoating, the ATG16L1-associated PM becomes associated with phagophore precursors, which mature into phagophores and then APs [53].
## 11. Recycling Endosomes
In mammalian cells, the endosomal system is extremely dynamic and generates several structurally and functionally distinct compartments, namely, early/recycling endosomes (REs), late endosomes, and lysosomes. The identity of endosomes is ensured by the specific localization of regulators. REs consist of a tubular network that emerges from vacuolar sorting endosomes and diverts cargoes toward the cell surface, the Golgi, or lysosome-related organelles. REs are also implicated in AP formation. mATG9, trafficking from the PM to the REs, is essential for the initiation and progression of autophagy. A. Orsi and his research team found that mATG9-positive structures interact dynamically with phagophores and APs. TBC1D14, a Tre-2/Bub2/Cdc16 (TBC) domain protein, regulates Tfn receptor- (TfnR-) positive REs, which are required for AP formation. As a positive regulator of AP formation, the membrane remodeling protein sorting nexin 18 (SNX18) is required for ATG9A trafficking from REs and for the formation of ATG16L1- and WIPI2-positive AP precursor membranes [54]. ATG16L1- and mATG9-positive vesicles are present at the same sites on REs, and reduced membrane egress from the REs can increase the formation of APs. All of this indicates that REs are probably membrane donors for SNX18-mediated AP biogenesis.
## 12. Endoplasmic Reticulum-Golgi Intermediate Compartment
ERGIC structures may move from ERES to the Golgi apparatus by tracking along microtubules. Membrane traffic between the ER and the Golgi is bidirectional and occurs via mechanisms similar to those of other MAMs. Recently, the ERGIC, a membrane compartment between the ER and Golgi for cargo sorting and recycling, has been proposed as another membrane source for the phagophore. At the ER-Golgi interface, coat protein complex I (COPI) vesicles bud and facilitate retrograde transport from the Golgi and ERGIC, while COPII vesicles may be the precursors of the phagophore membrane. As the donor membrane, the ERGIC is a sorting station undergoing dynamic membrane exchange with COPI and COPII vesicles [55], the latter of which are thought to be a source of membrane for APs at the ERGIC. Liang Ge et al. [44] also found that the generation of COPII vesicles from the ERGIC could be a dedicated event for autophagy-related membrane mobilization, induced by starvation-driven remodeling of ERES. Drugs that disrupt the ERGIC also suppress LC3 conjugation and LC3 dot formation.
## 13. ER-Plasma Membrane Contact Sites (ER-PMcs)
ER-PMcs are also mobilized for AP biogenesis. Studies have identified several functions of ER-PMcs, for example, regulating Ca2+ signaling [56] and conserving lipid homeostasis. The research team of Nascimbeni et al. revealed that ER-PMcs are tethered by extended synaptotagmin (E-Syt) proteins, which are essential for autophagy-associated phosphatidylinositol-3-phosphate (PI3P) synthesis at the cortical ER membrane, and thereby modulate mammalian AP biogenesis [57].

According to other literature, ATG9-containing cytoplasmic vesicles (ATG9 vesicles) [58], ER-lipid droplet (LD) contact sites [59], and COPII vesicles [11] are also considered sources of membrane for building the AP. Still others think that ATG9-associated membranes do not fuse with APs and only regulate autophagy through transient structural or catalytic functions [60].
## 14. Conclusion
Lowering the accumulation of APs may be a treatment option for neurodegenerative diseases involving protein aggregates, so it is important to establish where the AP membrane comes from and how it is formed. In recent years, these questions have attracted a great deal of interest. The ER, mitochondria, Golgi complex, and plasma membrane have all been proposed as sources of autophagosomal membranes. Indeed, the AP membrane probably has multiple sources, as more and more research indicates, and these diverse membrane sources may interact with one another. The differing conclusions reached by different laboratories could be due in part to the different experimental approaches and techniques used. The relative contribution of each source under any one set of conditions remains to be determined. Although distinct sources of AP membranes have been proposed, it is not clear to what extent they are mutually exclusive or whether they coalesce and cooperate. Much research is still needed.
---
*Source: 1012789-2018-09-24.xml* | 1012789-2018-09-24_1012789-2018-09-24.md | 26,520 | Origin of the Autophagosome Membrane in Mammals | Yun Wei; Meixia Liu; Xianxiao Li; Jiangang Liu; Hao Li | BioMed Research International
(2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1012789 | 1012789-2018-09-24.xml | ---
## Abstract
Autophagy begins with the nucleation of phagophores, which then expand to give rise to the double-membrane autophagosomes. Autophagosomes ultimately fuse with lysosomes, where the cytosolic cargoes are degraded. Accumulation of autophagosomes is a hallmark of autophagy and neurodegenerative disorders including Alzheimer’s and Huntington’s disease. In recent years, the sources of autophagosome membrane have attracted a great deal of interests, even so, the membrane donors for autophagosomes are still under debate. In this review, we describe the probable sources of autophagosome membrane.
---
## Body
## 1. Introduction
Macroautophagy (henceforth known as autophagy) is a nonselective “self-eating” process that maintains cellular homeostasis, manages stress responses, and controls large proteins and cytoplasmic components quality by eliminating defective or superfluous molecules/structures such as misfolded proteins, damaged mitochondria, excessive peroxisomes, ribosomes, and invading pathogens. Autophagy, which can be induced by exogenous stimulations, such as nutrient starvation, endoplasmic reticulum (ER) stress, rapamycin, vitamin D3, and IFN-γ treatment, provides a source of nutrition and emergy during periods of stress to promote healthy cell homeostasis and synaptic function. Dysfunction of autophagy is a misregulated process in various neurodegenerative diseases, different types of cancer, autoimmune diseases, and uncontrolled infections characterized by the accumulation of protein aggregates, degradation of intracellular pathogens, and clearance of aging organelles, including Alzheimer’s disease, Huntington’s disease, Parkinson’s disease, and amyotrophic lateral sclerosis.The function of autophagy relies on the formation of double- or multimembrane vesicles named autophagosome (AP), which plays a key role in cell homeostasis through engulf cargo including damaged mitochondria (mitophagy) and protein aggregates. The activity of autophagy is modulated primarily by the size and number of APs. The production/accumulation of APs subsequently unfuse to lysosomes (or accumulation of APs) directly induces cellular toxicity under the condition of various stress conditions, such as oxidation and toxic protein aggregation, and this process may be implicated in the pathogenesis of neurodegenerative diseases, tumorigenesis, and infections, among others. The acidic pH and enzymatic action of hydrolases within the lysosome lead to the breakdown of the internal membranes of APs as well as the APs’ contents. So far, however, the origins of the autophagosomal membrane and the molecular mechanisms of AP formation are still unknown.Here, we organize and summarize the papers in this issue according to four focus areas: morphology, formation, function, and source of AP.
## 2. Morphology of Autophagosome
As a double- or multimembrane organelle, the AP in the cytoplasm is a hallmark of autophagy and marks its key initial event. A double-membrane structure containing part of the cytoplasmic components can sometimes also be judged to be an AP. The size and number of APs may be separately regulated by different subgroups of autophagy-related gene (ATG) proteins and by members of the ATG8/light chain 3 (LC3) protein family. This makes sense considering that modulating AP size may primarily affect cargo selectivity, while regulating the number of APs could mostly regulate autophagic flux. In mammals, AP formation takes 5-10 min [1]. The time course of starvation determines the extent of AP expansion. The amount of ATG9 protein correlates with the number of autophagic bodies; that is, ATG9 levels determine the number of APs. Nitrogen starvation induces APs ranging from 300 to 900 nm in diameter, larger than most other vesicles in the cell. During nonselective autophagy, premature closure of the phagophore results in smaller APs.

Another study found that the size of an AP may also depend on its specific cargo [2], which can range from proteins to intracellular bacteria (0.06 to 0.2 μm) [3]. The size and formation of APs are regulated by the steady-state level of the microtubule-associated protein LC3 [4], but the regulatory mechanism of this protein is not understood. The size of APs is likely determined at distinct autophagic steps. First, APs can expand by the addition of membrane to the isolation membrane during the early stage of autophagy. Second, APs may grow by fusion with endosomes and lysosomes at the late stage of autophagy, though endosomal/lysosomal fusion is not sufficient for proper autophagosomal growth. Besides, the extent of AP expansion depends mainly on the time course of the inducing condition, such as starvation. Reducing the cellular level of ATG8, which is anchored to the surface of APs, results in smaller APs compared with the wild-type strain, but the number of APs is the same as in the wild type. Meanwhile, the AP is a highly dynamic organelle, and its proteome differs between starvation-induced and basal macroautophagy when flux is blocked with concanamycin A.
## 3. Formation of Autophagosome
AP formation is a complex series of discrete events mediated and controlled by a large number of proteins, but the process is poorly understood. Essentially, APs are formed through induction, expansion (the phagophore/isolation membrane and the omegasome), vesicle completion (the AP), fusion (the amphisome), and degradation (the autolysosome/lysosome) [5], as shown in Figure 1. In the brain, AP biogenesis occurs distally in a constitutive process at the neurite tip, far from the nucleus, in a microtubule- and dynein-dynactin motor complex-dependent manner. Various stress conditions, such as starvation, oxidation, and toxic protein aggregation, can accelerate the biogenesis of APs and lysosomal degradation. A mature AP is generated when the isolation membrane closes; the mature AP then fuses with the vacuole/lysosome, where the contents are degraded and the products recycled to the cytosol for reuse. Nearly 40 ATGs have been identified, and most of them are conserved across higher eukaryotes, but only some ATGs are directly associated with mammalian AP biogenesis, as shown in Table 1. The homotypic fusion of ATG16-positive, LC3-negative AP precursors is a critical regulatory step in AP biogenesis. As the conjugated form of LC3, LC3-II is associated with the outer surface of the autophagosomal membrane following completion and remains with the AP until fusion with the lysosome [6]. As an aside, elevated levels of LC3-II generally correlate with the accumulation of APs in the cell but do not indicate an increase in AP biogenesis.

Table 1
The function of ATG proteins in AP.
| Mammals | Yeast | Features | Function in AP |
|---|---|---|---|
| ULK1/2 [7] | ATG1 | serine/threonine kinase; forms a complex with mATG13, FIP200, and ATG101; phosphorylated by mTORC1 and AMPK kinases | late stage of AP biogenesis |
| ATG2A/B [8] | ATG2 | interacts with ATG18; associates with autophagosomal membranes through lipid binding, independently of ATG9 | closure of the AP membrane; late stage of AP biogenesis |
| ATG3 [9] | ATG3 | E2-like enzyme; facilitates LC3/GABARAP lipidation in highly curved membranes | curvature and maturation of AP biogenesis |
| ATG4A-D [1] | ATG4 | cysteine protease; phosphorylated by ATG1 | initial stage of phagophore formation |
| ATG5 [10] | ATG5 | conjugated by ATG12 | elongation of the isolation membranes; the AP-formation marker |
| Beclin1 | ATG6/vacuolar protein sorting (Vps)-30 | conjugated by PI3KC3 and ULK | intervenes at every major step in autophagic pathways, from AP formation to AP/endosome maturation |
| ATG7 | ATG7 | autophagy-related E1-like enzyme | elongation of the AP membranes |
| LC3A/B/C, GABARAP, GATE-16, GABARAPL1/2/3 [4] | ATG8 | ubiquitin-like protein; conjugates to phosphatidylethanolamine (PE) | determines the size of APs; induces membrane tethering and fusion; expansion and closure of phagophores |
| ATG9L1 [11] | ATG9 | transmembrane autophagy-related protein | initial stage of AP formation; generates the isolation membrane |
| ATG10 | ATG10 | E2-like enzyme; catalyzes or facilitates ATG5-ATG12 conjugation | promotes autophagolysosome formation |
| — | ATG11 | scaffold protein | regulates autophagosome-vacuole fusion |
| ATG12 [12] | ATG12 | ubiquitin-like molecule; conjugates to ATG5 | elongation and maturation of the phagophore membrane |
| KIAA0652 [13] | ATG13 | phosphorylated by (m)TORC1 | later stage of autophagosome maturation |
| ATG14(L)/Barkor [14] | ATG14 | autophagy-specific subunit | fusion of APs to endolysosomes; regulates autophagosome nucleation; the preautophagosome/autophagosome marker |
| ATG16L1/2 [15] | ATG16 | conjugated by ATG12 and ATG5; E3 ubiquitin-ligase-like enzyme | elongation of the AP membrane |
| WIPI1/2/3/4 [16] | ATG18 | PtdIns(3)P-binding protein | recycling of membrane proteins from the vacuole to the late endosome |
| ATG19 [17] | ATG19 | contains multiple ATG8 binding sites | serves as cargo receptor and directly interacts with ATG8 on the isolation membrane |
| — | ATG20 [18] | sorting nexin | required for efficient autophagy and membrane tubulation |
| — | ATG21 | PtdIns(3)P-binding protein, only detected at endosomes | facilitates the recruitment of Atg8-PE to the site of autophagosome formation |
| — | ATG23 | peripheral membrane protein | facilitates Atg9 trafficking |
| SNX4 | ATG24 [19] | a member of the BAR domain family of proteins | inhibits the number of APs |
| — | ATG27 [20] | transmembrane protein | retrieval of Atg9 from the vacuole |
| RB1CC1/FIP200 | ATG17 | PI3P-binding effector | — |
| — | ATG29 | ternary complex with Atg17 and Atg31 | the Atg29-Atg31-Atg17 complex [21] forms a dimer with two crescents for fusion into the expanding phagophore |
| — | ATG31 | ternary complex with Atg17 and Atg29 | — |
| — | ATG32 [22] | outer mitochondrial membrane protein | essential for the initiation of mitophagy; facilitates mitochondrial capture in phagophores |
| ATG101 | — | interacts with Atg13; maintains ULK1 basal phosphorylation | interacts with the ULK1 complex via direct binding to ATG13 to induce the formation of APs |

Figure 1
Overview of the autophagy process.

The ULK1/ATG1-ATG13-FIP200-ATG101 protein kinase complex, the VPS (vacuolar protein sorting) 34 (VPS34) complex, the ATG9 trafficking system, the ATG5-ATG12-ATG16L1 complex, and the two ubiquitin-like proteins ATG12 and ATG8/LC3 with their conjugation systems have all been found to drive the formation and expansion of the phagophore, which eventually seals to form the complete AP. Meanwhile, AP formation is highly inducible. Amino acid starvation can induce autophagy, whereas the protein kinase complex TORC1/mTORC1 suppresses AP formation under nutrient-rich conditions. Actin is also necessary for starvation-mediated autophagy, acting through the Arp2/3 complex and WHAMM, and actin depolymerization participates in the formation of APs at a very early stage rather than in the maturation steps. In addition, the deconjugation of ATG8-phosphatidylethanolamine (PE) is also required for efficient AP biogenesis and optimal phagophore expansion.
## 4. Function of Autophagosome
Autophagy is an evolutionarily conserved cellular process that maintains energy homeostasis. As the marker of autophagy, the dynamics and functions of APs remain robust in mouse models of neurodegenerative disease, but AP flux is not increased even as protein accumulates along the axon [23]. Currently, stimulation of AP synthesis is often used to enhance autophagy and thereby alleviate the aggregation toxicity of proteins in neurodegeneration and aging. Retrograde transport of APs might play a role in neuronal signaling processes, promoting neuronal morphological complexity and preventing neurodegeneration. Autophagy can promote infection by picornaviruses, such as poliovirus and coxsackieviruses, because APs provide sites for replication. Although APs are double-membrane vesicles, the compositions of the outer and inner AP membranes appear to be quite different [24]. Accordingly, the two membranes play different roles: the inner autophagosomal membrane is in charge of cargo sequestration, and the outer autophagosomal membrane is in charge of fusion with the lysosomal membrane.

Meanwhile, two characteristics make APs a unique type of cellular transport carrier. First, two lipid bilayers surround the cargo; second, these giant vesicles generally have an average diameter of approximately 700 nm, which can further expand to accommodate large structures such as cellular organelles and bacteria [25]. However, accumulation of APs causes cytotoxicity; under certain stress conditions in particular, excess APs that fail to fuse with lysosomes directly induce cellular toxicity independent of apoptosis and necroptosis, which may explain why AP accumulation is a hallmark of neurodegenerative disorders including Alzheimer’s and Huntington’s disease and amyotrophic lateral sclerosis [26, 27].
## 5. Source of Autophagosome Membrane
Because autophagy is an unselective bulk degradation pathway, the specific membrane origin of APs remains obscure, though the morphological features of APs are largely common to conventional and alternative autophagy. Unlike in yeast and plant cells, there is no preautophagosomal structure (PAS) in mammalian cells. The endoplasmic reticulum exit sites (ERES), mitochondria, ER-mitochondria contact sites, ER-Golgi intermediate compartment (ERGIC), Golgi apparatus, and plasma membrane (PM) have all been suggested to supply lipids to the growing isolation membrane in mammalian cells, but the exact mechanism mediating this process remains ambiguous. It is possible that the membrane source differs depending on the cell type, growth conditions, physiological conditions, and so on, as shown in Table 2.

Table 2
The source of autophagosome membrane.
| Origin | Probable membrane parts | Induction condition | Stage of autophagosome formation | Contribution |
|---|---|---|---|---|
| Mitochondria | outer mitochondrial membrane [32] | serum, or serum and amino acid deprivation [32] | phagophore expansion | expansion of the isolation membrane (also called the phagophore) [1] |
| Endoplasmic reticulum | the rough endoplasmic reticulum [33]; subdomain of the ER termed the omegasome | amino acid starvation [34]; fasted animals | early stages of autophagosome formation [35] | phagophore expansion; elongation of isolation membranes [36] |
| Golgi | trans-Golgi network (TGN) | nitrogen starvation or fasted animals | early stages of autophagosome formation [37] | phagophore formation [38] and expansion; maturation of autophagosomes [39] |
| Plasma membrane | ATG16L vesicles; lipids of the plasma membrane | amino acid and serum starvation, or nitrogen starvation [40] | early stages of autophagosome formation [41] | formation of early autophagic precursors [42] |
| ER-mitochondria contact site | the mitochondria-associated ER membrane (MAM) | rapamycin and Torin 1 [43] | uncertain | phagophore expansion |
| ERGIC | ERGIC-enriched membrane | nutrient starvation [43] | generates an early autophagosomal membrane precursor [44] | triggers phagophore formation |
| Recycling endosomes | membrane lipids | nutrient starvation or rich medium | early stages of autophagosome formation | supplies membrane lipids for autophagosome formation |
## 6. ER-Mitochondria Contact Sites [28]
ER-mitochondria contact sites have gained much attention recently. In 1952, Wilhelm Bernhard first reported ER-mitochondria contact sites in rat liver in electron micrographs [29]. These contact sites are essential for several key cellular processes, such as calcium homeostasis, lipid metabolism, autophagy regulation, mitochondrial dynamics [30], cell survival and energy metabolism, and protein folding [31]. Correct maintenance of the ER-mitochondria interface is a critical part of the autophagic process; moreover, the sites where phagophores emerge correlate strikingly with the sites of contact between the ER and mitochondria. Upon starvation, components of the autophagy-specific class III PI3K (ATG14L, Vps34, Beclin1, and Vps15) all accumulate in the mitochondria-associated membrane (MAM) fraction and are probably recruited to ER-mitochondria contact sites upon autophagy induction, and the autophagosome-formation marker ATG5 also localizes to the ER-mitochondria contact site until AP formation is complete. Furthermore, AP formation is significantly suppressed in mitofusin 2-knockout cells, in which the ER-mitochondria contact sites are disrupted [32].
## 7. Endoplasmic Reticulum Exit Sites
As the largest organelle in the cell, the ER lies in close proximity to other endomembrane compartments because it synthesizes membrane lipids and proteins and transports them outward, establishing membrane-membrane contact sites (MCS) that facilitate signaling events, modulate dynamic organelle processes, and exchange lipids and ions. ERES, the specialized regions of the ER where COPII transport vesicles are generated, are thought to be spatially, physically, and functionally linked to APs. Early autophagic structures tightly associate with the ER membrane through the omegasome, a subdomain positive for double FYVE-containing protein 1 (DFCP1), a phosphatidylinositol-3-phosphate (PI3P)-binding protein; the omegasome, the isolation membrane, and the AP are probably all derived from the ER [34]. Maday [45] likewise proposed that APs are generated at DFCP1-positive subdomains of the ER in the distal end of the axon, distinct from ER exit sites, in primary neurons. Studies have found [46] that lysosome membrane-associated protein-2 (LAMP-2), a heavily glycosylated type-1 membrane protein, is critical for translocating syntaxin-17 (STX17) to autophagosomal membranes. Using electron tomography, Sanchez-Wandelmer et al. [47] found that ERES are core elements in the formation of isolation membranes and that the ER associates with extending isolation membranes (phagophores) in mammalian cells. However, contradictory data have emerged indicating that only 30% of all APs are associated with the ER or its specialized regions [36].
## 8. Mitochondria
The mitochondria are central organelles in fundamental metabolic activities, iron and calcium homeostasis, and the signal transduction of various cellular pathways. Mitochondrial dysfunction and dysregulation may lead to many human maladies, including cardiovascular diseases, neurodegenerative disease, and cancer. A number of reports suggest [32] a connection between outer mitochondrial membrane lipids and proteins and AP formation, with the outer mitochondrial membrane proposed to donate a flow of lipids and membrane proteins to the AP. The early autophagy protein ATG5 and the autophagosomal marker LC3 translocate to puncta localized on the outer membrane of mitochondria following starvation, suggesting that mitochondria play a central role in starvation-induced autophagy. Meanwhile, studies have found mitochondrial membrane donation to AP formation in both basal (in the presence of serum and vehicle) and drug-induced autophagy in a human breast cancer cell line, rather than mitochondria simply being engulfed by the forming AP [48]. Other studies also show that the connection between the ER and mitochondria is crucial, because in its absence starvation-induced APs are not formed [32]. The counterargument is that mitochondria contribute nothing to the source of AP membranes, because mammalian ATG9 is found to localize only to the trans-Golgi network (TGN) and late endosomes, not to mitochondria [49], while ATG9-containing compartments are a source of membranes for the formation and/or expansion of APs.
## 9. Golgi Apparatus
The Golgi apparatus is a major glycosylation site involved in protein and lipid synthesis, modification, and secretion, and it has been proposed as a pivotal membrane source for mammalian AP formation. This was first observed in developing invertebrate fat body cells by Locke and Collins in 1965. Subsequent studies found that the Golgi complex contributes to an early stage of autophagy [50]. At cell telophase, Golgi structures within APs are again observed distributed at the cell periphery, when the Golgi apparatus is known to reassemble. On this basis, it was proposed that the Golgi apparatus is a membrane source for autophagosomal growth. As the only transmembrane ATG protein, ATG9 has been associated with the Golgi apparatus and may be involved in providing membrane for AP formation [51]. However, others have argued that mammalian ATG9 (mATG9 and ATG9L1) associates with many other compartments as well, including recycling endosomes, early endosomes, and late endosomes [37]. It is possible that all of these organelles participate in AP formation.
## 10. Plasma Membrane
The PM, which forms the barrier between the cytoplasm and the environment [52], plays critical roles in mediating the secretion of virulence factors, endocytosis, cell wall synthesis, and invasive hyphal morphogenesis. Meanwhile, proteins in the PM also mediate nutrient transport and sense pH, osmolarity, nutrients, and other factors in the extracellular environment. The ability of the PM to contribute to AP formation may be particularly important at times of increased autophagy in mammalian cells. The PM’s large surface area might act as a massive membrane store that allows cells to undergo cycles of AP synthesis at much higher rates than under basal conditions without compromising other processes. ATGs and membranes that are necessary for AP formation originate from the PM [47]. Puri et al. found that ATG16L1 associates with clathrin-coated pits; after internalization and uncoating, the ATG16L1-associated PM becomes associated with phagophore precursors, which mature into phagophores and then APs [53].
## 11. Recycling Endosomes
In mammalian cells, the endosomal system is extremely dynamic and generates several structurally and functionally distinct compartments, namely, early/recycling endosomes (REs), late endosomes, and lysosomes. The identity of endosomes is ensured by the specific localization of regulators. REs consist of a tubular network that emerges from vacuolar sorting endosomes and diverts cargoes toward the cell surface, the Golgi, or lysosome-related organelles. REs are also implicated in AP formation. mATG9, trafficking from the PM to the REs, is essential for the initiation and progression of autophagy. Orsi and colleagues found that mATG9-positive structures interact dynamically with phagophores and APs. TBC1D14, a Tre-2/Bub2/Cdc16 (TBC) domain protein, regulates Tfn receptor- (TfnR-) positive REs, which are required for AP formation. As a positive regulator of AP formation, the membrane remodeling protein sorting nexin 18 (SNX18) is required for ATG9A trafficking from REs and for the formation of ATG16L1- and WIPI2-positive AP precursor membranes [54]. ATG16L1- and mATG9-positive vesicles are present at the same sites on REs, and reducing membrane egress from the REs increases the formation of APs. All of this indicates that REs are probably membrane donors for SNX18-mediated AP biogenesis.
## 12. Endoplasmic Reticulum-Golgi Intermediate Compartment
ERGIC structures may move from ERES to the Golgi apparatus by tracking along microtubules. Membrane traffic between the ER and the Golgi is bidirectional and occurs via mechanisms similar to those at other membrane contact sites. Recently, the ERGIC, a membrane compartment between the ER and Golgi responsible for cargo sorting and recycling, has been proposed as another membrane source for the phagophore. At the ER-Golgi interface, coat protein complex I (COPI) vesicles bud and facilitate retrograde transport from the Golgi and ERGIC, while COPII vesicles may be the precursors of the phagophore membrane. As the donor membrane, the ERGIC is a sorting station undergoing dynamic membrane exchange with COPI and COPII vesicles [55], the latter of which are thought to be a source of membrane for APs at the ERGIC. Ge et al. [44] also found that the generation of COPII vesicles from the ERGIC could thus be a special event in autophagy-related membrane mobilization induced by starvation-driven remodeling of ERES. Drugs that disrupt the ERGIC also suppress LC3 conjugation and LC3 dot formation.
## 13. ER-Plasma Membrane Contact Sites (ER-PMcs)
ER-PMcs are also mobilized for AP biogenesis. Studies have identified several functions of ER-PMcs, for example, the regulation of Ca2+ signaling [56] and the maintenance of lipid homeostasis. Nascimbeni and colleagues revealed that ER-PMcs are tethered by extended synaptotagmin (E-Syt) proteins, which are essential for autophagy-associated phosphatidylinositol-3-phosphate (PI3P) synthesis at the cortical ER membrane and thereby modulate mammalian AP biogenesis [57].

According to other reports, ATG9-containing cytoplasmic vesicles (ATG9 vesicles) [58], ER-lipid droplet (LD) contact sites [59], and COPII vesicles [11] are also considered sources of membrane for building the AP. Still others think that ATG9-associated membranes do not fuse with APs and only regulate autophagy through transient structural or catalytic functions [60].
## 14. Conclusion
Lowering the accumulation of APs may be a treatment option for neurodegenerative diseases marked by protein aggregates, so it is important to establish where the AP membrane comes from and how it is formed. In recent years, these questions have attracted a great deal of interest. The ER, mitochondria, Golgi complex, and plasma membrane have all been proposed as sources of autophagosomal membranes. Indeed, a growing body of research indicates that the AP membrane probably has multiple sources, and these diverse origins may interact with one another. The differing conclusions reached by different laboratories could be due in part to the different experimental approaches and techniques used, and the relative contribution of each source under any one set of conditions remains to be determined. Although distinct sources of AP membranes have been proposed, it is not clear to what extent they are mutually exclusive or whether they coalesce and cooperate. Much more research is needed.
---
*Source: 1012789-2018-09-24.xml* | 2018 |
# Association between Timing of Surgical Intervention and Mortality in 15,813 Acute Pancreatitis
**Authors:** Lan Lan; Jiawei Luo; Xiaoyan Yang; Dujiang Yang; Mengjiao Li; Fangwei Chen; Nianyin Zeng; Xiaobo Zhou
**Journal:** Computational and Mathematical Methods in Medicine
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1012796
---
## Abstract
Objective. To quantify the relationship between the timing of surgical intervention and the risk of death in necrotizing pancreatitis. Methods. A generalized additive model was applied to quantify the relationship between surgical time (from the onset of acute pancreatitis to first surgical intervention) and risk of death, adjusted for demographic characteristics, infection, organ failure, and important lab indicators extracted from the Electronic Medical Record of West China Hospital of Sichuan University. Results. We retrospectively analyzed 1,176 inpatients who had undergone pancreatic drainage, pancreatic debridement, or pancreatectomy among 15,813 patients with acute pancreatitis. When surgical time was either modelled alone or adjusted for infection or organ failure, an L-shaped relationship between surgical time and risk of death was present. When surgical time was within 32.60 days, the risk of death was greater than 50%. Conclusion. There is an L-shaped relationship between the timing of surgical intervention and the risk of death in necrotizing pancreatitis.
---
## Body
## 1. Introduction
The indications for surgical intervention in acute pancreatitis (AP) are mainly secondary infection of pancreatic or peripancreatic necrosis, compression symptoms, and organ failure. It is well known that early debridement is associated with higher morbidity and mortality, and recommendations are to delay surgery by at least 4 weeks after the acute pancreatitis episode. Guidelines on the surgical timing of necrotizing pancreatitis from the United States, United Kingdom, Italy, Finland, and Japan recommend delaying as far as possible, without recommendations for individual patients [1–6]. Such recommendations, lacking detail, can lead to large differences in the selection of the best surgical timing in practice.

Previous studies [7, 8] on the timing of surgical intervention mostly calculated the time from admission. The time of admission of each patient is susceptible to a variety of factors, such as economic factors and the availability of medical resources. Because the time before admission varies widely, calculations based on the time of admission often carry a certain error. A more reasonable evaluation should be calculated from the onset of AP (the time of onset of abdominal pain). Additionally, previous studies were mostly qualitative. One prospective study followed 223 patients with well-defined early and late intervention, with subgroup analyses for multiorgan failure and infected necrosis [8]. However, such studies cannot continuously give the risk of death corresponding to a given point in time. Infection and organ failure have been used as key factors in deciding whether to operate and are considered the determinants of mortality in patients with necrotizing pancreatitis [9, 10]. It has been observed that organ failure is more likely to determine mortality in AP [11, 12]. However, a prospective cohort study from the Netherlands showed no associations between infection, onset of organ failure, duration of organ failure, and mortality in patients with necrotizing pancreatitis [13]. Moreover, pancreatic amylase is one of the criteria for the diagnosis of AP [14]. High-density lipoprotein within 48 hours after admission is a good predictor of the severity of AP [15], so the effect of severity can be adjusted for using early high-density lipoprotein. White blood cell count on admission is a good indicator of infection and can be used to adjust for the impact of infection on mortality [16]. Creatinine is a diagnostic criterion for renal failure [14]. Collecting this information prospectively is labor intensive, which often results in a small sample size. It is therefore valuable that this information can be obtained from the Electronic Medical Record (EMR) without extra cost, enabling research based on a large sample size.

Therefore, we applied a generalized additive model to quantify the relationship between surgical time (from the onset of AP to first surgical intervention) and risk of death among 15,813 inpatients diagnosed with AP in the EMR, adjusting for demographic characteristics, infection, organ failure, and important lab indicators.
## 2. Materials and Methods
### 2.1. Study Setting and Population
The surgical approaches for necrotizing pancreatitis can be classified into three categories: drainage, pancreatectomy, and removal of pancreatic necrotic tissue plus extensive drainage [5]. Therefore, we defined the study patients as follows: (1) diagnosis of AP on admission based on ICD codes (ICD-9: 577.0; ICD-10: K85) and (2) at least one surgical intervention, including pancreatic drainage, pancreatic debridement, or pancreatectomy, in the same encounter. Initially, 15,813 patients diagnosed with acute pancreatitis were included. After extracting the patients’ surgical records, 1,176 patients were finally included (see Figure 1). This study retrospectively collected data on patients with AP and followed the STROBE guidelines [17] for observational studies. The research protocol was approved by the ethics review board of West China Hospital of Sichuan University, and the need for informed consent was waived owing to the retrospective nature of the study.

Figure 1
Flow diagram of this study.
### 2.2. Data Collection and Definitions
After admission, all patients diagnosed with AP at West China Hospital of Sichuan University initially received conventional treatment. The main etiologies were biliary disease, alcohol abuse, and others. When abdominal pain, severe clinical deterioration, or clinical signs of sepsis persisted or recurred, contrast-enhanced CT (CECT) was performed. Patients with confirmed or suspected infected necrosis were advised to receive surgical intervention based on the CT results. Experienced surgeons then discussed each case with the radiologist to decide the type and timing of surgical intervention, which was delayed as much as possible beyond four weeks from onset. When patients had persistent clinical manifestations of sepsis, prompt surgical intervention was considered. The data were retrospectively extracted from the EMR of West China Hospital of Sichuan University from 2010 to 2018, including demographic characteristics, lab tests, vital signs, and death information. A patient was classified as infected if bacterial cultures of pancreatic or peripancreatic drainage fluid, pus, or secretions were positive. Respiratory failure was defined as a partial pressure of oxygen on blood gas analysis of less than 60 mmHg or the use of a ventilator. Circulatory failure was defined as diastolic blood pressure less than 60 mmHg or systolic blood pressure less than 90 mmHg together with the use of vasoactive drugs. Kidney failure was defined as creatinine greater than 177 μmol/L. The time from the onset of AP to admission was elicited by physicians. Lab test results were extracted from the laboratory information system, and clinical events (vital signs, etc.) were extracted from the nursing system.
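For illustration, the organ-failure definitions above can be encoded as simple flags over EMR-derived variables. This is a minimal sketch only; the column names (pao2, on_ventilator, sbp, dbp, on_vasoactive, creatinine) are assumptions, since the paper does not describe its actual EMR schema.

```r
# Hedged sketch: derive the paper's organ-failure definitions from
# hypothetical EMR columns. All column names are illustrative assumptions.
flag_organ_failure <- function(d) {
  within(d, {
    respiratory_failure <- (pao2 < 60) | on_ventilator            # PaO2 < 60 mmHg or ventilator use
    circulatory_failure <- (dbp < 60 | sbp < 90) & on_vasoactive  # hypotension plus vasoactive drugs
    kidney_failure      <- creatinine > 177                       # creatinine > 177 umol/L
  })
}
```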
### 2.3. Statistical Analysis
We used regular expressions [18] to extract, from the clinical notes in the EMR, the patients who had undergone the specific surgical interventions and the time of onset of AP among patients diagnosed with AP on admission. We explored the differences between inpatients who died and those who survived after the specific surgical intervention. The baselines of the two groups were compared, including important lab indicators, infection, and organ failure. The t-test and Chi-square test were used to evaluate differences between the two groups.

Considering that the relationships between many clinical factors and risk of death are often nonlinear, and that the generalized additive model [18] allows each variable to enter the model in a different nonlinear form, the generalized additive model was used to explore the association between the timing of surgical intervention and risk of death, controlling for potential confounding factors such as infection and organ failure. We assumed that the death of a patient obeys a Bernoulli distribution. The formula of the generalized additive model is $g(Y_i) = \alpha + f(x_{1i}) + f(x_{2i}) + \cdots$, where $Y$ is death or not, $\alpha$ is the intercept, $x$ is an independent variable, $i$ indicates the $i$th patient, and $f$ is a nonlinear function of the independent variable. In this study, $f$ is a smooth cubic spline regression function, written $s(\cdot)$. The backfitting method was used to fit the model, and the hyperparameter was selected by the Akaike information criterion (AIC). After adjustment for demographic characteristics and important lab indicators, we first additionally adjusted for infection, second for organ failure, and finally modelled surgical time alone. When a variable with missing values was used, patients with missing values were excluded. P values less than 0.05 were considered statistically significant. All data analyses were done in R.
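As a concrete illustration of this modelling step, the sketch below fits a binomial GAM with smooth terms for the continuous covariates. It is a hedged sketch only: the paper reports backfitting (suggesting the classic `gam` package), whereas this example uses the `mgcv` package with cubic regression splines; the data frame `surg` and its column names are assumptions, not the authors' actual code.

```r
library(mgcv)  # penalized-spline GAMs; the authors' exact package is not stated

# `surg` is a hypothetical data frame with one row per patient:
# death (0/1), surg_time (days from AP onset to first surgery), age,
# gender, infection, hdl, amylase, wbc.
fit <- gam(
  death ~ s(surg_time, bs = "cr") + s(age, bs = "cr") + gender + infection +
          s(hdl, bs = "cr") + s(amylase, bs = "cr") + s(wbc, bs = "cr"),
  family = binomial(link = "logit"),  # Bernoulli outcome, logit link
  data   = surg
)

summary(fit)          # chi-square tests for smooth terms, z-tests for linear terms
AIC(fit)              # basis for the AIC-guided model selection described above
plot(fit, pages = 1)  # partial effect of each smooth term, cf. Figures 2 and 3
```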
## 3. Results
### 3.1. Baseline Characteristics
In this study, we analyzed 1,176 patients (mean age 45.57 ± 12.72 years; 780 (66.33%) male) who had undergone surgical intervention (pancreatic drainage, pancreatic debridement, or pancreatectomy) among the 15,813 patients diagnosed with AP on admission. The numbers of patients with respiratory failure, circulatory failure, and kidney failure before surgical intervention were 36 (3.06%), 522 (44.39%), and 171 (14.54%), respectively. There were 463 (39.37%) infected patients. The time from the onset of AP to admission and to first surgical intervention was 23.05 ± 35.42 days and 34.43 ± 34.95 days, respectively. The total hospital stay was 31.54 ± 25.03 days. Sixty-two (5.27%) patients died in the hospital after surgical intervention.

Baseline characteristics were compared between patients who died and those who survived after surgical intervention. There was no difference between the two groups with respect to age and gender. High-density lipoprotein on admission was slightly higher in survivors than in patients who died. Mean amylase on admission and maximum preoperative creatinine in patients who died were 2.45 and 2.44 times those of survivors, respectively. White blood cell counts on admission were similar. The proportions of infection and organ failure were higher in the death group than in the surviving group, except for respiratory failure, which showed no statistical difference. The times from the onset of AP to admission and to surgical intervention were shorter in patients who died than in survivors, while total hospital stay was longer without statistical significance (see Table 1).

Table 1
Baseline characteristics of patients who died and those who survived after surgical intervention.
| Characteristics | Died (n=62) | Survived (n=1,114) | P |
|---|---|---|---|
| Age (year, mean (SD)) | 48.21 (13.32) | 45.43 (12.67) | 0.094 |
| Male, n (%) | 39 (62.90) | 741 (66.52) | 0.654 |
| Lab indicators | | | |
| High-density lipoprotein on admission^a (mmol/L, mean (SD)) | 0.43 (0.31) | 0.60 (0.37) | 0.001* |
| Amylase on admission^a (U/L, mean (SD)) | 635.79 (647.36) | 259.47 (537.16) | <0.001* |
| White blood cell count on admission^a (10^9/L, mean (SD)) | 11.92 (5.25) | 10.82 (6.51) | 0.208 |
| Maximum preoperative creatinine^a (μmol/L, mean (SD)) | 217.34 (190.20) | 89.10 (103.13) | <0.001* |
| Infection, n (%) | 37 (59.68) | 426 (38.24) | 0.001* |
| Organ failure before surgical intervention | | | |
| Respiratory failure, n (%) | 2 (3.23) | 34 (3.05) | 1.000 |
| Circulatory failure, n (%) | 49 (79.03) | 473 (42.46) | <0.001* |
| Kidney failure, n (%) | 37 (59.68) | 134 (12.03) | <0.001* |
| Time from onset to admission (day, mean (SD)) | 11.55 (14.96) | 23.69 (36.12) | 0.009* |
| Time from onset to surgical intervention (day, mean (SD)) | 23.03 (16.33) | 35.07 (35.60) | 0.008* |
| Total hospital stay (day, mean (SD)) | 32.21 (27.54) | 31.50 (24.90) | 0.829 |

SD: standard deviation; n (%): number and percentage; * indicates statistical significance; ^a missing rates of 2.7%, 8.3%, 4.2%, and 2.7% for high-density lipoprotein, amylase, white blood cell count on admission, and maximum preoperative creatinine, respectively.
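The P values in Table 1 come from the two-sample tests described in Section 2.3. A minimal sketch, continuing the hypothetical `surg` data frame from earlier, might look like this:

```r
# Hedged sketch of the Table 1 comparisons: Welch t-tests for continuous
# variables and chi-square tests for proportions. Column names (died, age,
# infection) are illustrative assumptions, not the authors' schema.
t.test(age ~ died, data = surg)               # continuous baseline, e.g., age
chisq.test(table(surg$died, surg$infection))  # categorical baseline, e.g., infection
```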
### 3.2. Modelling Surgical Time and Mortality Adjusted for Infection
First, we modelled surgical time and mortality adjusted for infection as well as other covariates. The formula is as follows: $\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + \beta_4 x_{4i} + s(x_{5i}, \beta_5) + s(x_{6i}, \beta_6) + s(x_{7i}, \beta_7)$, where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, $x_4$ is infection or not, $x_5$ is high-density lipoprotein on admission, $x_6$ is amylase on admission, and $x_7$ is white blood cell count on admission (n = 708, R² = 18.2%). Amylase on admission, high-density lipoprotein on admission, and surgical time were statistically associated with death after adjustment for age, gender, infection, and white blood cell count on admission (see Table 2).

Table 2
Model results for surgical time and mortality adjusted for infection.
| Covariates | β | SD | Z or χ² | P |
|---|---|---|---|---|
| Intercept | -4.117 | 0.502 | -8.200 | <0.001* |
| s(time from onset to surgical intervention) | - | - | 4.282 | 0.042* |
| s(age) | - | - | 0.836 | 0.563 |
| Male | -0.300 | 0.453 | -0.662 | 0.508 |
| Infection | 0.597 | 0.456 | 1.308 | 0.191 |
| s(high-density lipoprotein) | - | - | 7.037 | 0.022* |
| s(amylase) | - | - | 20.197 | <0.001* |
| s(white blood cell count) | - | - | 0.952 | 0.575 |

'-': no traditional slope concept for smooth terms; * indicates statistical significance.

We further analyzed the independent relationships between the risk factors and risk of death. Figure 2 shows a roughly L-shaped relationship between the time from the onset of AP to surgical intervention and risk of death, indicating that premature surgery carries a higher risk of death than postponed surgery. The risk of death increased with older age, lower high-density lipoprotein, and higher amylase on admission. The risk of death as a function of white blood cell count on admission first rose and then fell. The shaded areas represent the 95% confidence intervals.

Figure 2
The relationship between risk of death and five risk factors.
### 3.3. Modelling Surgical Time and Mortality Adjusted for Infection and Organ Failure
Second, we modelled surgical time and mortality adjusted for infection and organ failure as well as other covariates. The formula is as follows: $\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + \beta_4 x_{4i} + s(x_{5i}, \beta_5) + s(x_{6i}, \beta_6) + s(x_{7i}, \beta_7) + s(x_{8i}, \beta_8) + \beta_9 x_{9i} + \beta_{10} x_{10i} + \beta_{11} x_{11i}$, where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, $x_4$ is infection or not, $x_5$ is high-density lipoprotein on admission, $x_6$ is amylase on admission, $x_7$ is white blood cell count on admission, $x_8$ is maximum preoperative creatinine, $x_9$ is respiratory failure or not, $x_{10}$ is circulatory failure or not, and $x_{11}$ is kidney failure or not (n = 708, R² = 31.5%). Amylase on admission was statistically associated with death after adjustment for surgical time, age, gender, infection, organ failure, and the other lab indicators (see Table 3).

Table 3
Model results for surgical time and mortality adjusted for infection and organ failure.
| Covariates | β | SD | Z or χ² | P |
|---|---|---|---|---|
| Intercept | -4.586 | 0.608 | -7.541 | <0.001* |
| s(time from onset to surgical intervention) | - | - | 1.489 | 0.235 |
| s(age) | - | - | 0.919 | 0.459 |
| Male | -0.895 | 0.516 | -1.734 | 0.083 |
| Infection | 0.531 | 0.505 | 1.051 | 0.293 |
| s(high-density lipoprotein) | - | - | 1.739 | 0.268 |
| s(amylase) | - | - | 12.749 | 0.005* |
| s(white blood cell count) | - | - | 0.697 | 0.665 |
| s(creatinine) | - | - | 2.845 | 0.408 |
| Respiratory failure | -1.794 | 1.409 | -1.273 | 0.203 |
| Circulatory failure | 0.858 | 0.498 | 1.722 | 0.085 |
| Kidney failure | 1.306 | 0.760 | 1.718 | 0.086 |

'-': no traditional slope concept for smooth terms; * indicates statistical significance.

The independent relationships between the risk factors and risk of death were also analyzed. Figure 3 shows that after the inclusion of more variables, the relationships of surgical time, age, high-density lipoprotein, amylase, and white blood cell count with risk of death remained similar. The risk of death was high within a specific range of creatinine and low at both ends of the creatinine range.

Figure 3
The relationship between risk of death and six risk factors.
### 3.4. Modelling Surgical Time and Mortality
Finally, to isolate the relationship between surgical time and risk of death, we developed a model adjusted for age, gender, and high-density lipoprotein, formulated as $\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + s(x_{4i}, \beta_4)$, where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, and $x_4$ is high-density lipoprotein on admission. Age and gender were used to adjust for basic characteristics, and high-density lipoprotein for the severity of AP. On this basis, the relationship between surgical time and risk of death in the infected and noninfected groups was also studied. In this section, we applied the generalized additive model to different samples: all patients (n = 1,144, R² = 7.92%), infected patients (n = 463, R² = 5.96%), and noninfected patients (n = 681, R² = 13.40%). There was a statistical correlation between surgical time and mortality in all three groups.

Figure 4 shows that the relationship between surgical time and death was similar among all, infected, and noninfected patients. Because the risk of death was very low beyond 100 days of surgical time, we plotted surgical time only within 100 days. Surgical times of 32.60, 32.84, and 36.55 days in all, infected, and noninfected patients, respectively, corresponded to a 50% risk of death; the risk of death exceeded 50% when surgical time was shorter than these thresholds.

Figure 4
The relationship between risk of death and surgical time ((a1) is for all patients, (a2) is for infected patients, and (a3) is for noninfected patients).
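To show how such per-day risk estimates and 50% thresholds can be read off a fitted model, the sketch below predicts risk over a grid of surgical times and locates the crossing point. It is a hedged illustration continuing the earlier sketch: `surg`, the column names, and the covariate values held fixed (medians and the modal gender) are assumptions; the paper does not state how its curves were standardized.

```r
library(mgcv)

# Final model: surgical time plus age, gender, and HDL (hypothetical columns).
fit3 <- gam(death ~ s(surg_time, bs = "cr") + s(age, bs = "cr") + gender +
              s(hdl, bs = "cr"),
            family = binomial, data = surg)

# Risk curve over 1-100 days, other covariates held at representative values.
grid <- data.frame(
  surg_time = seq(1, 100, by = 0.1),
  age       = median(surg$age, na.rm = TRUE),
  gender    = "male",                        # modal category, an assumption
  hdl       = median(surg$hdl, na.rm = TRUE)
)
grid$risk <- predict(fit3, newdata = grid, type = "response")

# First surgical time at which the predicted risk of death falls below 50%
grid$surg_time[which(grid$risk < 0.5)[1]]    # ~32.6 days in the paper's cohort
```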
## 4. Discussion
This study investigated the relationship between surgical timing and death in necrotizing pancreatitis based on a large EMR sample. We modelled the AP inpatients who underwent the specific surgeries (pancreatic drainage, pancreatic debridement, or pancreatectomy), almost all of whom were patients with necrotizing pancreatitis. To the best of our knowledge, there has been no quantitative study of the relationship between the timing of surgical intervention (from the onset of AP to first surgical intervention) and risk of death in necrotizing pancreatitis; this study is the first. There is an L-shaped relationship between surgical time and risk of death in necrotizing pancreatitis, showing that premature surgery carries a higher risk of death among patients with necrotizing pancreatitis. This relationship remained robust in sensitivity analyses.

In the descriptive analyses, the time from the onset of AP to surgical intervention, time from the onset of AP to admission, high-density lipoprotein on admission, amylase on admission, maximum preoperative creatinine, infection, circulatory failure, and kidney failure differed statistically with respect to death. These variables were then entered into the model simultaneously to check whether they had a real impact on death. In the first model, incorporating infection and other covariates, the results showed that the lower the high-density lipoprotein on admission, the higher the risk of death, consistent with a previous study [15]. In the second model, which included infection and organ failure as well as other covariates, the relationship was similar but not statistically significant. Amylase was statistically significant whether infection or organ failure was included: the higher the amylase, the higher the risk of death, and the risk of death exceeded 50% when amylase was over 175.54 mmol/L. The risk of death as a function of white blood cell count first rose slowly and then fell quickly. One plausible reason is that when the white blood cell count exceeds 10×10⁹/L, doctors give antibiotic treatment to bring it back into the normal range and reduce the probability of infection; the risk of death therefore declines beyond that point. In both models, infection was not statistically significant, similar to the results of Guo et al. [11]. Our proposed model can handle collinear independent variables. Respiratory failure, circulatory failure, kidney failure, and creatinine were not statistically significant after inclusion in the second model, consistent with the findings of the Dutch Pancreatitis Study Group [13]. However, we found that the risk of death was low when creatinine was very low or very high, and it was higher than 50% for creatinine ranging from 73.55 to 818.06 μmol/L. As for the relationship between age and death, although there was no statistical difference in the first and second models, the risk of death increased with age.

Some covariates may not show statistical differences, but as can be seen from the figures, these variables vary systematically with risk of death, and our model gives a 50% risk-of-death threshold, which is worthy of surgeons' attention. After adjusting for infection, the association between surgical time and death was still statistically significant, but after adjusting for organ failure it was not. A lack of statistical significance does not mean that there is no real association between the two.
Statistical difference is related to many factors such as the choice of independent variables and sample size. Therefore, in order to find out the relationship between surgical time and risk of death, we finally modelled surgical time adjusted for age, gender and high-density lipoprotein on admission since demographic factors can also be utilized as predictors of inpatients mortality in AP [19]. It was found that when surgical time was either modelled alone or adjusted for infection or organ failure, an L-shaped relationship was presented. Surgical time was within 32.60 days, the risk of death was greater than 50%. Not only that, but this study also obtained the mortality risk corresponding to the timing of surgical intervention at each time point. Although the relationship between surgical time and death was similar in the infected and noninfected groups, surgical time of the infected group (32.84 days) was earlier than that of the noninfected group (36.55 days) at 50% risk of death, and risk of death from early surgery for the noninfected group was 77%, which was a little higher than that (72%) of the infected group.Although amylase is one of the criteria for the diagnosis of acute pancreatitis, the relationship between amylase and the severity of acute pancreatitis is rarely reported. However, there are many reasons for the patients who have abnormal levels of amylase in their blood, including sudden inflammation of the pancreas, long-term inflammation of the pancreas, fluid-filled sac around the pancreas, pancreatic cancer, inflammation of the gallbladder, and kidney problems. The results of this study showed that the higher the amylase, the higher the risk of death. The reason for this result may be that there are other diseases that also cause high amylase, except acute pancreatitis. Under the combined effect of various diseases, the risk of death is increased. If considering the effects separately, the quantitative relationships between different surgical time and other covariates at different levels and risk of death can be a good reference for surgeons. The results of this work are based on EMR. Other hospitals can use this research strategy to obtain preliminary results and then conduct prospective design. Therefore, this study provides an important prerequisite for a prospective study.However, there are still some limitations in this study. The data is retrospectively extracted from EMR, and the performance of the model is strongly correlated with the quality of the data. The death cases were recorded during hospitalizations, and the cause of death was not available based on EMR. Due to the Chinese cultural characteristics, some patients who do not want to die in the hospital will be discharged early and those deaths will not be recorded in the EMR. Therefore, in this study, mortality was underestimated, and its relationship with surgical time was also underestimated. On the other hand, since the generalized additive model cannot analyze the interaction between variables, there may be interactions between variables. It is the limitation of the model itself.
## 5. Conclusions
In conclusion, by applying the generalized additive model, we obtained the relationship between surgical time (from the onset of AP to first surgical intervention) and risk of death in the case of controlling demographic characteristics, infection, organ failure, and important lab indicators in necrotizing pancreatitis. There is an L-shaped relationship between timing of surgical intervention and risk of death in necrotizing pancreatitis, providing an important reference for surgeons when making surgical decisions.
---
*Source: 1012796-2020-05-16.xml*

# Association between Timing of Surgical Intervention and Mortality in 15,813 Acute Pancreatitis

**Authors:** Lan Lan; Jiawei Luo; Xiaoyan Yang; Dujiang Yang; Mengjiao Li; Fangwei Chen; Nianyin Zeng; Xiaobo Zhou

**Journal:** Computational and Mathematical Methods in Medicine

(2020)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2020/1012796

---
## Abstract
Objective. To quantify the relationship between the timing of surgical intervention and the risk of death in necrotizing pancreatitis. Methods. A generalized additive model was applied to quantify the relationship between surgical time (from the onset of acute pancreatitis to the first surgical intervention) and risk of death, adjusted for demographic characteristics, infection, organ failure, and important laboratory indicators extracted from the Electronic Medical Record of West China Hospital of Sichuan University. Results. We retrospectively analyzed 1,176 inpatients who underwent pancreatic drainage, pancreatic debridement, or pancreatectomy among 15,813 patients with acute pancreatitis. Whether surgical time was modelled alone or adjusted for infection or organ failure, an L-shaped relationship between surgical time and risk of death emerged. When surgical time was within 32.60 days, the risk of death was greater than 50%. Conclusion. There is an L-shaped relationship between the timing of surgical intervention and the risk of death in necrotizing pancreatitis.
---
## Body
## 1. Introduction
Indications for surgical intervention in acute pancreatitis (AP) are mainly secondary infection of pancreatic or peripancreatic necrosis, compression symptoms, and organ failure. It is well known that early debridement is associated with higher morbidity and mortality, and recommendations are to delay surgery by at least 4 weeks after the acute pancreatitis episode. Guidelines from the United States, United Kingdom, Italy, Finland, and Japan recommend delaying surgery for necrotizing pancreatitis as long as possible but give no recommendations for individual patients [1–6]. Such recommendations, lacking detail, can lead to large differences in the choice of surgical timing in practice.

Previous studies [7, 8] on the timing of surgical intervention mostly measured time from admission. The time of admission, however, is susceptible to many factors, such as economic circumstances and the availability of medical resources, so intervals measured from admission carry considerable error; a more reasonable starting point is the onset of AP (the onset of abdominal pain). Additionally, previous studies were mostly qualitative. One prospective study of 223 patients compared well-defined early and late intervention, with subgroup analyses for multiorgan failure and infected necrosis [8], but such designs cannot give a continuous estimate of the risk of death at each time point. Infection and organ failure have been used as key factors in deciding whether to operate and are considered determinants of mortality in patients with necrotizing pancreatitis [9, 10]. Some studies observed that organ failure was more likely to determine mortality in AP [11, 12], whereas a prospective cohort study from the Netherlands found no association between infection, onset or duration of organ failure, and mortality in patients with necrotizing pancreatitis [13]. Moreover, pancreatic amylase is one of the criteria for the diagnosis of AP [14]. High-density lipoprotein within 48 hours after admission is a good predictor of the severity of AP [15], so early high-density lipoprotein can be used to adjust for severity. White blood cell count on admission is a good indicator of infection and can be used to adjust for the impact of infection on mortality [16]. Creatinine is part of the diagnostic criteria for renal failure [14]. Collecting this information prospectively is labor intensive and often yields small samples; it is therefore valuable that it can be obtained from the Electronic Medical Record (EMR) at no extra cost and studied at scale.

We therefore applied a generalized additive model to quantify the relationship between surgical time (from the onset of AP to the first surgical intervention) and risk of death among 15,813 inpatients diagnosed with AP in the EMR, adjusting for demographic characteristics, infection, organ failure, and important laboratory indicators.
## 2. Materials and Methods
### 2.1. Study Setting and Population
The surgical approaches for necrotizing pancreatitis fall into three categories: drainage, pancreatectomy, and removal of pancreatic necrotic tissue plus extensive drainage [5]. We therefore defined the study patients as follows: (1) a diagnosis of AP on admission based on ICD codes (ICD-9: 577.0; ICD-10: K85) and (2) at least one surgical intervention, including pancreatic drainage, pancreatic debridement, or pancreatectomy, within the same encounter. Initially, 15,813 patients diagnosed with acute pancreatitis were identified; after their surgical records were extracted, 1,176 patients were finally included (see Figure 1). This study retrospectively collected data from patients with AP and followed the STROBE guidelines [17] for observational studies. The research protocol was approved by the ethics review board of West China Hospital of Sichuan University, and the need for informed consent was waived owing to the retrospective nature of the study.

Figure 1
Flow diagram of this study.
### 2.2. Data Collection and Definitions
After admission, all patients diagnosed with AP at West China Hospital of Sichuan University initially received conservative treatment. The main etiologies were biliary disease, alcohol abuse, and others. When abdominal pain, severe clinical deterioration, or clinical signs of sepsis persisted or recurred, contrast-enhanced computed tomography (CECT) was performed. Patients with confirmed or suspected infected necrosis were advised to undergo surgical intervention based on the CT results. Experienced surgeons then discussed each case with the radiologist to decide the type and timing of surgical intervention, which was delayed as much as possible, preferably beyond four weeks from onset. When patients had persistent clinical manifestations of sepsis, prompt surgical intervention was considered. The data were retrospectively extracted from the EMR of West China Hospital of Sichuan University from 2010 to 2018, including demographic characteristics, laboratory tests, vital signs, and death information. A patient was classified as infected if bacterial cultures of pancreatic or peripancreatic drainage fluid, pus, or secretions were positive. Respiratory failure was defined as a partial pressure of oxygen on blood gas analysis below 60 mmHg or the use of a ventilator. Circulatory failure was defined as diastolic blood pressure below 60 mmHg or systolic blood pressure below 90 mmHg together with the use of vasoactive drugs. Kidney failure was defined as creatinine greater than 177 μmol/L. The time from the onset of AP to admission was obtained by physician interview. Laboratory results were extracted from the laboratory information system, and clinical events (vital signs, etc.) were extracted from the nursing system.
### 2.3. Statistical Analysis
We used regular expressions [18] to extract, from the clinical notes of the EMR, the patients who had undergone the specified surgical interventions and the time of onset of AP among patients diagnosed with AP on admission. We explored the differences between inpatients who died and those who survived after the specified surgical interventions. The baselines of the two groups, including important laboratory indicators, infection, and organ failure, were compared using the t-test and the chi-square test.

Because the relationships between many clinical factors and the risk of death are often nonlinear, and the generalized additive model [18] allows each variable to enter the model in a different nonlinear form, we used a generalized additive model to explore the association between the timing of surgical intervention and the risk of death while controlling for potential confounders such as infection and organ failure. We assumed that death follows a Bernoulli distribution. The generalized additive model takes the form

$$g(Y_i) = \alpha + f(x_{1i}) + f(x_{2i}) + \cdots,$$

where $Y$ indicates death or not, $\alpha$ is the intercept, $x$ denotes an independent variable, $i$ indexes the $i$th patient, and $f$ is a nonlinear function of an independent variable; in this study, $f$ is a smooth cubic spline regression function, written $s(\cdot)$. The model was fitted by backfitting, and the hyperparameters were selected by the Akaike information criterion (AIC). Starting from the adjustment for demographic characteristics and important laboratory indicators, we first additionally adjusted for infection, then for organ failure, and finally modelled surgical time alone. When a variable with missing values was used, patients with missing values were excluded. P values below 0.05 were considered statistically significant. All analyses were done in R.
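To make the model specification concrete, here is a minimal R sketch using the widely used mgcv package. Note two hedges: the authors report backfitting (as in the classic gam package) rather than mgcv's penalized-likelihood fitting, and the data frame `dat` and its column names are illustrative assumptions, not the authors' code.

```r
# Minimal sketch of the logistic GAM described above (assumed column names).
library(mgcv)

# dat: one row per patient; death is 0/1, gender and infection are factors.
fit <- gam(death ~ s(surg_time) + s(age) + gender + infection +
             s(hdl) + s(amylase) + s(wbc),
           family = binomial(link = "logit"), data = dat)

summary(fit)  # chi-square tests for each smooth term, cf. Tables 2 and 3
AIC(fit)      # the paper selects hyperparameters by the Akaike information criterion
```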
## 3. Results
### 3.1. Baseline Characteristics
In this study, we analyzed 1,176 patients (mean age 45.57 ± 12.72 years; 780 males, 66.33%) who had undergone surgical intervention (pancreatic drainage, pancreatic debridement, or pancreatectomy) among 15,813 patients diagnosed with AP on admission. The numbers of patients with respiratory failure, circulatory failure, and kidney failure before surgical intervention were 36 (3.06%), 522 (44.39%), and 171 (14.54%), respectively. There were 463 (39.37%) infected patients. The times from the onset of AP to admission and to the first surgical intervention were 23.05 ± 35.42 days and 34.43 ± 34.95 days, respectively. The total hospital stay was 31.54 ± 25.03 days. Sixty-two (5.27%) patients died in the hospital after surgical intervention.

The baselines of patients who died and those who survived after surgical intervention were compared. There was no difference between the two groups in age or gender. High-density lipoprotein on admission was slightly higher in survivors than in patients who died. Amylase on admission and maximum preoperative creatinine in patients who died were 2.45 times and 2.44 times those of survivors, respectively. White blood cell counts on admission were similar. The proportions of infection and organ failure were higher in the death group than in the surviving group, except for respiratory failure, which showed no statistical difference. The times from the onset of AP to admission and to surgical intervention were shorter in patients who died, while total hospital stay was longer, without statistical significance (see Table 1).

Table 1
Baseline characteristics of patients who died and those who survived after surgical intervention.
| Characteristics | Died (n=62) | Survived (n=1,114) | P |
|---|---|---|---|
| Age (year, mean (SD)) | 48.21 (13.32) | 45.43 (12.67) | 0.094 |
| Male, n (%) | 39 (62.90) | 741 (66.52) | 0.654 |
| **Lab indicators** | | | |
| High-density lipoprotein on admissionª (mmol/L, mean (SD)) | 0.43 (0.31) | 0.60 (0.37) | 0.001∗ |
| Amylase on admissionª (U/L, mean (SD)) | 635.79 (647.36) | 259.47 (537.16) | <0.001∗ |
| White blood cell count on admissionª (10⁹/L, mean (SD)) | 11.92 (5.25) | 10.82 (6.51) | 0.208 |
| Maximum preoperative creatinineª (μmol/L, mean (SD)) | 217.34 (190.20) | 89.10 (103.13) | <0.001∗ |
| Infection, n (%) | 37 (59.68) | 426 (38.24) | 0.001∗ |
| **Organ failure before surgical intervention** | | | |
| Respiratory failure, n (%) | 2 (3.23) | 34 (3.05) | 1.000 |
| Circulatory failure, n (%) | 49 (79.03) | 473 (42.46) | <0.001∗ |
| Kidney failure, n (%) | 37 (59.68) | 134 (12.03) | <0.001∗ |
| Time from the onset to admission (day, mean (SD)) | 11.55 (14.96) | 23.69 (36.12) | 0.009∗ |
| Time from the onset to surgical intervention (day, mean (SD)) | 23.03 (16.33) | 35.07 (35.60) | 0.008∗ |
| Total hospital stay (day, mean (SD)) | 32.21 (27.54) | 31.50 (24.90) | 0.829 |

SD: standard deviation; n (%): number and percentage; ∗ indicates statistical significance; ª missing rates of 2.7%, 8.3%, 4.2%, and 2.7% for high-density lipoprotein, amylase, white blood cell count on admission, and maximum preoperative creatinine, respectively.
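As a concrete illustration of how these group comparisons map onto the two tests named in Section 2.3, a minimal R sketch follows; the column names in `dat` are assumptions for illustration.

```r
# Sketch of the Table 1 group comparisons (assumed column names).
t.test(amylase ~ died, data = dat)          # continuous indicator vs. 0/1 death
chisq.test(table(dat$died, dat$infection))  # categorical indicator vs. death
```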
### 3.2. Modelling Surgical Time and Mortality Adjusted for Infection
Firstly, we modelled surgical time and mortality adjusted for infection as well as other covariates. The formula is as follows:

$$\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + \beta_4 x_{4i} + s(x_{5i}, \beta_5) + s(x_{6i}, \beta_6) + s(x_{7i}, \beta_7),$$

where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, $x_4$ is infection or not, $x_5$ is high-density lipoprotein on admission, $x_6$ is amylase on admission, and $x_7$ is white blood cell count on admission (n = 708, R² = 18.2%). Amylase on admission, high-density lipoprotein on admission, and surgical time were statistically associated with death after adjustment for age, gender, infection, and white blood cell count on admission (see Table 2).

Table 2
Model results for surgical time and mortality, adjusted for infection.
| Covariates | β | SD | Z or χ² | P |
|---|---|---|---|---|
| Intercept | -4.117 | 0.502 | -8.200 | <0.001∗ |
| s(time from the onset to surgical intervention) | – | – | 4.282 | 0.042∗ |
| s(age) | – | – | 0.836 | 0.563 |
| Male | -0.300 | 0.453 | -0.662 | 0.508 |
| Infection | 0.597 | 0.456 | 1.308 | 0.191 |
| s(high-density lipoprotein) | – | – | 7.037 | 0.022∗ |
| s(amylase) | – | – | 20.197 | <0.001∗ |
| s(white blood cell count) | – | – | 0.952 | 0.575 |

'–': no traditional slope concept for smooth terms; ∗ indicates statistical significance.

We further analyzed the independent relationship between each risk factor and the risk of death. Figure 2 shows a roughly L-shaped relationship between the time from the onset of AP to surgical intervention and the risk of death, indicating that premature surgery carries a higher risk of death than postponed surgery. The older the patient, the lower the high-density lipoprotein, or the higher the amylase on admission, the higher the risk of death. With increasing white blood cell count on admission, the risk of death first rose and then fell. The shaded areas represent 95% confidence intervals.

Figure 2
The relationship between risk of death and five risk factors.
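Partial-effect curves of this kind can be drawn directly from a fitted mgcv model; the sketch below assumes the illustrative `fit` from the sketch in Section 2.3 and is not the authors' plotting code.

```r
# Sketch: plot each smooth on the probability scale with a shaded 95% band.
plot(fit,
     trans = plogis,        # transform the link scale to risk of death
     shift = coef(fit)[1],  # add the intercept so curves read as probabilities
     shade = TRUE,          # shaded 95% confidence interval
     pages = 1)             # all panels on one page, as in Figure 2
```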
### 3.3. Modelling Surgical Time and Mortality Adjusted for Infection and Organ Failure
Secondly, we modelled surgical time and mortality adjusted for infection and organ failure as well as other covariates. The formula is as follows:

$$\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + \beta_4 x_{4i} + s(x_{5i}, \beta_5) + s(x_{6i}, \beta_6) + s(x_{7i}, \beta_7) + s(x_{8i}, \beta_8) + \beta_9 x_{9i} + \beta_{10} x_{10i} + \beta_{11} x_{11i},$$

where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, $x_4$ is infection or not, $x_5$ is high-density lipoprotein on admission, $x_6$ is amylase on admission, $x_7$ is white blood cell count on admission, $x_8$ is maximum preoperative creatinine, $x_9$ is respiratory failure or not, $x_{10}$ is circulatory failure or not, and $x_{11}$ is kidney failure or not (n = 708, R² = 31.5%). Amylase on admission was statistically associated with death after adjustment for surgical time, age, gender, infection, organ failure, and the other laboratory indicators (see Table 3).

Table 3
Model results for surgical time and mortality, adjusted for infection and organ failure.
| Covariates | β | SD | Z or χ² | P |
|---|---|---|---|---|
| Intercept | -4.586 | 0.608 | -7.541 | <0.001∗ |
| s(time from the onset to surgical intervention) | – | – | 1.489 | 0.235 |
| s(age) | – | – | 0.919 | 0.459 |
| Male | -0.895 | 0.516 | -1.734 | 0.083 |
| Infection | 0.531 | 0.505 | 1.051 | 0.293 |
| s(high-density lipoprotein) | – | – | 1.739 | 0.268 |
| s(amylase) | – | – | 12.749 | 0.005∗ |
| s(white blood cell count) | – | – | 0.697 | 0.665 |
| s(creatinine) | – | – | 2.845 | 0.408 |
| Respiratory failure | -1.794 | 1.409 | -1.273 | 0.203 |
| Circulatory failure | 0.858 | 0.498 | 1.722 | 0.085 |
| Kidney failure | 1.306 | 0.760 | 1.718 | 0.086 |

'–': no traditional slope concept for smooth terms; ∗ indicates statistical significance.

The independent relationships between the risk factors and the risk of death were also analyzed. Figure 3 shows that, after the inclusion of more variables, the relationships of surgical time, age, high-density lipoprotein, amylase, and white blood cell count with the risk of death remained similar. The risk of death was high over a specific range of creatinine and low at both extremes.

Figure 3
The relationship between risk of death and six risk factors.
### 3.4. Modelling Surgical Time and Mortality
Finally, we developed a model adjusted for age, gender, and high-density lipoprotein, formulated as

$$\operatorname{logit}(Y_i) = \alpha + s(x_{1i}, \beta_1) + s(x_{2i}, \beta_2) + \beta_3 x_{3i} + s(x_{4i}, \beta_4),$$

where $x_1$ is the time from the onset of AP to surgical intervention, $x_2$ is age, $x_3$ is gender, and $x_4$ is high-density lipoprotein on admission, to isolate the relationship between surgical time and risk of death. Age and gender were used to adjust for basic characteristics, and high-density lipoprotein for the severity of AP. On this basis, the relationship between surgical time and risk of death was also studied separately in the infected and noninfected groups. In this section, we applied the generalized additive model to different samples: all patients (n = 1,144, R² = 7.92%), infected patients (n = 463, R² = 5.96%), and noninfected patients (n = 681, R² = 13.40%). Surgical time was statistically associated with mortality in all three groups.

Figure 4 shows that the relationship between surgical time and death was similar among all, infected, and noninfected patients. Because the risk of death was very low beyond 100 days, we plotted surgical time only within 100 days. The shape of the relationship was the same in the infected and noninfected groups. Surgical times of 32.60, 32.84, and 36.55 days corresponded to a 50% risk of death in all, infected, and noninfected patients, respectively; the risk of death exceeded 50% when surgical time was shorter than these thresholds.

Figure 4
The relationship between risk of death and surgical time ((a1) is for all patients, (a2) is for infected patients, and (a3) is for noninfected patients).
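The 50% thresholds reported above can be read off a fitted model by predicting over a fine grid of surgical times and locating the crossing point. The sketch below illustrates this; `fit_final`, the fixed reference covariate values, and the factor level "male" are assumptions for illustration, not the authors' code.

```r
# Sketch: locate the surgical time at which the predicted risk of death
# crosses 50% (cf. the 32.60-day threshold for all patients).
# fit_final: a gam of death ~ s(surg_time) + s(age) + gender + s(hdl).
grid <- data.frame(surg_time = seq(1, 100, by = 0.01),
                   age       = 46,      # reference values held fixed (assumed)
                   gender    = "male",
                   hdl       = 0.6)
risk <- predict(fit_final, newdata = grid, type = "response")
threshold <- max(grid$surg_time[risk > 0.5])  # last time with risk above 50%
```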
## 4. Discussion
This study investigated the relationship between surgical timing and death in necrotizing pancreatitis based on a large EMR sample. We modelled AP inpatients who underwent the specified surgery (pancreatic drainage, pancreatic debridement, or pancreatectomy), almost all of whom were patients with necrotizing pancreatitis. To the best of our knowledge, this is the first quantitative study of the relationship between the timing of surgical intervention (from the onset of AP to the first surgical intervention) and the risk of death in necrotizing pancreatitis. There is an L-shaped relationship between surgical time and risk of death, showing that premature surgery carries a higher risk of death among patients with necrotizing pancreatitis. This relationship remained robust in sensitivity analyses.

In the descriptive analyses, the time from the onset of AP to surgical intervention, the time from the onset of AP to admission, high-density lipoprotein on admission, amylase on admission, maximum preoperative creatinine, infection, circulatory failure, and kidney failure differed statistically between patients who died and those who survived. These variables were then entered into the model simultaneously to check whether they had a real impact on death. In the first model, which incorporated infection and other covariates, the lower the high-density lipoprotein on admission, the higher the risk of death, consistent with a previous study [15]. In the second model, which included infection, organ failure, and other covariates, the relationship was similar but not statistically significant. Amylase was statistically significant whether infection or organ failure was included: the higher the amylase, the higher the risk of death, and the risk of death exceeded 50% when amylase was over 175.54 U/L. With increasing white blood cell count, the risk of death first rose slowly and then fell quickly. One possible reason is that physicians give antibiotic treatment to bring the white blood cell count back into the normal range and to reduce the probability of infection once it exceeds 10×10⁹/L; accordingly, the risk of death declines when the white blood cell count exceeds 10×10⁹/L. In both models, infection was not statistically significant, in line with the results of Guo et al. [11]; our proposed model can deal with collinear independent variables. Respiratory failure, circulatory failure, kidney failure, and creatinine were not statistically significant after inclusion in the second model, consistent with the findings of the Dutch Pancreatitis Study Group [13]. However, we found that the risk of death was low when creatinine was either very low or very high and exceeded 50% when creatinine ranged from 73.55 to 818.06 μmol/L. As for age, although it showed no statistical significance in either model, the risk of death increased with age.

Some covariates showed no statistical significance, but as the figures show, they vary regularly with the risk of death, and our model gives thresholds for a 50% risk of death, which deserve surgeons' attention. After adjusting for infection, the association between surgical time and death was still statistically significant, but after adjusting for organ failure it was not. The absence of statistical significance does not mean that there is no real association between the two.
Statistical significance depends on many factors, such as the choice of independent variables and the sample size. Therefore, to isolate the relationship between surgical time and risk of death, we finally modelled surgical time adjusted only for age, gender, and high-density lipoprotein on admission, since demographic factors can also serve as predictors of inpatient mortality in AP [19]. Whether surgical time was modelled alone or adjusted for infection or organ failure, an L-shaped relationship appeared: when surgical time was within 32.60 days, the risk of death was greater than 50%. Moreover, this study obtained the mortality risk corresponding to the timing of surgical intervention at every time point. Although the relationship between surgical time and death was similar in the infected and noninfected groups, the surgical time at a 50% risk of death was earlier in the infected group (32.84 days) than in the noninfected group (36.55 days), and the risk of death from early surgery was slightly higher in the noninfected group (77%) than in the infected group (72%).

Although amylase is one of the criteria for the diagnosis of acute pancreatitis, the relationship between amylase and the severity of acute pancreatitis is rarely reported. Abnormal blood amylase has many possible causes, including acute pancreatitis, chronic pancreatitis, a fluid-filled sac (pseudocyst) around the pancreas, pancreatic cancer, inflammation of the gallbladder, and kidney problems. The results of this study showed that the higher the amylase, the higher the risk of death. One explanation is that diseases other than acute pancreatitis also raise amylase, and under the combined effect of several diseases the risk of death increases. Considered separately, the quantitative relationships between surgical time, the other covariates at different levels, and the risk of death can serve as a useful reference for surgeons. The results of this work are based on EMR; other hospitals can use the same strategy to obtain preliminary results and then design prospective studies. In this sense, this study provides an important prerequisite for prospective work.

However, this study still has some limitations. The data were retrospectively extracted from the EMR, and the performance of the model is strongly correlated with the quality of the data. Deaths were recorded only during hospitalization, and the cause of death was not available from the EMR. Owing to Chinese cultural characteristics, some patients who do not wish to die in the hospital are discharged early, and those deaths are not recorded in the EMR; mortality was therefore underestimated, and so was its relationship with surgical time. In addition, the generalized additive model cannot analyze interactions between variables, although such interactions may exist; this is a limitation of the model itself.
## 5. Conclusions
In conclusion, using the generalized additive model, we quantified the relationship between surgical time (from the onset of AP to the first surgical intervention) and the risk of death in necrotizing pancreatitis while controlling for demographic characteristics, infection, organ failure, and important laboratory indicators. There is an L-shaped relationship between the timing of surgical intervention and the risk of death in necrotizing pancreatitis, providing an important reference for surgeons when making surgical decisions.
---
*Source: 1012796-2020-05-16.xml*
# The Application of New Educational Concepts in Digital Educational Media
**Authors:** Chun Yang
**Journal:** Advances in Multimedia
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012803
---
## Abstract
With the development and progress of society, educational concepts and teaching models have changed greatly compared with the past. Faced with new educational concepts that advocate diversified teaching modes and all-round talent training, the teaching space of the traditional single, fixed teaching mode is insufficient. Digital media education is a form of education that has emerged with the development of information technology. The purpose of this paper is to find a form of teaching-space construction under the new educational concepts that suits the development of the era and responds to constantly emerging educational concepts and teaching modes. The process is as follows: based on the collected education data, we mine the specific factors that affect teachers' ability to apply digital educational resources and build several machine-learning regression models from these objective and significant features to predict teachers' application-ability scores. By comparing and optimizing model performance, a more suitable prediction method was found. MSE, MAE, RMSE, and MAPE were used as performance indicators to compare the models; on every indicator, the ranking was multiple linear regression < light gradient boosting < extreme gradient boosting < random forest. In addition, of the two ensemble approaches, the bagging idea represented by the random forest suited this data set better than the two gradient-boosting methods.
---
## Body
## 1. Introduction
In the twenty-first century of comprehensive informatization and economic integration, peace and development are the main themes of the world. However, this does not mean that there is no competition between countries [1]; on the contrary, competition and cooperation between countries and regions are taking place in new forms [2]. In today's knowledge economy, the population structure and population quality of a country or region have become among the most important factors constraining or assisting its development [3]. That talent, as the most important human resource of the future society, will determine the future and destiny of every country and nation has become a worldwide consensus [4]. The most important indicator of a country's or region's comprehensive national strength is its level of scientific and technological development [5], and the development of science and technology is inseparable from the reserve of human resources [6]. Education is the most important and effective way to cultivate human resources and to develop dormant human potential to the fullest [7]. Therefore, countries and regions around the world attach great importance to the development of domestic education [8]. Although the level of development varies from country to country, all give education an extremely important status and priority in development [9]. Entering the new century, the international environment and the background of the era have become increasingly complex and diverse, and the opportunities and challenges facing each country are even more severe [10]; the importance of education and talent training has become more prominent [11]. The Outline of the National Medium- and Long-Term Education Reform and Development Plan points out that "information technology has a revolutionary impact on education development and must be attached great importance to. By 2020, we will basically build a digital education service system covering all types of schools at all levels in urban and rural areas and promote the modernization of education content and teaching means and methods." It is obvious that digital teaching has become the inevitable trend in the development of school education.
Instead of training according to the needs of industrialized mass production in the last century, it is more inclined to cultivate comprehensive talents in line with the information society [15]. With the emergence of various new educational concepts and the popularization of more advanced teaching equipment, the teaching methods and curriculum content in countries around the world have also changed accordingly [16]. Comprehensively, while paying attention to fair cultivation, it also emphasizes guiding and cultivating citizens’ ideological and moral character [17]. The criteria for judging students are also more diverse and objective, and instead of blindly referring to test scores, the evaluation system is more abundant, the importance of test scores is reduced, and more emphasis is placed on students’ additional abilities beyond their scores [18]. In today’s world, science and technology are advancing with each passing day. Modern information technologies such as the Internet, cloud computing, and big data are profoundly changing the way people think, produce, live, and learn. The main task for educators is to respond to the development of information technology, promote educational reform and innovation, build a networked, digital, personalize a lifelong education system, build a learning society where everyone can learn at any place and any time, and cultivate a large number of innovative talents.The characteristics of the communication structure of the “micro era” have become the most important communication method in the information society [19]. The emergence of new media has a direct impact on the teaching mode, learning methods, and teaching methods, as well as the feelings and experiences of practical links in the education classroom from the perspective of concepts and methods [20]. Therefore, great changes have taken place and will continue to improve and innovate with the technical requirements of the new era. The rapid development of media technology has accelerated the trend of media integration, and traditional media is rapidly transforming into digital media. Entering a new era, virtual reality technology and artificial intelligence have begun to promote a new round of technological revolution [21]. Under the new situation, higher education needs to cultivate compound innovative talents who meet the diverse needs of society. Since its establishment in 2006, the Department of Digital Media Technology of Central China Normal University has carried out three reforms and innovations around talent training and teaching models to meet the new demands of the times for digital media talents. In the theory of “teaching and learning”, Ausubel, an American constructivist educator, put forward the teaching mode of “taking skill training as the core, establishing project oriented, and task driven” [22]. However, current research shows that there are generally obvious differences in the application level of teachers’ digital educational resources. Due to the lack of teachers’ own application ability, the use of existing infrastructure and high-quality digital resources to develop educational and teaching activities has not achieved good results [23]. 
Therefore, it is necessary to understand the current level of teachers’ informatization application, especially the relevant influencing factors of their ability to apply digital educational resources, and to explore how to predict the application status of teachers in the region, so as to provide interventions to improve the professional development of teachers’ digital resources and solve the application of inter-regional resources [24]. Differences need to be considered. Therefore, based on machine learning, this paper mainly analyzes new educational concepts and the ability of teachers to apply digital educational resources in digital educational media.Existing studies have shown that teachers will encounter two types of resistance in integrating information technology into teaching, namely, external resistance and internal resistance. In the context of using information technology to support teaching, external resistance is the external factor affecting teachers’ behavior, such as Internet access, preparation of information technology equipment, bandwidth, technology-related training, and other external environmental factors. When the external resistance is removed, teachers will not consciously integrate information technology into teaching to improve the effect of meaningful teaching. This is because factors related to teachers’ teaching ideas and knowledge status, such as teachers’ attitude towards using information technology, self-efficacy, intention, and other internal resistance, will become the second obstacle affecting teachers’ teaching behavior, and these obstacles are called internal resistance.
## 2. Materials and Methods
### 2.1. Development of New Educational Concepts in China
So-called digital media education refers to the use of multimedia and network technology to digitize a school's main information resources and to achieve digital information management and communication, forming a highly informatized environment for talent training. Modern Chinese education started in the early 20th century and, after more than 100 years of development, has cultivated a large number of talents for social development and national construction. However, owing to China's special and complex national conditions and to historical and social factors since modern times, the current primary and secondary education model remains largely traditional exam-oriented education, even though some areas have attempted reform and innovation, and the level of education lags behind. This leads to some common problems in the compulsory education stage. For example, teachers are still the main actors in teaching, and most still follow the old cramming mode, which often neglects the cultivation of students' independent thinking and capacity for innovation; likewise, teaching evaluation, and the evaluation of students, is still based on test scores. The traditional teaching mode and concept therefore seriously restrict the development of education in China, and students' abilities in collaboration, innovation, and learning, as well as their moral qualities, cannot develop as they should.

In the comprehensively informatized 21st century, society has a new understanding of what talent means, and the demand for talent is ever higher; traditional single-skill talents are no longer suited to today's society. Quality education aims to impart knowledge to students, to cultivate comprehensive abilities such as self-directed learning and practical innovation, and to attach great importance to students' all-round development. Based on the concept of quality education and drawing on advanced international education models and experience, Chinese scholars have conducted a series of studies and explorations. So far, various new educational concepts have emerged in schools across the country, and modern, open teaching methods and teaching spaces have appeared one after another. The teaching behavior advocated by the new educational concepts is mostly communication and interaction between teachers and students, abandoning one-way knowledge transfer. In teaching, interaction is not limited to that between teachers and students; interaction among students is equally important. Teaching methods are also more diverse, with more attention paid to the distinct psychological states of different groups of students at different stages.
### 2.2. Necessity of Innovation in Digital Media Education in the New Era
With the rapid development of China's economy and technology, the public's demand for culture grows by the day. Under the national "Internet+" strategy, the deep integration of information technology into various fields has brought opportunities for the development of the cultural and creative industries and has also placed new requirements on the reform and innovation of digital media education in China, in the following two respects.

(1) The development of the industry is the inherent driver of innovation in media talent training. For the prosperity of the cultural and creative industries, creativity is the core competitiveness. In the era of "content is king," traditional media such as film and television and emerging audio-visual media such as live streaming and online short video all rely mainly on high-quality content and platforms; the content that stands out from the competition comes from the rich creativity of media professionals. The talent-training mode of colleges and universities should center on the national innovation and development strategy and cultivate the innovation consciousness of contemporary college students. In the digital media technology major at the author's university, teaching should not only impart professional knowledge and skills but also cultivate and hone students' innovative ability. In the new educational environment, colleges and universities urgently need to explore how to integrate curriculum resources to build a talent-training model, marked by openness, sharing, and deep digital integration, that fosters the innovative spirit and ability the times demand.

(2) The ability-training goals of colleges and universities drive the transformation of course teaching methods. The main content of the Stanford University 2025 Plan covers four aspects: the "open-loop university," "flipping around the axis," "adaptive education," and "purposeful learning." Focusing on reform of the academic system, a shift in teaching focus, reconstruction of the curriculum system, and reform of learning methods, it puts forward a bold reform concept, emphasizes students' dominant position, implements personalized and independent education, integrates multidisciplinary resources, and cultivates the awareness and ability to solve global problems. Students majoring in digital media technology at Central China Normal University have formed a teamwork-based working method by completing projects during their coursework, exercised collaborative learning and problem-solving, and accumulated industry experience in advance. In addition, all-round interaction with teachers in and after class through project practice effectively promotes students' internalization of knowledge and improves their problem-solving ability in practice.
### 2.3. Research on the Current Situation of Machine Learning in the Field of Education and Teaching
The purpose of machine learning research is to use computers to simulate human learning activities: to have computers recognize existing knowledge, acquire new knowledge, continually improve performance, and achieve self-improvement. Machine learning research has three objectives: cognitive models of the human learning process, general learning algorithms, and methods for constructing task-oriented special-purpose learning systems. With the explosion of educational data, how to analyze large volumes of educational and teaching data to achieve accurate prediction and decision support is a new line of thinking in the era of artificial intelligence. As an important method in the field of artificial intelligence, machine learning can meet the needs of educational big data analysis and prediction. In recent years, application cases and related research on machine learning in education based on real data have proliferated at home and abroad, with sustained efforts to introduce machine learning technology into educational and teaching activities. At the policy level, in 2016 the National Science and Technology Council of the United States released two reports, “Preparing for the Future of Artificial Intelligence” and “The National Artificial Intelligence Research and Development Strategic Plan,” pointing out that machine learning is the core technology for realizing and advancing AI research. The Chinese government likewise attaches great importance to the development of artificial intelligence technologies such as machine learning. In April 2018, the Ministry of Education issued the “Education Informatization 2.0 Action Plan,” which proposed to “rely on emerging technologies such as artificial intelligence to promote the reform of education models supported by new technologies.” In practical research, many foreign scholars began exploring machine learning in education early and have made definite progress; although domestic researchers started using machine learning to generate value in educational research relatively late, they have also achieved varying degrees of success. This study likewise applies machine learning, and the forecasting techniques to be implemented, to the teacher group under investigation.
## 3. Results and Discussion
### 3.1. Common Regression Algorithms
Regression is the prediction of new data based on existing data. Linear regression describes the relationship in the data with a straight line, so that a value can be predicted simply when new data appear. The linear regression model is very easy to understand, and its results are highly interpretable, which is conducive to decision analysis. Multilayer linear regression studies the regression problem between a dependent variable and multiple independent variables; it is a statistical method for determining and explaining the relationship between independent and dependent variables. In regression analysis, when there is only one independent variable and one dependent variable, the independent variable is used as the main factor explaining the change in the dependent variable, and a straight line can approximate the relationship between the two. Such an analysis is called univariate linear regression:

(1) $y = \alpha + \beta x + \varepsilon$,

where $\alpha$ and $\beta$ are the regression coefficients and $\varepsilon$ is the random error term. When there is a linear relationship between multiple independent variables and the dependent variable, the analysis is a multilayer linear regression, and the model extends the univariate model:

(2) $y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \varepsilon$,

where $n$ is the number of explanatory variables, $\alpha$ and $\beta_i$ ($i = 1, 2, \ldots, n$) are partial regression coefficients, and $\varepsilon$ is a random error term.

Extreme gradient boosting, also known as XGBoost, is one of the boosting algorithms in ensemble learning and is an improved algorithm based on GBDT. Its objective function is

(3) $L(\phi) = \sum_i l(y_i, \hat{y}_i) + \sum_k \Omega(f_k)$,

(4) $\Omega(f) = \gamma T + \tfrac{\lambda}{2} \lVert w \rVert^2$,

where $l$ is the loss function, usually convex, which measures the difference between the predicted value $\hat{y}_i$ and the actual value $y_i$, $T$ is the number of leaves, and $w$ the vector of leaf weights. A second-order Taylor expansion of the loss at iteration $t$ gives

(5) $L^{(t)} \approx \sum_{i=1}^{n} \Big[ l\big(y_i, \hat{y}_i^{(t-1)}\big) + g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^2(x_i) \Big] + \gamma T + \tfrac{\lambda}{2} \sum_{j=1}^{T} w_j^2$,

where $g_i$ is the first derivative and $h_i$ the second derivative of the loss. Define the set of samples falling on leaf $j$ as

(6) $I_j = \{\, i \mid q(x_i) = j \,\}$.

Rewriting the objective as a sum over leaves then yields

(7) $L^{(t)} = \sum_{j=1}^{T} \Big[ \big(\textstyle\sum_{i \in I_j} g_i\big) w_j + \tfrac{1}{2} \big(\textstyle\sum_{i \in I_j} h_i + \lambda\big) w_j^2 \Big] + \gamma T$.

When the tree structure $q$ is known, the leaf weights $w_j$ have a closed-form solution, and the objective becomes

(8) $w_j^{\ast} = -\dfrac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad L^{(t)}(q) = -\dfrac{1}{2} \sum_{j=1}^{T} \dfrac{\big(\sum_{i \in I_j} g_i\big)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T$.

The advantage of XGBoost is that, on the one hand, it supports linear classifiers as well as regression trees as base learners, which speeds up model training. On the other hand, XGBoost applies a learning rate (shrinkage) to leaf weights when building trees, reducing the influence of each individual tree on the model and leaving more room for subsequent trees to learn. However, XGBoost must still traverse the data set when splitting nodes, and its presorting stores both feature values and the gradient statistics indexed by feature for many samples, so memory consumption is relatively high.

Model fusion, as the name suggests, fuses multiple models together and is also known as ensemble learning. In solving real problems, each algorithm has its own advantages but also certain limitations.
If multiple algorithms are fused, a learner can be produced that performs better than any single model, inheriting the advantages of the different algorithms while avoiding some shortcomings of a single model. Model fusion is therefore very common in machine learning. Common fusion strategies include the voting method, the average fusion method, and the learning method; ensemble learning methods include bagging, boosting, and stacking.

The simple average method sums the predictions of the multiple regression models directly and then takes the mean:

(9) $Y(x) = \dfrac{1}{n} \sum_{i=1}^{n} \hat{y}_i(x)$.

The weighted average assigns different weights to the different models by some method, combines the weights with the base learners’ outputs, and then averages:

(10) $Y(x) = \sum_{i=1}^{n} w_i(x)\, \hat{y}_i(x)$.

There are various methods, to be chosen according to actual needs. Because the three models to be fused here perform similarly, the most basic voting method is not suitable; the weighted average is more appropriate, since even similar results always differ somewhat.

Random forest samples the data in a randomized way to build a decision forest, containing multiple decision trees, that meets the requirements. A decision tree is a basic classifier that generally splits features into two classes (decision trees can also be used for regression). The constructed decision tree has a tree structure and can be regarded as a collection of if-then rules; its main advantages are that the model is readable and classification is fast. A random forest is thus an ensemble algorithm composed of several decision trees: its basis is the decision tree algorithm, and its core is how to use randomization to construct the forest’s multiple trees. Among ensemble algorithms, the idea behind the random forest is not complicated, and it is a common method for binary classification, multiclass classification, and regression tasks. Because each decision tree operates independently and in parallel, it saves time and computational overhead, and the random forest is regarded as a representative method of ensemble learning. Of course, random forests also have shortcomings: their outputs are not continuous, and when data fall outside the range of the training set, predictions become unreliable, which can sometimes lead to overfitting; this paper does not investigate the overfitting problem further.
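To make the averaging formulas concrete, the following Python sketch fits three base regressors and fuses their test-set predictions by simple and weighted averaging, in the spirit of equations (9) and (10). The synthetic data, model choices, and fixed weights are illustrative assumptions, not the configuration used in this paper.

```python
# A minimal sketch of weighted-average model fusion in the spirit of
# equations (9) and (10). The synthetic data, model choices, and fixed
# weights are illustrative assumptions, not the paper's configuration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners: a linear model plus two tree ensembles.
models = [
    LinearRegression(),
    GradientBoostingRegressor(random_state=0),
    RandomForestRegressor(random_state=0),
]
preds = [m.fit(X_train, y_train).predict(X_test) for m in models]

# Equation (9): simple average of the base learners' predictions.
simple_avg = np.mean(preds, axis=0)

# Equation (10): weighted average; the weights here are set by hand but
# could instead be tuned on a validation set.
weights = [0.2, 0.3, 0.5]
weighted_avg = sum(w * p for w, p in zip(weights, preds))
```

In practice the weights would typically be chosen to minimize validation error rather than fixed by hand.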
### 3.2. Model Setting of Teachers’ Digital Educational Resource Application Ability
In this study, owing to the nested structure of the data, effects at the teacher level (level 1) are not treated as completely independent: teachers within the same school (level 2) share the school environment, so a two-level linear model is used as the basic analysis tool. The first-level equations contain teacher-level predictors, and the second level contains school-level predictors. Using a multilevel linear model for cross-level analysis of nested, multilevel data separates individual effects from external environmental effects, so the influence of variables at the two levels can be explored separately. This paper therefore constructs five models and uses the multilayer linear regression model to explore the influence of school-level and teacher-level factors on teachers’ ability to apply digital resources. Figure 1 shows the hypothesized model of the factors influencing teachers’ application of digital educational resources.
Figure 1: Hypothetical model of factors influencing the application of digital educational resources for teachers.

The zero (null) model is

(11) $Y_{ij} = \beta_{0j} + r_{ij}, \qquad \beta_{0j} = \gamma_{00} + \mu_{0j}$,

where $Y_{ij}$ is the digital resource application ability of teacher $i$ in school $j$; $\beta_{0j}$ is the average ability of teachers in school $j$; $r_{ij}$ is the teacher-level random error; $\gamma_{00}$ is the overall average of teachers’ digital resource application ability; and $\mu_{0j}$, the deviation of school $j$’s average from the overall average, is the school-level random error.

In Model 1 and Model 2, the influence (slope) of each predictor on teachers’ ability to apply digital educational resources is held constant across schools; both are random-effect analysis-of-covariance models:

(12) $Y_{ij} = \beta_{0j} + \beta_{1j} X_1 + \beta_{2j} X_2 + \cdots + \beta_{10j} X_{10} + r_{ij}, \qquad \beta_{cj} = \gamma_{c0} + \mu_{cj}, \quad c = 0, 1, 2, \ldots, 10$,

where $\beta_{1j}, \ldots, \beta_{10j}$ are the partial regression coefficients of the predictors $X_1, X_2, \ldots, X_{10}$ on $Y_{ij}$. On the basis of Model 2, the school’s geographical location and number of in-service teachers are added to the second-level equation as predictors (Model 3). The school informatization training variable, the proportion of teachers participating in school-based training, is then added (Model 4) to examine the impact of school-level factors on teachers’ ability to apply digital resources. Model 3 and Model 4 are both nonrandomly varying intercept models.
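Equations (11) and (12) describe a random-intercept model with teachers nested in schools, which can be estimated with standard mixed-model tooling. Below is a minimal sketch using statsmodels; the file name and column names (ability, school, teaching_years, training_count) are hypothetical placeholders for the questionnaire variables, not the paper’s actual variable names.

```python
# A minimal random-intercept sketch of equations (11) and (12): teachers
# (level 1) nested within schools (level 2). The file name and column
# names are hypothetical placeholders for the questionnaire variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teacher_survey.csv")  # assumed: one row per teacher

# Zero model (equation (11)): intercept only, with a random intercept per
# school; its variance components separate school and teacher effects.
null_model = smf.mixedlm("ability ~ 1", df, groups=df["school"]).fit()
print(null_model.summary())

# Conditional model (equation (12)): teacher-level predictors X1..X10,
# abbreviated here to two illustrative ones.
full_model = smf.mixedlm(
    "ability ~ teaching_years + training_count", df, groups=df["school"]
).fit()
print(full_model.summary())
```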
## 4. Result Analysis and Discussion
### 4.1. Data Acquisition and Preprocessing
The experimental data in this study come from two questionnaires administered in the National Informatization Research Action: the “Questionnaire on the Development of Informatization of Primary and Secondary School Teachers” and the “Questionnaire on the Development of Informatization in Primary and Secondary Schools.” In total, 1,579 questionnaires from primary and secondary schools in the province were selected, yielding more than 20,000 records as the data set for this study. The questionnaires provide four kinds of useful information: first, background information on teachers and their schools, including gender, education, years of teaching, and the number of teachers in the school; second, teachers’ ability to apply digital resources, such as how frequently teachers use information technology in each teaching link; third, teachers’ training participation, such as the number of training sessions attended, their duration, and the number of training modes; and fourth, the school’s training provision, such as the number of school-based training sessions and the proportion of teachers participating in them.

In applying data mining techniques, both the algorithms and the analysis targets place high demands on the data, and messy raw data are difficult to use for experiments. Educational data are typically collected for particular evaluation objectives and objects, across different scenarios and dimensions, so the data as initially gathered often cannot be mined directly. The data must therefore be processed and cleaned to a certain extent: unifying the data format, reducing noise, and integrating high-quality data reduce mining costs, improve operational efficiency, and lead to good experimental results. The data preprocessing workflow is shown in Figure 2.
Figure 2: Data preprocessing process.

The data cleaning routine “cleans” the data by filling in missing values, smoothing noisy data, identifying or deleting outliers, and resolving inconsistencies. Its main objectives are format standardization, removal of abnormal data, error correction, and elimination of duplicates. Data integration refers to consolidating the required data sources into one complete data set, reducing data clutter and storage inconsistency and thereby improving the efficiency of data analysis and mining. In this study, the experimental data came from the two questionnaires, for teachers and for schools. Each teacher filled in the name of the school where he or she was employed, and each school filled in its own full name; a teacher’s personal information, training record, and informatization application record were then matched to the background and training record of that school, thereby integrating the data. To ensure high-quality sample data and accurate mining results, sample records in which a teacher could not be matched to a school were eliminated.
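The integration step described above can be summarized in a short pandas sketch. The file and column names are assumptions for illustration, but the key operations, matching teachers to schools by school name and dropping unmatched records, follow the procedure described in the text.

```python
# A minimal sketch of the integration step: match each teacher record to
# its school record by school name and drop unmatched records. File and
# column names are illustrative assumptions.
import pandas as pd

teachers = pd.read_csv("teacher_questionnaire.csv")  # one row per teacher
schools = pd.read_csv("school_questionnaire.csv")    # one row per school

# Normalize the school-name key before matching to reduce mismatches.
for frame in (teachers, schools):
    frame["school_name"] = frame["school_name"].str.strip()

# An inner join keeps only teachers successfully matched to a school,
# implementing the elimination of unmatched sample data.
merged = teachers.merge(
    schools, on="school_name", how="inner", suffixes=("_teacher", "_school")
)

# Basic cleaning: drop duplicate rows and fill remaining missing numeric
# values with column medians.
merged = merged.drop_duplicates()
num_cols = merged.select_dtypes("number").columns
merged[num_cols] = merged[num_cols].fillna(merged[num_cols].median())
```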
### 4.2. Experimental Results and Analysis
After fitting the model parameters and applying the model to the test set, the performance is as shown in Figure 3. The predictions fall mainly in the range 12 to 16, while values below 12 and above 16 are not fitted at all. This accords with the characteristics of a linear model; after all, the geometric meaning of its objective function is a straight line.
Figure 3: Multilayer linear regression test set fitting results.

The advantage of multilayer linear regression is that it does not waste samples and can study individual differences while ensuring that the error-independence assumption is not violated. Its shortcomings are those of most single machine learning models: its learning capacity is far below that of ensemble learning, and its predictive performance is comparatively weak. Figures 4 and 5 show the evaluation indicators of the multilayer linear regression model. The overall error values are not high, but the MAPE exceeds 15%, indicating that this model’s predictions deviate considerably from the best achievable fit.
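For reference, the four evaluation indicators used in this paper (MSE, MAE, RMSE, and MAPE) can be computed directly; in the sketch below, y_true and y_pred stand for the actual and predicted ability scores.

```python
# Computing the four evaluation indicators used in this paper; y_true and
# y_pred stand for actual and predicted ability scores.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    # MAPE assumes y_true contains no zeros; the ability scores here are
    # positive (roughly 12-16), so this is safe.
    mape = np.mean(np.abs(err / y_true)) * 100
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape}

print(regression_metrics([14.0, 12.5, 15.8], [13.6, 12.9, 15.1]))
```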
Figure 4: Regression curve of the multilayer linear regression model.
Figure 5: Residual distribution diagram.

If a standalone ensemble model is likewise regarded as a single model, problems such as insufficient use of the data and low prediction accuracy often remain, and in general a fusion model outperforms a single model. The author therefore proposes two low-error machine-learning model fusion methods for predicting teachers’ ability to apply digital education resources, so as to improve the prediction. The prediction and fitting performance of the fused model on the test set is shown in Figure 6. Apart from minor discrepancies, the model’s predictions are quite good. On the MAPE index, the multilayer combination model based on the random forest performs best, which verifies the soundness of the fusion idea. Since this model’s MAPE indicates that its predictions are closest to the actual values, and its residual distribution is also the smallest, the random-forest-based multilayer combination model is overall the best choice for the prediction scheme.
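The paper does not spell out the internal structure of the “multilayer combination model based on the random forest”; one plausible arrangement, shown purely as an assumed illustration, is a two-layer stacking design in which a random forest serves as the second-layer learner over the base regressors.

```python
# A hedged sketch of one way to combine base learners with a random
# forest in a two-layer (stacking) arrangement. The paper does not
# specify the exact structure of its "multilayer combination model";
# this is an assumed illustration on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("linear", LinearRegression()),
                ("boost", GradientBoostingRegressor(random_state=0))],
    final_estimator=RandomForestRegressor(random_state=0),  # second layer
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))  # R^2 on the held-out test set
```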
Figure 6: Comparison of the multilayer combination test set of the random forest.

As a case study of applying the predicted values of teachers’ digital educational resource application ability, this paper selects two training indicators as representatives to analyze the feasibility of accurately identifying the training needs of low-level teachers on those indicators. The analysis results provide a basis for education management departments and schools when designing training programs, sizing the participant pool, and determining participant lists. To make the identification effect more intuitive, the training participation of 185 middle- and low-level teachers in district A is displayed as scatter plots; the results are shown in Figure 7.
Figure 7: Predicted number of government training sessions attended by low- and medium-level teachers, by region.

As Figure 8 shows, these teachers are fairly active in informatization training: both the number of government training sessions attended and the hours of theoretical instruction are above average. Although the number of school-based training sessions, the total annual hours, and the number of training modes fall slightly short, the gap is small; if these aspects are improved, teachers’ ability to apply digital education resources may be further enhanced.
Figure 8: Teachers’ participation in the dimension of informatization training.

Therefore, to improve the application of digital education resources and expand their coverage, the substantive construction of digital education resources should be strengthened first. Second, a campus cultural environment should be created that supports teachers in integrating and using digital education resources. Finally, ability training for teachers in integrating digital education resources should be carried out, giving teachers a sense of self-efficacy in using digital education resources.
## 5. Conclusion
Since modern times, China’s educational philosophy has been deeply influenced by Western countries. In the early years of the People’s Republic, the “class teaching system” of the former Soviet Union was essentially followed; on that basis, educational concepts and teaching modes were adjusted to China’s national conditions. Influenced at the same time by the traditional imperial examination system, this later developed into an “examination-oriented education” with Chinese characteristics. Under the traditional educational concept and teaching mode, teachers are the main agents of teaching activity, mainly delivering one-way collective instruction to students; the teaching methods lack innovation, the teaching content focuses only on students’ cultural knowledge, and to a great extent the purpose of teaching is to secure entrance to the next stage of schooling. The new educational concept advocates all-round development, attends to developing students’ potential, and respects the subjective initiative of the individual. Under its influence, a large number of new teaching modes have emerged, whose common features are that the main agent of teaching activity has shifted from teachers to students, and that information flows not one way from teacher to student but in multiple directions between teachers and students. In this setting, teachers’ ability to apply teaching resources becomes very important. Based on the collected educational data, this paper mines the specific factors that affect teachers’ ability to apply digital educational resources and uses these objective salient features to construct multiple machine learning regression models to predict teachers’ digital educational resource application ability scores. By comparing and optimizing model performance, a prediction method better suited to this group of teachers was found. Using MSE, MAE, RMSE, and MAPE as performance evaluation indicators, the models rank on every indicator as multilayer linear regression < light gradient boosting < extreme gradient boosting < random forest. Moreover, of the two ensemble models, the bagging idea represented by the random forest suits this group better than the two gradient boosting methods.
---
*Source: 1012803-2022-11-21.xml* | 1012803-2022-11-21_1012803-2022-11-21.md | 64,059 | The Application of New Educational Concepts in Digital Educational Media | Chun Yang | Advances in Multimedia
(2022) | Computer Science | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1012803 | 1012803-2022-11-21.xml | ---
## Abstract
With the development and progress of society, great changes have taken place in educational concepts and teaching models compared with the past. Faced with the new educational concept of advocating diversified teaching modes and all-round talent training, the teaching space under the traditional single-fixed teaching mode is insufficient. The field of digital media education is a form of education with the development of information technology. The purpose of this paper is to find out the construction form of teaching space under the new educational concept that adapts to the development of the social era and to respond to the constantly updated and emerging educational concept and teaching mode. The process is as follows: based on the collected education data, mining the specific factors that will affect the application ability of teachers’ digital education resources and building a multiple machine learning regression model using these objective and significant features to predict the score of teachers’ digital education resources application ability. Through the comparison and optimization of model performance, a more suitable prediction method was found. MSE, MAE, RMSE, and MAPE are used as performance evaluation indicators to compare the performance of each model. It is found that there are multilayer linear regression < mild gradient advance < extreme gradient advance < random forest in each indicator. In addition, in the two integration models, bagging idea represented by the random forest is more suitable for this group than two gradient boosting.
---
## Body
## 1. Introduction
In the twenty-first century of comprehensive informatization and economic integration, peace and development are the main themes of the world. However, this does not mean that there is no competition between countries [1]. On the contrary, competition and cooperation between countries and regions are taking place in a new form [2]. In today’s society dominated by the knowledge economy, the population structure and population quality of a country or region have become one of the most important factors that restrict or assist its development [3]. Talents will dominate the future and destiny of every country and nation as the most important human resource in the future society. It has become the consensus of the world [4]. The most important indicator to measure the comprehensive national strength of a country or region is the level of scientific and technological development [5]. The development of science and technology is inseparable from the reserve of human resources [6]. The most important and effective way to cultivate human resources and maximize the development of sleeping human resources is education [7]. Therefore, all countries and regions in the world attach great importance to the development of domestic education [8]. Although the level of development varies from country to country, they all take education to an extremely important level and give priority to development [9]. Entering the new century, the international environment and the background of the era have become increasingly complex and diverse, and the opportunities and challenges facing the country are even more severe [10]. The importance of education and talent training has become more prominent [11]. The outline of the national medium - and long-term education reform and development plan points out that “information technology has a revolutionary impact on education development and must be attached great importance to. By 2020, we will basically build a digital education service system covering all types of schools at all levels in urban and rural areas and promote the modernization of education content and teaching means and methods.” It is obvious that digital teaching has become the inevitable trend of current school education development.At the beginning of the century, several major powers in the world have formulated new talent training strategies to adapt to the future development and promulgated a series of education laws [12]. On January 8, 2002, President Bush of the United States signed the “Leave No Child Left Behind” education reform bill and at the same time increased the budget for primary and secondary education; the Australian Federal Department of Education also signed the “About 21st Century National Schools” on future education reform and the Adelaide Declaration of Educational Goals; At the same time, the two major East Asian countries face each other across the sea. South Korea has formulated the “21st Century Reform Plan” and implemented a series of reform plans for different education stages. The Ministry of Education, Culture, Sports, Science and Technology of Japan has promulgated the Japanese 21st Century Education Freshmen Plan [13]. From the new education policies introduced by these countries, it is evident that primary and secondary education, as the basic stage of lifelong education for people, has received more and more attention from all aspects [14]. 
Instead of training according to the needs of industrialized mass production in the last century, it is more inclined to cultivate comprehensive talents in line with the information society [15]. With the emergence of various new educational concepts and the popularization of more advanced teaching equipment, the teaching methods and curriculum content in countries around the world have also changed accordingly [16]. Comprehensively, while paying attention to fair cultivation, it also emphasizes guiding and cultivating citizens’ ideological and moral character [17]. The criteria for judging students are also more diverse and objective, and instead of blindly referring to test scores, the evaluation system is more abundant, the importance of test scores is reduced, and more emphasis is placed on students’ additional abilities beyond their scores [18]. In today’s world, science and technology are advancing with each passing day. Modern information technologies such as the Internet, cloud computing, and big data are profoundly changing the way people think, produce, live, and learn. The main task for educators is to respond to the development of information technology, promote educational reform and innovation, build a networked, digital, personalize a lifelong education system, build a learning society where everyone can learn at any place and any time, and cultivate a large number of innovative talents.The characteristics of the communication structure of the “micro era” have become the most important communication method in the information society [19]. The emergence of new media has a direct impact on the teaching mode, learning methods, and teaching methods, as well as the feelings and experiences of practical links in the education classroom from the perspective of concepts and methods [20]. Therefore, great changes have taken place and will continue to improve and innovate with the technical requirements of the new era. The rapid development of media technology has accelerated the trend of media integration, and traditional media is rapidly transforming into digital media. Entering a new era, virtual reality technology and artificial intelligence have begun to promote a new round of technological revolution [21]. Under the new situation, higher education needs to cultivate compound innovative talents who meet the diverse needs of society. Since its establishment in 2006, the Department of Digital Media Technology of Central China Normal University has carried out three reforms and innovations around talent training and teaching models to meet the new demands of the times for digital media talents. In the theory of “teaching and learning”, Ausubel, an American constructivist educator, put forward the teaching mode of “taking skill training as the core, establishing project oriented, and task driven” [22]. However, current research shows that there are generally obvious differences in the application level of teachers’ digital educational resources. Due to the lack of teachers’ own application ability, the use of existing infrastructure and high-quality digital resources to develop educational and teaching activities has not achieved good results [23]. 
It is therefore necessary to understand teachers' current level of informatization application, especially the factors influencing their ability to apply digital educational resources, and to explore how to predict the state of application among teachers in a region, so as to provide interventions that improve teachers' professional development with digital resources and address interregional differences in resource application [24]. Accordingly, based on machine learning, this paper analyzes new educational concepts and teachers' ability to apply digital educational resources in digital educational media.

Existing studies show that teachers encounter two types of resistance when integrating information technology into teaching: external and internal. In the context of using information technology to support teaching, external resistance comprises the external factors affecting teachers' behavior, such as Internet access, availability of information technology equipment, bandwidth, technology-related training, and other environmental factors. Even when external resistance is removed, teachers do not automatically integrate information technology into teaching to improve its effect. This is because factors tied to teachers' teaching ideas and state of knowledge, such as their attitude toward using information technology, self-efficacy, and intention, form a second barrier to teachers' teaching behavior; these obstacles are called internal resistance.
## 2. Materials and Methods
### 2.1. Development of New Educational Concepts in China
So-called digital media education refers to using multimedia and network technology to digitize a school's main information resources and to achieve digital information management and communication, thereby forming a highly information-based environment for talent training. Chinese modern education began in the early 20th century and, over more than 100 years, has cultivated a great many talents for social development and national construction. However, owing to China's special and complex national conditions and to historical and social factors since modern times, and even though some areas have attempted educational reform and innovation, the current primary and secondary education model remains largely traditional exam-oriented education, and the level of education is relatively backward. This leads to common problems in compulsory education. For example, teachers remain the main actors in teaching, mostly following the old cramming mode, which tends to neglect the cultivation of students' independent thinking and capacity for innovation, and both teaching evaluation and the evaluation of students rest on test scores. The traditional teaching mode and concept thus seriously restrict the development of education in China, and students' abilities in collaboration, innovation, and learning, as well as their moral character, cannot develop as they should.

In the 21st century of comprehensive informatization, society understands the definition of talent anew, and the demand for talent keeps rising; traditional single-specialty talents no longer suit today's society. Quality education aims to impart knowledge while cultivating comprehensive abilities such as self-directed learning and practical innovation, and it attaches great importance to students' all-round development. Based on the concept of quality education and drawing on advanced international education models and experience, Chinese scholars have conducted a series of studies and explorations. Various new educational concepts have since emerged in schools across the country, along with modern, open teaching methods and teaching spaces. The teaching behavior advocated by the new educational concepts is mostly communication and interaction between teachers and students, abandoning the earlier one-way transfer of knowledge; nor is interaction limited to teacher and student, for interaction among students is equally important. Teaching methods are also more diverse, with more attention paid to the distinct psychological states of different groups of students at different stages.
### 2.2. Necessity of Innovation in Digital Media Education in the New Era
With the rapid development of China's economy and technology, the public's demand for culture grows daily. Under the national "Internet+" strategy, the deep integration of information technology into many fields has brought opportunities for the development of the cultural and creative industries and has also placed new demands on the reform and innovation of digital media education in China.
(1) Industry development is the internal driver of innovation in media talent training. For the cultural and creative industries to prosper, creativity is the core competitiveness. In the era of "content is king," traditional media such as film and television and emerging audio-visual media such as live streaming and short online video alike rely mainly on high-quality content and platforms, and the high-quality content that stands out from the competition comes from the rich creativity of media professionals. The talent training model of colleges and universities should focus on the national innovation and development strategy and cultivate the innovation consciousness of contemporary college students. In the digital media technology major at the author's school, teaching should not only impart professional knowledge and skills but further cultivate and forge students' capacity for innovation. In the new educational environment, colleges and universities urgently need to explore how to integrate curriculum resources and build a talent training model, marked by openness, sharing, and deep digital integration, that produces graduates with the innovative spirit and ability the times require.
(2) Ability training in colleges and universities drives the transformation of course teaching methods. The Stanford University 2025 plan covers four main aspects: the "open-loop university," "flipping the axis," "adaptive education," and "purposeful learning." Focusing on reform of the academic system, a shift in the focus of teaching, reconstruction of the curriculum system, and reform of learning methods, it advances a bold reform concept: it emphasizes the dominant position of students, implements personalized and independent education, integrates multidisciplinary resources, and cultivates the awareness and ability to solve global problems. Students majoring in digital media technology at Central China Normal University have formed teamwork habits during their studies by completing coursework projects, exercising collaborative learning and problem-solving and accumulating industry experience in advance. In addition, all-round interaction with teachers in and after class through project practice effectively promotes students' internalization of knowledge and improves their ability to solve problems in practice.
### 2.3. Research on the Current Situation of Machine Learning in the Field of Education and Teaching
The purpose of machine learning research is to use computers to simulate human learning activities: a method for computers to recognize existing knowledge, acquire new knowledge, continually improve performance, and achieve self-perfection. Machine learning has three research objectives: cognitive models of the human learning process, general learning algorithms, and methods of constructing task-oriented special-purpose learning systems. With the explosion of educational data, how to analyze large volumes of educational and teaching data to achieve accurate prediction and decision support is a new direction of thought in the era of artificial intelligence. As an important method in artificial intelligence, machine learning can meet the needs of educational big data analysis and prediction. In recent years, machine learning application cases in education and related research based on real data have proliferated at home and abroad, aiming to bring machine learning technology into education and teaching activities. At the policy level, in May and October 2016 the National Science and Technology Council of the United States released two reports, "Preparing for the Future of Artificial Intelligence" and "The National Artificial Intelligence Research and Development Strategic Plan," pointing out that machine learning is the core technology needed to realize and advance AI research. The Chinese government also attaches great importance to artificial intelligence technologies such as machine learning: in April 2018, the Ministry of Education issued the "Education Informatization 2.0 Action Plan," which proposes to "rely on emerging technologies such as artificial intelligence to promote the reform of education models supported by new technologies." In practical research, many foreign scholars began exploring machine learning in education earlier and have made definite progress; domestic researchers began applying machine learning in educational research relatively late but have also achieved varying degrees of success. This study likewise applies machine learning and forecasting techniques to its group of teachers.
## 3. Results and Discussion
### 3.1. Common Regression Algorithms
Regression predicts new values from existing data. Linear regression describes the relationship between variables with a straight line, so that a value can be predicted when new data appear; the model is easy to understand and highly interpretable, which is helpful for decision analysis. Multilayer linear regression studies the regression relationship between one dependent variable and several independent variables and, as a statistical method, both quantifies and explains that relationship. When there is only one independent variable and one dependent variable, the independent variable serves as the main factor explaining the change in the dependent variable, and a straight line approximates the relationship between the two. Such an analysis is called univariate linear regression:

$$y = \alpha + \beta x + \varepsilon, \tag{1}$$

where $\alpha$ and $\beta$ are the regression coefficients and $\varepsilon$ is the random error term. When several independent variables are linearly related to the dependent variable, the analysis becomes multilayer linear regression, which generalizes the univariate model:

$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n + \varepsilon, \tag{2}$$

where $n$ is the number of explanatory variables, $\beta_i$ ($i = 1, 2, \ldots, n$) are partial regression coefficients, and $\varepsilon$ is the random error term.

Extreme gradient boosting (XGBoost) is one of the boosting methods in the family of ensemble algorithms and is an improvement on GBDT. Its objective function is

$$L(\phi) = \sum_{i} l\big(y_i, \hat{y}_i\big) + \sum_{k} \Omega(f_k), \tag{3}$$

$$\Omega(f) = \gamma T + \frac{1}{2}\,\lambda \lVert w \rVert^2, \tag{4}$$

where $l$ is the loss function, usually convex, measuring the difference between the predicted value $\hat{y}_i$ and the actual value $y_i$, $T$ is the number of leaves, and $w$ collects the leaf weights. A second-order Taylor expansion of the loss gives

$$L^{(t)} \approx \sum_{i=1}^{n} \Big[ l\big(y_i, \hat{y}_i^{(t-1)}\big) + g_i f_t(x_i) + \tfrac{1}{2} h_i f_t^2(x_i) \Big] + \Omega(f_t), \tag{5}$$

where $g_i$ is the first derivative and $h_i$ the second derivative of the loss. Denote the set of samples falling on leaf $j$ by

$$I_j = \{\, i \mid q(x_i) = j \,\}. \tag{6}$$

Rewriting the objective leaf by leaf then yields

$$L^{(t)} = \sum_{j=1}^{T} \Big[ \Big(\sum_{i \in I_j} g_i\Big) w_j + \tfrac{1}{2} \Big(\sum_{i \in I_j} h_i + \lambda\Big) w_j^2 \Big] + \gamma T. \tag{7}$$

When the tree structure $q$ is known, the leaf weights $w_j$ have a closed-form solution, and the objective becomes

$$w_j^{*} = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}, \qquad L^{(t)}(q) = -\frac{1}{2} \sum_{j=1}^{T} \frac{\big(\sum_{i \in I_j} g_i\big)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T. \tag{8}$$

XGBoost's advantages are that, on the one hand, it supports linear classification and regression, which speeds up training, and on the other hand, it applies a learning rate to the leaf weights when building trees, shrinking each tree's influence on the model and leaving room for later trees to learn. However, XGBoost must still traverse the data set when splitting nodes, and its presorting step stores not only the feature values but also the indices of the corresponding gradient statistics, so its memory consumption is relatively large.
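As a concrete illustration of the two model families above, the following minimal Python sketch (not the study's code) fits an ordinary linear regression and an XGBoost regressor on synthetic data; the data, feature count, and hyperparameters are illustrative assumptions, with `reg_lambda` and `gamma` playing the roles of $\lambda$ and $\gamma$ in Eq. (4).

```python
# Minimal sketch, assuming synthetic data; not the authors' experimental setup.
import numpy as np
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor  # assumes the xgboost package is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # 10 hypothetical teacher-level predictors
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=500)

# Eq. (2): estimates the intercept alpha and coefficients beta_1..beta_n
linear = LinearRegression().fit(X, y)

# Eqs. (3)-(4): reg_lambda and gamma are the lambda and gamma penalties
booster = XGBRegressor(n_estimators=200, learning_rate=0.1,
                       reg_lambda=1.0, gamma=0.0).fit(X, y)

print(linear.predict(X[:3]), booster.predict(X[:3]))
```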
Model fusion, as the name suggests, combines multiple models and is also known as ensemble learning. In solving real problems, each algorithm has its own advantages but also certain limitations. If multiple algorithms are fused, the resulting learner can outperform any single model, inheriting the advantages of the different algorithms while avoiding some of the shortcomings of each; model fusion is therefore very common in machine learning. Common fusion strategies include voting, averaging, and learning-based methods, and ensemble learning methods include bagging, boosting, and stacking.

The simple average method sums the predictions of all the regression models directly and takes the mean:

$$Y(x) = \frac{1}{n} \sum_{i=1}^{n} \hat{y}_i(x). \tag{9}$$

The weighted average assigns different weights to the models, combines the weights with the base learners' outputs, and then averages (see the sketch at the end of this subsection):

$$Y(x) = \sum_{i=1}^{n} w_i(x)\, \hat{y}_i(x). \tag{10}$$

The method must be chosen according to actual needs. Because the three models to be fused here perform similarly, the most basic voting method is unsuitable; the weighted average is more appropriate, since even similar results still differ.

Random forest samples the data randomly to build a forest of decision trees that meets the task's needs. A decision tree is a basic classifier that generally splits features into two classes (it can also be used for regression); the constructed tree can be viewed as a collection of if-then rules, and its main advantages are that the model is readable and classification is fast. A random forest is thus an ensemble algorithm composed of several decision trees: its basis is the decision tree algorithm, and its core is how to use randomization to construct the forest's many trees. Among ensemble algorithms the idea behind random forest is not complicated, and it is a common method for binary classification, multiclass classification, and regression. Because the decision trees operate independently and in parallel, it saves time and computation, which is why it is regarded as a representative ensemble learning method. Random forests also have shortcomings: they do not produce continuous output, and if the data fall outside the range of the training set their predictions become unreliable, which can sometimes lead to overfitting; this paper does not investigate the overfitting problem further.
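The weighted-average fusion of Eq. (10) can be sketched as follows; the three base learners mirror the kinds of models discussed in this paper, but the data set and the weights are hypothetical stand-ins rather than the study's actual configuration.

```python
# Minimal sketch of weighted-average fusion (Eq. 10) on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=600, n_features=10, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LinearRegression(),
          GradientBoostingRegressor(random_state=0),
          RandomForestRegressor(random_state=0)]
weights = np.array([0.2, 0.4, 0.4])   # hypothetical weights summing to 1

# Stack each base learner's test-set predictions column by column
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
fused = preds @ weights               # Eq. (10); equal weights recover Eq. (9)
```

In practice the weights would be tuned on a validation set rather than fixed by hand.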
### 3.2. Model Setting of Teachers’ Digital Educational Resource Application Ability
In this study, because of the nested structure of the data, effects at the teacher level (level 1) are not treated as completely independent; teachers within the same school (level 2) share the school environment, so a two-level linear model is used as the basic analysis tool. The first-level equations contain teacher-level predictors, and the second-level equations contain school-level predictors. Using a multilevel linear model for cross-level analysis of data with a multilevel, nested structure separates individual effects from environmental effects, so the influence of variables at the two levels can be explored separately. This paper therefore constructs five models and uses the multilayer linear regression model to explore the influence of the school level and the teacher level on teachers' ability to apply digital resources. Figure 1 shows the hypothesized model of the factors influencing teachers' application of digital educational resources.

Figure 1: Hypothetical model of factors influencing the application of digital educational resources for teachers.

The null model is

$$Y_{ij} = \beta_{0j} + r_{ij}, \qquad \beta_{0j} = \gamma_{00} + \mu_{0j}, \tag{11}$$

where $Y_{ij}$ is the digital resource application ability of teacher $i$ in school $j$, $\beta_{0j}$ is the mean ability of teachers in school $j$, $r_{ij}$ is the random error at the teacher level, $\gamma_{00}$ is the overall mean of teachers' ability, and $\mu_{0j}$, the deviation of school $j$'s mean from the overall mean, is the random error at the school level.

In Model 1 and Model 2 the influence (slope) of each predictor on teachers' ability to apply digital educational resources is held constant across schools; both are random-effect covariance models:

$$Y_{ij} = \beta_{0j} + \beta_{1j} X_1 + \beta_{2j} X_2 + \cdots + \beta_{10,j} X_{10} + r_{ij}, \qquad \beta_{cj} = \gamma_{c0} + \mu_{cj}, \quad c = 0, 1, 2, \ldots, 10, \tag{12}$$

where $\beta_{1j}, \ldots, \beta_{10,j}$ are the partial regression coefficients of the predictors $X_1, X_2, \ldots, X_{10}$ on $Y_{ij}$. On the basis of Model 2, the school's geographical location and number of in-service teachers are added to the second-level equation as predictors (Model 3), and the school informatization training variable, the proportion of teachers in school-based training, is then added (Model 4) to examine the impact of school-level factors on teachers' ability to apply digital resources. Model 3 and Model 4 are both nonrandomly varying intercept models.
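A two-level model of the kind in Eqs. (11) and (12) can be fitted, for example, with statsmodels' MixedLM; the sketch below uses simulated teacher-within-school data with hypothetical column names, not the study's questionnaire variables.

```python
# Minimal sketch, assuming simulated data: a random-intercept model per Eq. (11),
# then a Model-1 analogue with one teacher-level predictor per Eq. (12).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_per = 40, 25
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(scale=1.0, size=n_schools)[school]            # school error mu_0j
x1 = rng.normal(size=n_schools * n_per)                      # teacher-level predictor
y = 14 + 0.8 * x1 + u + rng.normal(size=n_schools * n_per)   # teacher error r_ij

df = pd.DataFrame({"ability": y, "x1": x1, "school": school})

null_model = smf.mixedlm("ability ~ 1", df, groups=df["school"]).fit()
model_1 = smf.mixedlm("ability ~ x1", df, groups=df["school"]).fit()
print(null_model.summary())
```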
## 4. Result Analysis and Discussion
### 4.1. Data Acquisition and Preprocessing
The experimental data in this study come from two questionnaires in the National Informatization Research Action: the "Questionnaire on the Development of Informatization of Primary and Secondary School Teachers" and the "Questionnaire on the Development of Informatization in Primary and Secondary Schools." A total of 1,579 questionnaires from primary and secondary schools in the province were selected, and more than 20,000 records were used as the data set for this study. The questionnaires yield four kinds of useful information: first, the background of teachers and their schools, including gender, education, years of teaching, and the number of teachers in the school; second, teachers' ability to apply digital resources, such as how frequently they use informatization in each teaching link; third, teachers' training participation, such as the number, duration, and modes of training attended; and fourth, the school's training provision, such as the number of school-based training sessions and the proportion of teachers participating in them.

In data mining, both the algorithms and the targets place high demands on the data, and raw, mixed data rarely meet experimental requirements. Because educational data are usually collected for particular evaluation objectives and objects, in different scenarios and dimensions, the initially collected data often cannot be mined directly. The data must therefore be processed and cleaned: unifying the data format, reducing noise, and integrating high-quality data lower mining costs, improve operational efficiency, and lead to good experimental results. The data preprocessing workflow is shown in Figure 2.

Figure 2: Data preprocessing process.

Data cleaning routines "clean" the data by filling in missing values, smoothing noisy data, identifying or deleting outliers, and resolving inconsistencies; the goals are format standardization, removal of abnormal data, error correction, and elimination of duplicates. Data integration combines the required data sources into one complete data set, reducing clutter and storage inconsistency and thereby improving the efficiency of analysis and mining. In this study the experimental data came from the two questionnaires, one for teachers and one for schools. Each teacher gave the name of the school where he or she worked, and each school gave its own full name, so each teacher's personal information, training record, and informatization application record can be matched with the background and training record of his or her school. To keep the sample data of high quality and the mining results accurate, records of teachers that could not be matched to a school were eliminated.
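The matching-and-elimination step described above can be sketched in pandas as follows; the column names and toy records are hypothetical, but the inner join on the school name reproduces the idea of dropping teachers who cannot be matched to a school.

```python
# Minimal sketch, assuming toy teacher-level and school-level records.
import pandas as pd

teachers = pd.DataFrame({
    "teacher_id": [1, 2, 3],
    "school_name": ["School A", "School B", "School X"],  # no match for School X
    "trainings_attended": [4, 2, 7],
})
schools = pd.DataFrame({
    "school_name": ["School A", "School B"],
    "school_based_training_ratio": [0.6, 0.8],
})

# Inner join on the school name drops teachers with no matching school record,
# mirroring the elimination step described above.
merged = teachers.merge(schools, on="school_name", how="inner")
merged = merged.drop_duplicates().dropna()   # duplicate and missing-value cleanup
```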
### 4.2. Experimental Results and Analysis
After solving for the model parameters and applying them to the test set, the performance is as shown in Figure 3. The predictions fall mainly in the range of 12 to 16, while values below 12 and above 16 are not fitted at all. This is consistent with the character of a linear model, whose objective function is geometrically a straight line.

Figure 3: Multilayer linear regression test set fitting results.

The advantage of the multilayer linear regression model is that it does not waste samples and can study individual differences while ensuring that the error-independence assumption is not violated. Its shortcomings are those of most single machine-learning models: its learning capacity is far below that of ensemble learning, and its predictive performance is relatively weak. Figures 4 and 5 show the evaluation indicators of the multilayer linear regression model. Although the overall error values are not high, the MAPE exceeds 15%, showing that the model's predictions fall well short of the best attainable fit.

Figure 4: Regression curve of the multilayer linear regression model.

Figure 5: Residual distribution diagram.

Even if a single ensemble model is regarded as one model, problems such as insufficient data utilization and low prediction accuracy often remain, and in general a fusion model outperforms a single model. The author therefore proposes two low-error, machine-learning-based model fusion methods for predicting teachers' digital education resource application ability, so as to improve the prediction. The model's prediction and fit on the test set are shown in Figure 6: apart from minor differences, the predictions are fairly good. On the MAPE index, the multilayer combination model based on the random forest performs best, verifying the soundness of the fusion idea. Since its MAPE indicates predictions closest to the actual values and its residual distribution is also the smallest, the multilayer combination model based on the random forest is, overall, the best choice of prediction scheme.
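For reference, the four indicators used to compare the models can be computed as in the following sketch; `y_true` and `y_pred` are hypothetical stand-ins for a model's test-set targets and predictions, with RMSE and MAPE derived from the sklearn-provided MSE and MAE.

```python
# Minimal sketch of the MSE / MAE / RMSE / MAPE comparison, assuming toy values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

def evaluate(y_true, y_pred):
    """Return the four indicators used to compare the regression models."""
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MSE": mse,
        "MAE": mean_absolute_error(y_true, y_pred),
        "RMSE": float(np.sqrt(mse)),
        "MAPE": float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100),  # percent
    }

y_true = np.array([14.0, 12.5, 15.2, 13.8])   # hypothetical test-set targets
y_pred = np.array([13.6, 12.9, 14.7, 14.1])   # one model's predictions
print(evaluate(y_true, y_pred))
```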
Figure 6: Comparison of the multilayer combination test set of the random forest.

As a case display and analysis of applying the predicted values of teachers' digital educational resource application ability, this paper selects two training indicators as representatives to analyze whether the training needs of low-level teachers can be identified accurately on those indicators. The results give education administration departments and schools a basis for designing training programs and determining their scale and participant lists. To make the identification effect more intuitive, the training participation of 185 middle- and low-level teachers in district A is displayed as scatter plots; the results are shown in Figure 7.

Figure 7: Predicted number of government training participation of low- and medium-level teachers by region.

Figure 8 shows that these teachers are also fairly active in informatization training: the number of government training sessions attended and the hours of theoretical instruction are both above average. Although the number of school-based training sessions, the total annual hours, and the number of training modes are slightly deficient, the gaps are small; if these aspects were improved, teachers' ability to apply digital education resources might be further enhanced.

Figure 8: Teachers' participation in the dimension of informatization training.

Therefore, to improve the application of digital education resources and expand their coverage, the content construction of digital education resources should be improved first. Second, a campus culture should be created that supports teachers in integrating and using digital education resources. Finally, teachers should receive training in integrating digital education resources and be given a sense of self-efficacy in using them.
## 5. Conclusion
Since modern times, China's educational philosophy has been deeply influenced by foreign educational thought. In the early years of the People's Republic, the "class teaching system" of the former Soviet Union was largely followed, with certain changes and adjustments to educational concepts and teaching modes made for China's national conditions. Influenced by the traditional examination-based selection system, this later developed into "exam-oriented education" with Chinese characteristics. In the traditional educational concept and teaching mode, teachers are the main actors of teaching activities, mainly delivering one-way collective instruction to students; the teaching methods lack innovation, the content attends only to students' cultural knowledge, and the purpose of teaching is, to a great extent, to secure entrance to the next stage of schooling. The new educational concept advocates all-round development, attends to developing students' potential, and respects individual initiative. Under its influence a large number of new teaching modes have emerged, whose common point is that the main body of teaching activities has shifted from teachers to students and information flows have changed from one-way transmission from teacher to student to multidirectional communication between them. In this setting, teachers' ability to apply teaching resources becomes very important. Based on the collected educational data, this paper mines the specific factors that affect teachers' ability to apply digital educational resources and uses these objective salient features to construct several machine learning regression models to predict teachers' digital educational resource application ability scores. Through comparison and optimization of model performance, a prediction method better suited to the studied group was found. Comparing the models with MSE, MAE, RMSE, and MAPE as performance indicators, performance improves on every indicator in the order multilayer linear regression < light gradient boosting < extreme gradient boosting < random forest. Moreover, of the two ensemble approaches, the bagging idea represented by the random forest suits this group better than the two gradient-boosting models.
---
*Source: 1012803-2022-11-21.xml* (2022)
# The Impact of VR-CALM Intervention Based on VR on Psychological Distress and Symptom Management in Breast Cancer Survivors
**Authors:** Xiuqing Zhang; Senbang Yao; Menglian Wang; Xiangxiang Yin; Ziran Bi; Yanyan Jing; Huaidong Cheng
**Journal:** Journal of Oncology
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012813
---
## Abstract
Objective. To evaluate the effectiveness and feasibility of Managing Cancer and Living Meaningfully based on VR (VR-CALM), used to manage the expected symptoms of cancer itself, relieve psychological distress, and improve quality of life (QOL) in Chinese breast cancer survivors (BCs). Methods. Ninety-eight patients with breast cancer were recruited and randomly assigned to the VR-CALM group or the care-as-usual (CAU) group. All patients were evaluated with the Functional Assessment of Cancer Therapy-Breast (FACT-B), Distress Thermometer (DT), Concerns About Recurrence Scale (CARS), Piper Fatigue Scale (PFS), Pittsburgh Sleep Quality Index (PSQI), Self-Rating Anxiety Scale (SAS), and Self-Rating Depression Scale (SDS) before and after VR-CALM or CAU, and all scores were compared between the VR-CALM group and the control group. Results. After the intervention, patients in the VR-CALM group showed significantly lower levels of distress, anxiety, depression, sleep disorders, and fatigue (t = −6.829, −5.819, −2.094, −3.031, −10.082; P ≤ 0.001, 0.001, 0.05, 0.01, 0.001, respectively) and a higher quality of life (t = 8.216, P ≤ 0.001) than the CAU group. Compared with before the intervention, patients in the VR-CALM group showed lower distress and a remarkable improvement in QOL afterward (t = 11.521, −10.379; P ≤ 0.001, 0.001). The preintervention questionnaires revealed no significant between-group differences in distress, anxiety, depression, sleep disorders, fatigue, or quality of life. Conclusion. VR-CALM is a psychotherapy tailored to the needs of patients with breast cancer. This research innovatively used a VR-based CALM intervention to improve psychological and chronic symptoms in BCs. The results of the present study indicate that VR-CALM improves QOL and relieves psychological distress, anxiety, depression, sleep disorders, and fatigue in BCs.
---
## Body
## 1. Introduction
People with cancer are living longer than in the past, especially breast cancer survivors (BCs). At present, the five-year survival rate of breast cancer is as high as 90%, far higher than that of other cancers, indicating that cancer has been transformed into a chronic disease [1]. In 2020, breast cancer in women surpassed lung cancer for the first time as the most common cancer worldwide, accounting for approximately 11.7 percent of new cancer cases according to the Global Cancer Observatory; the long survival of BCs reflects improvements in breast cancer diagnosis and treatment, as well as whole-process management of BCs. Since 1999, the NCCN has provided updated guidelines for the management of psychological distress in cancer [2]. According to studies, around one-third of cancer patients experience psychological discomfort, and clinically significant psychological symptoms have been found in 38% of cancer survivors [3, 4]. Most cancer patients experience symptoms, and their incidence and severity vary by cancer type, stage, treatment, and comorbidities [5, 6]. Symptoms can be caused by the cancer itself, by early or late therapeutic side effects, and by comorbid conditions; at the same time, symptom management is critical for treatment effectiveness, necessitating the development of innovative approaches by healthcare professionals to improve quality of life [7].

Gary Rodin suggested that CALM has a positive effect on symptom management in patients with advanced cancer, as CALM is designed to help patients live with cancer and decrease psychological distress [8]. CALM comprises four major components: (1) symptom control and communication with health care providers; (2) changes in self and relationships with others; (3) spiritual well-being and the meaning of life; and (4) communicating about future concerns, hopes, and mortality [9]. One small-sample randomized controlled trial has demonstrated the effectiveness of the CALM intervention in improving cognitive impairment and QOL and relieving psychological distress in breast cancer patients [10]. Another study demonstrated the feasibility and treatment potential of VR-CBT in patients with generalized social anxiety disorder (SAD); its results suggest that VR-CBT may be effective in reducing anxiety as well as depression and can improve quality of life [11]. The present study was conducted through virtual reality (VR) technology, which immerses patients in a computer-generated virtual environment via a head-mounted device that projects a virtual picture together with noise-cancelling audio. Advances in VR software allow for increased involvement and greater immersion [12]. Our research team’s original VR scenarios include a beach house and Butterfly Valley, with original lead phrases. One study showed that highly interactive VR systems are more effective [13]; interactivity and immersion are therefore key factors affecting VR efficacy.

The VR-CALM intervention applies VR technology to the CALM intervention process. The traditional CALM intervention mode is based on communication between CALM therapists and patients. In VR-CALM, patients are immersed in a beautiful virtual environment, such as Butterfly Valley or the seaside, while listening to ambient sounds as well as instructions provided by the CALM therapist.
The instructions follow the CALM intervention manual and offer intervention in four areas: symptom management and health guidance; analysis of how illness has changed patients and their relationships with those close to them; exploration of meaning and purpose in life; and discussion of the future and hope. At the end of each session, patients are given a dedicated time and place to “speak,” where they can tell what is on their mind and why they are unhappy, and they are provided with specific guidance about their specific problems.

Although the effectiveness of the CALM intervention and of VR-CBT has already been demonstrated, no study has discussed whether VR-CALM can manage the expected symptoms of breast cancer itself. In this context, we undertook this randomized controlled trial to explore the usefulness of the VR-CALM intervention relative to care as usual for both symptom management and psychological distress, including quality of life, anxiety, depression, distress, concerns about recurrence, fatigue, and somnipathy.
## 2. Materials and Methods
### 2.1. Design
This is a nonblind, parallel assignment randomized controlled trial (RCT).
### 2.2. Sample
A total of 98 breast cancer patients who had received at least 2 courses of regular chemotherapy were recruited from the Second Affiliated Hospital of Anhui Medical University between January 2021 and August 2021.
### 2.3. Randomization
The statistical staff of our team, who were not involved in the experiment, managed the randomization. After participants’ baseline assessments, the statistical staff provided computer-generated random assignments. The researchers were unaware of the allocation sequence, which was written on a card, sealed in an envelope, and opened at the time of assignment.
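The study describes the allocation only as computer-generated. The sketch below is a hypothetical illustration of one common way such a 1:1 sequence is produced (permuted blocks); the block size, seed, and function name are illustrative assumptions, not details from the trial.

```python
# Hypothetical sketch of computer-generated 1:1 allocation via permuted blocks.
import random

def block_randomize(n_participants, block_size=4, seed=42):
    """Return a 1:1 allocation list using permuted blocks (illustrative only)."""
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_participants:
        block = ["VR-CALM"] * (block_size // 2) + ["CAU"] * (block_size // 2)
        rng.shuffle(block)  # permute each block independently
        arms.extend(block)
    return arms[:n_participants]

# e.g., assignments then sealed in numbered envelopes and opened at dispensing
allocation = block_randomize(90)
print(allocation[:8])
```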
### 2.4. Inclusion and Exclusion Criteria
Inclusion criteria were as follows: (1) pathologically diagnosed breast cancer, with completion of at least 2 cycles of regular chemotherapy and no intolerable side effects; (2) Karnofsky performance status score ≥ 80; (3) no auditory, visual, language, or other functional disorders; and (4) age from 18 to 70, with sufficient ability to complete the tests necessary for the study. Exclusion criteria were as follows: (1) advanced cachexia; (2) obvious anxiety, depression, or other mental symptoms; (3) lack of adequate baseline bone marrow and organ reserves, or serious heart, liver, kidney, brain, or hematopoietic disease; (4) brain metastases or other brain diseases; and (5) a history of alcohol or drug dependence or current use of cognition-improving medications.
### 2.5. Procedure
BCs were identified through prescreening of test results, and eligible patients were recruited during hospital stays in the oncology and breast surgery departments. Oncologists presented the experiment and the VR-CALM intervention methods to patients and obtained informed consent from the patients and their families. The researchers next assessed whether the patients met the requirements and performed baseline measurements. Statisticians then randomly divided the participants into two groups. The intervention was conducted in an oncology conference room to ensure patient privacy during hospitalization. After six cycles of intervention, complete follow-up assessments were conducted during patients’ hospitalization or by telephone.
### 2.6. Measures
#### 2.6.1. QoL
Quality of life was assessed with the Functional Assessment of Cancer Therapy-Breast cancer patient (FACT-B). Quality of life scores were recorded both at baseline and after 6 cycles of the VR-CALM intervention. This study used the fourth version of FACT-B, which includes four domains: physical well-being, social/family well-being, emotional well-being, and functional well-being, plus a subscale for BCs (scored out of 28, 28, 24, 36, and 20 points, respectively), and which has good reliability and validity. The higher the total score, the better the QoL.
#### 2.6.2. Distress
The Distress Thermometer (DT) was developed as a Distress Management Screening Measure (DMSM) for assessing the level of distress and is recommended by the National Comprehensive Cancer Network. It is a visual analog scale ranging from 0 (no distress) to 10 (extreme distress). Patients with a score above 4 are considered psychologically distressed.
#### 2.6.3. Anxiety and Depression
The Self-Rating Anxiety Scale (SAS) is a norm-referenced, 20-item scale used as a screener for anxiety disorders. A score of 40 or more indicates anxiety, and the higher the total score, the higher the degree of anxiety. The main indicator of the Self-Rating Depression Scale (SDS) is the total score, with a cutoff for depression of 50 points; a higher score indicates a higher degree of depression.
#### 2.6.4. Concerns about Recurrence
The Concerns About Recurrence Scale (CARS) consists of two domains: the overall Fear of Cancer Recurrence (FCR) and the nature of women’s FCR. The first part contains 4 items measuring the perceived likelihood of cancer recurrence; how often participants thought about a recurrence; how long they thought about a possible recurrence; and how emotionally painful thoughts about a recurrence were [14]. The second section measures the nature of women’s concerns about recurrence. The higher the total score, the more significant the FCR tendency.
#### 2.6.5. Fatigue
The Piper Fatigue Scale (PFS) is the most effective instrument for assessing the perceived fatigue of patients with chronic disease, especially cancer. It has 24 items, each with 11 response categories from 0 to 10. The severity of fatigue represented by each score is as follows: 0 = no fatigue, 1–3 = mild fatigue, 4–6 = moderate fatigue, and 7–10 = severe fatigue. Greater total scores indicate a higher degree of fatigue.
#### 2.6.6. Somnipathy
The Pittsburgh Sleep Quality Index (PSQI) is the most commonly used universal measure of sleep quality in both clinical and research settings. The PSQI dates back to 1988 and was not developed for any specific target population [15]. The PSQI includes 7 components: sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of hypnotic drugs, and daytime dysfunction. Previous studies have shown that each component score measures a particular aspect of the construct of sleep quality [16]. The final score is the sum of the seven component scores, and higher scores indicate poorer sleep quality.
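To make the scoring rules in Section 2.6 concrete, the sketch below aggregates illustrative PSQI component scores and maps a Piper Fatigue Scale score onto the severity bands given above. The input values are invented; real scoring follows each instrument’s manual.

```python
# Minimal sketch of two scoring rules from Section 2.6; inputs are invented.

def psqi_total(components):
    """PSQI global score: the sum of the seven component scores."""
    assert len(components) == 7, "PSQI has exactly seven components"
    return sum(components)

def pfs_severity(score):
    """Map a Piper Fatigue Scale score (0-10 scale) to a severity band."""
    if score == 0:
        return "no fatigue"
    if score <= 3:
        return "mild"
    if score <= 6:
        return "moderate"
    return "severe"  # 7-10 on the usual PFS banding

print(psqi_total([1, 2, 0, 1, 2, 0, 1]))  # -> 7 (higher = poorer sleep)
print(pfs_severity(4.5))                   # -> "moderate"
```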
### 2.7. Intervention
The VR-CALM intervention was conducted by 1 psychologist, 1 oncologist, and 3 postgraduates holding psychological consultant certificates from the Department of Oncology, the Second Affiliated Hospital of Anhui Medical University. All VR-CALM therapists received relevant professional training and ongoing supervision from clinical researchers and were qualified to provide the VR-CALM intervention to patients. A group supervision meeting was held once a week for case discussion and summary, to ensure that the VR-CALM intervention and the trial proceeded smoothly. VR-CALM is a brief, manualized form of individual psychotherapy [9, 17]. The intervention course lasted 3 months, during which patients in the intervention group received 6 separate VR-CALM treatments of 30 min each, with the first 3 treatments completed in the first month. The VR equipment consisted of head-mounted glasses and two controllers. Patients wearing the equipment find themselves immersed in a scenic spot, such as the seaside or Butterfly Valley, and are allowed to walk around. During this process, patients can hear wind and intermittent guidance, and they are able to touch butterflies with the controllers in their hands. This immersive, audio-visual experience constitutes a complete intervention model. Each patient experienced each scene at least twice and could then choose the scenery they preferred for subsequent interventions. This also ensured the consistency of the intervention across patients.
### 2.8. Statistical Analysis
All data were collected and analysed with the Statistical Package for the Social Sciences (SPSS, v22.0) and are expressed as the mean ± SD. The Chi-squared test was used to compare categorical data; the paired t-test was used for before-after comparisons within the same group; and the unpaired t-test was used to compare differences between groups. Differences were considered statistically significant at P<0.05.
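The analysis plan above (paired t-tests within groups, unpaired t-tests between groups, and chi-squared tests for categorical data) was run in SPSS. The sketch below reproduces the same three tests with scipy on made-up example data, purely as a reproducible illustration of the plan, not a reanalysis of the trial.

```python
# Minimal sketch of the stated analysis plan using scipy; data are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(78, 10, size=38)          # e.g., FACT-B before intervention
post = pre + rng.normal(22, 8, size=38)    # e.g., FACT-B after intervention
control_post = rng.normal(79, 7, size=39)  # e.g., CAU group after CAU

# Within-group comparison (before vs. after in the same group): paired t-test.
t_within, p_within = stats.ttest_rel(pre, post)

# Between-group comparison (VR-CALM vs. CAU): unpaired t-test.
t_between, p_between = stats.ttest_ind(post, control_post)

# Categorical baseline data (e.g., KPS 80 vs. 90 by group): chi-squared test.
table = np.array([[14, 12], [24, 27]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"paired: t={t_within:.3f}, p={p_within:.4f}")
print(f"unpaired: t={t_between:.3f}, p={p_between:.4f}")
print(f"chi2={chi2:.3f}, p={p_chi:.3f}")
```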
## 3. Results
### 3.1. Baseline Characteristics of the Enrolled Patients and Study Flow
Patients’ demographics, including age, education, surgical method, and postoperative pathological information, were obtained at enrolment (Table 1). The results show no significant between-group differences in demographic information, including age (t = 0.708, P=0.481) and years of education (t = −1.182, P=0.241), or in clinical information, including the Karnofsky performance status (χ² = 0.317, P=0.573) and tumour stage (χ² = 0.445, P=0.961). The flow diagram shows that 98 breast cancer patients were eligible to take part in the study between January 2021 and August 2021 (Figure 1), 90 of whom were randomly assigned to VR-CALM (n = 45) or CAU (n = 45). However, 7 patients did not complete the VR-CALM intervention and 6 patients in the CAU group did not complete the final assessment, leaving 38 patients in the VR-CALM group and 39 in the CAU group.

Table 1
Comparison of demographic characteristics and clinical data of breast cancer between the VR-CALM group and the CAU group.
| Characteristic | VR-CALM | CAU | t/χ² | P |
| --- | --- | --- | --- | --- |
| Age (years) | 52.29 (7.686) | 51.03 (7.979) | 0.708 | 0.481 |
| Education (years) | 8.55 (2.435) | 9.21 (2.408) | −1.182 | 0.241 |
| Surgical method | | | 0.112 | 0.738 |
| Breast-conserving surgery | 3 | 5 | | |
| Mastectomy | 35 | 34 | | |
| Tumour stage | | | 0.445 | 0.961 |
| I | 6 | 7 | | |
| II | 16 | 18 | | |
| III | 11 | 10 | | |
| IV | 5 | 4 | | |
| Molecular classification | | | 1.321 | 0.724 |
| Luminal A | 4 | 6 | | |
| Luminal B | 17 | 20 | | |
| HER-2 overexpression | 12 | 10 | | |
| TNBC | 5 | 3 | | |
| Pathological type | | | 1.021 | 0.692 |
| Invasive carcinoma, no special type | 34 | 33 | | |
| Invasive carcinoma, special type | 1 | 3 | | |
| Noninvasive carcinoma | 3 | 3 | | |
| KPS | | | 0.317 | 0.573 |
| 80 | 14 | 12 | | |
| 90 | 24 | 27 | | |

Data are presented as the mean (SD) or n. Abbreviations: SD, standard deviation; KPS, Karnofsky performance status; VR-CALM, Managing Cancer and Living Meaningfully based on VR; CAU, care as usual.

Figure 1
Consort flow diagram.
### 3.2. Comparison of the Symptom Assessment before and after Intervention Periods within Groups
The performance of the BCs on the anxiety and depression tests, Concerns About Recurrence Scale, fatigue test, somnipathy test, DT, and QOL evaluation scale before and after VR-CALM or CAU is shown in Table 2. In the VR-CALM group, there were remarkable changes before versus after the intervention in the overall scores of the FACT-B (t = −10.379, P≤0.001), DT (t = 11.521, P≤0.001), SAS (t = 4.680, P≤0.001), SDS (t = 4.101, P≤0.001), CARS (t = 2.742, P≤0.01), PFS (t = 9.913, P≤0.001), and PSQI (t = 6.066, P≤0.001). In the CAU group, there were substantial variations before versus after CAU in the total scores of the FACT-B (t = −2.988, P≤0.01), SAS (t = −4.817, P≤0.001), CARS (t = 2.370, P≤0.05), and DT (t = 3.328, P≤0.01), while no significant differences were found in the SDS (t = −1.517, P=0.138), PSQI (t = −1.839, P=0.074), or PFS (t = 1.515, P=0.138). Taken together, the magnitude of change in the VR-CALM group was larger than in the CAU group, even though both groups showed significant change on several scales.

Table 2
Separate comparison of symptoms in the 2 groups of breast cancer patients before and after VR-CALM or CAU.
| Group | Timepoint | N | FACT-B | DT | SAS | SDS | CARS | PFS | PSQI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| VR-CALM | BCT | 38 | 78.74 ± 10.125 | 5.71 ± 1.250 | 51.66 ± 11.252 | 51.32 ± 11.552 | 59.34 ± 13.581 | 103.03 ± 8.645 | 10.45 ± 3.438 |
| VR-CALM | ACT | 38 | 100.47 ± 14.33 | 2.95 ± 1.845 | 44.16 ± 11.083 | 46.63 ± 9.824 | 55.82 ± 10.557 | 77.79 ± 11.548 | 8.74 ± 2.565 |
| | t | | −10.379 | 11.521 | 4.680 | 4.101 | 2.742 | 9.913 | 6.066 |
| | p | | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.01 | ≤0.001 | ≤0.001 |
| CAU | BCT | 39 | 75.44 ± 10.39 | 6.13 ± 0.695 | 49.31 ± 6.814 | 48.64 ± 4.934 | 55.00 ± 4.605 | 101.62 ± 9.35 | 10.05 ± 3.426 |
| CAU | ACT | 39 | 79.44 ± 6.703 | 5.49 ± 1.393 | 55.21 ± 3.806 | 50.21 ± 3.806 | 53.49 ± 6.328 | 99.77 ± 6.964 | 11.26 ± 4.494 |
| | t | | −2.988 | 3.328 | −4.817 | −1.517 | 2.370 | 1.515 | −1.839 |
| | p | | ≤0.01 | ≤0.01 | ≤0.001 | 0.138 | ≤0.05 | 0.138 | 0.074 |

Note. FACT-B, Functional Assessment of Cancer Therapy-Breast cancer patient; DT, Distress Thermometer; SAS, Self-Rating Anxiety Scale; SDS, Self-Rating Depression Scale; CARS, Concerns About Recurrence Scale; PFS, Piper Fatigue Scale; PSQI, Pittsburgh Sleep Quality Index; BCT, before CALM/CAU treatment; ACT, after CALM/CAU treatment; SD, standard deviation. Data are presented as mean ± SD.
### 3.3. Comparison of the Symptom Assessment before and after Intervention Periods between Groups
The differences between the two groups before and after VR-CALM or CAU are shown in Table 3. At the start of the study, there were no statistically significant between-group differences on any of the scale scores: FACT-B (t = 1.411, P=0.162), DT (t = −1.806, P=0.076), SAS (t = 1.105, P=0.273), SDS (t = 1.315, P=0.194), CARS (t = 1.869, P=0.068), PFS (t = 0.687, P=0.494), and PSQI (t = 0.506, P=0.614); that is, the two groups did not differ at baseline. Three months after the interventions, however, the two groups showed statistically significant differences on most scales: FACT-B (t = 8.216, P≤0.001), DT (t = −6.829, P≤0.001), SAS (t = −5.819, P≤0.001), SDS (t = −2.094, P≤0.05), PFS (t = −10.082, P≤0.001), and PSQI (t = −3.031, P≤0.01), but not on the CARS (t = 1.170, P=0.247).

Table 3
Comparison of symptoms between the VR-CALM group and the CAU group of breast cancer patients before and after VR-CALM or CAU.
| Timepoint | Group | N | FACT-B | DT | SAS | SDS | CARS | PFS | PSQI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Before VR-CALM or CAU | VR-CALM | 38 | 78.74 ± 10.125 | 5.71 ± 1.250 | 51.66 ± 11.252 | 51.32 ± 11.552 | 59.34 ± 13.581 | 103.03 ± 8.645 | 10.45 ± 3.438 |
| | CAU | 39 | 75.44 ± 10.39 | 6.13 ± 0.695 | 49.31 ± 6.814 | 48.64 ± 4.934 | 55.00 ± 4.605 | 101.62 ± 9.35 | 10.05 ± 3.426 |
| | t | | 1.411 | −1.806 | 1.105 | 1.315 | 1.869 | 0.687 | 0.506 |
| | p | | 0.162 | 0.076 | 0.273 | 0.194 | 0.068 | 0.494 | 0.614 |
| After VR-CALM or CAU | VR-CALM | 38 | 100.47 ± 14.33 | 2.95 ± 1.845 | 44.16 ± 11.083 | 46.63 ± 9.824 | 55.82 ± 10.557 | 77.79 ± 11.548 | 8.74 ± 2.565 |
| | CAU | 39 | 79.44 ± 6.703 | 5.49 ± 1.393 | 55.21 ± 3.806 | 50.21 ± 3.806 | 53.49 ± 6.328 | 99.77 ± 6.964 | 11.26 ± 4.494 |
| | t | | 8.216 | −6.829 | −5.819 | −2.094 | 1.170 | −10.082 | −3.031 |
| | p | | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.05 | 0.247 | ≤0.001 | ≤0.01 |

Note. FACT-B, Functional Assessment of Cancer Therapy-Breast cancer patient; DT, Distress Thermometer; SAS, Self-Rating Anxiety Scale; SDS, Self-Rating Depression Scale; CARS, Concerns About Recurrence Scale; PFS, Piper Fatigue Scale; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation. Data are presented as mean ± SD.
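Because only summary statistics (mean ± SD and n) are published, a reader can sanity-check the between-group t values in Table 3 directly from those summaries. The sketch below does so for the post-intervention FACT-B comparison using scipy’s pooled-variance (Student) test; it is an illustrative check, not part of the original analysis.

```python
# Recompute a between-group t statistic from the summary data in Table 3.
from scipy import stats

# FACT-B after intervention: VR-CALM 100.47 +/- 14.33 (n=38),
# CAU 79.44 +/- 6.703 (n=39), as reported above.
t, p = stats.ttest_ind_from_stats(mean1=100.47, std1=14.33, nobs1=38,
                                  mean2=79.44, std2=6.703, nobs2=39)
print(f"t = {t:.3f}, p = {p:.2e}")  # close to the reported t = 8.216
```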
## 4. Discussion
The purpose of this study was to evaluate the effectiveness of the VR-CALM intervention in symptom management and psychological distress relief in BCs. Our study found that the VR-CALM intervention was more effective than CAU on most of the variables measured (QOL, psychological distress, anxiety, depression, sleep disorders, and fatigue). These findings are consistent with previous studies [8, 17–22]. International bodies such as the European Palliative Care Research Collaborative (EPCRC), the Worldwide Palliative Care Alliance, the World Health Organization, and the International Psycho-Oncology Society have highlighted that psychological care is beneficial for patients with advanced cancer. VR-CALM, in turn, is a psychological intervention tailored to each individual patient, aiming to prevent and treat adverse psychological reactions, improve quality of life, and give meaning to life.

The CALM intervention has proven to be a useful treatment, providing a systematic approach to alleviating suffering in cancer patients [8]. CALM builds on a decade of theoretical and empirical work and focuses on the influence of many factors, including the severity of physical symptoms, attachment style, and other interpersonal variables [23]. CALM is well suited to addressing the practical and existential problems faced by cancer patients [9], including symptom management, role change, and mental, existential, and death-related issues. According to the CALM therapy manual, the components that contribute to its therapeutic effectiveness are the creation of meaning, the renegotiation of attachment security, the regulation of emotion, and mentalization, all within a truly supportive relationship [24].

CALM is unique in being based on relationship, attachment, and existentialist theories and in specifically addressing the particular issues of advanced cancer. Preliminary results [14, 25] and results from a large randomized controlled trial in Toronto [26] are encouraging, suggesting that CALM is a promising intervention for patients with advanced or metastatic cancer. Because culture shapes cognitive, emotional, and behavioural responses to cancer and cancer treatment, awareness and knowledge of treatment options, and acceptance of psychological interventions [27–29], the use of specific interventions (e.g., CALM) needs to be examined in its specific cultural context. Effective mechanisms may include offering patients the opportunity to communicate with doctors, nurses, and other health care providers; addressing the impact of disease on their self-awareness and family relationships; finding or regaining meaning and goals in life; expressing and managing fears and desires related to the end of life; and beginning to prepare for the end of life [26]. Previous evidence has indicated changes in cytokine levels in BCs with cognitive impairment, and a correlation has been noted between cognition and QOL in BCs [30]; this may be an underlying mechanism by which the CALM intervention improves QOL.

Many studies and clinical trials use VR as a simulation, interaction, and distraction tool for patients with mental disorders such as posttraumatic stress disorder (PTSD), anxiety, specific phobias, schizophrenia, autism, dementia, and severe stress [31]. A study has shown that VR-based interventions offer particular advantages compared with traditional cancer symptom management interventions [32]. First, VR-based cognitive training allows cancer patients to learn [32].
Second, VR-based interventions provide immediate feedback on patient performance and can be adapted to suit patient needs [33]. In addition, VR-based interventions combine the latest real-time graphics and imaging technologies to let patients experience a multitude of visual and auditory stimuli in a computer-generated virtual environment that meets their rehabilitation needs [34, 35]. A previous study has shown that distracting interventions can effectively alleviate the adverse reactions and symptoms of cancer patients undergoing chemotherapy [36]. Among these interventions, virtual reality, with its powerful capacity to capture attention, has proved effective for cancer patients in different settings [37]. VR allows patients to temporarily break away from the medical environment and immerse themselves in a pleasant atmosphere, thus producing pleasant emotions and reducing negative ones [12, 38]. Previous literature has indicated that VR is a useful intervention for relieving anxiety, depression, pain, and fatigue during chemotherapy [32, 39–44]. However, no study has proven its effectiveness for symptoms caused by the cancer itself, which last longer and are more destructive than chemotherapy-related symptoms. Moreover, discomfort caused by wearing a VR device, including headaches, eyestrain, and nausea, is a confounding factor that cannot be ruled out [45].

Psychological symptoms should be comprehensively assessed in all patients. In the latest NCCN guidelines, experts suggest that complete symptom management should provide psychological and emotional support to both cancer patients and their family caregivers.

Distress is a multifactorial unpleasant experience that may impair patients’ ability to cope with cancer, its symptoms, and its treatment. Distress should be recognized, monitored, documented, and treated promptly at all stages and in all circumstances [2]. In addition, patients should ideally be assessed for psychological distress at each visit, or at least during the first visit and at appropriate intervals thereafter [2]. The present study found that VR-CALM is effective in relieving distress in BCs; the CAU group also showed some remission after the study period, but a difference remained in favour of the VR-CALM group.

Previous studies estimated that 10% of cancer survivors live with anxiety and 16% have a major depressive disorder [46]. The study by Madhusudhana et al. confirmed that anxiety and depression are associated with longer treatment initiation intervals [47]. It is therefore necessary for oncologists to address this anxiety and depression. Our findings confirmed the usefulness of VR-CALM in reducing BCs’ anxiety and depression.

Fear of cancer recurrence (FCR) denotes fear, worry, or concern about cancer returning or progressing [48]. The majority of BCs are concerned about possible disease progression and fear cancer recurrence [49]. The estimated prevalence of moderate to high levels of FCR ranges from 24% to 56%, and FCR is one of the most common chronic and severe problems for cancer patients [50]. A systematic review shows that most BCs have problems in dealing with FCR and need specialists’ help [51]. Cognitive Behavioural Therapy (CBT) is a common treatment for FCR, as its effectiveness has been proven in previous studies [1, 52–54].
This study did not find evidence that either VR-CALM or CAU eases fear of cancer recurrence in BCs.

Fatigue is a subjective and unpleasant symptom, a general feeling that interferes with normal work and life [55]. Previous studies have shown that one-third of breast cancer patients have persistent sleep disturbances [56], and sleep disorders are associated with a greater fatality risk in BCs [57, 58]. A randomized trial revealed that, compared with usual care, a physical activity intervention improved self-reported sleep quality [59]. The present study revealed that fatigue decreased in both groups, with a significant and clearly more pronounced effect in the VR-CALM group.

Quality of life assessment is an important criterion for women recuperating from cancer [60, 61]. Research by Ferguson et al. suggested that CBT can compensate for cognitive deficits and improve QOL [62]. Although medications such as methylphenidate, modafinil, and epoetin may improve cognition by reducing fatigue and boosting alertness, their effectiveness has not been proven in previous studies [63]. Regular physical exercise has been advocated as an effective tool for relieving fatigue, stress, and depression and improving quality of life in BCs [64–66]. The present study found that, after the VR-CALM intervention, BCs in the VR-CALM group had a better quality of life than those in the CAU group.

Breast cancer patients who participated in VR-CALM reported that it provided an opportunity to get out of the ward and the house and see the outside world. VR-CALM provided a time and place for patient-to-patient communication as well as opportunities for further communication with doctors and nurses. In addition to discussion of disturbing and frightening issues such as treatment and disease prognosis, family communication was enhanced and the sharing of family experience was facilitated. The primary goal of breast cancer patients is to address the issues troubling them in daily life and the survival and psychological issues that arise after a cancer diagnosis, which highlights the importance of the safe space provided by the VR-CALM intervention. Although cancer patients feel loved and cared for by their families, they also want to alleviate the pain caused by their disease, which can lead to further anxiety and depression due to family and financial burdens. Patients in the VR-CALM group reported after the intervention that the VR immersive experience provided them with emotional relief and relaxation, and this was considered an achievable goal in this population. The importance of CALM therapists was also highlighted in the course of this study: CALM therapists show nonjudgmental compassion and an emphasis on reflection. Patients indicated that they valued the therapist’s professional point of view, which helped solve problems and made patients feel that someone understood their situation and experience. This suggests that oncology professionals who are trained in psychotherapy and have compassion and understanding for patients can offer CALM interventions. The study also found that an advantage of the CALM intervention is that oncologists can be trained to provide CALM, as long as the integrity of the treatment is maintained through continuous monitoring.
Although there may be subject-specific differences in oncological knowledge among therapists, there is much in common, and CALM interventions do not differ fundamentally with the expertise of the provider.

In general, psychological interventions are superior to medications for symptom management and improving quality of life, with no adverse effects; psychological intervention has therefore become the most effective and appropriate strategy for symptom management, which also motivated this research. The present work focused on the impact of VR and CALM interventions. These therapies have been used in a variety of medical and psychological settings, yielding findings that, if validated by more rigorous trials, might have a considerable impact on healthcare. However, the success of integrating VR and CALM interventions into ordinary clinical practice will also depend on their practical viability. Compared with similar studies, this research innovatively used a VR-based CALM intervention to improve psychological and chronic symptoms in breast cancer patients. The findings provide a simple, easy-to-adopt intervention that should help breast cancer patients recover their mental health.

Some design limitations merit comment. First, the small sample size does not allow comparisons between patient subgroups to determine moderators of treatment outcome and may not represent the actual difference in improvement between the CAU group and the VR-CALM group; moreover, this study did not explain the specific mechanism of symptom management by VR-CALM. Second, the short follow-up period of this study cannot establish the long-term effectiveness of the VR-CALM intervention.
## 5. Conclusion
The VR-CALM intervention strongly reduced psychological distress and improved quality of life among breast cancer survivors in daily life. At the 8-month follow-up, patients who participated in VR-CALM also reported less distress, anxiety, depression, and sleep disorder. The study did not find robust evidence that either VR-CALM or CAU can ease patients’ fear of cancer recurrence. Our findings suggest the potential viability of VR as a new type of CALM intervention approach to relieve psychological distress and improve quality of life in BCs. Furthermore, if future research agrees with this study on the usefulness of VR-CALM for symptom management in BCs, this could promote research into the mechanism of CALM.
---
*Source: 1012813-2022-06-07.xml* | 1012813-2022-06-07_1012813-2022-06-07.md | 51,222 | The Impact of VR-CALM Intervention Based on VR on Psychological Distress and Symptom Management in Breast Cancer Survivors | Xiuqing Zhang; Senbang Yao; Menglian Wang; Xiangxiang Yin; Ziran Bi; Yanyan Jing; Huaidong Cheng | Journal of Oncology
(2022) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1012813 | 1012813-2022-06-07.xml | ---
## Abstract
Objective. To evaluate the effectiveness and feasibility of Managing Cancer and Living Meaningfully based on VR (VR-CALM), which is used to manage expected symptoms of cancer itself, relieve psychological distress, and improve quality of life (QOL) in the Chinese breast cancer survivors (BCs). Methods. Ninety-eight patients with breast cancer were recruited in this study. These patients were randomly assigned to the VR-CALM group or the care as usual (CAU) group. All patients were evaluated by the Functional Assessment of Cancer Therapy-Breast cancer patient (FACT-B), Distress Thermometer (DT), Concerns About Recurrence Scale (CARS), Piper Fatigue Scale (PFS), Pittsburgh Sleep Quality Index (PSQI), The Self-Rating Anxiety Scale (SAS), and The Self-Rating Depression Scale (SDS) before and after VR-CALM or CAU application to BCs. We compared the differences in all these scores between the VR-CALM group and the control group. Results. Patients in the VR-CALM group showed a significant decrease in levels of distress, anxiety, depression, sleep disorders, and fatigue (t = −6.829, t = −5.819, t = −2.094, t = −3.031, t = −10.082, P≤0.001, 0.001, 0.05, 0.01, 0.001, respectively) and had higher level of quality of life (t = 8.216, P≤0.001) compared with the CAU group after intervention. And postintervention patients in VR-CALM group compared with preintervention showed lower level of distress and remarkable improvement of QOL (t = 11.521, t = −10.379, P≤0.001, 0.001). The preintervention questionnaire revealed no significant between-group differences regarding distress, anxiety, depression, sleep disorders, fatigue, and quality of life. Conclusion. VR-CALM is a psychotherapy tailored to the needs of patients with breast cancer. This research innovatively used VR-based CALM intervention to improve psychological and chronic symptoms in BCs. The results of the present study indicate that VR-CALM has salutary effects on the improvement of QOL and relieves psychological distress, anxiety, depression, sleep disorders, and fatigue in BCs.
---
## Body
## 1. Introduction
People with cancer are living longer than they did in past, especially breast cancer survivors (BCs). At present, the five-year survival rate of breast cancer is as high as 90%, far higher than that of other cancers, indicating that cancer has been transformed into a chronic disease [1]. While in 2020, breast cancer in women surpassed lung cancer for the first time as the most common cancer worldwide, accounting for approximately 11.7 percent of new cancer cases according to the global cancer observatory, which is due to the improvement of breast cancer diagnosis and treatment, as well as BCs’ overall process management. Since 1999, the NCCN has provided updated guidelines for the management of psychological distress in cancer [2]. According to studies, around one-third of cancer patients have psychological discomfort, and clinically significant psychological symptoms have been found in 38% of cancer survivors [3, 4]. Most cancer patients experience symptoms, and their incidence and severity vary by cancer type, stage, treatment, and comorbidities [5, 6]. Symptoms can be caused by cancer itself, early or late therapeutic side effects, and comorbid conditions, and simultaneously symptom management is critical for treatment effectiveness, necessitating the development of innovative approaches by healthcare professionals to improve quality of life [7].Gary Rodin suggested that CALM has a positive effect on symptom management in patients with advanced cancer, whereas CALM is designed to help patients live with cancer and decrease psychological distress [8]. CALM is comprised of four major components: (1) symptom control and communication with health care providers; (2) changes in self and relationships with others; (3) spiritual well-being and the meaning of life; (4) communicating about future concerns, hopes, and mortality [9]. One small sample randomized controlled experiment has demonstrated the effectiveness of CALM intervention in the improvement of cognitive impairment and QOL and relieving psychological distress in breast cancer patients [10]. A study demonstrates the feasibility and treatment potential of VR-CBT in patients with generalized SAD and its results suggest that VR-CBT may be effective in reducing anxiety as well as depression and can improve quality of life [11]. This study was conducted through virtual reality (VR) technology, which immerses the patients in a computer-generated virtual environment. It is a head-mounted device that projects a virtual picture as well as noise-cancelling audio. The advancement of VR software allows for increased involvement and greater immersion [12]. Our research team’s original VR scenarios include a beach house and Butterfly Valley, with original lead phrases. One study showed that high-interactive VR systems are more effective [13]. Therefore, interactivity and immersion are key factors affecting VR efficacy.VR-CALM intervention is to apply VR technology to the intervention process of CALM. Traditional CALM intervention mode is based on the communication between CALM therapists and patients. VR-CALM is to immerse patients in virtual reality in beautiful environment like Butterfly Valley and seaside, while listening to ambient sounds as well as instructions provided by the CALM therapist. 
The instructions follow CALM intervention manual and offer intervention in four areas, including symptom management and health guidance, analysis of how illness has changed people and their relationships with those close to them, exploration of meaning and purpose in life, and talk about the future and hope. And at the end of each session, there will be a special time and place to give patients an opportunity to “speak,” where they can tell what is on their mind, why they are unhappy, and patients are provided with specific guidance about their specific problems.Since the effectiveness of CALM intervention and VR-CBT has already been proven, there is no study discussed whether VR-CALM can manage expected symptoms of breast cancer itself. In this context, we undertake this randomized controlled experiment to explore the availability of VR-CALM intervention relative to care as usual in both symptom management and psychological distress including quality of life, anxiety, depression, distress, concerns about recurrence, fatigue, and somnipathy.
## 2. Materials and Methods
### 2.1. Design
This is a nonblind, parallel assignment randomized controlled trial (RCT).
### 2.2. Sample
A total of 98 breast cancer patients receiving at least 2 courses of regular chemotherapy was recruited from the Second Affiliated Hospital of Anhui Medical University between January 2021 and August 2021.
### 2.3. Randomization
The statistical staff of our team, who did not involve in the experiment, managed the randomization. After participants’ baseline assessments, statistical staff provided computer-generated random assignments. The researchers were unknown about the sequence, which was written on a card, sealed in an envelope, and opened when dispensed.
### 2.4. Inclusion and Exclusion Criteria
Inclusion criteria were as follows: (1) pathologically diagnosed breast cancer patients, completion of at least 2 cycles of regular chemotherapy, with no intolerable side effects; (2) Karnofsky performance status score≥80; (3) no auditory, visual, language, and other functional disorders; and (4) age from 18 to 70, with a sufficient ability to complete the necessary tests for the study. Exclusion criteria were as follows: (1) patients with advanced cachexia; (2) patients with obvious anxiety, depression, and other mental symptoms; (3) lack of adequate baseline bone marrow and organ reserves or associated with serious heart, liver, kidney, brain, and hematopoietic diseases; (4) patients with brain metastases and other brain diseases; and (5) patients with a history of alcohol or drug dependence and taking cognitive-improving medications.
### 2.5. Procedure
BCs were identified through prescreening of test results, and eligible patients were recruited during hospital stays in the oncology department and breast surgery department. Oncologists presented patients with experiments and VR-CALM intervention methods and obtained informed consent from the patients and their families. The researchers next assessed whether the patients meet the requirement and performed baseline measurements. Then, statisticians randomly divided the participants into two groups. The intervention was conducted in an oncology conference to ensure patient privacy during hospitalization. After six cycles of intervention, complete follow-up assessments were conducted during patients’ hospitalization or by telephone.
### 2.6. Measures
#### 2.6.1. QoL
Quality of life was assessed by the Functional Assessment of Cancer Therapy-Breast cancer patient (FACT-B). Quality of life scores were recorded at both baselines and after 6 cycles of VR-CALM intervention. This study used the fourth vision of FACT-B that includes four domains: physical well-being, social/family well-being, emotional well-being, and functional well-being, and a subscale for BCs (rated by 28, 28, 24, 36, and 20 points, respectively) and has better reliability and validity. The higher the total score, the better is the QoL.
#### 2.6.2. Distress
The Distress Thermometer (DT) was developed as a Distress Management Screening Measure (DMSM) for assessing the level of distress and is recommended by the National Comprehensive Cancer Network (NCCN). It is a visual analog scale ranging from 0 (no distress) to 10 (extreme distress). Patients with a score above 4 are considered psychologically distressed.
#### 2.6.3. Anxiety and Depression
The Self-Rating Anxiety Scale (SAS) is a 20-item norm-referenced scale used to screen for anxiety disorders. A score of 40 or more indicates anxiety, and the higher the total score, the greater the degree of anxiety. The main indicator of the Self-Rating Depression Scale (SDS) is the total score, with a cutoff of 50 points for depression; a higher score indicates a greater degree of depression.
#### 2.6.4. Concerns about Recurrence
The Concerns About Recurrence Scale (CARS) consists of two domains: overall Fear of Cancer Recurrence (FCR) and the nature of women’s FCR. The first part contains four items measuring the perceived likelihood of a recurrence of cancer, how often participants thought about a recurrence, how long they thought about a possible recurrence, and how emotionally painful thoughts about a recurrence were [14]. The second section measures the nature of women’s concerns about recurrence. The higher the total score, the stronger the FCR tendency.
#### 2.6.5. Fatigue
The Piper Fatigue Scale (PFS) is a well-validated instrument for assessing perceived fatigue in patients with chronic disease, especially cancer. It has 24 items, each with 11 response categories from 0 to 10. Severity is interpreted as follows: 0 = no fatigue, 1–3 = mild fatigue, 4–6 = moderate fatigue, and 7–10 = severe fatigue. Greater total scores indicate a higher degree of fatigue.
#### 2.6.6. Somnipathy
The Pittsburgh Sleep Quality Index (PSQI) is the most commonly used general-purpose sleep measure in both clinical practice and research. The PSQI was developed in 1988 and is not targeted at any specific population [15]. It comprises 7 components: sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleep medication, and daytime dysfunction. Previous studies have shown that each component score measures a particular aspect of the construct of sleep quality [16]. The final score is the sum of the seven component scores, and higher scores indicate poorer sleep quality.
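As a concrete illustration of how the PSQI global score described above is assembled, here is a minimal Python sketch. It assumes the instrument’s standard scoring, in which each of the seven components is rated 0–3 and the global score is their sum (range 0–21); this convention comes from the instrument itself, not from details reported in this paper.

```python
def psqi_global(components: dict[str, int]) -> int:
    """Sum the seven PSQI component scores (each 0-3) into a 0-21 global score."""
    expected = {
        "sleep_quality", "sleep_latency", "sleep_duration", "sleep_efficiency",
        "sleep_disturbance", "sleep_medication", "daytime_dysfunction",
    }
    assert set(components) == expected, "all seven components are required"
    assert all(0 <= v <= 3 for v in components.values()), "components are rated 0-3"
    return sum(components.values())  # higher = poorer sleep quality

# Hypothetical respondent (illustrative values only):
print(psqi_global({
    "sleep_quality": 2, "sleep_latency": 1, "sleep_duration": 1,
    "sleep_efficiency": 2, "sleep_disturbance": 1, "sleep_medication": 0,
    "daytime_dysfunction": 2,
}))  # -> 9
```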
### 2.7. Intervention
The VR-CALM intervention was delivered by 1 psychologist, 1 oncologist, and 3 postgraduates holding psychological consultant certificates, all from the Department of Oncology, the Second Affiliated Hospital of Anhui Medical University. All VR-CALM therapists received relevant professional training and ongoing supervision from clinical researchers and were qualified to provide the VR-CALM intervention. A group supervision meeting was held once a week for case discussion and summary, to ensure that the VR-CALM intervention and the trial ran smoothly. VR-CALM is a brief, manualized form of individual psychotherapy [9, 17]. The intervention course lasted 3 months, during which patients in the intervention group received 6 separate VR-CALM sessions of 30 min each, with the first 3 sessions completed in the first month. The VR equipment consisted of a head-mounted display and two hand controllers. Patients wearing the equipment find themselves immersed in a scenic setting, such as a seaside or a butterfly valley, and are free to walk around. During this process, patients hear wind sounds and intermittent guidance, and they can touch butterflies with the controllers in their hands. This immersive audio-visual experience constitutes a complete intervention model. Each patient experienced both scenes at least twice and could then choose the preferred scenery for subsequent sessions, which also ensured the consistency of the intervention across patients.
### 2.8. Statistical Analysis
All data were collected and analysed with the Statistical Package for the Social Sciences (SPSS, V22.0) and are expressed as the mean ± SD. The Chi-squared test was used to compare categorical data, the paired t-test was used for within-group comparisons before and after the intervention, and the unpaired t-test was used for between-group comparisons. Differences were considered statistically significant at P < 0.05.
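For readers who want to reproduce this analysis pipeline outside SPSS, the following Python sketch runs the same three tests with scipy. The score arrays are hypothetical placeholders, not the study data; only the KPS contingency counts are taken from Table 1 below.

```python
import numpy as np
from scipy import stats

# Hypothetical placeholder scores (not the study data).
vr_calm_before = np.array([79.0, 81.5, 76.2, 80.1, 77.8])
vr_calm_after  = np.array([98.3, 102.1, 95.7, 101.4, 99.0])
cau_after      = np.array([80.2, 78.5, 81.1, 79.3, 77.9])

# Within-group comparison (before vs. after): paired t-test.
t_within, p_within = stats.ttest_rel(vr_calm_before, vr_calm_after)

# Between-group comparison (VR-CALM vs. CAU after intervention): unpaired t-test.
t_between, p_between = stats.ttest_ind(vr_calm_after, cau_after)

# Categorical data (KPS 80 vs. 90 by group, counts as in Table 1): chi-squared test.
kps_table = np.array([[14, 12],   # KPS 80: VR-CALM, CAU
                      [24, 27]])  # KPS 90: VR-CALM, CAU
chi2, p_chi2, dof, _ = stats.chi2_contingency(kps_table)

print(p_within, p_between, p_chi2)
```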
## 3. Results
### 3.1. Baseline Characteristics and Flow of the Enrolled Patients
Patient demographics, including age, education, surgical method, and postoperative pathological information, were obtained at the time of enrolment (Table 1). The results suggest that there were no significant differences between the VR-CALM group and the CAU group in demographic information, including age (t = 0.708, P=0.481) and years of education (t = −1.182, P=0.241), or in clinical information, including the Karnofsky performance status (χ² = 0.317, P=0.573) and tumour stage (χ² = 0.445, P=0.961). The flow diagram shows that 98 breast cancer patients were eligible to take part in the study between January 2021 and August 2021 (Figure 1), 90 of whom were randomly assigned to VR-CALM (n = 45) or CAU (n = 45). However, 7 people did not complete the VR-CALM intervention and 6 people in the CAU group did not complete the final assessment. Finally, there were 38 people in the VR-CALM group and 39 in the CAU group.

Table 1
Comparison of demographic characteristics and clinical data of breast cancer between the VR-CALM group and the CAU group.
| Characteristic | VR-CALM | CAU | t/χ² | P |
|---|---|---|---|---|
| Age (years) | 52.29 (7.686) | 51.03 (7.979) | 0.708 | 0.481 |
| Education (years) | 8.55 (2.435) | 9.21 (2.408) | −1.182 | 0.241 |
| Surgical method | | | 0.112 | 0.738 |
| Breast-conserving surgery | 3 | 5 | | |
| Mastectomy | 35 | 34 | | |
| Tumour stage | | | 0.445 | 0.961 |
| I | 6 | 7 | | |
| II | 16 | 18 | | |
| III | 11 | 10 | | |
| IV | 5 | 4 | | |
| Molecular classification | | | 1.321 | 0.724 |
| Luminal A | 4 | 6 | | |
| Luminal B | 17 | 20 | | |
| HER-2 overexpression | 12 | 10 | | |
| TNBC | 5 | 3 | | |
| Pathological type | | | 1.021 | 0.692 |
| Invasive carcinoma, no special type | 34 | 33 | | |
| Invasive carcinoma, special type | 1 | 3 | | |
| Noninvasive carcinoma | 3 | 3 | | |
| KPS | | | 0.317 | 0.573 |
| 80 | 14 | 12 | | |
| 90 | 24 | 27 | | |

Data are presented as the mean ± SD. Abbreviations: SD, standard deviation; KPS, Karnofsky performance status; VR-CALM, Managing Cancer and Living Meaningfully based on VR; CAU, care as usual.

Figure 1
CONSORT flow diagram.
### 3.2. Within-Group Comparison of Symptom Assessments before and after the Intervention Period
The performance of the BCs on the anxiety and depression tests, the Concerns About Recurrence Scale, the fatigue test, the somnipathy test, the DT, and the QOL evaluation scale before and after VR-CALM or CAU is shown in Table 2. The VR-CALM group showed remarkable changes in the overall scores of the FACT-B (t = −10.379, P≤0.001), DT (t = 11.521, P≤0.001), SAS (t = 4.680, P≤0.001), SDS (t = 4.101, P≤0.001), CARS (t = 2.742, P≤0.01), PFS (t = 9.913, P≤0.001), and PSQI (t = 6.066, P≤0.001) before versus after VR-CALM. For the CAU group, there were significant changes in the total scores on the FACT-B (t = −2.988, P≤0.01), SAS (t = −4.817, P≤0.001), CARS (t = 2.370, P≤0.05), and DT (t = 3.328, P≤0.01) before versus after CAU, while no significant differences were found in the SDS (t = −1.517, P=0.138), PSQI (t = −1.839, P=0.074), or PFS (t = 1.515, P=0.138). Taken together, the magnitude of change in the VR-CALM group was greater than in the CAU group, although both groups showed significant change on several scales.

Table 2
Separate comparison of symptoms in the 2 groups of breast cancer patients before and after VR-CALM or CAU.
| Group | | N | FACT-B | DT | SAS | SDS | CARS | PFS | PSQI |
|---|---|---|---|---|---|---|---|---|---|
| VR-CALM | BCT | 38 | 78.74 ± 10.125 | 5.71 ± 1.250 | 51.66 ± 11.252 | 51.32 ± 11.552 | 59.34 ± 13.581 | 103.03 ± 8.645 | 10.45 ± 3.438 |
| VR-CALM | ACT | 38 | 100.47 ± 14.33 | 2.95 ± 1.845 | 44.16 ± 11.083 | 46.63 ± 9.824 | 55.82 ± 10.557 | 77.79 ± 11.548 | 8.74 ± 2.565 |
| | t | | −10.379 | 11.521 | 4.680 | 4.101 | 2.742 | 9.913 | 6.066 |
| | p | | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.01 | ≤0.001 | ≤0.001 |
| CAU | BCT | 39 | 75.44 ± 10.39 | 6.13 ± 0.695 | 49.31 ± 6.814 | 48.64 ± 4.934 | 55.00 ± 4.605 | 101.62 ± 9.35 | 10.05 ± 3.426 |
| CAU | ACT | 39 | 79.44 ± 6.703 | 5.49 ± 1.393 | 55.21 ± 3.806 | 50.21 ± 3.806 | 53.49 ± 6.328 | 99.77 ± 6.964 | 11.26 ± 4.494 |
| | t | | −2.988 | 3.328 | −4.817 | −1.517 | 2.370 | 1.515 | −1.839 |
| | p | | ≤0.01 | ≤0.01 | ≤0.001 | 0.138 | ≤0.05 | 0.138 | 0.074 |

Note. FACT-B, Functional Assessment of Cancer Therapy-Breast cancer patient; DT, Distress Thermometer; SAS, Self-Rating Anxiety Scale; SDS, Self-Rating Depression Scale; CARS, Concerns About Recurrence Scale; PFS, Piper Fatigue Scale; PSQI, Pittsburgh Sleep Quality Index; BCT, before CALM/CAU treatment; ACT, after CALM/CAU treatment; SD, standard deviation. Data are presented as mean ± SD.
### 3.3. Between-Group Comparison of Symptom Assessments before and after the Intervention Period
The differences between the two groups before and after VR-CALM or CAU are shown in Table 3. At the start of the study, there were no statistically significant differences between the VR-CALM group and the CAU group on any of the scale scores: FACT-B (t = 1.411, P=0.162), DT (t = −1.806, P=0.076), SAS (t = 1.105, P=0.273), SDS (t = 1.315, P=0.194), CARS (t = 1.869, P=0.068), PFS (t = 0.687, P=0.494), and PSQI (t = 0.506, P=0.614); that is, the two groups were comparable at baseline. However, 3 months after the interventions, the two groups showed statistically significant differences on the FACT-B (t = 8.216, P≤0.001), DT (t = −6.829, P≤0.001), SAS (t = −5.819, P≤0.001), SDS (t = −2.094, P≤0.05), PFS (t = −10.082, P≤0.001), and PSQI (t = −3.031, P≤0.01), whereas the CARS showed no significant between-group difference (t = 1.170, P=0.247).

Table 3
Comparison of symptoms between the VR-CALM group and the CAU group of breast cancer patients before and after VR-CALM or CAU.
| Time point | Group | N | FACT-B | DT | SAS | SDS | CARS | PFS | PSQI |
|---|---|---|---|---|---|---|---|---|---|
| Before VR-CALM or CAU | VR-CALM | 38 | 78.74 ± 10.125 | 5.71 ± 1.250 | 51.66 ± 11.252 | 51.32 ± 11.552 | 59.34 ± 13.581 | 103.03 ± 8.645 | 10.45 ± 3.438 |
| | CAU | 39 | 75.44 ± 10.39 | 6.13 ± 0.695 | 49.31 ± 6.814 | 48.64 ± 4.934 | 55.00 ± 4.605 | 101.62 ± 9.35 | 10.05 ± 3.426 |
| | t | | 1.411 | −1.806 | 1.105 | 1.315 | 1.869 | 0.687 | 0.506 |
| | p | | 0.162 | 0.076 | 0.273 | 0.194 | 0.068 | 0.494 | 0.614 |
| After VR-CALM or CAU | VR-CALM | 38 | 100.47 ± 14.33 | 2.95 ± 1.845 | 44.16 ± 11.083 | 46.63 ± 9.824 | 55.82 ± 10.557 | 77.79 ± 11.548 | 8.74 ± 2.565 |
| | CAU | 39 | 79.44 ± 6.703 | 5.49 ± 1.393 | 55.21 ± 3.806 | 50.21 ± 3.806 | 53.49 ± 6.328 | 99.77 ± 6.964 | 11.26 ± 4.494 |
| | t | | 8.216 | −6.829 | −5.819 | −2.094 | 1.170 | −10.082 | −3.031 |
| | p | | ≤0.001 | ≤0.001 | ≤0.001 | ≤0.05 | 0.247 | ≤0.001 | ≤0.01 |

Note. FACT-B, Functional Assessment of Cancer Therapy-Breast cancer patient; DT, Distress Thermometer; SAS, Self-Rating Anxiety Scale; SDS, Self-Rating Depression Scale; CARS, Concerns About Recurrence Scale; PFS, Piper Fatigue Scale; PSQI, Pittsburgh Sleep Quality Index; SD, standard deviation. Data are presented as mean ± SD.
## 4. Discussion
The purpose of this study was to evaluate the effectiveness of the VR-CALM intervention in symptom management and relief of psychological distress in BCs. Our study found that VR-CALM was more effective than CAU on most of the variables (QOL, psychological distress, anxiety, depression, sleep disorders, and fatigue). These findings are consistent with previous studies [8, 17–22]. International bodies such as the European Palliative Care Research Collaborative (EPCRC), the Worldwide Palliative Care Alliance, the World Health Organization, and the International Psycho-Oncology Society have highlighted that psychological care is beneficial for patients with advanced cancer. VR-CALM is a psychological intervention tailored to each individual patient to help prevent and treat adverse psychological reactions, improve quality of life, and give meaning to life.

The CALM intervention has proven to be a useful treatment, providing a systematic approach to alleviating suffering in cancer patients [8]. CALM builds on a decade of theoretical and empirical work and focuses on the influence of many factors, including the severity of physical symptoms, attachment style, and other interpersonal variables [23]. CALM helps address the practical and existential problems faced by cancer patients [9], including symptom management, role change, and mental, existential, and death-related issues. According to the CALM therapy manual, the components that contribute to its therapeutic effectiveness are the creation of meaning, the renegotiation of attachment security, the regulation of emotion, and mentalization, all within a genuinely supportive relationship [24].

CALM is unique in that it is based on relationship, attachment, and existentialist theories and specifically addresses the particular issues of advanced cancer. Preliminary results [14, 25] and results from a large randomized controlled trial in Toronto [26] are encouraging, suggesting that CALM is a promising intervention for patients with advanced or metastatic cancer. Because culture shapes cognitive, emotional, and behavioural responses to cancer and cancer treatment, awareness and knowledge of treatment options, and acceptance of psychological interventions [27–29], the use of specific interventions such as CALM needs to be examined in a specific cultural context. Effective mechanisms may include offering patients, doctors, nurses, and other health care providers the opportunity to communicate; addressing the impact of the disease on patients’ self-awareness and family relationships; finding or regaining meaning and goals in life; expressing and managing fears and desires related to the end of life; and beginning to prepare for the end of life [26]. Previous evidence has indicated that cytokine levels change in BCs with cognitive impairment, and a correlation has been noted between cognition and QOL in BCs [30]; this may be an underlying mechanism by which the CALM intervention improves QOL.

Many studies and clinical trials use VR as a simulation, interaction, and distraction tool for patients with mental disorders such as posttraumatic stress disorder (PTSD), anxiety, specific phobias, schizophrenia, autism, dementia, and severe stress [31]. One study has shown that VR-based interventions offer particular advantages over traditional cancer symptom management interventions [32]. First, VR-based cognitive training gives cancer patients an engaging setting in which to learn [32].
Second, VR-based interventions provide immediate feedback on patient performance and can be adapted to suit patient needs [33]. In addition, VR-based interventions combine the latest real-time graphics and imaging technologies, allowing patients to experience a multitude of visual and auditory stimuli in a computer-generated virtual environment to meet their rehabilitation needs [34, 35]. A previous study has shown that distracting interventions can effectively alleviate adverse reactions and symptoms in cancer patients undergoing chemotherapy [36]. Among these interventions, virtual reality, with its powerful ability to capture attention, has proved effective for cancer patients in different settings [37]. VR allows patients to break away temporarily from the medical environment and immerse themselves in a pleasant atmosphere, thus producing pleasant emotions and reducing negative ones [12, 38]. Previous literature has indicated that VR is a useful intervention for relieving anxiety, depression, pain, and fatigue during chemotherapy [32, 39–44]. However, no study has proven its effectiveness against symptoms caused by the cancer itself, which last longer and are more disruptive than chemotherapy-related symptoms. Moreover, discomfort caused by wearing a VR device, including headaches, eyestrain, and nausea, is a confounding factor that cannot be ruled out [45].

Psychological symptoms should be comprehensively assessed in all patients. In the latest NCCN guidelines, experts suggest that complete symptom management should provide psychological and emotional support to both cancer patients and their family caregivers. Distress is a multifactorial unpleasant experience and may have a negative impact on patients’ ability to cope with cancer, its symptoms, and its treatment. Distress should be recognized, monitored, documented, and treated promptly at all stages and in all circumstances [2]. In addition, patients should ideally be assessed for psychological distress at each visit, or at least during the first visit and at appropriate intervals thereafter [2]. The present study found that VR-CALM is effective in relieving distress in BCs; the CAU group also showed some remission after the intervention period, but a difference remained in favour of the VR-CALM group.

Previous studies estimated that 10% of cancer survivors live with anxiety and 16% have a major depressive disorder [46]. The study by Madhusudhana et al. confirmed that anxiety and depression are associated with longer treatment initiation intervals [47], so it is necessary for oncologists to address this anxiety and depression. Our findings confirmed the usefulness of VR-CALM in reducing BCs’ anxiety and depression.

Fear of cancer recurrence (FCR) denotes fear, worry, or concern about cancer returning or progressing [48]. The majority of BCs are concerned about possible disease progression and fear cancer recurrence [49]. The estimated prevalence of moderate to high levels of FCR ranges from 24% to 56%, and FCR is one of the most common chronic and severe problems for cancer patients [50]. A systematic review shows that most BCs have problems dealing with FCR and need specialist help [51]. Cognitive Behavioural Therapy (CBT) is a common treatment for FCR, and its effectiveness has been demonstrated in previous studies [1, 52–54].
This study did not find evidence that either VR-CALM or CAU eased fear of cancer recurrence in BCs.

Fatigue is a subjective and unpleasant symptom, a general feeling that interferes with normal work and life [55]. Previous studies have shown that one-third of breast cancer patients have persistent sleep disturbances [56], and sleep disorders are associated with a greater fatality risk in BCs [57, 58]. A randomized trial revealed that, compared with usual care, a physical activity intervention improved self-reported sleep quality [59]. The present study revealed that VR-CALM decreased fatigue, with a clearly greater effect than in the CAU group.

Quality of life assessment is an important criterion for women recuperating from cancer [60, 61]. Research by Ferguson et al. suggested that CBT can compensate for cognitive deficits and improve QOL [62]. Although medications such as methylphenidate, modafinil, and epoetin might improve cognition by reducing fatigue and boosting alertness, their effectiveness has not been proven in previous studies [63]. Regular physical exercise has been advocated as an effective tool for relieving fatigue, stress, and depression and improving quality of life in BCs [64–66]. The present study found that, after the VR-CALM intervention, BCs in the VR-CALM group had a better quality of life than those in the CAU group.

Breast cancer patients who participated in VR-CALM reported that it provided an opportunity to get out of the ward and the house and see the outside world. VR-CALM provided a time and place for patient-to-patient communication as well as opportunities for further communication with doctors and nurses. In addition to discussion of disturbing and frightening issues such as treatment and prognosis, family communication was enhanced and the sharing of family experiences was facilitated. The primary goal of breast cancer patients is to address the issues troubling them in daily life and the survival and psychological issues that arise after a cancer diagnosis, which highlights the importance of the safe space provided by the VR-CALM intervention. Moreover, although cancer patients feel loved and cared for by their families, they also want to alleviate the pain caused by their disease, which can lead to further anxiety and depression owing to family and financial burdens. Patients in the VR-CALM group reported after the intervention that the VR immersive experience provided them with emotional relief and relaxation, and this was considered an achievable goal in this population.

The importance of CALM therapists was also highlighted in the course of this study. CALM therapists show nonjudgmental compassion and an emphasis on reflection. Patients indicated that they valued the therapist’s professional point of view and that it helped solve problems and made them feel that someone understood their situation and experience. This suggests that oncology professionals who are trained in psychotherapy and have compassion and understanding for patients can offer CALM interventions. The study also found that an advantage of the CALM intervention is that oncologists can be trained to provide CALM, as long as the integrity of the treatment is maintained through continuous monitoring.
Although there may be discipline-specific differences in oncological knowledge among therapists, there is certainly much in common, and CALM interventions do not differ fundamentally according to the expertise of the providers.

In general, psychological interventions are superior to medication in terms of symptom management and improving quality of life, with no adverse effects; psychological intervention has therefore become an effective and appropriate strategy for symptom management, which also motivated this research. The current study focused on the impact of VR and CALM interventions. These therapies have been used in a variety of medical and psychological sectors, yielding findings that, if validated by more rigorous trials, might have a considerable impact on healthcare. However, the success of integrating VR and CALM interventions into routine clinical practice will also be determined by their practical viability. Compared with similar studies, this research innovatively used a VR-based CALM intervention to improve psychological and chronic symptoms in breast cancer patients. The findings provide a simple, easy-to-adopt intervention and may help breast cancer patients recover their mental health.

Some design limitations merit further comment. First, the small sample size does not allow comparisons between subgroups of patients to determine moderators of treatment outcome and may not represent the actual difference in improvement between the CAU group and the VR-CALM group; moreover, this study did not explain the specific mechanism by which VR-CALM manages symptoms. Second, the short follow-up period of this study could not establish the long-term effectiveness of the VR-CALM intervention.
## 5. Conclusion
The VR-CALM intervention strongly reduced psychological distress and improved quality of life among breast cancer survivors in daily life. At the 8-month follow-up, patients who had participated in VR-CALM also reported less distress, anxiety, depression, and sleep disturbance. The study did not find robust evidence that either VR-CALM or CAU can ease patients’ fear of cancer recurrence. Our findings suggest the potential viability of VR as a new type of CALM intervention approach to relieve psychological distress and improve quality of life in BCs. Furthermore, if future research confirms the usefulness of VR-CALM for symptom management in BCs, this could advance research into the mechanism of CALM.
---
*Source: 1012813-2022-06-07.xml* | 2022 |
# Small-Bowel Capsule Endoscopy in Patients with Suspected Crohn's Disease—Diagnostic Value and Complications
**Authors:** Pedro Figueiredo; Nuno Almeida; Sandra Lopes; Gabriela Duque; Paulo Freire; Clotilde Lérias; Hermano Gouveia; Carlos Sofia
**Journal:** Diagnostic and Therapeutic Endoscopy
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101284
---
## Abstract
Background. The aim of this work was to assess the value of capsule enteroscopy in the diagnosis of patients with suspected Crohn's Disease (CD). Methods. This was a retrospective study in a single tertiary care centre involving patients undergoing capsule enteroscopy for suspected CD. Patients taking nonsteroidal anti-inflammatory drugs during the thirty preceding days or with a follow-up period of less than six months were excluded. Results. Seventy-eight patients were included. The endoscopic findings included mucosal breaks in 50%, ulcerated stenosis in 5%, and villous atrophy in 4%. The diagnosis of CD was established in 31 patients. The sensitivity, specificity, and positive and negative predictive values of the endoscopic findings were 93%, 80%, 77%, and 94%, respectively. Capsule retention occurred in four patients (5%). The presence of ulcerated stenosis was significantly more frequent in patients with positive inflammatory markers. The diagnostic yield of capsule enteroscopy in patients with negative ileoscopy was 56%, with a diagnostic accuracy of 93%. Conclusions. Small-bowel capsule endoscopy is a safe and valid technique for assessing patients with suspected CD. Capsule retention is more frequent in patients with positive inflammatory markers. Patients with negative ileoscopy and suspected CD should undergo capsule enteroscopy.
---
## Body
## 1. Introduction
The current view is that the diagnosis of Crohn’s Disease (CD) is established by a combination, not strictly defined, of clinical presentation, endoscopic appearance, radiology, histology, surgical findings, and, more recently, serology [1].

The role of small-bowel capsule endoscopy (SBCE) in this context is still debatable [2], namely because the concept of suspected CD, with implications for the selection of patients, is itself under discussion. The other reason for debate is the lack of a clear definition of the endoscopic findings that should be considered indicative of CD. Even though lesions such as aphthae, erosions, or ulcers may be considered suggestive of the disease, other aetiologies, namely the use of nonsteroidal anti-inflammatory drugs (NSAIDs), may also be associated with these lesions, and healthy adults with no history of ingesting pharmaceutical drugs may present similar lesions as well [3]. Furthermore, it is not clear whether, in patients with suspected CD, SBCE is superior to other diagnostic methods, namely ileoscopy [4].

The aim of this study was to assess the value of SBCE in diagnosing CD, as well as the complications associated with the technique.
## 2. Material and Methods
### 2.1. Patients
A retrospective study of patients with suspected CD who had undergone SBCE in a single tertiary care academic centre was carried out. The criteria for performing SBCE in our department include the absence of any clinical or imaging finding indicating stenosis of the small intestine. The following data were collected: age, sex, starting date of symptoms, clinical symptomatology, history of NSAID use, examinations carried out from the onset of complaints to the date of SBCE, endoscopic findings, complications associated with the examination, clinical assessment during the follow-up period, and its duration. Patients for whom the medical files referred to the use of NSAIDs during the month prior to the examination, and patients with a follow-up period of less than six months after the date of the examination, were excluded from the study.

The patients were analysed according to the algorithm proposed by the International Conference on Capsule Endoscopy (ICCE) for suspected CD [5]. Information on patient follow-up was obtained by contacting the referring physician. The diagnosis of CD was established by clinical evaluation during the follow-up period, through a combination of endoscopic, histological, radiological, and/or biochemical investigations [1].

Erosions, ulcers, ulcerated stenosis, and villous atrophy were considered suggestive of CD, irrespective of the number of lesions found. Ulcers were defined as white lesions within a crater and with surrounding erythema (Figures 1 and 2) [6], and erosions as superficial white lesions with surrounding erythema (Figure 3) [6]. The diagnosis of ulcerated stenosis was based on the presence of an ulcer associated with retention of the capsule (Figure 4). The diagnosis of villous atrophy was presumed, without histological confirmation, after the endoscopic identification of a circumscribed area of villous denudation (Figure 5).

Figure 1: Jejunal ulcer. Figure 2: Bleeding jejunal ulcer. Figure 3: Ileal erosions. Figure 4: Jejunal ulcerated stenosis. Figure 5: Area of villous atrophy in the ileum (arrow).
### 2.2. Capsule Endoscopy Procedure
A PillCam SB (Given Imaging Ltd; Yoqneam, Israel) was used. After an overnight fast of 12 hours, the patients ingested the capsule with a small amount of water with simethicone. No oral purge was administered. All patients were advised to drink after four hours and, after eight hours, the sensor array and the recording device were removed. The digital video image streams of the examinations were downloaded to the RAPID system. The image streams were assessed and interpreted by endoscopy fellows (A.N., L.S., F.P., D.G.) and reviewed by two staff endoscopists (F.P., L.C.). Interobserver agreement was not formally assessed, but all videos were thoroughly scrutinized and discussed.
### 2.3. Statistical Analysis
The sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios of the diagnostic test, as well as their confidence intervals, were assessed using the VassarStats Website for Statistical Computation (available at http://faculty.vassar.edu/lowry/VassarStats.html). Statistical comparisons of categorical data were made using the chi-squared test, with the Yates correction when needed, and with the Fisher exact test. A P value of less than .05 was considered significant. The analysis was performed with statistical software (SPSS version 11.5, SPSS Inc, Chicago, Illinois).
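As a rough illustration of the quantities just listed, the following Python sketch computes them from a 2×2 table of capsule findings versus final CD diagnosis. The Wilson score interval is used here as one reasonable choice for the confidence intervals; the paper itself relied on the VassarStats calculator, whose exact interval method is not described in the text.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")  # infinity when spec = 100%
    lr_neg = (1 - sens) / spec
    return {
        "sensitivity": (sens, wilson_ci(tp, tp + fn)),
        "specificity": (spec, wilson_ci(tn, tn + fp)),
        "PPV": (ppv, wilson_ci(tp, tp + fp)),
        "NPV": (npv, wilson_ci(tn, tn + fn)),
        "LR+": lr_pos,
        "LR-": lr_neg,
    }
```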
## 3. Results
Between January 2001 and December 2007, 95 patients clinically suspected of having CD underwent capsule enteroscopy. Fourteen patients were excluded from the study because it was stated that they had taken NSAIDs in the preceding thirty days, and a further three were excluded because the follow-up period amounted to less than six months. The demographic and clinical characteristics of the remaining 78 patients are shown in Table 1.

Table 1
Demographic and clinical characteristics of the patients.
| Characteristic | Value |
|---|---|
| Number of patients | 78 |
| Age (years), mean ± SD | 37.2 ± 16.4 |
| Gender (female), n (%) | 53 (67.9) |
| Abdominal pain, n (%) | 62 (79.5) |
| Diarrhoea, n (%) | 47 (60.3) |
| Weight loss, n (%) | 27 (34.6) |
| Arthralgias, n (%) | 27 (34.6) |
| Fever, n (%) | 11 (14.1) |
| Duration of symptoms (months), mean ± SD | 22.3 ± 26.2 |
| Anaemia, n (%) | 42 (53.8) |
| Elevated CRP, n (%) | 28 (35.9) |

SD: standard deviation; CRP: C-reactive protein.

With regard to previous endoscopic examinations, all the patients had undergone colonoscopy, but no lesions indicative of CD had been detected in any case. Retrograde ileoscopy had been carried out in 47 patients (60.3%), revealing a slightly congested mucosa in 7 cases; a histological study of the biopsies did not reveal any indication of CD. In 31 patients (39.7%), intubation of the terminal ileum was not accomplished.

The small-bowel series revealed lesions in 5 (26.3%) of the 19 patients who had undergone enteroclysis and in 5 (10.8%) of the 47 who had undergone small-bowel follow-through (SBFT). Computed tomography (CT), carried out in 37 patients, revealed lesions in 14 cases (37.8%).

Seventy-eight capsule examinations were carried out, achieving total enteroscopy in 64 cases (82.1%). Of the 14 cases with incomplete enteroscopy, 4 (5.1%) were due to the presence of a stenosis that led to retention of the capsule, whilst the remaining ten were attributed to slower transit. Six of the patients with incomplete enteroscopy, in whom no lesions were detected, were excluded from further analysis, as it was not possible to know whether they had any of the findings considered. The remaining four patients, known to present lesions considered suggestive of CD, were included. No complications other than retention were recorded.

Pathological images were detected in 37 of the 72 patients considered, giving the technique a diagnostic yield of 51.3%. The main endoscopic findings were mucosal breaks, detected in 36 patients (50%). Five of these patients presented ulcers, four in conjunction with erosions and one with ulcers alone; the remaining 31 patients presented only erosions. With regard to the number of mucosal breaks, in 3 patients (4.2%) this amounted to 3 or fewer, whereas in 26 cases (36.1%) it totalled 6 or more. After mucosal breaks, the most frequent endoscopic findings were ulcerated stenosis, observed in 4 cases (5.5%), followed by areas of villous atrophy, observed in 3 cases (4.2%). In 6 patients there was more than one type of pathological finding (3 with stenosis and mucosal breaks, and 3 with villous atrophy and mucosal breaks). The remaining 31 patients showed only one type of lesion, namely an isolated stenosis in one patient and mucosal breaks in the other 30 (81% of the patients with lesions).

We were able to evaluate the ICCE criteria in 70 patients (in two there was not enough information in the patient file) (Table 2). The ICCE criteria were fulfilled in 36 patients (51.4%). Some patients presented, in addition to the two gastrointestinal symptoms considered in the algorithm, more than one of the following criteria: extraintestinal manifestations, abnormal imaging studies, or inflammatory markers.

Table 2
Distribution of patients according to ICCE criteria.
| Criterion | N (%) |
|---|---|
| ICCE criteria present | 36 (51.4) |
| Symptoms plus extraintestinal symptoms and signs | 19 (27.1) |
| Symptoms plus abnormal imaging* | 10 (16.9) |
| Symptoms plus inflammatory markers | 25 (35.7) |
| ICCE criteria absent | 34 (48.6) |
| Total | 70 |

*This group includes only the 59 patients submitted to CT or small-bowel series.

Among the 36 patients with positive ICCE criteria, 20 (55.6%) presented pathological images on SBCE versus 16 (44.4%) without lesions (P=.237). In the subgroup with inflammatory markers, 17 (68%) presented lesions versus 8 (32%), a difference that reached statistical significance (P=.022). The presence of ulcerated stenosis was more frequent among patients with ICCE criteria, although not significantly so overall (P=.64); the difference was statistically significant only in the subgroup with inflammatory markers (P=.014).

During the follow-up period, which lasted on average 28.8 months (SD 13.3 months; range 6–65 months), 31 patients (43%) were diagnosed with CD. The demographic and clinical characteristics of these patients are shown in Table 3. The four patients with ulcerated stenoses leading to retention of the capsule had presented symptoms for an average of 13 months (range 4–24 months). Abdominal pain, weight loss, anaemia, and elevated CRP were present in all of them. The prior diagnostic work-up, which included colonoscopy with retrograde ileoscopy in three, SBFT in four, and CT in one, had not found any lesions. None of the four patients developed symptoms or signs of intestinal occlusion. Two patients underwent surgery involving resection of a segment of the small intestine, and histological study of the tissue showed features compatible with CD. The other two patients, who were twin brothers, were given medical treatment only, and the capsule was expelled spontaneously.

Table 3
Demographic and clinical characteristics of the 31 patients with confirmed CD during the follow-up.
| Characteristic | Value |
|---|---|
| Age (years), mean ± SD | 35.8 ± 16.2 |
| Gender (female), n (%) | 20 (64.5) |
| Abdominal pain, n (%) | 23 (74.2) |
| Diarrhoea, n (%) | 47 (60.3) |
| Weight loss, n (%) | 17 (54.8) |
| Arthralgias, n (%) | 7 (22.6) |
| Fever, n (%) | 7 (22.6) |
| Duration of symptoms (months), mean ± SD | 18.5 ± 17.2 |
| Anaemia, n (%) | 20 (64.5) |
| Elevated CRP, n (%) | 16 (51.6) |
| ICCE criteria present, n (%) | 20 (68.9) |
| Symptoms plus extraintestinal symptoms and signs, n (%) | 6 (19.4) |
| Symptoms plus abnormal imaging, n (%) | 9 (37) |
| Symptoms plus inflammatory markers, n (%) | 17 (58.6) |
| Suggestive endoscopic findings present, n (%) | 29 (93.5) |
| Erosions/ulcers, n (%) | 29 (100) |
| Ulcerated stenosis, n (%) | 4 (13.7) |
| Villous atrophy, n (%) | 3 (10.3) |
| Duration of follow-up (months), mean ± SD | 30.7 (13.2) |

SD: standard deviation; CRP: C-reactive protein.

Table 4 shows the sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios of the endoscopic findings, the ICCE criteria, and the ICCE criteria plus endoscopic findings in the diagnosis of CD.

Table 4
Value of different criteria in the diagnosis of CD.
| Criterion | Sens. (95% CI) | Spec. (95% CI) | PPV (95% CI) | NPV (95% CI) | LLR+ (95% CI) | LLR− (95% CI) |
|---|---|---|---|---|---|---|
| Lesions on SBCE | 93 (75–98) | 80 (64–90) | 77 (59–88) | 94 (79–99) | 4.7 (2.5–8.9) | 0.08 (0.02–0.3) |
| ICCE criteria | 65 (45–81) | 58 (42–73) | 52 (35–69) | 70 (52–84) | 1.5 (1–2.4) | 0.58 (0.34–1) |
| plus lesions on SBCE | 100 (70–100) | 86 (64–96) | 85 (61–86) | 100 (79–100) | 7.3 (2.5–20) | 0 |
| Symptoms plus extraint. symptoms/signs | 24 (11–43) | 70 (54–83) | 36 (17–61) | 56 (42–70) | 0.8 (0.3–1.8) | 1.0 (0.8–1.34) |
| plus lesions on SBCE | 77 (40–96) | 100 (80–100) | 100 (50–100) | 91 (70–98) | (a) | 0.22 (0.06–0.7) |
| Symptoms plus abnormal imaging | 29 (13–51) | 91 (75–97) | 70 (35–91) | 65 (50–77) | 3.4 (0.9–11) | 0.77 (0.59–1) |
| plus lesions on SBCE | 100 (46–100) | 96 (80–99) | 83 (36–99) | 100 (84–100) | 29 (4.2–198) | 0 |
| Symptoms plus inflammatory markers | 55 (35–73) | 78 (61–88) | 64 (42–81) | 71 (55–83) | 2.5 (1.2–4.8) | 0.57 (0.37–0.8) |
| plus lesions on SBCE | 88 (63–98) | 96 (78–99) | 94 (69–99) | 92 (74–98) | 23.1 (3.3–159) | 0.1 (0.03–0.4) |

Sens.: sensitivity; Spec.: specificity; PPV: positive predictive value; NPV: negative predictive value; LLR+: positive likelihood ratio; LLR−: negative likelihood ratio; CI: confidence interval; extraint.: extraintestinal; (a) infinity.

Table 5 shows the capsule findings in patients submitted to retrograde ileoscopy. The negative predictive value of ileoscopy for the diagnosis of CD was 49%. Notably, of the 22 patients subsequently diagnosed with CD in whom ileoscopy had shown no apparent lesions, 21 had lesions revealed during the capsule examination. The diagnostic yield of SBCE in the 43 patients who underwent retrograde ileoscopy was 56%, with a diagnostic accuracy of 93%, 95% sensitivity, 86% specificity, 88% positive predictive value, and 95% negative predictive value.

Table 5
Endoscopic lesions detected by SBCE in patients with negative retrograde ileoscopy and subsequent diagnosis of CD.
| Negative ileoscopy (N = 43) | CD confirmed | CD not confirmed | Total |
|---|---|---|---|
| Endoscopic lesions present, n (%) | 21 (87.5) | 3 (12.5) | 24 (100) |
| Endoscopic lesions not present, n (%) | 1 (5.3) | 18 (94.7) | 19 (100) |
| Total, n (%) | 22 (51.2) | 21 (48.8) | 43 (100) |
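For readers who want to check the arithmetic, the short Python sketch below (our illustration, not the authors' code, whose statistics were computed with the VassarStats calculator described in the Methods) recomputes the diagnostic metrics directly from the two-by-two counts in Table 5:

```python
# Recompute the diagnostic metrics reported above from the 2x2 counts in
# Table 5 (21 true positives, 3 false positives, 1 false negative,
# 18 true negatives). Illustrative sketch only.
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)        # proportion of CD patients with lesions on SBCE
    spec = tn / (tn + fp)        # proportion of non-CD patients without lesions
    ppv = tp / (tp + fp)         # probability of CD given lesions
    npv = tn / (tn + fn)         # probability of no CD given no lesions
    llr_pos = sens / (1 - spec)  # positive likelihood ratio
    llr_neg = (1 - sens) / spec  # negative likelihood ratio
    return {"sens": sens, "spec": spec, "ppv": ppv,
            "npv": npv, "LLR+": llr_pos, "LLR-": llr_neg}

print(diagnostic_metrics(tp=21, fp=3, fn=1, tn=18))
# sens 0.95, spec 0.86, ppv 0.88, npv 0.95 -> the rounded values quoted above
```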
## 4. Discussion
The sensitivity and, above all, the high negative predictive value and low negative likelihood ratio of the endoscopic findings are, in our opinion, the most relevant results of this study: they indicate that the illness is highly unlikely in patients who show no endoscopic lesions. It is important to emphasize that the methodology used in our study, like that reported by Tukey et al. [7], involved a follow-up period which, in our case, lasted more than six months, extending on average to 28.8 months. The CD diagnosis was not, therefore, established immediately on the basis of the capsule enteroscopy findings. We consider this methodology to be more correct, given the recognised difficulties in diagnosing the disease and the absence of a gold standard [1].

The issue of selecting patients for SBCE is of the greatest importance. The recognition that abdominal pain of unknown aetiology should not, on its own, constitute an indication for capsule enteroscopy [8, 9], as well as the problem of capsule retention and the high cost of the procedure, must be taken into consideration. Recently, the ICCE issued recommendations on SBCE in cases of suspected CD, formulating an algorithm which proposed that patients who present suggestive symptoms plus either extraintestinal manifestations, inflammatory markers, or abnormal imaging studies should be selected to undergo capsule enteroscopy [5]. Our results show the high level of success obtained with the technique when this algorithm is used. In this context, it is legitimate to ask whether, when the criteria in the aforementioned algorithm are met, it would not be preferable to opt for balloon-assisted enteroscopy, thus preventing any capsule retention and enabling tissue to be collected for biopsy.

A variety of studies have been published seeking to assess the value of SBCE in the diagnostic assessment of patients with suspected CD [6, 7, 10–15]. The inclusion criteria, based on known data relating to the clinical and biological manifestations of CD, are similar, even though the number of patients included in each study varies considerably. It is recognised that, in patients with CD, the endoscopic findings most frequently detected by the capsule are aphthoid ulcers/erosions [16]. However, we are far from reaching a consensus on the number of lesions considered significant. In the study by Mow et al. [6], the criterion used to presume a diagnosis of CD is the presence of more than three ulcers, whereas in the study by Voderholzer et al. [17] it is the detection of more than ten aphthoid or erosive lesions. The question of NSAIDs is, naturally, crucial, and it is recognised that NSAIDs should not be administered to patients involved in these studies for at least thirty days prior to SBCE [18, 19]. Unlike most of the series cited [6, 7, 11, 14, 15], we did not exclude patients from our calculations according to the number of mucosal breaks. Since there is no consensus on the number of mucosal breaks that should be considered indicative of a diagnosis of CD, the exclusion of patients on the basis of a numerical criterion appears arbitrary. It should be emphasised that in our study only three patients presented three or fewer mucosal breaks. These data are relevant, given that in the work of Goldstein et al.
[3] no healthy individual without a history of ulcerogenic medications presented more than three mucosal breaks.

This study highlights the good diagnostic yield of the technique in patients with suspected CD. The diagnostic yield reached 51.3% in our series, which places it within the range spanning the 37.5% cited in the study by Mow et al. [6] and the 70.6% of Fireman et al. [10]. These variations may be related to the different admission criteria associated with NSAID use, together with the different characteristics of the patients included in the various studies.

The rate of incomplete enteroscopies, which in our series was 17.9%, is similar to the rates reported in the literature for all indications combined [20, 21], consistent with the hypothesis that CD may not increase the risk of an incomplete examination [21]. The capsule retention rate in our series is higher than that reported in studies which include all indications [20, 22]. When weighed against series which include patients with suspected CD, the rate is also higher, namely, when compared with the rates reported by Cheifetz et al. [23] and by Li et al. [22], 1.6% and 0%, respectively. It should be noted that in none of our patients did the clinical assessment and/or the prior imaging studies suggest the presence of stenosis, which conforms to reports in other studies demonstrating the low reliability of clinical and imaging studies in predicting the existence of stenoses [22, 24]. In this context, the recommendation to carry out imaging studies before capsule enteroscopy with the aim of excluding a stenosis [5, 25] is hard to understand. We found that ulcerated stenoses leading to capsule retention were significantly more frequent in the subgroup of patients with positive inflammatory markers. It should be emphasised that none of our four patients with retained capsules developed any clinical manifestations indicative of intestinal occlusion, and only two underwent surgery.

The retrospective nature of our study, along with the fact that many of the patients included were referred from other hospitals, may explain the high rate of failed intubations of the terminal ileum. Nevertheless, the question of ileoscopy in the diagnostic investigation of these patients also merits discussion. The aim of this study was not to compare ileoscopy with SBCE, given that patients who presented lesions indicative of CD in the terminal ileum did not undergo SBCE. In this context, it was only possible to calculate the negative predictive value of ileoscopy which, at 49%, attests that a negative ileoscopy in these patients does not exclude the disease, whose probability remains high. Furthermore, the diagnosis of CD was later established in 22 (51.2%) of the 43 patients who had a negative ileoscopy, and in 21 of these SBCE revealed lesions. The SBCE diagnostic yield for patients with a negative ileoscopy was 56%, with high diagnostic accuracy, sensitivity, and negative predictive value for the diagnosis of CD. The meta-analysis published by Triester et al. [4] showed that SBCE was not significantly better than ileoscopy for patients with suspected CD. However, a study of double-balloon enteroscopy demonstrated that, in a high percentage of patients, ileal involvement in CD may be beyond the reach of the ileoscope [26]. In fact, the most recently published meta-analysis, by Dionisio et al., shows that SBCE is superior to colonoscopy with ileoscopy [27].
Our results corroborate this finding, signifying that, faced with clinically suspected CD and a negative ileoscopy, the use of enteroscopy, namely with capsule, is advisable.

In conclusion, SBCE is a valid diagnostic tool in patients with suspected CD, namely when inflammatory markers are present. It is particularly informative when lesions are not detected, a case in which the diagnosis of CD is very unlikely. The use of SBCE in this indication may lead to retention of the capsule, more frequently in the subgroup of patients with positive inflammatory markers, but this event is not accompanied by symptoms of intestinal occlusion and may resolve without the need for surgery. Finally, the diagnosis of CD is not infrequent among patients with negative ileoscopy, suggesting that capsule enteroscopy is advisable in these cases.
---
*Source: 101284-2010-08-05.xml*
# Comparison of Test Your Memory and Montreal Cognitive Assessment Measures in Parkinson’s Disease
**Authors:** Emily J. Henderson; Howard Chu; Daisy M. Gaunt; Alan L. Whone; Yoav Ben-Shlomo; Veronica Lyell
**Journal:** Parkinson’s Disease
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1012847
---
## Abstract
Background. MoCA is widely used in Parkinson’s disease (PD) to assess cognition. The Test Your Memory (TYM) test is a cognitive screening tool that is self-administered. Objectives. We sought to determine (a) the optimal value of TYM to discriminate between PD patients with and without cognitive deficits on MoCA testing, (b) equivalent MoCA and TYM scores, and (c) interrater reliability in TYM testing. Methods. We assessed the discriminant ability of TYM and the equivalence between TYM and MoCA scores and measured the interrater reliability between three raters. Results. Of the 135 subjects who completed both tests, 55% had cognitive impairment according to MoCA. A MoCA score of 25 was equivalent to a TYM score of 43-44. The area under the receiver operating characteristic (ROC) curve for TYM to differentiate between PD-normal and PD-cognitive impairment was 0.82 (95% CI 0.75 to 0.89). The optimal cutoff to distinguish PD-cognitive impairment from PD-normal was ≤45 (sensitivity 90.5%, specificity 59%), thereby correctly classifying 76.3% of patients with PD-cognitive impairment. Interrater agreement was high (0.97) and TYM was completed in under 7 minutes (interquartile range 5.33 to 8.52 minutes). Conclusions. The TYM test is a useful and less resource-intensive screening test for cognitive deficits in PD.
---
## Body
## 1. Introduction
Cognitive impairment in Parkinson’s disease (PD) is common and associated with functional impairment and poor quality of life [1, 2]. The spectrum of dysfunction ranges from executive dysfunction to the mild cognitive impairment (MCI) seen even in early PD (PD-MCI), through to Parkinson’s disease dementia (PDD). PDD has a cumulative incidence of 80% and is associated with significant morbidity, mortality, and carer stress [1, 3–5]. As the presence of MCI is associated with the development of dementia [6–8] and cognitive deficits impact quality of life [9], accurate identification of those with early cognitive changes is important to facilitate early planning, support, and intervention.

The Montreal Cognitive Assessment (MoCA) is increasingly used to screen for cognitive deficits, largely replacing the less sensitive Mini Mental State Examination (MMSE) [10–12]. MoCA takes 10–15 minutes to administer and assesses seven cognitive domains: visuospatial/executive (5 points), naming (3 points), attention (6 points), language (3 points), abstraction (2 points), memory (5 points), and orientation (6 points), yielding a total possible score of 30. One point is added if the individual has ≤12 years of education (a toy worked example of this scoring arithmetic is sketched at the end of this introduction). Two studies have examined MoCA as a screening test for cognitive impairment in PD. To identify possible PD-MCI with >80% sensitivity, MoCA cutoff scores of 26/27 [10] or <26/30 [11] have been advocated. The Movement Disorder Society (MDS) Task Force for the diagnosis of PD-MCI (level 1 criteria) supports the use of MoCA to demonstrate global cognitive deficits in a clinical setting [13].

The Test Your Memory (TYM) scale (available from http://www.tymtest.com/) is a self-administered test that is validated in Alzheimer’s disease (AD) and has been used in different regions (including in other languages) and clinical settings [14–16]. TYM’s distinct advantage is that it reduces demands on clinical time, as it can be supervised by nonclinical staff. TYM tests the same domains as MoCA: orientation (10 points), ability to copy a sentence (2 points), semantic knowledge (3 points), calculation (4 points), verbal fluency (4 points), similarities (4 points), naming (5 points), visuospatial abilities (2 tasks, total 7 points), and recall of a copied sentence (6 points). The ability to complete the test without assistance is also scored (executive function, 5 points), yielding a total possible score of 50. For both tests, a higher score indicates better performance. With constraints on clinical time, TYM may represent a helpful additional or alternative tool to screen for cognitive deficits in PD.

This substudy sought to determine the ability of TYM to detect cognitive deficits in PD, determine equivalence between TYM and MoCA scores in PD, and assess the interrater reliability of TYM scoring.
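As promised above, here is a toy worked example of the MoCA scoring rule (a hypothetical patient of our own invention, not study data):

```python
# Hypothetical patient, for illustration only: the MoCA total is the sum of
# the seven domain scores (maxima: visuospatial/executive 5, naming 3,
# attention 6, language 3, abstraction 2, memory 5, orientation 6), plus
# one point if the person has 12 or fewer years of education (max 30).
domain_scores = {
    "visuospatial/executive": 4,
    "naming": 3,
    "attention": 5,
    "language": 2,
    "abstraction": 2,
    "memory": 3,
    "orientation": 6,
}
years_of_education = 11

total = sum(domain_scores.values())  # 25 for this hypothetical patient
if years_of_education <= 12:
    total = min(total + 1, 30)       # education adjustment, capped at 30
print(total)                         # 26
```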
## 2. Materials and Methods
### 2.1. Study Population
We undertook a diagnostic test study nested within the ReSPonD trial, a double-blind randomised controlled trial of rivastigmine versus placebo to stabilise gait in people with PD [17]. Patients were invited to attend a screening clinic appointment if they appeared to meet the eligibility criteria for the ReSPonD trial. We sought to identify participants with idiopathic Parkinson’s disease who did not have established dementia, were not treated with cholinesterase inhibitors, were able to walk 18 m, and had been stable on PD medication for 2 weeks. Patients were excluded if they had neurological, visual, or orthopaedic problems that interfered with balance or gait, or were non-English speaking (cognitive tests were performed in English). Potential participants were identified from community and hospital settings, through registers and publicity campaigns.

Interested participants were sent an information pack and then had their eligibility checked by telephone. They were then invited for a face-to-face assessment at which they completed the MoCA as part of the screening protocol. All patients at this visit were invited to participate in the TYM study regardless of their subsequent involvement in the ReSPonD trial. We excluded patients from the drug trial who had overt PD-dementia, the diagnosis of which was operationalised using the Movement Disorder Society Task Force definition of decreased cognition of sufficient severity to impair daily life [18]. Patients with a low MoCA score without clinically overt dementia (on global clinical assessment) were not excluded. Ethical approval was granted by the South West Central Bristol Research Ethics Committee and written informed consent was obtained from participants.
### 2.2. Procedures
Basic demographic information was obtained for all participants, who were assessed in a clinically defined “on” medication state. More in-depth demographic and clinical information was gathered for the participants who subsequently enrolled in the RCT. MoCA and TYM were performed by trained research staff in a variable but nonrandomised order. The MoCA was completed by a registrar in geriatric medicine or a trained research nurse, both of whom supervised and timed the TYM tests. All TYM tests were scored by a medical student (HC). To assess interrater reliability, 30% (n = 40/135) were additionally scored by two other individuals, a consultant geriatrician (VL) and a research assistant with no clinical experience, both of whom were provided with the published marking instructions only.
### 2.3. Statistical Analysis
Baseline data are described as mean ± SD if normally distributed or as median (interquartile range: 25th–75th percentile) if skewed. As TYM and MoCA are scored on different scales, equipercentile equating with log-linear smoothing [19] was undertaken using the “equate” package developed for “R”. Equally ranked percentiles are considered equivalent for the two scores, and a conversion table is produced.

We used published screening criteria to classify participants as “PD-normal” (MoCA score 26–30), “PD-MCI” (MoCA score 21–25), and “PDD” (MoCA < 21) [11]. We then grouped PD-MCI and PDD into one group (“PD-cognitive impairment”). A receiver operating characteristic (ROC) curve was plotted and the area under the curve (AUC) was calculated to determine the ability of a worsening TYM score to discriminate between PD-normal (MoCA score 26–30) and PD-cognitive impairment (MoCA score ≤ 25). The optimal TYM screening cutoff was calculated by maximising Youden’s J statistic [20], which gives equal weighting to sensitivity and specificity.

To assess the reliability of TYM, we calculated the intraclass correlation coefficient (ICC) for interrater agreement using a two-way random-effects model, assuming that the raters were randomly drawn from the population. The ICC is the ratio of intersubject variability to the total variability, defined as the sum of the intersubject variability, the between-rater variability, and the error variability. An ICC greater than 0.80 is regarded as indicative of high reliability [21]. The absolute difference between raters on TYM was calculated using a gold standard rater (VL) and subtracting the individual scores of the other two raters (i.e., VL TYM score minus other rater TYM scores). Statistical analysis was performed using Stata version 13.1 and “R” [22].
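The study itself used the R “equate” package with log-linear presmoothing; the fragment below is only a simplified, unsmoothed Python illustration of the core idea, with made-up score vectors: a TYM score is mapped to the MoCA score occupying the same percentile rank in the sample.

```python
import numpy as np

def percentile_rank(sample, x):
    """Mid-percentile rank of score x within a sample (0 to 1)."""
    sample = np.asarray(sample)
    return np.mean(sample < x) + np.mean(sample == x) / 2.0

def equipercentile(tym_sample, moca_sample, tym_score):
    """Map a TYM score to the MoCA score holding the same percentile rank.
    Unsmoothed sketch; the study presmoothed the score distributions."""
    p = percentile_rank(tym_sample, tym_score)
    moca_sorted = np.sort(np.asarray(moca_sample))
    idx = int(round(p * (len(moca_sorted) - 1)))
    return moca_sorted[idx]

# Made-up paired samples standing in for the 135 observations:
rng = np.random.default_rng(0)
tym = np.clip(rng.normal(43, 5, 135).round(), 0, 50)
moca = np.clip(rng.normal(25, 3, 135).round(), 0, 30)
print(equipercentile(tym, moca, 43))  # with the real data this lands near 25
```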
## 3. Results
### 3.1. Screening and Demographic Characteristics
Overall, 931 patients were screened for potential inclusion in the study. Of these, 500 (54%) did not meet the trial eligibility criteria, regardless of whether they wished to participate. Of the remaining 301 who were potentially eligible but not enrolled, 143 did not reply to the initial invitation to attend and 158 declined to participate. Therefore, 135 attended for face-to-face screening, of whom 130 went on to participate in the ReSPonD trial. Of the 5 who did not subsequently enrol in the drug trial, 1 declined, 2 had likely PDD, and 2 were unable to walk 18 m without an aid. All 5, however, participated in the TYM study. Participant recruitment is shown in Figure 1.

Figure 1
Patient flow.

The characteristics of our cohort are summarised in Table 1. Participants were predominantly Caucasian with a mean (SD) age of 70 (8.1) years. The median MoCA score was 25 and the median TYM score was 43.

Table 1
Baseline demographic data.
| Characteristic | Participants (n = 135) |
|---|---|
| Mean age | 70.0 (8.1) |
| Sex (n female (%)) | 51 (38%) |
| Caucasian ethnicity | 134 (99%) |
| Age at leaving school | 16 (15–17) |
| Montreal Cognitive Assessment (total score) | 25 (22–27) |
| “PD-normal” (MoCA 26–30) | 61 (45%) |
| “PD-cognitive impairment” (MoCA ≤ 25) | 74 (55%) |
| Test Your Memory (total score) | 43 (39–46) |
| Total MDS-UPDRS (total score) | 90 (74–106)* |
| Duration of PD (yrs) | 9 (5–13)* |

*n = 130.
### 3.2. Test Distributions
MoCA and TYM assessments were performed on all 135 participants. MoCA scores ranged from 7 to 30 and TYM scores from 15 to 50. Both measures were negatively skewed. Using the published screening cutoffs for MoCA [11], n = 25 (19%) had deficits consistent with PDD, n = 49 (36%) had deficits consistent with MCI, and n = 61 (45%) had normal cognition. The median time taken to complete TYM was 6.53 mins (interquartile range 5.33 to 8.52 mins). 47% (n = 63) of patients required some degree of assistance to complete the test.
### 3.3. Translation between MoCA and TYM
Corresponding MoCA and TYM scores, after log-linear smoothed equipercentile equating, are shown in Figure 2 and Table 2. Extrapolated data, corresponding to a TYM score of <15, are shown in italics in Table 2. A MoCA score of 25 (the upper limit for screening PD-MCI) corresponds to a TYM score of 43-44, highlighted in bold.

Table 2
Equivalent TYM and MoCA scores.
| TYM | MoCA | TYM | MoCA |
|---|---|---|---|
| 0 | *0* | 25 | 15 |
| 1 | *1* | 26 | 16 |
| 2 | *2* | 27 | 16 |
| 3 | *3* | 28 | 17 |
| 4 | *4* | 29 | 17 |
| 5 | *5* | 30 | 18 |
| 6 | *6* | 31 | 18 |
| 7 | *7* | 32 | 19 |
| 8 | *7* | 33 | 19 |
| 9 | *8* | 34 | 20 |
| 10 | *8* | 35 | 20 |
| 11 | *9* | 36 | 21 |
| 12 | *9* | 37 | 21 |
| 13 | *10* | 38 | 22 |
| 14 | *10* | 39 | 22 |
| 15 | 11 | 40 | 23 |
| 16 | 11 | 41 | 23 |
| 17 | 12 | 42 | 24 |
| 18 | 12 | **43** | **25** |
| 19 | 12 | **44** | **25** |
| 20 | 13 | 45 | 26 |
| 21 | 13 | 46 | 27 |
| 22 | 14 | 47 | 27 |
| 23 | 14 | 48 | 28 |
| 24 | 15 | 49 | 29 |
| 25 | 15 | 50 | 30 |

Italics indicate extrapolated data.

Figure 2
Corresponding raw scores and percentile rank for TYM and MoCA.
### 3.4. Sensitivity and Specificity of TYM
The area under the ROC curve (Figure 3) for TYM to differentiate between PD-normal and PD-cognitive impairment as defined by MoCA was 0.82 (95% CI 0.75 to 0.89). The TYM score that maximised Youden’s J statistic, giving optimum accuracy, was 45 (sensitivity 90.5%, specificity 59.0%), which correctly classified 76.3% of patients with PD-cognitive impairment.

Figure 3
ROC curve showing the sensitivity and specificity of different TYM scores for PD-normal (MoCA 26–30) or PD-cognitive impairment (MoCA ≤ 25). Labelled data point (TYM = 45) gives optimum sensitivity and specificity.
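To make the cutoff selection concrete, here is a small illustrative sketch (hypothetical data, and Python rather than the study’s own statistical software) that scans candidate TYM cutoffs and picks the one maximising Youden’s J:

```python
import numpy as np

def best_cutoff_by_youden(scores, impaired):
    """Return (J, cutoff, sens, spec) for the cutoff maximising Youden's
    J = sensitivity + specificity - 1. A patient screens positive when
    score <= cutoff, since lower TYM scores indicate worse performance."""
    scores = np.asarray(scores, dtype=float)
    impaired = np.asarray(impaired, dtype=bool)
    best = None
    for c in np.unique(scores):
        positive = scores <= c
        sens = positive[impaired].mean()      # detected among the impaired
        spec = (~positive[~impaired]).mean()  # correctly cleared among normals
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, float(c), sens, spec)
    return best

# Hypothetical TYM scores and MoCA-defined impairment labels (74 vs 61,
# mirroring the group sizes above but not the real data):
rng = np.random.default_rng(1)
tym = np.concatenate([rng.normal(40, 4, 74), rng.normal(46, 3, 61)]).round()
labels = np.concatenate([np.ones(74, bool), np.zeros(61, bool)])
print(best_cutoff_by_youden(tym, labels))
```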
### 3.5. Interrater Reliability
The ICC for absolute agreement (ICC = 0.97, 95% CI 0.94 to 0.99, p < 0.001) was high, indicating excellent scoring reliability. The median (IQR) difference between the gold standard rater (VL) and the other raters was −1 (−2 to 0) in both cases.
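For context, the ICC variant described in the Methods (two-way random effects, absolute agreement, single rater, often written ICC(2,1)) can be computed from the two-way ANOVA mean squares. The sketch below uses made-up ratings, not study data:

```python
import numpy as np

def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
    single rater. `ratings` is an (n subjects) x (k raters) array."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    msr = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((r.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = (r - r.mean(axis=1, keepdims=True)
               - r.mean(axis=0, keepdims=True) + grand)
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up example: three raters scoring the same five TYM sheets.
sheets = np.array([[44, 45, 44],
                   [38, 38, 39],
                   [47, 47, 47],
                   [30, 31, 30],
                   [42, 42, 43]])
print(round(icc_2_1(sheets), 3))  # near 1 when the raters almost agree
```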
## 4. Conclusions
MoCA is established and advocated as a screening test for cognitive deficits in PD. The main advantage of the TYM test over MoCA and other screening tests is that it is self-administered: it can be supervised by nonclinical staff and completed while waiting at clinic before seeing a specialist. We have established equivalent scores for TYM and MoCA and assessed the discriminant ability of TYM to detect cognitive deficits in PD. Our results suggest that a TYM score of ≤45 identifies MCI-level cognitive deficits with a sensitivity of 90.5% and a specificity of 59.0%. The relatively low specificity is appropriate for the TYM test’s role as a screening test. Using TYM can avoid the need for further testing in many patients; those below the cutoff can be assessed further with MoCA or other tools. If motor problems such as severe tremor affect completion of the writing and drawing tasks, TYM can be completed by another individual under direction from the patient. Neither TYM nor MoCA showed notable floor or ceiling effects in this population. The time taken to complete the test was acceptable [12] and comparable to previously published data [16], even in this PD population. This is the first study that we are aware of that has examined the utility of TYM in PD. In contrast to a previous study [10], a substantial proportion of this cohort (55%) screened positive for cognitive deficits using the MoCA. Despite excluding those with known PDD, our participants had a broad range of cognitive dysfunction severity, which enhances the generalisability of the results.

This study has several limitations. We excluded people with previously diagnosed PDD, as they were not eligible to take part in the drug trial in which this substudy was nested. With fewer people with very pronounced cognitive deficits, equivalent scores in the lower range should be interpreted cautiously, and this may influence the generalisability of the results. However, we feel that our population of PD patients without dementia but with falls (which are associated with cognitive impairment) represents the group in whom screening for deficits is of most clinical value. We have not compared the TYM to a “gold-standard” test for PD-MCI [23] and PDD [18], but rather to another screening test (albeit one recommended in the diagnostic criteria [level 1] set out by the MDS) [12]. Published MoCA cutoff values for PD-MCI vary slightly between studies. We used a MoCA cutoff score of 26 and may therefore have slightly overestimated the number with cognitive impairment. We did not measure the time taken to complete MoCA as a comparison. Although TYM completion took less than 7 minutes, it is probable that people with more severe cognitive deficits would have taken longer.

We would still recommend using the MoCA if concerns are raised regarding cognition, as this is the recommended standard validated cognitive screening test in PD [12], which stands alone as a minimum assessment, takes <15 minutes to complete, measures major cognitive domains, and can identify subtle cognitive impairment. Observation of the completion of a cognitive test may afford a clinician further insight into the cognitive changes. Our results suggest that the TYM also meets these criteria, may be faster, and, as it does not require specialist supervision, could further support detection of cognitive deficits in PD. Accurate identification of individuals who require further cognitive assessment is a necessary component of both research testing and clinical testing.
Where clinical resource limitations preclude the use of the MoCA, use of the TYM test in PD may be a valuable tool.
---
*Source: 1012847-2016-07-05.xml* | 1012847-2016-07-05_1012847-2016-07-05.md | 24,801 | Comparison of Test Your Memory and Montreal Cognitive Assessment Measures in Parkinson’s Disease | Emily J. Henderson; Howard Chu; Daisy M. Gaunt; Alan L. Whone; Yoav Ben-Shlomo; Veronica Lyell | Parkinson’s Disease
(2016) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1012847 | 1012847-2016-07-05.xml | ---
## Abstract
Background. MoCA is widely used in Parkinson’s disease (PD) to assess cognition. The Test Your Memory (TYM) test is a cognitive screening tool that is self-administered. Objectives. We sought to determine (a) the optimal value of TYM to discriminate between PD patients with and without cognitive deficits on MoCA testing, (b) equivalent MoCA and TYM scores, and (c) interrater reliability in TYM testing. Methods. We assessed the discriminant ability of TYM and the equivalence between TYM and MoCA scores and measured the interrater reliability between three raters. Results. Of the 135 subjects that completed both tests, 55% had cognitive impairment according to MoCA. A MoCA score of 25 was equivalent to a TYM score of 43-44. The area under the receiver operator characteristic (ROC) curve for TYM to differentiate between PD-normal and PD-cognitive impairment was 0.82 (95% CI 0.75 to 0.89). The optimal cutoff to distinguish PD-cognitive impairment from PD-normal was ≤45 (sensitivity 90.5%, specificity 59%), thereby correctly classifying 76.3% of patients with PD-cognitive impairment. Interrater agreement was high (0.97) and TYM was completed in under 7 minutes (interquartile range 5.33 to 8.52 minutes). Conclusions. The TYM test is a useful and less resource-intensive screening test for cognitive deficits in PD.
---
## Body
## 1. Introduction
Cognitive impairment in Parkinson’s disease (PD) is common and associated with functional impairment and poor quality of life [1, 2]. The spectrum of dysfunction ranges from executive dysfunction to mild cognitive impairment (MCI), seen even in early PD (PD-MCI), through to Parkinson’s disease dementia (PDD). PDD has a cumulative incidence of 80% and is associated with significant morbidity, mortality, and carer stress [1, 3–5]. As the presence of MCI is associated with the development of dementia [6–8] and cognitive deficits impact quality of life [9], accurate identification of those with early cognitive changes is important to facilitate early planning, support, and intervention.

The Montreal Cognitive Assessment (MoCA) is increasingly used to screen for cognitive deficits, largely replacing the less sensitive Mini Mental State Examination (MMSE) [10–12]. MoCA takes 10–15 minutes to administer and assesses seven cognitive domains: visuospatial/executive (5 points), naming (3 points), attention (6 points), language (3 points), abstraction (2 points), memory (5 points), and orientation (6 points), yielding a total possible score of 30. One point is added if the individual has ≤12 years of education. Two studies have examined MoCA as a screening test for cognitive impairment in PD. To identify possible PD-MCI with >80% sensitivity, MoCA cutoff scores of 26/27 [10] or <26/30 [11] have been advocated. The Movement Disorder Society (MDS) Task Force for the diagnosis of PD-MCI (level 1 criteria) supports the use of MoCA to demonstrate global cognitive deficits in a clinical setting [13].

The Test Your Memory (TYM) scale (available from http://www.tymtest.com/) is a self-administered test that is validated in Alzheimer’s disease (AD) and has been used in different regions (including in other languages) and clinical settings [14–16]. TYM’s distinct advantage is that it reduces demands on clinical time, as it can be supervised by nonclinical staff. TYM tests the same domains as MoCA: orientation (10 points), ability to copy a sentence (2 points), semantic knowledge (3 points), calculation (4 points), verbal fluency (4 points), similarities (4 points), naming (5 points), visuospatial abilities (2 tasks, total 7 points), and recall of a copied sentence (6 points). The ability to complete the test without assistance is also scored (executive function, 5 points), yielding a total possible score of 50. For both tests, a higher score indicates better performance. With constraints on clinical time, TYM may represent a helpful additional or alternative tool to screen for cognitive deficits in PD.

This substudy sought to determine the ability of TYM to detect cognitive deficits in PD, determine equivalence between TYM and MoCA scores in PD, and assess interrater reliability of TYM scoring.
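To make the two scoring schemes concrete, here is a minimal Python sketch (illustrative names of our own, not code from either instrument) that encodes the per-domain maxima listed above and the one-point MoCA education adjustment.

```python
# Per-domain maximum scores, as described in the text (illustrative only).
MOCA_DOMAIN_MAX = {
    "visuospatial/executive": 5, "naming": 3, "attention": 6,
    "language": 3, "abstraction": 2, "memory": 5, "orientation": 6,
}  # sums to 30

TYM_DOMAIN_MAX = {
    "orientation": 10, "copy_sentence": 2, "semantic_knowledge": 3,
    "calculation": 4, "verbal_fluency": 4, "similarities": 4,
    "naming": 5, "visuospatial": 7, "recall": 6, "self_completion": 5,
}  # sums to 50

def moca_total(domain_scores: dict, years_of_education: int) -> int:
    """Sum the domain scores and add the one-point adjustment for <=12
    years of education, without exceeding the 30-point maximum."""
    raw = sum(domain_scores.values())
    if years_of_education <= 12:
        raw = min(raw + 1, 30)
    return raw
```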
## 2. Materials and Methods
### 2.1. Study Population
We undertook a diagnostic test study nested within the ReSPonD trial, a double-blind randomised controlled trial of rivastigmine versus placebo to stabilise gait in people with PD [17]. Patients were invited to attend a screening clinic appointment if they appeared to meet the eligibility criteria for the ReSPonD trial. We sought to identify participants with idiopathic Parkinson’s disease who did not have established dementia, were not treated with cholinesterase inhibitors, were able to walk 18 m, and had been stable on PD medication for 2 weeks. Patients were excluded if they had neurological, visual, or orthopaedic problems that interfered with balance or gait or were non-English speaking (cognitive tests were performed in English). Potential participants were identified from community and hospital settings, through registers and publicity campaigns.

Interested participants were sent an information pack and, if interested, had their eligibility checked by telephone. They were then invited for a face-to-face assessment at which they completed the MoCA as part of the screening protocol. All patients at this visit were invited to participate in the TYM study regardless of their subsequent involvement in the ReSPonD trial. We excluded patients from the drug trial who had overt PD-dementia, the diagnosis of which was operationalised using the Movement Disorder Society Task Force definition of decreased cognition of sufficient severity to impair daily life [18]. Patients with a low MoCA score without clinically overt dementia (on global clinical assessment) were not excluded. Ethical approval was granted by the South West Central Bristol Research Ethics Committee and written informed consent was obtained from participants.
### 2.2. Procedures
Basic demographic information was obtained for all participants who were assessed in a clinically defined “on” medication state. More in-depth demographic and clinical information was gathered for the participants who subsequently enrolled in the RCT. MoCA and TYM were performed by trained research staff in a variable but nonrandomised order. The MoCA was completed by a registrar in geriatric medicine or trained research nurse, both of whom supervised and timed the TYM tests. All TYM tests were scored by a medical student (HC). To assess interrater reliability, 30% (n = 40/135) were additionally scored by two other individuals, a consultant geriatrician (VL) and a research assistant with no clinical experience, both of whom were provided with the published marking instructions only.
### 2.3. Statistical Analysis
Baseline data are described as mean ± SD if normally distributed or as median (interquartile range: 25th to 75th percentile) if skewed. As TYM and MoCA are scored on different scales, equipercentile equating with log-linear smoothing [19] was undertaken using the “equate” package developed for “R”. Equally ranked percentiles are considered equivalent for the two scores, and a conversion table is produced.

We used published screening criteria to classify participants as “PD-normal” (MoCA score 26–30), “PD-MCI” (MoCA score 21–25), and “PDD” (MoCA < 21) [11]. We then grouped PD-MCI and PDD into one group (“PD-cognitive impairment”). A receiver operating characteristic (ROC) curve was plotted and the area under the curve (AUC) was calculated to determine the ability of a worsening TYM score to discriminate between PD-normal (MoCA score 26–30) and PD-cognitive impairment (MoCA score ≤ 25). The optimal TYM screening cutoff was calculated by maximising Youden’s J statistic [20], which gives equal weighting to sensitivity and specificity.

To assess reliability of TYM, we calculated the intraclass correlation coefficient (ICC) for interrater agreement using a two-way random-effects model, assuming that the raters were randomly drawn from the population. The ICC is the ratio of intersubject variability to the total variability, defined as the sum of the intersubject variability, the between-rater variability, and error variability. An ICC greater than 0.80 is regarded as indicative of high reliability [21]. The absolute difference between raters on TYM was calculated using a gold standard rater (VL) and subtracting the individual scores of the other two raters (i.e., VL TYM score minus other rater TYM scores). Statistical analysis was performed using Stata version 13.1 and “R” [22].
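The equating step can be illustrated in code. The authors used the “equate” package in R with log-linear smoothing; the Python sketch below (a hypothetical, unsmoothed simplification, not the study’s implementation) conveys the core idea: a TYM score and a MoCA score are declared equivalent when their percentile ranks in the two observed distributions are closest.

```python
import numpy as np

def equipercentile_table(tym_scores, moca_scores, tym_max=50, moca_max=30):
    """Build a TYM -> MoCA conversion table by matching percentile ranks.

    Unsmoothed sketch: the paper first applies log-linear smoothing to the
    score frequencies before equating."""
    tym = np.asarray(tym_scores, dtype=float)
    moca = np.asarray(moca_scores, dtype=float)
    moca_ranks = np.array([np.mean(moca <= m) for m in range(moca_max + 1)])
    table = {}
    for t in range(tym_max + 1):
        p = np.mean(tym <= t)                              # percentile rank of TYM = t
        table[t] = int(np.argmin(np.abs(moca_ranks - p)))  # closest-ranked MoCA score
    return table
```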
## 3. Results
### 3.1. Screening and Demographic Characteristics
Overall, 931 patients were screened for potential inclusion in the study. Of these, 500 (54%) did not meet the trial eligibility criteria, regardless of whether they wished to participate. Of the remaining 301 who were not enrolled and were potentially eligible, 143 did not reply to the initial invitation to attend and 158 declined to participate. Therefore, 135 attended for face-to-face screening, of whom 130 went on to participate in the ReSPonD trial. Of the 5 who did not subsequently enroll in the drug trial, n = 1 declined, n = 2 had likely PDD, and n = 2 were unable to walk 18 m without an aid. All 5, however, participated in the TYM study. Participant recruitment is shown in Figure 1.

Figure 1: Patient flow.

The characteristics of our cohort are summarised in Table 1. Participants were predominantly Caucasian with a mean (SD) age of 70 (8.1) years. The median MoCA score was 25 and the median TYM score was 43.

Table 1: Baseline demographic data.

| Characteristic | Participants (n = 135) |
|---|---|
| Mean age (SD), years | 70.0 (8.1) |
| Sex (n female (%)) | 51 (38%) |
| Caucasian ethnicity | 134 (99%) |
| Age at leaving school | 16 (15–17) |
| Montreal Cognitive Assessment (total score) | 25 (22–27) |
| “PD-normal” (MoCA 26–30) | 61 (45%) |
| “PD-cognitive impairment” (MoCA ≤ 25) | 74 (55%) |
| Test Your Memory (total score) | 43 (39–46) |
| Total MDS-UPDRS (total score) | 90 (74–106)* |
| Duration of PD (yrs) | 9 (5–13)* |

*n = 130.
### 3.2. Test Distributions
MoCA and TYM assessments were performed on all 135 participants. MoCA scores ranged from 7 to 30 and TYM scores from 15 to 50. Both measures were negatively skewed. Using the published screening cutoffs for MoCA [11], n = 25 (19%) had deficits consistent with PDD, n = 49 (36%) had deficits consistent with MCI, and n = 61 (45%) had normal cognition. The median time taken to complete TYM was 6.53 mins (interquartile range 5.33 to 8.52 mins). 47% (n = 63) of patients required some degree of assistance to complete the test.
### 3.3. Translation between MoCA and TYM
Corresponding MoCA and TYM scores, after log-linear smoothed equipercentile equating, are shown in Figure 2 and Table 2. Extrapolated data, corresponding to a TYM score of <15, are shown in italics in Table 2. A MoCA score of 25 (the upper limit for screening PD-MCI) corresponds to a TYM score of 43-44, highlighted in bold.

Table 2: Equivalent TYM and MoCA scores.

| TYM | MoCA | TYM | MoCA |
|---|---|---|---|
| 0 | *0* | 25 | 15 |
| 1 | *1* | 26 | 16 |
| 2 | *2* | 27 | 16 |
| 3 | *3* | 28 | 17 |
| 4 | *4* | 29 | 17 |
| 5 | *5* | 30 | 18 |
| 6 | *6* | 31 | 18 |
| 7 | *7* | 32 | 19 |
| 8 | *7* | 33 | 19 |
| 9 | *8* | 34 | 20 |
| 10 | *8* | 35 | 20 |
| 11 | *9* | 36 | 21 |
| 12 | *9* | 37 | 21 |
| 13 | *10* | 38 | 22 |
| 14 | *10* | 39 | 22 |
| 15 | 11 | 40 | 23 |
| 16 | 11 | 41 | 23 |
| 17 | 12 | 42 | 24 |
| 18 | 12 | **43** | **25** |
| 19 | 12 | **44** | **25** |
| 20 | 13 | 45 | 26 |
| 21 | 13 | 46 | 27 |
| 22 | 14 | 47 | 27 |
| 23 | 14 | 48 | 28 |
| 24 | 15 | 49 | 29 |
| 25 | 15 | 50 | 30 |

Italics indicate extrapolated data; bold marks the MoCA 25 equivalence.

Figure 2: Corresponding raw scores and percentile rank for TYM and MoCA.
### 3.4. Sensitivity and Specificity of TYM
The area under the ROC curve (Figure 3) for TYM to differentiate between PD-normal and PD-cognitive impairment, as defined by MoCA, was 0.82 (95% CI 0.75 to 0.89). The TYM score that maximised Youden’s J statistic, giving optimum accuracy, was 45 (sensitivity 90.5%, specificity 59.0%); this cutoff correctly classified 76.3% of patients with PD-cognitive impairment.

Figure 3: ROC curve showing the sensitivity and specificity of different TYM scores for PD-normal (MoCA 26–30) versus PD-cognitive impairment (MoCA ≤ 25). The labelled data point (TYM = 45) gives optimum sensitivity and specificity.
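As a sketch of how such a cutoff is found (hypothetical Python using scikit-learn, not the authors’ Stata/R code), Youden’s J = sensitivity + specificity − 1 is evaluated at every ROC threshold; the TYM score is negated because lower values indicate impairment.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def youden_cutoff(tym_scores, impaired):
    """impaired: 1 = PD-cognitive impairment (MoCA <= 25), 0 = PD-normal.
    Returns (AUC, TYM cutoff, sensitivity, specificity)."""
    scores = -np.asarray(tym_scores, dtype=float)   # lower TYM => more impaired
    auc = roc_auc_score(impaired, scores)
    fpr, tpr, thresholds = roc_curve(impaired, scores)
    j = tpr - fpr                                   # Youden's J at each threshold
    best = int(np.argmax(j))
    # Undo the negation: classify as impaired when TYM <= cutoff.
    return auc, -thresholds[best], tpr[best], 1 - fpr[best]
```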
### 3.5. Interrater Reliability
The ICC for absolute agreement (ICC = 0.97, 95% CI 0.94 to 0.99, p < 0.001) was high, indicating excellent scoring reliability. The median (IQR) difference between the gold standard rater (VL) and the other raters was −1 (−2 to 0) in both cases.
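The ICC used here, a two-way random-effects model for absolute agreement with a single rater (ICC(2,1) in the Shrout–Fleiss scheme), can be computed from the ANOVA mean squares; the Python sketch below (a hypothetical helper assuming a complete subjects × raters matrix) shows the calculation.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n subjects) x (k raters) array with no missing values."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```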
## 4. Conclusions
MoCA is established and advocated as a screening test for cognitive deficits in PD. The main advantage of the TYM test over MoCA and other screening tests is that it is self-administered under the supervision of nonclinical staff and can be completed whilst waiting at clinic before seeing a specialist. We have established equivalent scores for TYM and MoCA and assessed the discriminant ability of TYM to detect cognitive deficits in PD. Our results suggest that a TYM score of ≤45 identifies MCI-level cognitive deficits with a sensitivity of 90.5% and a specificity of 59.0%. The relatively low specificity is appropriate for the TYM test’s role as a screening test. Using TYM can avoid the need for further testing in many patients; those below the cutoff can be assessed further with MoCA or other tools. If motor problems such as severe tremor affect completion of the writing and drawing tasks, TYM can be completed by another individual under direction from the patient. Neither TYM nor MoCA showed notable floor or ceiling effects in this population. The time taken to complete the test was acceptable [12] and comparable to previously published data [16], even in this PD population. This is the first study that we are aware of to have examined the utility of TYM in PD. In contrast to a previous study [10], a substantial proportion of this cohort (55%) screened positive for cognitive deficits using the MoCA. Despite excluding those with known PDD, our participants had a broad range of cognitive dysfunction severity, which enhances the generalisability of the results.

This study has several limitations. We excluded people with previously diagnosed PDD, as they were not eligible to take part in the drug trial in which this substudy was nested. With fewer people with very pronounced cognitive deficits, equivalent scores in the lower range should be interpreted cautiously, and this may influence the generalisability of the results. However, we feel that our population of PD patients without dementia but with falls (which are associated with cognitive impairment) represents the group in whom screening for deficits is of most clinical value. We have not compared the TYM to a “gold-standard” test for PD-MCI [23] and PDD [18], but rather to another screening test (albeit one recommended in the diagnostic criteria [level 1] set out by the MDS) [12]. Published MoCA cutoff values for PD-MCI vary slightly between studies. We used a MoCA cutoff score of 26 and may therefore have slightly overestimated those with cognitive impairment. We did not measure the time taken to complete MoCA as a comparison. Although TYM completion took less than 7 minutes, it is probable that people with more severe cognitive deficits would have taken longer.

We would still recommend using the MoCA if concerns are raised regarding cognition, as this is the recommended standard validated cognitive screening test in PD [12], which stands alone as a minimum assessment, takes <15 minutes to complete, measures major cognitive domains, and can identify subtle cognitive impairment. Observation of the completion of a cognitive test may afford a clinician further insight into the cognitive changes. Our results suggest that the TYM also meets these criteria, may be faster, and, as it does not require specialist supervision, could further support detection of cognitive deficits in PD. Accurate identification of individuals who require further cognitive assessment is a necessary component of both research and clinical testing. Where clinical resource limitations preclude the use of the MoCA, the TYM test may be a valuable tool in PD.
---
*Source: 1012847-2016-07-05.xml* | 2016 |
# Alteration of ROS Homeostasis and Decreased Lifespan in S. cerevisiae Elicited by Deletion of the Mitochondrial Translocator FLX1
**Authors:** Teresa Anna Giancaspero; Emilia Dipalo; Angelica Miccolis; Eckhard Boles; Michele Caselle; Maria Barile
**Journal:** BioMed Research International
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101286
---
## Abstract
This paper deals with the control exerted by the mitochondrial translocator FLX1, which catalyzes the movement of the redox cofactor FAD across the mitochondrial membrane, on the efficiency of ATP production, ROS homeostasis, and lifespan of S. cerevisiae. The deletion of the FLX1 gene resulted in a respiration-deficient and small-colony phenotype accompanied by a significant ATP shortage and ROS unbalance in glycerol-grown cells. Moreover, the flx1Δ strain showed H2O2 hypersensitivity and decreased lifespan. The impaired biochemical phenotype found in the flx1Δ strain might be justified by an altered expression of the flavoprotein subunit of succinate dehydrogenase, a key enzyme in bioenergetics and cell regulation. A search for possible cis-acting consensus motifs in the regulatory region upstream of the SDH1 ORF revealed a dozen upstream motifs that might respond to induced metabolic changes by altering the expression of Flx1p. Among these motifs, two are present in the regulatory regions of genes encoding proteins involved in flavin homeostasis. This is the first evidence that the mitochondrial flavin cofactor status is involved in controlling the lifespan of yeasts, maybe by changing the cellular succinate level. This is not the only case in which the homeostasis of redox cofactors underlies complex phenotypical behaviours, such as lifespan in yeasts.
---
## Body
## 1. Introduction
Riboflavin (Rf or vitamin B2) is the precursor of flavin mononucleotide (FMN) and flavin adenine dinucleotide (FAD), the redox cofactors of a large number of dehydrogenases, reductases, and oxidases. Most of these flavoenzymes are compartmented in the cellular organelles, where they are involved in energy production and redox homeostasis as well as in different cellular regulatory events including apoptosis, chromatin remodelling, and, interestingly, as recently proposed, epigenetic signalling [1–4]. Consistent with the crucial role of flavoenzymes in cell life, flavin-dependent enzyme deficiency and/or impairment of flavin homeostasis in humans and experimental animals has been linked to several diseases, such as cancer, cardiovascular diseases, anaemia, abnormal fetal development, and different neuromuscular and neurological disorders [5–9]. The relevance of these pathologies merits further research aimed at better describing FAD homeostasis and flavoenzyme biogenesis, especially in organisms that can serve as simple and suitable models for human diseases. The conserved biological processes shared by all eukaryotic cells, together with the possibility of simple and quick genetic manipulation, have established the budding yeast, Saccharomyces cerevisiae, as a premier model to understand the biochemistry and molecular biology of mammalian cells and to decipher molecular mechanisms underlying human diseases [10–12].

For many years S. cerevisiae has also been used as a model to study the complexity of the molecular events involved in the undesired process of aging, in which mitochondria play a major role [13, 14]. The role of mitochondria has been pointed out either because the aged respiratory chain is a major source of cellular ROS [14] or because mitochondria actively participate in regulating the homeostasis of the redox cofactor NAD, which regulates yeast lifespan by acting as a substrate of specific deacetylases (EC 3.5.1.-) named sirtuins [15–17]. This might not be the only case in which the homeostasis of redox cofactors underlies complex phenotypical behaviours, such as lifespan in yeasts. Here we investigate whether the mitochondrial flavin cofactor status may also be involved in controlling the lifespan of yeasts, presumably by changing the level of mitochondrial flavoenzymes, which are crucial for cell regulation [18, 19].

It should be noted that, even though mitochondria are rich in flavins and flavoproteins [20, 21], the origin of flavin cofactors from Rf in this organelle is still a matter of debate. Yeasts have the ability either to synthesise Rf de novo or to take it up from outside. The first eukaryotic gene coding for a cellular Rf transporter was identified in S. cerevisiae as the MCH5 gene [22]. Intracellular Rf conversion to FAD is a ubiquitous pathway and occurs via the sequential actions of ATP:riboflavin 5′-phosphotransferase, or riboflavin kinase (RFK, EC 2.7.1.26), which phosphorylates the vitamin into FMN, and of ATP:FMN adenylyl transferase, or FAD synthase (FADS, EC 2.7.7.2), which adenylates FMN to FAD. The first eukaryotic genes encoding RFK and FADS were identified in S. cerevisiae and named FMN1 [23] and FAD1 [24], respectively. While there is no doubt about a mitochondrial localization for Fmn1p [23, 25], the existence of a mitochondrial FADS isoform in yeast is still controversial. A cytosolic localization for Fad1p was reported first [24]; thus newly synthesised FAD was expected to be imported into mitochondria via the FAD translocator Flx1p [25]. However, results from our laboratory showed that, besides in the cytosol, FAD-forming activities can be revealed in mitochondria, thus requiring uptake of the FAD precursors into mitochondria [26, 27]. FAD synthesised inside the organelle can either be delivered to a number of nascent client apo-flavoenzymes or be exported via Flx1p into the cytosol to take part in an extramitochondrial posttranscriptional control of apo-flavoprotein biogenesis [19, 26].

Besides synthesis and transport, mitochondrial flavin homeostasis strictly depends also on flavin degradation. Recently we demonstrated that S. cerevisiae mitochondria (SCM) are able to catalyze FAD hydrolysis via an enzymatic activity which is different from the already characterized NUDIX hydrolases (i.e., enzymes that catalyze the hydrolysis of nucleoside diphosphates linked to other moieties, X) and is regulated by the mitochondrial NAD redox status [17].

To prove the relationship between mitochondrial FAD homeostasis and lifespan in yeast, we used as a model a S. cerevisiae strain lacking the FLX1 gene, which showed a respiratory-deficient phenotype and a derangement of a number of mitochondrial flavoproteins, that is, dihydrolipoamide dehydrogenase (LPD1), succinate dehydrogenase (SDH), and flavoproteins involved in ubiquinone biosynthesis (COQ6) [18, 25, 26, 28]. We demonstrate here that this deleted strain exhibited ATP shortage and ROS unbalance, together with H2O2 hypersensitivity and altered chronological lifespan. This flx1Δ phenotype is correlated with a reduced ability to maintain an appropriate level of the flavoenzyme succinate dehydrogenase (SDH), a member of a complex “flavin network” participating in nucleus-mitochondrion cross-talk.
## 2. Materials and Methods
### 2.1. Materials
All reagents and enzymes were from Sigma-Aldrich (St. Louis, MO, USA). Zymolyase was from ICN (Abingdon, UK) and Bacto Yeast Extract and Bacto Peptone were from Difco (Franklin Lakes, NJ, USA). Mitochondrial substrates were used as TRIS salts at pH 7.0. Solvents and salts used for HPLC were from J. T. Baker (Center Valley, PA, USA). Rat anti-HA monoclonal antibody and peroxidase-conjugated anti-rat IgG secondary antibody were obtained from Roche (Basel, Switzerland) and Jackson Immunoresearch (West Grove, PA, USA), respectively.
### 2.2. Yeast Strains
The wild-type S. cerevisiae strain (EBY157A, or WT; genotype MATα ura3–52 MAL2-8c SUC2 p426MET25) used in this work derived from the CEN.PK series of yeast strains and was obtained from P. Kotter (Institut für Mikrobiologie, Goethe-Universität Frankfurt, Frankfurt, Germany), as already described in [26]. The flx1Δ mutant strain (EBY167A, flx1Δ) was constructed as described in [26], and the WT-HA (EBY157-SDH1-HA) and flx1Δ-HA (EBY167-G418S-SDH1-HA) strains were constructed as described in [19].
### 2.3. Media and Growth Conditions
Cells were grown aerobically at 30°C with constant shaking in rich liquid medium (YEP, 10 g/L Yeast Extract, 20 g/L Bacto Peptone) or in minimal synthetic liquid medium (SM, 1.7 g/L yeast nitrogen base, 5 g/L ammonium sulphate, and 20 mg/L uracil) supplemented with glucose or glycerol (2% each) as carbon sources. The YEP or SM solid media contained 18 g/L agar.
### 2.4. Chronological Lifespan Determination
WT and flx1Δ strains were grown overnight at 30°C in 5 mL YEP liquid medium supplemented with glucose 0.5% up to the early stationary phase. Each strain was then cultured in SM liquid medium at 30°C for 1, 4, and 7 days. Five serial dilutions from each culture, each containing 200 cells as calculated from the A600 nm, were plated onto SM solid medium and grown at 30°C for two to three days.
### 2.5. H2O2 Sensitivity
WT and flx1Δ strains were grown overnight at 30°C in 5 mL YEP liquid medium supplemented with glucose 0.5% up to the early stationary phase. Then, each strain was inoculated in SM liquid medium (initial A600 nm equal to 0.1) containing glucose 2% and H2O2 (0.05 or 2 mM). After 5 or 24 h of growth at 30°C, the H2O2 sensitivity was estimated by measuring the A600 nm of the growth culture.
### 2.6. Malate and Succinate Sensitivity
WT and flx1Δ strains were grown overnight at 30°C in 5 mL YEP liquid medium supplemented with glucose 0.5% up to the early stationary phase. Then, each strain was inoculated in SM liquid medium (initial A600 nm equal to 0.1) containing glucose 2% and succinate or malate (5 mM). After 24 h of growth at 30°C, the sensitivity to succinate or malate was estimated by measuring the A600 nm of the growth culture.
### 2.7. Preparation of Spheroplasts, Mitochondria, and Cellular Lysates
Spheroplasts were prepared using Zymolyase. Mitochondria were isolated from spheroplasts as described in [26]. Cellular lysates were obtained from early exponential-phase (5 h) or stationary-phase (24 h) cells harvested by centrifugation (8000 ×g for 5 min), washed with sterile water, resuspended in 250 μL of lysis buffer (10 mM Tris-HCl, pH 7.6, 1 mM EDTA, 1 mM dithiothreitol, and 0.2 mM phenylmethanesulfonyl fluoride, supplemented with one tablet of Roche protease inhibitor cocktail per 10 mL of lysis buffer), and vortexed with glass beads for 10 min at 4°C. The liquid was removed and centrifuged at 3000 ×g for 5 min to remove cell debris. The protein concentrations of the spheroplasts, mitochondria, and cellular lysates were assayed according to Bradford [29].
### 2.8. Quantitation of Flavins, ATP, and Reactive Oxygen Species (ROS)
Rf, FMN, and FAD content in spheroplasts and SCM was measured in aliquots (5–80 μL) of neutralized perchloric extracts by means of HPLC (Gilson HPLC system including a model 306 pump and a model 307 pump equipped with a Kontron Instruments SFM 25 fluorometer and Unipoint system software), essentially as previously described [26]. ATP content was measured fluorometrically in cellular lysates by using the ATP Detecting System, essentially as in [30]. NADPH formation, which corresponds to ATP content (with a 1:1 stoichiometry), was followed with the excitation wavelength at 340 nm and the emission wavelength at 456 nm. The ROS level was measured fluorometrically on cellular lysates using 2′-7′-dichlorofluorescin diacetate (DCF-DA) as substrate, according to [30], with slight modifications. Briefly, the probe DCF-DA (50 μM) was incubated at 37°C for 1 h with 0.03–0.05 mg of protein and converted to fluorescent dichlorofluorescein (DCF) upon reaction with ROS. The DCF fluorescence of each sample was measured by means of an LS50S Perkin Elmer spectrofluorometer (excitation and emission wavelengths set at 485 nm and 520 nm, respectively).
### 2.9. Enzymatic Assays
Succinate dehydrogenase (SDH, EC 1.3.5.1) and fumarase (FUM, EC 4.2.1.2) activities were measured as in [26]. Glutathione reductase (GR, EC 1.6.4.2) activity was spectrophotometrically assayed by monitoring the absorbance at 340 nm due to NADPH oxidation after glutathione addition (1 mM), essentially as in [30]. Superoxide dismutase (SOD, EC 1.15.1.1) activity was spectrophotometrically measured by the xanthine oxidase/xanthine/cytochrome c method, essentially as described in [31].
### 2.10. Statistical Analysis
All experiments were repeated at least three times with different cell preparations. Results are presented as mean ± standard deviation (SD). Statistical significance was evaluated by Student’s t-test. Values of P < 0.05 were considered statistically significant.
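As an illustration of this analysis step, the sketch below (Python with made-up triplicate values, not the authors’ data) computes mean ± SD and a two-sample Student’s t-test against the 0.05 significance threshold.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements of some specific activity (illustrative).
wt   = np.array([1.05, 0.98, 1.12])
flx1 = np.array([0.21, 0.25, 0.18])

print(f"WT:    {wt.mean():.2f} +/- {wt.std(ddof=1):.2f}")
print(f"flx1d: {flx1.mean():.2f} +/- {flx1.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(wt, flx1)   # Student's t-test, equal variances
print("significant" if p_value < 0.05 else "not significant", f"(P = {p_value:.3g})")
```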
## 3. Results
### 3.1. Phenotypical and Biochemical Consequences of FLX1 Deletion
In order to study the relevance of mitochondrial flavin cofactor homeostasis for cellular bioenergetics, we employed a yeast strain lacking the FLX1 gene, encoding the mitochondrial FAD transporter [26]. This deleted strain showed a small-colony phenotype on both fermentable and nonfermentable carbon sources, due to an impairment in the aerobic respiratory chain pathway [32]. The deleted strain, flx1Δ, grew normally on glucose medium but failed to grow on nonfermentable carbon sources (i.e., glycerol), thus indicating a respiration-deficient phenotype (Figure 1(a)). The growth defect on a nonfermentable carbon source, which was restored by complementing the deleted strain with the YEpFLX1 plasmid [26], was not rescued by the addition of tricarboxylic acid (TCA) cycle intermediates such as succinate or malate (Figure 1(a)).

Figure 1: (a) Respiratory-deficient phenotype of the flx1Δ strain: effect of succinate and malate addition. WT and flx1Δ cells were cultured at 30°C in YEP liquid medium supplemented with either glucose or glycerol (2% each) as carbon source. Where indicated, either 5 mM succinate (Succ) or 5 mM malate (Mal) was added. Cell growth was estimated at the stationary phase (24 h) by measuring the absorbance at 600 nm (A600 nm) of a ten-fold dilution of each growth culture, corrected for the dilution factor. The values reported in the histogram are the means (±SD) of three experiments. (b) Changes in the recombinant Sdh1-HAp level in the flx1Δ strain. Cellular lysates were prepared from WT-HA and flx1Δ-HA cells grown at 30°C up to the exponential growth phase (5 h) in YEP liquid medium supplemented with either glycerol or galactose (2% each) as carbon source. Proteins from cellular lysates (0.05 mg) were separated by SDS/PAGE and transferred onto a PVDF membrane. In each extract, Sdh1-HA protein was detected using an α-HA antibody and its amount was densitometrically evaluated. The values reported in the histogram are the means (±SD) of three experiments performed with different cellular lysate preparations. Statistical evaluation was carried out according to Student’s t-test (*P < 0.05). As a control, the specific activity of the enzyme fumarase (FUM) was determined in each cellular lysate preparation.

Among the mitochondrial flavoenzymes which were demonstrated to be altered in the flx1Δ strain [25, 26, 28], we showed before [19, 32], and confirmed in Figure 1(b), a significantly reduced level of the apo-flavoprotein Sdh1p, resulting in an altered functionality of SDH, or complex II of the respiratory chain. This reduction was revealed by creating a strain in which three consecutive copies of the human influenza hemagglutinin epitope (HA epitope, YPYDVPDYA) were fused in frame to the 3′ end of the SDH1 ORF in the genome of both the WT and flx1Δ strains. The chimeric protein, namely Sdh1-HAp, carrying the HA-tag at the C-terminal end of Sdh1p, lost the ability to covalently bind the flavin cofactor FAD [19, 33], but not its regulatory behaviour, that is, its inducible expression in galactose or on nonfermentable carbon sources. In all the growth conditions tested, the FAD-independent fumarase (FUM) activity, used as a control, was not affected by FLX1 deletion (see histogram in Figure 1(b)).

A significant decrease of the Sdh1-HAp level was accompanied in galactose, but not in glycerol, by a profound derangement of flavin cofactors, particularly evident in cells grown to the early exponential phase (Table 1), in agreement with [25, 26], respectively. The reason for these carbon source-dependent changes in flavin levels, which is not easily explainable, is addressed in Section 4.
Table 1: Endogenous flavin content in spheroplasts and mitochondria.

| Carbon source | Strain | Spheroplasts: FAD (pmol·mg−1) | Spheroplasts: FMN (pmol·mg−1) | Spheroplasts: FAD/FMN | SCM: FAD (pmol·mg−1) | SCM: FMN (pmol·mg−1) | SCM: FAD/FMN |
|---|---|---|---|---|---|---|---|
| Glycerol | WT | 157 ± 7 | 153 ± 7 | 1.1 | 160 ± 10° | 30 ± 10° | 4.8 |
| Glycerol | flx1Δ | 126 ± 11 | 110 ± 10 | 1.1 | 140 ± 30° | 40 ± 10° | 4.5 |
| Galactose | WT | 263 ± 10 | 189 ± 8 | 1.4 | 538 ± 32 | 103 ± 7 | 5.2 |
| Galactose | flx1Δ | 207 ± 8* | 195 ± 8 | 1.1 | 306 ± 15* | 67 ± 11* | 4.8 |

Spheroplasts and mitochondria (SCM) were prepared from WT and flx1Δ cells grown in glycerol or galactose (2%) up to the exponential growth phase (5 h). FAD and FMN content was determined in neutralized perchloric acid extracts, as described in Materials and Methods. The riboflavin amount was not relevant and thus is not reported. The means (±SD) of the flavin content determined in three experiments performed with different preparations are reported. °Data published in (Bafunno et al., 2004) [26]; statistical evaluation was carried out according to Student’s t-test (*P < 0.05).

Consistent with an altered functionality of SDH, the flx1Δ strain also showed impaired oxygen consumption in isolated mitochondria, specifically detectable when succinate was used as a respiratory substrate [19]. A similar phenotype was also observed in yeast strains carrying either a deletion of SDH1 [34] or a deletion of SDH5, which encodes a mitochondrial protein involved in Sdh1p flavinylation [35]. Another respiration-related phenotype of the flx1Δ strain was investigated in Figure 2 by testing the H2O2 hypersensitivity of cells grown on both fermentable and nonfermentable carbon sources. In glucose, the WT cells grew up to the stationary phase (24 h) in the presence of H2O2 (0.05 or 2 mM) essentially as the control cells grown in the absence of H2O2. In glycerol, their ability to grow up to 24 h was reduced by about 20% at 0.05 mM H2O2 and by 60% at 2 mM, with respect to the control cells in which no H2O2 was added.
Figure 2: Sensitivity to H2O2. WT and flx1Δ cells were cultured at 30°C in YEP liquid medium supplemented with either glucose or glycerol (2% each) as carbon source. Where indicated, H2O2 at the indicated concentration was added. Cell growth was estimated at the exponential (5 h) and stationary phase (24 h) by measuring the absorbance at 600 nm (A600 nm). In the histogram, the A600 nm of the cell cultures grown in the presence of H2O2 is reported as a percentage of the control (i.e., the A600 nm of cell cultures grown in the absence of H2O2, set arbitrarily equal to 100%). The values reported in the histogram are the means (±SD) of three experiments.

In glucose, flx1Δ cells did not show H2O2 hypersensitivity at 0.05 mM. At 2 mM H2O2, their ability to grow was significantly reduced (by about 85%) with respect to flx1Δ cells grown in the absence of H2O2. The ability of the flx1Δ cells to grow in glycerol, which was per se drastically reduced by the deletion, was further reduced at 24 h by the addition of 0.05 mM H2O2 (by about 50% with respect to the control cells grown in the absence of H2O2). An even higher sensitivity was observed in the presence of 2 mM H2O2, growth being reduced by about 85% with respect to control cells in which no addition was made. The impairment in the ability to grow under H2O2 stress clearly demonstrates an impairment in the defence capability of the flx1Δ strain. Interestingly, the same phenotype was also observed in the yeast sdh5Δ [35], sdh1Δ, and sdh2Δ [36] strains.

To understand whether the mitochondrial flavoprotein impairment due to FLX1 deletion influenced aging in yeast, we carried out measurements of chronological lifespan on both WT and flx1Δ cells cultured at 30°C in SM liquid medium supplemented with glucose 2% as carbon source (Figure 3). Following 24 h (1 day), 96 h (4 days), and 168 h (7 days) of growth, the number of colonies was determined by spotting five serial dilutions of the liquid culture and incubating the plates for two to three days at 30°C. The results of a typical experiment are reported in Figure 3. A reduced number of small colonies was counted for the flx1Δ strain with respect to the number of colonies counted for the WT strain. This phenotype, particularly evident after 96 h and 168 h of growth, clearly indicated a decrease in the chronological lifespan of the flx1Δ strain. Essentially the same phenotype was observed in sdh1Δ and sdh5Δ strains [35]. Thus, it seems quite clear that correct biogenesis of the mitochondrial flavoproteome, and in particular assembly of SDH, ensures a correct aging rate in yeast. When flx1Δ cells were grown on glycerol, they lost the ability to form colonies following 24 h growth time (data not shown).
Figure 3: Chronological lifespan determination. WT and flx1Δ strains were cultured in SM liquid medium at 30°C. Dilutions from each culture containing about 200 cells (as calculated from the A600 nm, taking one A600 nm unit as equivalent to 3 × 10^7 cells/mL) were harvested after 24, 96, and 168 h, plated onto SM solid medium, and grown at 30°C for two to three days.

In order to correlate the observed phenotype with an impairment of cellular bioenergetics, we compared the ATP content and the ROS amount of the flx1Δ strain with those of the WT. In Figure 4, panel (a), the cellular ATP content was enzymatically measured in neutralized perchloric extracts prepared from WT and flx1Δ cells grown on glycerol. At the exponential growth phase (5 h), a significant reduction was detected in the flx1Δ cells in comparison with the WT (0.21 versus 1.05 nmol·mg−1 protein). At the stationary growth phase (24 h), the ATP content increased significantly in WT cells (3.4 nmol·mg−1 protein) and even more in the deleted strain (5.2 nmol·mg−1 protein). The temporary severe decrease in ATP content induced by the absence of Flx1p was not observed in glucose-grown cells (Figure 4, panel (a′)), as expected when fermentation is the main way to produce ATP.

Figure 4: Bioenergetic and redox impairment in the flx1Δ strain: ATP and ROS content. Cellular lysates were prepared from WT and flx1Δ mutant strains grown in glycerol ((a), (b)) up to either the exponential (5 h) or the stationary phase (24 h), or in glucose ((a′), (b′)) up to the exponential phase (5 h). ATP content ((a), (a′)) was enzymatically determined following perchloric acid extraction and neutralization. ROS content ((b), (b′)) was fluorometrically measured as described in Section 2. The values reported in the histograms are the means (±SD) of three experiments performed with different cellular lysate preparations. Statistical evaluation was carried out according to Student’s t-test (*P < 0.05).

FLX1 deletion also induced a significant increase in the amount of ROS (135% with respect to the WT cells), as estimated with the fluorescent dye DCFH-DA on cellular lysates prepared from cells grown in glycerol up to the exponential growth phase (Figure 4, panel (b)). At the stationary phase, the flx1Δ cells presented almost the same ROS amount measured in the WT cells (Figure 4, panel (b)). In glucose-grown cells, the amount of cellular ROS in the flx1Δ strain was not significantly changed with respect to the WT (Figure 4, panel (b′)), as expected when mitochondrial damage is the major cause of ROS unbalance.

In line with the unique role of flavin cofactors in oxygen metabolism and ROS defence systems [20, 30, 37, 38], we further investigated whether the altered ROS level in the glycerol-grown flx1Δ strain was due to a derangement in enzymes involved in ROS detoxification, such as the flavoprotein glutathione reductase (GR) or the FAD-independent superoxide dismutase (SOD); their specific enzymatic activities were measured in cellular lysates from WT and flx1Δ cells grown on glycerol and glucose, with the FAD-independent enzyme FUM assayed as a control (Figure 5). Figure 5, panel (a), shows a significant increase (65%) in GR specific activity in the flx1Δ strain at the exponential growth phase with respect to that measured in the WT. The GR specific activity in the flx1Δ strain reached the same value measured in the WT cells (about 35 nmol·mg−1 protein) at the stationary phase. In cells grown in glucose up to the exponential growth phase (Figure 5, panel (a′)), a slight but not significant reduction in GR specific activity was detected in the flx1Δ strain with respect to the WT (25 versus 31 nmol·mg−1 protein).

Figure 5: GR and SOD activities in the flx1Δ strain. Cellular lysates were prepared from WT and flx1Δ mutant strains grown in glycerol ((a), (b), and (c)) up to either the exponential (5 h) or the stationary phase (24 h), or in glucose ((a′), (b′), and (c′)) up to the exponential phase (5 h). GR ((a), (a′)) and SOD ((b), (b′)) specific activities were spectrophotometrically determined as described in Section 2. As a control, FUM specific activity ((c), (c′)) was measured as described in Section 2. The values reported in the histograms are the means (±SD) of three experiments performed with different cellular lysate preparations. Statistical evaluation was carried out according to Student’s t-test (*P < 0.05).

As regards SOD, in the glycerol-grown flx1Δ cells after 5 h growth time (Figure 5, panel (b)), the SOD specific activity was significantly higher than the value measured in the WT cells (16 versus 9 standard U·mg−1). At the stationary phase, the SOD specific activity in the flx1Δ strain significantly decreased, reaching a value of 6.6 standard U·mg−1, that is, about two-fold lower than the SOD specific activity measured in WT cells. In glucose-grown cells after 5 h growth time (Figure 5, panel (b′)), a slight but significant reduction in SOD specific activity was detected in the flx1Δ strain with respect to the WT (9.2 versus 12.2 standard U·mg−1). This reduction might be explained by a defect in FAD-dependent protein folding, as previously observed in [30, 39].

In all the growth conditions tested, the FUM activity, used as a control, was not affected by FLX1 deletion (Figure 5, panels (c) and (c′)).
### 3.2. The Role of Flx1p in a Retrograde Cross-Talk Response Regulating Cell Defence and Lifespan
Results described in the previous paragraph strengthen the relevance of Flx1p in ensuring cell defence and correct aging by maintaining the homeostasis of mitochondrial flavoproteome. As concerns SDH, in [19] we gained some insight into the mechanism by which Flx1p could regulate Sdh1p apo-protein expression, as due to a control that involves regulatory sequences located upstream of the SDH1 coding sequence (as reviewed in [40]).To gain further insight into this mechanism, we searched here for elements that could be relevant in modulating Sdh1p expression, in response to alteration in flavin cofactor homeostasis. Therefore, first we searched forcis-acting elements in the regulatory regions located upstream of theSDH1 ORF, first of all in the 5′UTR region, as defined by [41], which corresponds to the first 71 nucleotides before the start codon ofSDH1 ORF. No consensus motifs were found in this region by using the bioinformatic tool “Yeast Comparative Genomics—Broad Institute” [42]. Indeed, it should be noted that no further information is at the moment available on the actual length of the 5′UTR ofSDH1.Thus, we extended our analysis along the 1 kbp upstream region ofSDH1 ORF and we found twelve consensus motifs that could bind regulatory proteins, six of which are of unknown function. Among these motifs, summarised in Table 2, the most relevant, at least in the scenario described by our experiments, seemed to be a motif which is located at −80 nucleotides upstream the start codon ofSDH1 ORF and, namely, motif 29 (consensus sequence shRCCCYTWDt), that perfectly overlaps with motif 38 (consensus sequence CTCCCCTTAT). This motif is also present in the upstream region of the mitochondrial flavoproteinARH1, involved in ubiquinone biosynthesis [28], but not in that of flavoproteinLPD1 andCOQ6 [25, 26, 28]. Interestingly, this motif 29 is also present in the upstream regions of the members of the machinery that maintained Rf homeostasis, that is, the mitochondrial FAD transporterFLX1 [25], the FAD forming enzymeFAD1 [25], and the Rf translocatorMCH5 [22]. Moreover, this motif is also present in the upstream regulatory region of the mitochondrial isoenzymeSOD2, but not in the cytosolic one,SOD1, and in one of the five nuclear succinate sensitive JmjC-domain-containing demethylases, that is,RPH1 [43]. According to [42], this motif is bound by transcription factor Msn2p and its close homologue Msn4p (referred to as Msn2/4p), which under nonstress conditions are located in the cytoplasm. Upon different stress conditions, among which oxidative stress, Msn2/4p are hyper-phosphorylated and shuttled from the cytosol to the nucleus [44]. The pivotal role played by Msn2/4p in chronological lifespan in yeast was first discovered by [45] and recently exhaustively reviewed by [46].Table 2
List of motifs localized in the 1000-nucleotide region upstream of the SDH1 ORF, identified by enriched conservation among Saccharomyces species genomes using the "Yeast Comparative Genomics—Broad Institute" database.

| Number | Motif | Number of ORFs | Binding factor | Function |
| --- | --- | --- | --- | --- |
| 2 | RTTACCCGRM | 865 | Reb1 | RNA polymerase I enhancer binding protein |
| 14 | YCTATTGTT | 561 | Unknown | / |
| 26 | DCGCGGGGH | 285 | Mig1 | Involved in glucose repression |
| 29 | hRCCCYTWDt | 442 | Msn2/4 | Involved in stress conditions |
| 38 | CTCCCCTTAT | 218 | Msn2/4 | Involved in stress conditions |
| 39 | GCCCGG | 152 | Unknown | Filamentation |
| 41 | CTCSGCS | 77 | Unknown | / |
| 47 | TTTTnnnnnnnnnnnngGGGT | 359 | Unknown | / |
| 57 | CGGCnnMGnnnnnnnCGC | 84 | Gal4 | Involved in galactose induction |
| 61 | GKBAGGGT | 363 | TBF1 | Telobox-containing general regulatory factor |
| 63 | GGCSnnnnnGnnnCGCG | 80 | mbp1-like | Involved in regulation of cell cycle progression from G1 to S |
| 70 | CGCGnnnnnGGGS | 156 | Unknown | / |

A further comparison between the 5′UTRs of SDH1 and of proteins involved in FAD homeostasis revealed another common motif of unknown function, located 257 nucleotides upstream of the start codon of the SDH1 ORF, namely motif 14 (consensus sequence YCTATTGTT) [42]. Besides SDH1, this motif is also present in the upstream regions of MCH5 and its homologue MCH4, of FAD1, and of a number of mitochondrial flavoproteins, including HEM14, NDI1, and NCP1. The binding factor and functional role of motif 14 have not yet been annotated in "Yeast Comparative Genomics—Broad Institute" (Table 2). Searching the biological database "Biobase-Gene-regulation-Transfac", we found this motif reported as bound by Rox1p (YPR065W, a heme-dependent repressor of hypoxic genes; SGD information). Rox1p is involved in regulating the expression of proteins involved in oxygen-dependent pathways, such as respiration and heme and sterol biosynthesis [47]. Accordingly, SDH1 expression is downregulated in the rox1Δ strain under aerobiosis [47]. This finding strengthens the well-described relationship between oxygen/heme metabolism and flavoproteins [18, 37]. A possible involvement of this transcriptional pathway in the scenario produced by deletion of FLX1 remains at present only speculative.
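The motif searches above amount to matching degenerate IUPAC consensus sequences against an upstream region. As a minimal sketch of that operation, the Python code below translates an IUPAC consensus such as motif 29 (shRCCCYTWDt) into a regular expression and scans a sequence for matches; the promoter fragment used is invented for illustration and is not the real SDH1 upstream region.

```python
# Scan a DNA sequence for a degenerate IUPAC consensus motif, as done for
# motif 29 (shRCCCYTWDt) in the 1 kbp region upstream of the SDH1 ORF.
import re

# Standard IUPAC nucleotide codes mapped to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def iupac_to_regex(consensus: str) -> re.Pattern:
    """Translate an IUPAC consensus (case-insensitive) into a compiled regex."""
    return re.compile("".join(IUPAC[base] for base in consensus.upper()))

def find_motif(sequence: str, consensus: str):
    """Return (position, matched substring) for each occurrence of the motif."""
    pattern = iupac_to_regex(consensus)
    return [(m.start(), m.group()) for m in pattern.finditer(sequence.upper())]

# Invented upstream fragment containing one match to motif 29 (not SDH1 data).
promoter = "ATGGAACCCCTTATCGA"
print(find_motif(promoter, "shRCCCYTWDt"))   # -> [(3, 'GAACCCCTTAT')]
```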
## 4. Discussion
This paper deals with the role exerted by the mitochondrial translocator Flx1p in the efficiency of ATP production, ROS homeostasis, H2O2 sensitivity, and chronological lifespan in S. cerevisiae, building on previous demonstrations of derangements in specific mitochondrial flavoproteins that are crucial for mitochondrial bioenergetics, including Coq6p [28], Lpd1p, and Sdh1p [19, 25, 26]. The alteration in Sdh1p expression level on different carbon sources is confirmed here (Figure 1) and is accompanied by an alteration in flavin cofactor amounts in galactose-grown, but not glycerol-grown, cells (Table 1), in agreement with [19, 25], respectively. In an attempt to rationalize the carbon source dependence of these flavin level changes, we hypothesize a different subcellular localization for Fad1p in response to the carbon source; experiments to evaluate this possibility are under way in our laboratory. The flx1Δ strain showed impaired succinate-dependent oxygen consumption [19]. Since no reduction in the oxygen consumption rate was found with alternative substrates, such as NADH or glycerol 3-phosphate, possible defects in ubiquinone or heme biosynthesis [28] are unlikely to be relevant for mitochondrial respiration, at least under this nonstress condition. To evaluate the consequences of FLX1 deletion on bioenergetics and the cellular redox balance, the ATP content and ROS level (Figure 4) were compared in the WT and flx1Δ strains, accompanied by measurements of the enzymatic activities of GR and SOD, enzymes involved in ROS detoxification (Figure 5). ATP shortage and ROS imbalance were observed in flx1Δ cells grown in glycerol up to the exponential growth phase, but not in cells grown in glycerol up to the stationary phase or in glucose. These findings are in agreement with the mitochondrial origin of these biochemical parameters. More importantly, the observation that lifespan was shortened in glucose, without a detectable ROS imbalance, allows us to propose that the lifespan reduction induced by the mitochondrial alteration due to the absence of the FLX1 gene (correlated with flavoprotein impairment) may act independently of an increase in ROS level. The flx1Δ strain also showed H2O2 hypersensitivity (Figure 2). Since the same phenotype was previously observed in the sdh1Δ and sdh5Δ strains [35], these results could be explained by the inability of the flx1Δ strain to increase the amount of Sdh1p in response to oxidative stress. In this paper, a correlation between FLX1 deletion and altered chronological lifespan is reported for the first time (Figure 3). A similar phenotype was previously demonstrated for sdh5Δ strains [35]. Thus, it seems quite clear that correct biogenesis of the mitochondrial flavoproteome, and in particular assembly of SDH, ensures a correct aging rate in yeast. This conclusion is also consistent with recent observations in another model organism, C. elegans, in which the FAD-forming enzyme FADS, encoded by the flad-1 gene, was silenced [30, 48]. To understand the molecular mechanism by which FAD homeostasis derangement and maintenance of the flavoproteome level are correlated, a bioinformatic analysis was performed, which revealed at least two cis-acting motifs located in the upstream regions of SDH1, of genes encoding other mitochondrial flavoproteins, and of members of the machinery that maintains cellular FAD homeostasis.
The analysis therefore points to the ability of yeast cells to implement, under H2O2 stress and during aging, a gene expression strategy coordinating flavin cofactor homeostasis with the biogenesis of a number of mitochondrial flavoenzymes involved in various aspects of metabolism, ranging from oxidative phosphorylation to heme and ubiquinone biosynthesis. Even though no experimental evidence yet exists for the direct involvement of these cis-acting motifs in flavin-dependent cell defence and chronological lifespan, their involvement in the scenario produced by deletion of FLX1 is a fascinating question to pursue; experiments in this direction are under way in our laboratory. In [19] we demonstrated that the early-onset change in apo-Sdh1p content observed in the flx1Δ strain is consistent with a posttranscriptional control exerted by Flx1p, as depicted in Figure 6. Thus, inefficient translation of the SDH1 mRNA is expected in the flx1Δ strain owing to this posttranscriptional control [19], even when the mRNA level may change in response to cell stress and/or aging. In this pathway the transcription factors Msn2/4p and Rox1p could play a crucial role.

Figure 6
A possible correlation between mitochondrial FAD homeostasis and chronological lifespan. The scheme summarizes results from studies described in this and other papers [17, 19, 22, 26, 35, 36, 40, 50, 53]. Mch5p, plasma membrane Rf transporter; Rib1-5/7p, enzymes involved in Rf de novo biosynthesis; RfT, mitochondrial riboflavin transporter; Fmn1p, riboflavin kinase; mtFADS, mitochondrial FAD synthase; Flx1p, mitochondrial FAD exporter; I, FAD pyrophosphatase; Sdh1p, succinate dehydrogenase flavoprotein subunit; Sdh5p, protein required for Sdh1p flavinylation; Sdh2/3/4p, other subunits of the succinate dehydrogenase complex; Tmp62p/Sdh6p, factors required for SDH complex assembly; TCA cycle, tricarboxylic acid cycle; TOM complex/TIM complex, proteins involved in mitochondrial protein import; Dic1p, mitochondrial dicarboxylic acid carrier; PDH, prolyl hydroxylase; JmjC, JmjC-domain-containing demethylases; Rox1p, heme-dependent repressor of hypoxic genes; Msn2/4p, transcription factors activated under stress conditions.

Moreover, the scheme in Figure 6 outlines how FLX1 deletion, by causing a change in the expression level of Sdh1p, could activate a sort of retrograde cross-talk directed to the nucleus. In our hypothesis, besides the ROS increase, a key molecule mediating nucleus-mitochondrion cross-talk should be the TCA cycle intermediate succinate, whose amount is expected to increase when SDH activity is altered. The increased amount of succinate may in turn alter the activity of the α-ketoglutarate- and Fe(II)-dependent dioxygenases, among which are (i) the JmjC-domain-containing demethylases [36], which may cause the epigenetic events underlying precocious aging (for an exhaustive review on this point see [49]), and (ii) the prolyl hydroxylase (PDH), which may mimic a hypoxic condition in the cell [50].
## 5. Conclusions
Here we show that in S. cerevisiae deletion of the mitochondrial translocator gene FLX1 results in H2O2 hypersensitivity and altered chronological lifespan, associated with ATP shortage and ROS imbalance on a nonfermentable carbon source. We propose that this yeast phenotype is correlated with a reduced ability to maintain an appropriate level of the succinate dehydrogenase flavoprotein subunit [19], which in turn can either derange epigenetic regulation or mimic a hypoxic condition. Thus, the flx1Δ strain provides a useful model system for studying human aging and degenerative pathological conditions associated with alterations in flavin homeostasis, which can be restored by Rf treatment [51, 52].
---
*Source: 101286-2014-05-08.xml* | 101286-2014-05-08_101286-2014-05-08.md | 63,967 | Alteration of ROS Homeostasis and Decreased Lifespan in S. cerevisiae Elicited by Deletion of the Mitochondrial Translocator FLX1 | Teresa Anna Giancaspero; Emilia Dipalo; Angelica Miccolis; Eckhard Boles; Michele Caselle; Maria Barile | BioMed Research International
(2014) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101286 | 101286-2014-05-08.xml | ---
## 3. Results
### 3.1. Phenotypical and Biochemical Consequences ofFLX1 Deletion
In order to study the relevance of mitochondrial flavin cofactor homeostasis on cellular bioenergetics we introduced a yeast strain lacking theFLX1 gene, encoding the mitochondrial FAD transporter [26]. This deleted strain showed a small-colony phenotype, on both fermentable and nonfermentable carbon sources, due to an impairment in the aerobic respiratory chain pathway [32]. The deleted strain,flx1Δ, grew normally on glucose medium but failed to grow on nonfermentable carbon sources (i.e., glycerol), thus indicating a respiration-deficient phenotype (Figure 1(a)). The growth defect on nonfermentable carbon source, which was restored by complementing the deleted strain with the YEpFLX1 plasmid [26], was not rescued by the addition of tricarboxylic acid (TCA) cycle intermediates such as succinate or malate (Figure 1(a)).(a) Respiratory-deficient phenotype offlx1Δ strain: effect of succinate and malate addition. WT andflX1Δ cells were cultured at 30°C in YEP liquid medium supplemented with either glucose or glycerol (2% each) as carbon source. Where indicated either 5 mM succinate (Succ) or 5 mM malate (Mal) was added. Cell growth was estimated at the stationary phase (24 h) by measuring the absorbance at 600 nm (A
600
nm) of a ten-fold dilution of each growth culture, consistently, corrected for the dilution factor. The values reported in the histogram are the means (±SD) of three experiments. (b) Changes in the recombinant Sdh1-HAp level inflx1Δ strain. Cellular lysates were prepared fromWT-HA andflX1Δ-HA cells grown at 30°C up to the exponential growth phase (5 h) in YEP liquid medium supplemented with either glycerol or galactose (2% each) as carbon source. Proteins from cellular lysates (0.05 mg) were separated by SDS/PAGE and transferred onto a PVDF membrane. In each extract, Sdh1-HA protein was detected by using an α-HA and its amount was densitometrically evaluated. The values reported in the histogram are the means (±SD) of three experiments performed with different cellular lysates preparations. Statistical evaluation was carried out according to Student’s t-test (*
P
<
0.05). As a control, the specific activity of the enzyme fumarase (FUM) was determined in each cellular lysate preparation.
(a)
(b)Among the mitochondrial flavoenzymes which were demonstrated to be altered inflx1Δ strain [25, 26, 28], we showed before [19, 32] and confirmed in Figure 1(b) a significant reduced level of the apo-flavoprotein Sdh1p, resulting in an altered functionality of SDH or complex II of the respiratory chain. This reduction was revealed by creating a strain in which three consecutive copies of the human influenza hemagglutinin epitope (HA epitope, YPYDVPDYA) were fused in frame to the 3′end of theSDH1 ORF in the genome of both the WT andflx1Δ strains. The chimera protein, namely, Sdh1-HAp, carrying the HA-tag at the C-terminal end of Sdh1p, lost the ability to covalently bind the flavin cofactor FAD [19, 33], but not its regulatory behaviour, that is, its inducible expression in galactose or in nonfermentable carbon sources. In all the growth conditions tested, the FAD-independent fumarase (FUM) activity, used as a control, was not affected byFLX1 deletion (see histogram in Figure 1(b)).A significant decrease of Sdh1-HAp level was accompanied in galactose, but not in glycerol, by a profound derangement of flavin cofactors, particularly evident in cell grown at the early exponential phase (Table1), in agreement with [25, 26], respectively. The reason for these carbon source-dependent flavin level changes, which is not easily explainable, is addressed in Section 4
.Table 1
Endogenous flavin content in spheroplasts and mitochondria.
Carbon source
Strain
Spheroplasts
SCM
FAD pmoli mg−1
FMN pmoli·mg−1
FAD/FMN
FAD pmoli mg−1
FMN pmoli·mg−1
FAD/FMN
Glycerol
WT
157 ± 7
153 ± 7
1.1
160 ± 10°
30 ± 10°
4.8
f
l
x
1
Δ
126 ± 11
110 ± 10
1.1
140 ± 30°
40 ± 10°
4.5
Galactose
WT
263 ± 10
189 ± 8
1.4
538 ± 32
103 ± 7
5.2
f
l
x
1
Δ
207 ± 8*
195 ± 8
1.1
306 ± 15*
67 ± 11*
4.8
Spheroplasts and mitochondria (SCM) were prepared from WT andf
l
x
1
Δ cells grown in glycerol or galactose (2%) up to the exponential growth phase (5 h). FAD and FMN content was determined in neutralized perchloric acid extracts, as described in Materials and Methods. Riboflavin amount was not relevant, and thus its value has not been reported. The means (±SD) of the flavin endogenous content determined in three experiments performed with different preparations are reported. °Data published in (Bafunno et al., 2004) [26]; statistical evaluation was carried out according to Student’s t-test (*P
<
0.05).Consistent with an altered functionality of SDH, theflx1Δ strain also showed impaired isolated mitochondria oxygen consumption activity, specifically detectable when succinate was used as a respiratory substrate [19]. Similar phenotype was also observed in yeast strains carrying either a deletion ofSDH1 [34] or a deletion ofSDH5, which encodes a mitochondrial protein involved in Sdh1p flavinylation [35]. Another respiration-related phenotype offlx1Δ strain was investigated in Figure 2, by testing H2O2 hypersensitivity of cells grown on both fermentable and nonfermentable carbon sources. In glucose, the WT cells grew up to the stationary phase (24 h) in the presence of H2O2 (0.05 or 2 mM) essentially as the control cells grown in the absence of H2O2. In glycerol, their ability to grow up to 24 h was reduced of about 20% at 0.05 mM H2O2 and of 60% at 2 mM, with respect to the control cells in which no H2O2 was added.Figure 2
Sensitivity to H2O2. WT andflX1Δ cells were cultured at 30°C in YEP liquid medium supplemented with either glucose or glycerol (2% each) as carbon source. Where indicated, H2O2 at the indicated concentration was added. Cell growth was estimated at the exponential (5 h) and stationary phase (24 h) by measuring the absorbance at 600 nm (A
600
nm). In the histogram, the A
600
nm of the cell cultures grown in the presence of H2O2 is reported as a percentage of the control (i.e., the A
600
nm of cell cultures grown in the absence of H2O2, set arbitrary equal to 100%). The values reported in the histogram are the means (±SD) of three experiments.In glucose,flx1Δ cells did not show H2O2 hypersensitivity at 0.05 mM. At 2 mM H2O2, their ability to grow was significantly reduced (of about 85%) with respect toflx1Δ cells grown in the absence of H2O2. The ability of theflx1Δ cells to grow in glycerol, which wasper se drastically reduced by deletion, was reduced at 24 h by the addition of 0.05 mM H2O2 (about 50% with respect to the control cells grown in the absence of H2O2). An even higher sensitivity to H2O2 was observed in the presence of 2 mM H2O2, having their growth ability reduced of about 85% with respect to control cells in which no addition was made. The impairment in the ability to grow under H2O2 stress conditions clearly demonstrates an impairment in defence capability of theflx1Δ strain. Interestingly, the same phenotype was observed also in the yeastsdh5Δ [35],sdh1Δ, andsdh2Δ [36] strains.To understand whether mitochondrial flavoprotein impairment, due toFLX1 deletion, influenced aging in yeast, we carried out measurements of chronological lifespan on both WT andflx1Δ cells cultured at 30°C in SM liquid medium supplemented with glucose 2% as carbon source (Figure 3). Following 24 h (1 day), 96 h (4 days), and 168 h (7 days) of growth, the number of colonies was determined by spotting five serial dilutions of the liquid culture and incubating the plates for two-three days at 30°C. The results of a typical experiment are reported in Figure 3. A reduced number of small colonies were counted for theflx1Δ strain, with respect to the number of colonies counted for the WT strain. This phenotype, particularly evident after 96 h and 168 h of growth time, clearly indicated a decrease in chronological lifespan of theflx1Δ strain. Essentially the same phenotype was observed insdh1Δ andsdh5Δ strains [35]. Thus, it seems quite clear that a correct biogenesis of mitochondrial flavoproteome, and in particular assembly of SDH, ensures a correct aging rate in yeast. Whenflx1Δ cells were grown on glycerol, they lost the ability to form colonies following 24 h growth time (data not shown).Figure 3
Chronological lifespan determination. WT andflX1Δ strains were cultured in SM liquid medium at 30°C. Dilutions from each culture containing about 200 cells (as calculated from A
600
nm by taking into account that one A
600
nm is equivalent to 3 × 10⁷ cells/mL) were harvested after 24, 96, and 168 h, plated onto SM solid medium, and grown at 30°C for two to three days.

In order to correlate the observed phenotype with an impairment of cellular bioenergetics, we compared the ATP content and the ROS amount of the flx1Δ strain with those of the WT. In Figure 4, panel (a), the cellular ATP content was enzymatically measured in neutralized perchloric extracts prepared from WT and flx1Δ cells grown on glycerol. At the exponential growth phase (5 h), a significant reduction was detected in the flx1Δ cells in comparison with the WT (0.21 versus 1.05 nmol·mg−1 protein). At the stationary growth phase (24 h), the ATP content increased significantly in WT cells (3.4 nmol·mg−1 protein) and even more in the deleted strain (5.2 nmol·mg−1 protein). The temporary severe decrease in ATP content induced by the absence of Flx1p was not observed in glucose-grown cells (Figure 4, panel (a′)), as expected when fermentation is the main way to produce ATP.

Figure 4
Bioenergetic and redox impairment in the flx1Δ strain: ATP and ROS content. Cellular lysates were prepared from WT and flx1Δ mutant strains grown in glycerol ((a), (b)) up to either the exponential (5 h) or the stationary phase (24 h) or in glucose ((a′), (b′)) up to the exponential phase (5 h). ATP content ((a), (a′)) was enzymatically determined following perchloric acid extraction and neutralization. ROS content ((b), (b′)) was fluorometrically measured as described in Section 2. The values reported in the histograms are the means (±SD) of three experiments performed with different cellular lysate preparations. Statistical evaluation was carried out according to Student's t-test (*P < 0.05).

FLX1 deletion also induced a significant increase in the amount of ROS (135% with respect to the WT cells), as estimated with the fluorescent dye DCFH-DA on cellular lysates prepared from cells grown in glycerol up to the exponential growth phase (Figure 4, panel (b)). At the stationary phase, the flx1Δ cells presented almost the same ROS amount as measured in the WT cells (Figure 4, panel (b)). In glucose-grown cells, the amount of cellular ROS in the flx1Δ strain was not significantly changed with respect to the WT (Figure 4, panel (b′)), as expected when mitochondrial damage is the major cause of ROS imbalance.

In line with the unique role of the flavin cofactor in oxygen metabolism and ROS defence systems [20, 30, 37, 38], we further investigated whether the alteration of the ROS level in the glycerol-grown flx1Δ strain was due to a derangement in enzymes involved in ROS detoxification, such as the flavoprotein glutathione reductase (GR) or the FAD-independent superoxide dismutase (SOD); their specific enzymatic activities were measured in cellular lysates from WT and flx1Δ cells grown on glycerol and glucose, with the FAD-independent enzyme FUM assayed as a control (Figure 5). Figure 5, panel (a), shows a significant increase (65%) in GR specific activity in the flx1Δ strain at the exponential growth phase with respect to that measured in the WT. The GR specific activity in the flx1Δ strain reached the same value measured in the WT cells (about 35 nmol·mg−1 protein) at the stationary phase. In cells grown in glucose up to the exponential growth phase (Figure 5, panel (a′)), a slight, but not significant, reduction in GR specific activity was detected in the flx1Δ strain with respect to the WT (25 versus 31 nmol·mg−1 protein).

Figure 5
GR and SOD activities in the flx1Δ strain. Cellular lysates were prepared from WT and flx1Δ mutant strains grown in glycerol ((a), (b), and (c)) up to either the exponential (5 h) or the stationary phase (24 h) or in glucose ((a′), (b′), and (c′)) up to the exponential phase (5 h). GR ((a), (a′)) and SOD ((b), (b′)) specific activities were spectrophotometrically determined as described in Section 2. As a control, FUM specific activity ((c), (c′)) was measured as described in Section 2. The values reported in the histograms are the means (±SD) of three experiments performed with different cellular lysate preparations. Statistical evaluation was carried out according to Student's t-test (*P < 0.05).

As regards SOD, in the glycerol-grown flx1Δ cells after 5 h growth time (Figure 5, panel (b)), the SOD specific activity was significantly higher than the value measured in the WT cells (16 versus 9 standard U·mg−1). At the stationary phase, the SOD specific activity in the flx1Δ strain significantly decreased, reaching a value of 6.6 standard U·mg−1, that is, about two-fold lower than the SOD specific activity measured in WT cells. In glucose-grown cells after 5 h growth time (Figure 5, panel (b′)), a slight, but significant, reduction in SOD specific activity was detected in the flx1Δ strain with respect to the WT (9.2 versus 12.2 standard U·mg−1). This reduction might be explained by a defect in FAD-dependent protein folding, as previously observed [30, 39].

In all the growth conditions tested, the FUM activity, used as a control, was not affected by FLX1 deletion (Figure 5, panels (c) and (c′)).
### 3.2. The Role of Flx1p in a Retrograde Cross-Talk Response Regulating Cell Defence and Lifespan
Results described in the previous paragraph strengthen the relevance of Flx1p in ensuring cell defence and correct aging by maintaining the homeostasis of the mitochondrial flavoproteome. As concerns SDH, in [19] we gained some insight into the mechanism by which Flx1p could regulate Sdh1p apo-protein expression, namely, a control that involves regulatory sequences located upstream of the SDH1 coding sequence (as reviewed in [40]).

To gain further insight into this mechanism, we searched here for elements that could be relevant in modulating Sdh1p expression in response to alterations in flavin cofactor homeostasis. Therefore, we first searched for cis-acting elements in the regulatory regions located upstream of the SDH1 ORF, first of all in the 5′UTR region, as defined by [41], which corresponds to the first 71 nucleotides before the start codon of the SDH1 ORF. No consensus motifs were found in this region using the bioinformatic tool "Yeast Comparative Genomics—Broad Institute" [42]. Indeed, it should be noted that no further information is at the moment available on the actual length of the 5′UTR of SDH1.

Thus, we extended our analysis along the 1 kbp upstream region of the SDH1 ORF and found twelve consensus motifs that could bind regulatory proteins, six of which are of unknown function. Among these motifs, summarised in Table 2, the most relevant, at least in the scenario described by our experiments, seemed to be a motif located −80 nucleotides upstream of the start codon of the SDH1 ORF, namely, motif 29 (consensus sequence hRCCCYTWDt), which perfectly overlaps with motif 38 (consensus sequence CTCCCCTTAT). This motif is also present in the upstream region of the mitochondrial flavoprotein ARH1, involved in ubiquinone biosynthesis [28], but not in those of the flavoproteins LPD1 and COQ6 [25, 26, 28]. Interestingly, motif 29 is also present in the upstream regions of the members of the machinery that maintains Rf homeostasis, that is, the mitochondrial FAD transporter FLX1 [25], the FAD-forming enzyme FAD1 [25], and the Rf translocator MCH5 [22]. Moreover, this motif is also present in the upstream regulatory region of the mitochondrial isoenzyme SOD2, but not in that of the cytosolic one, SOD1, and in one of the five nuclear succinate-sensitive JmjC-domain-containing demethylases, that is, RPH1 [43]. According to [42], this motif is bound by the transcription factor Msn2p and its close homologue Msn4p (referred to as Msn2/4p), which under nonstress conditions are located in the cytoplasm. Upon different stress conditions, among which is oxidative stress, Msn2/4p are hyperphosphorylated and shuttled from the cytosol to the nucleus [44]. The pivotal role played by Msn2/4p in chronological lifespan in yeast was first discovered by [45] and recently exhaustively reviewed in [46].
Table 2
List of motifs localized in the 1000-nucleotide upstream region of the SDH1 ORF, identified by enriched conservation among all Saccharomyces species genomes using the "Yeast Comparative Genomics—Broad Institute" database.

| Number | Motif | Number of ORFs | Binding factor | Function |
|---|---|---|---|---|
| 2 | RTTACCCGRM | 865 | Reb1 | RNA polymerase I enhancer binding protein |
| 14 | YCTATTGTT | 561 | Unknown | / |
| 26 | DCGCGGGGH | 285 | Mig1 | Involved in glucose repression |
| 29 | hRCCCYTWDt | 442 | Msn2/4 | Involved in stress conditions |
| 38 | CTCCCCTTAT | 218 | Msn2/4 | Involved in stress conditions |
| 39 | GCCCGG | 152 | Unknown | Filamentation |
| 41 | CTCSGCS | 77 | Unknown | / |
| 47 | TTTTnnnnnnnnnnnngGGGT | 359 | Unknown | / |
| 57 | CGGCnnMGnnnnnnnCGC | 84 | Gal4 | Involved in galactose induction |
| 61 | GKBAGGGT | 363 | TBF1 | Telobox-containing general regulatory factor |
| 63 | GGCSnnnnnGnnnCGCG | 80 | mbp1-like | Involved in regulation of cell cycle progression from G1 to S |
| 70 | CGCGnnnnnGGGS | 156 | Unknown | / |

A further comparison between the 5′UTRs of SDH1 and of proteins involved in FAD homeostasis revealed another common motif of unknown function located −257 nucleotides upstream of the start codon of the SDH1 ORF, namely, motif 14 (consensus sequence YCTATTGTT) [42]. Besides SDH1, this motif is also present in the upstream regions of MCH5 and its homologue MCH4, of FAD1, and of a number of mitochondrial flavoproteins, including HEM14, NDI1, and NCP1. The binding factor and the functional role of motif 14 have not yet been annotated in "Yeast Comparative Genomics—Broad Institute" (Table 2). Searching the biological database "Biobase-Gene-regulation-Transfac", we found that this motif is reported as bound by Rox1p (YPR065W, a heme-dependent repressor of hypoxic genes—SGD information). Rox1p is involved in regulating the expression of proteins involved in oxygen-dependent pathways, such as respiration and heme and sterol biosynthesis [47]. Consistently, SDH1 expression is downregulated in the rox1Δ strain under aerobiosis [47]. This finding strengthens the well-described relationship between oxygen/heme metabolism and flavoproteins [18, 37]. A possible involvement of this transcriptional pathway in the scenario depicted by deletion of FLX1 remains at the moment only speculative.
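As an aside for reproducibility, motif lookups of this kind can be emulated with a few lines of code. The sketch below (ours, not the Broad Institute tool; the promoter sequence and the planted hit are invented for illustration) compiles an IUPAC degenerate consensus such as motif 29 into a regular expression and reports match positions relative to the start codon.

```python
import re

# IUPAC degenerate nucleotide codes mapped to regex character classes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def motif_to_regex(consensus):
    """Compile an IUPAC consensus (case-insensitive) into a regex pattern."""
    return re.compile("".join(IUPAC[c] for c in consensus.upper()))

def scan_upstream(promoter, consensus):
    """Report hits as (offset, sequence) pairs, where the offset is relative
    to the start codon, assuming `promoter` ends immediately 5' of the ATG."""
    pattern = motif_to_regex(consensus)
    return [(m.start() - len(promoter), m.group())
            for m in pattern.finditer(promoter.upper())]

# Toy 1 kb upstream region with a motif-29-compatible site planted at -80
promoter = "A" * 920 + "AGCCCCTTGT" + "A" * 70
print(scan_upstream(promoter, "hRCCCYTWDt"))  # [(-80, 'AGCCCCTTGT')]
```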
## 4. Discussion
This paper deals with the role exerted by the mitochondrial translocator Flx1p in the efficiency of ATP production, ROS homeostasis, H2O2 sensitivity, and chronological lifespan in S. cerevisiae, starting from the previous demonstrations of derangements in specific mitochondrial flavoproteins that are crucial for mitochondrial bioenergetics, including Coq6p [28], Lpd1p, and Sdh1p [19, 25, 26]. The alteration in Sdh1p expression level on different carbon sources is confirmed here (Figure 1), and it is accompanied by an alteration in flavin cofactor amount in galactose-, but not in glycerol-grown cells (Table 1), in agreement with [19, 25], respectively. In an attempt to rationalize the carbon source dependence of these flavin level changes, we hypothesized a different subcellular localization of Fad1p in response to carbon sources. Experiments are ongoing in our laboratory to evaluate this possibility.

The flx1Δ strain showed impaired succinate-dependent oxygen consumption [19]. Since no reduction in the oxygen consumption rate was found using alternative substrates, such as NADH or glycerol 3-phosphate, possible defects in ubiquinone or heme biosynthesis [28] could not be relevant for mitochondrial respiration, at least under this nonstress condition.

To evaluate the consequences of FLX1 deletion on bioenergetics and cellular redox balance, the ATP content and ROS level (Figure 4) were compared in the WT and flx1Δ strains, accompanied by measurements of the enzymatic activities of GR and SOD, enzymes involved in ROS detoxification (Figure 5). ATP shortage and ROS imbalance were observed in flx1Δ cells grown in glycerol up to the exponential growth phase, but not in cells grown in glycerol up to the stationary phase or in glucose. These findings are in agreement with the mitochondrial origin of these biochemical parameters. More importantly, the observation that lifespan was changed in glucose (not accompanied by a detectable ROS imbalance) allows us to propose that the lifespan shortening induced by the mitochondrial alteration due to the absence of the FLX1 gene (correlated with flavoprotein impairment) may act also independently of a ROS level increase.

The flx1Δ strain also showed H2O2 hypersensitivity (Figure 2). Since the same respiratory-deficient phenotype was previously observed in the yeast sdh1Δ and sdh5Δ strains [35], these results could be explained by the inability of the flx1Δ strain to increase the amount of Sdh1p in response to oxidative stress.

In this paper, for the first time, a correlation between deletion of FLX1 and altered chronological lifespan was reported (Figure 3). A similar phenotype was also previously demonstrated for sdh5Δ strains [35]. Thus, it seems quite clear that correct biogenesis of the mitochondrial flavoproteome, and in particular assembly of SDH, ensures a correct aging rate in yeast. This conclusion is also consistent with recent observations made in another model organism, C. elegans, in which the FAD-forming enzyme FADS encoded by the flad-1 gene was silenced [30, 48].

To understand the molecular mechanism by which FAD homeostasis derangement and flavoproteome level maintenance are correlated, a bioinformatic analysis was performed, which revealed at least two cis-acting motifs located in the upstream regions of the genes encoding Sdh1p, other mitochondrial flavoproteins, and some members of the machinery that maintains cellular FAD homeostasis. The analysis therefore suggests that yeast cells can implement, under H2O2 stress and aging, a strategy of gene expression that coordinates flavin cofactor homeostasis with the biogenesis of a number of mitochondrial flavoenzymes involved in various aspects of metabolism, ranging from oxidative phosphorylation to heme and ubiquinone biosynthesis. Even though no experimental evidence yet exists for the direct involvement of these cis-acting motifs in flavin-dependent cell defence and chronological lifespan, probing their involvement in the scenario depicted by deletion of FLX1 appears to be a fascinating goal to pursue. Experiments in this direction are currently ongoing in our laboratory.

In [19] we demonstrated that the early-onset change in apo-Sdh1p content observed in the flx1Δ strain appeared consistent with a posttranscriptional control exerted by Flx1p, as depicted in Figure 6. Thus, inefficient translation of SDH1 mRNA is expected in the flx1Δ strain due to this posttranscriptional control [19], even when putative mRNA levels may change in response to cell stress and/or aging. In this pathway the transcription factors Msn2/4p and Rox1p could play a crucial role.

Figure 6
A possible correlation between mitochondrial FAD homeostasis and chronological lifespan. The scheme summarizes results from studies described in this and other papers [17, 19, 22, 26, 35, 36, 40, 50, 53]. Mch5p, plasma membrane Rf transporter; Rib1-5/7p, enzymes involved in Rf de novo biosynthesis; RfT, mitochondrial riboflavin transporter; Fmn1p, riboflavin kinase; mtFADS, mitochondrial FAD synthase; Flx1p, mitochondrial FAD exporter; I, FAD pyrophosphatase; Sdh1p, succinate dehydrogenase flavoprotein subunit; Sdh5p, protein required for Sdh1p flavinylation; Sdh2/3/4p, other subunits of the succinate dehydrogenase complex; Tmp62p/Sdh6p, factors required for SDH complex assembly; TCA cycle, tricarboxylic acid cycle; TOM complex/TIM complex, proteins involved in mitochondrial protein import; Dic1p, mitochondrial dicarboxylic acid carrier; PDH, prolyl hydroxylase; JmjC, JmjC-domain-containing demethylases; Rox1p, heme-dependent repressor of hypoxic genes; Msn2/4p, transcription factors activated under stress conditions.

Moreover, the scheme in Figure 6 outlines how FLX1 deletion, by changing the expression level of Sdh1p, could activate a sort of retrograde cross-talk directed to the nucleus. In our hypothesis, besides the ROS increase, a key molecule mediating nucleus-mitochondrion cross-talk should be the TCA cycle intermediate succinate, whose amount is expected to increase when the activity of SDH is altered. The increased amount of succinate may in turn alter the activity of the α-ketoglutarate- and Fe(II)-dependent dioxygenases, among which are (i) the JmjC-domain-containing demethylases [36], which may be causative of the epigenetic events underlying precocious aging (for an exhaustive review on this point see [49]), and (ii) the prolyl hydroxylase (PDH), which may mimic a hypoxia condition in the cell [50].
## 5. Conclusions
Here we show that in S. cerevisiae deletion of the mitochondrial translocator FLX1 results in H2O2 hypersensitivity and altered chronological lifespan, associated with ATP shortage and ROS imbalance on a nonfermentable carbon source. We propose that this yeast phenotype is correlated with a reduced ability to maintain an appropriate level of the succinate dehydrogenase flavoprotein subunit [19], which in turn can either derange epigenetic regulation or mimic a hypoxic condition. Thus, the flx1Δ strain provides a useful model system for studying human aging and the degenerative pathological conditions associated with alterations in flavin homeostasis, which can be restored by Rf treatment [51, 52].
---
*Source: 101286-2014-05-08.xml*
# Achieving Secure and Efficient Data Access Control for Cloud-Integrated Body Sensor Networks
**Authors:** Zhitao Guan; Tingting Yang; Xiaojiang Du
**Journal:** International Journal of Distributed Sensor Networks
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101287
---
## Abstract
Body sensor networks have emerged as one of the most promising technologies for e-healthcare, making remote health monitoring and treatment of patients possible. With the support of mobile cloud computing, the large amount of health-related data collected from various body sensor networks can be managed efficiently. However, how to maintain data security and data privacy in the cloud-integrated body sensor network (C-BSN) is an important and challenging issue, since patients' health-related data are quite sensitive. In this paper, we present a novel secure access control mechanism, MC-ABE (Mask-Certificate Attribute-Based Encryption), for cloud-integrated body sensor networks. A specific signature is designed to mask the plaintext, so that the masked data can be securely outsourced to cloud servers. An authorization certificate composed of the signature and related privilege items is constructed and used to grant privileges to data receivers. To ensure security, a unique value is chosen to mask the certificate for each data receiver. Thus, the certificate is unique for each user, and user revocation can be completed simply by removing the mask value. The analysis shows that the proposed scheme can meet the security requirements of C-BSN and that it also has lower computation and storage costs than other popular models.
---
## Body
## 1. Introduction
Body sensor networks (BSNs) have emerged recently with the rapid development of wearable sensors, implantable sensors, and short-range wireless communication, which make pervasive healthcare monitoring and management increasingly popular [1, 2]. Through a body sensor network, health-related data of a patient can be collected and transferred to the healthcare staff in real time, so the patient's state of health can be kept under monitoring and precautions can be taken if problems arise.

In order to enhance the scalability of the body sensor network, some work focuses on combining cloud computing and body sensor networks. As shown in Figure 1, with the support of mobile cloud computing, a cloud-integrated body sensor network (C-BSN) can be constructed [3]. In a C-BSN, massive local body sensor networks are integrated, and the collected mass data are stored on cloud servers; healthcare staff can continually monitor their patients' status and exchange views when a diagnosis is difficult; researchers can analyze the data to obtain useful results such as the regularity of disease development; government agencies can also take measures for disease prevention and control based on data analysis.

Figure 1
Conceptual architecture of the cloud-integrated body sensor network.

However, there are still several problems and challenges in C-BSN [3, 4]. For example, data security and data privacy must be addressed, since patient-related data are private and sensitive. In this paper, we propose a secure data access control scheme named MC-ABE, which can efficiently ensure data security and data privacy. For data security, data can be securely transferred from data owners to the cloud servers and securely stored; for data privacy, data can be accessed only by authorized users under fine-grained policies.

For example, Bob (data owner) is a patient, and Alice (data requester) is his healthcare doctor. Through the C-BSN, Bob's health-related data can be collected and sent to the cloud server in real time, and Alice retrieves Bob's information from the cloud server to monitor his health status. Apart from authorized persons, Bob does not want anyone else to know about his health data. However, his information may be leaked in many ways: the cloud operator/administrator may access his data; a malicious user may intrude into the cloud server to steal user data; an unauthorized DR may gain access to others' data. In summary, there are three key problems that need to be solved to ensure users' data security and data privacy in C-BSN. Firstly, the cloud is semitrusted; that is, although we outsource the data to the cloud, we still need to prevent cloud operators from accessing the data content. Secondly, we must take measures to keep malicious users out of the C-BSN system. Lastly, it is also important to study how to avoid unauthorized access by other users.

In this paper, we propose a novel secure access control mechanism, MC-ABE, to tackle the aforementioned problems. The main contributions of this paper can be summarized as follows:

(i) We add a specific signature to CP-ABE to mask the plaintext and thereby realize secure encryption/decryption outsourcing.

(ii) We construct a unique authentication certificate for each visitor, which allows the system to achieve more effective control over malicious visitors; in particular, it also leads to a low cost for user revocation.

(iii) We introduce a third-party trust authority to manage the above-mentioned signatures and certificates, which can guarantee data security even if the cloud server is semitrusted.

(iv) In C-BSN, processing data in time is quite necessary, and our proposed scheme can meet such a requirement. As shown in the performance evaluation, our scheme takes less time than the compared methods for data collecting, data transmission, and data acquisition.

The rest of this paper is organized as follows. Section 2 introduces the related work. Then, in Section 3, some preliminaries are given. Our scheme is stated in Section 4. In Section 5, a security analysis is given. In Section 6, the performance of our scheme is evaluated. The paper is concluded in Section 7.
## 2. Related Work
Recently, various techniques have been proposed to address the problems of data security and data privacy in C-BSN. In [5], Sahai and Waters proposed Attribute-Based Encryption (ABE) to realize access control on encrypted data. In ABE, the ciphertext's encryption policy is associated with a set of attributes, and the data owner can be offline after the data are encrypted. One year later, Goyal et al. proposed a new type of ABE, Key-Policy Attribute-Based Encryption (KP-ABE) [6]. In KP-ABE, the ciphertext's encryption policy is also associated with a set of attributes, but the attributes are organized into a tree structure (named the access tree). The benefit of this approach is that a more flexible access control strategy is obtained and fine-grained access control can be realized. However, the data owner lacks full control over the encryption policy; that is, he cannot decide who can access the data and who cannot. To solve this problem, Bethencourt et al. proposed CP-ABE (Ciphertext-Policy Attribute-Based Encryption) [7], in which the data owner constructs the access tree together with visitors' identity information. A user can decrypt the ciphertext if and only if the attributes in his private key match the access tree. So, in CP-ABE, the data owner can configure a more flexible access policy. In [8], Yu et al. aimed to achieve secure, scalable, and fine-grained access control in the cloud environment. Their proposed scheme is based on KP-ABE combined with two other techniques, proxy reencryption and lazy reencryption. It is proved that the proposed scheme can meet the security requirements of the cloud quite well. Similarly, Wang et al. proposed an access control scheme based on CP-ABE, which is also secure and efficient in the cloud environment [9].

In [10], Ahmad et al. proposed a multitoken authorization strategy to remedy the weaknesses of the authorization architecture in the mobile cloud. It reduces the probability of unauthorized access to cloud data and services when malicious activity happens, for example, when IdM (Identity Management) systems are compromised, network links are eavesdropped, or communication tokens are stolen. In [11], Yadav and Dave presented an access model based on CP-ABE which can provide remote integrity checking by augmenting secure data storage operations. To reduce computation overhead and achieve secure encryption/decryption outsourcing, the access tree is divided into two parts: one part is encrypted by the data owner and the other part is encrypted by the cloud server. Thus, a portion of the computation overhead is transferred from the data owner to the cloud server. A similar method is also adopted in the work of Zhou and Huang [12]. In addition to the access tree division, Zhou and Huang also propose an efficient data management model to balance communication and storage overhead and thus reduce the cost of data management operations. In [13], Li et al. presented a low-complexity multiauthority attribute-based encryption scheme for mobile cloud computing, which uses masked shared decryption keys to ensure the security of decryption outsourcing and adopts multiple authorities for authorization to enhance security assurance. The above schemes are based on CP-ABE, in which complex bilinear map calculations are performed. In [14], Yao et al. proposed a novel access control mechanism in which data operation privileges are granted based on authorization certificates. The advantage of such a mechanism is that the computation cost can be decreased remarkably, since there is no bilinear map calculation. The disadvantage is that many operations, such as privilege designation, need to be handled by the data owner, which demands that the data owner know all information about the visitors. In [15], the authors considered the problem of patient self-controlled access privileges to highly sensitive Personal Health Information. They proposed a Secure Patient-Centric Access Control scheme which allows data requesters to have different access privileges based on their roles and assigns different attribute sets to them accordingly. However, they took the cloud server as trusted, and their scheme does not work well for user revocation. In [16], the authors proposed a novel CP-ABE scheme with constant-size decryption keys independent of the number of attributes. Their scheme is suitable for applications based on lightweight mobile devices but not for a large-scale C-BSN.
## 3. Preliminaries
### 3.1. Notations
The notations used in MC-ABE are listed as follows.

Notations in MC-ABE. Consider the following:

DO: data owner,
DR: data requester/receiver,
ESP: encryption service provider,
DSP: decryption service provider,
SSP: storage service provider,
TA: trust authority,
SetS: setup server,
PK: public key,
MK: master key,
SK: secret key,
M: plaintext,
CT: ciphertext,
T: access tree,
MM: masked plaintext,
Cert: authorization certificate,
MValue: mask value,
MCert: masked certificate.

DO and DR are cloud users. ESP is the cloud server that helps DO with data encryption. SSP is the cloud storage server. DSP is the server responsible for data decryption. TA is the third-party trust authority. SetS is the setup server, whose responsibility is to generate PK and MK. PK and MK are parameters used for data encryption/decryption. SK is held by DR and used to decrypt the ciphertext; it is generated using PK and MK. The data before encryption is the plaintext, denoted as M, and CT is the ciphertext of M. T is the access policy (access tree). MM is the masked plaintext; in MC-ABE, the plaintext is masked to MM by a signature before being encrypted, to achieve "double protection." Cert is the authorization certificate (see Section 4.2.1 for details). The mask value is used to mask Cert to generate MCert (see Section 4.2.2 for details).
### 3.2. Basics
#### 3.2.1. Bilinear Pairing
Let G1 and G2 be two multiplicative cyclic groups of prime order p. Let g be a generator of G1 and let e: G1 × G1 → G2 be a bilinear map. For a, b ∈ Zp, the bilinear map e has the following properties [3, 10]:

(1) Bilinearity: for all u, v ∈ G1, we have e(u^a, v^b) = e(u, v)^(ab).

(2) Nondegeneracy: e(g, g) ≠ 1.

(3) Symmetry: e(g^a, g^b) = e(g, g)^(ab) = e(g^b, g^a).
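For intuition, these properties can be checked mechanically in an insecure toy model in which each element of G1 is tracked by its discrete logarithm, so that the pairing reduces to multiplying exponents. The sketch below is ours and purely illustrative (real pairings are computed on elliptic-curve groups, not like this):

```python
# Insecure toy model of a symmetric bilinear pairing: an element g^x of G1
# is stored as the exponent x mod p, and e(g^x, g^y) is represented by the
# exponent x*y of e(g, g). Useful only to sanity-check the algebra.
P = 2**61 - 1  # a Mersenne prime (illustrative modulus, not from the paper)

class G1:
    def __init__(self, exponent):
        self.exp = exponent % P          # the element g^exponent

    def __pow__(self, a):
        return G1(self.exp * a)          # (g^x)^a = g^(x*a)

def pairing(u, v):
    return (u.exp * v.exp) % P           # e(g^x, g^y) = e(g, g)^(x*y)

g = G1(1)
a, b = 1234567, 7654321
u, v = G1(42), G1(99)
# (1) Bilinearity: e(u^a, v^b) == e(u, v)^(a*b)
assert pairing(u ** a, v ** b) == (pairing(u, v) * a * b) % P
# (2) Nondegeneracy (toy analogue): e(g, g) is not the identity exponent
assert pairing(g, g) != 0
# (3) Symmetry: e(g^a, g^b) == e(g^b, g^a)
assert pairing(g ** a, g ** b) == pairing(g ** b, g ** a)
```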
#### 3.2.2. Discrete Logarithm (DL) Problem
Definition 1 (discrete logarithm (DL) problem).
Let G be a multiplicative cyclic group of prime order p and let g be its generator; for α ∈ Zp, given g, g^α as input, output α.

The DL assumption holds in G if it is computationally infeasible to solve the DL problem in G [17].
### 3.3. Ciphertext-Policy Attribute-Based Encryption (CP-ABE)
#### 3.3.1. Access Structure
Let P = {P1, P2, …, Pn} be the universal set [18]. Each element in P is an attribute, that is, a piece of descriptive identification information about a visitor [19]. An access structure is a collection (resp., monotone collection) T of nonempty subsets of P. For example, P = {Beijing, Shanghai, No. 1 Middle School, No. 2 Middle School, student, teacher, administrator}; Visitor 1 has the attribute set A = {Beijing, No. 1 Middle School, student} [20].

The access structure in CP-ABE is a tree structure, named the access tree [2]. In the access tree T, the leaf nodes are associated with descriptive attributes; each interior node is a relation function, such as AND (n of n), OR (1 of n), or n of m (m > n).

Each DR has a set of attributes, which are associated with the DR's SK. If the DR's attribute set satisfies the access tree, the encrypted data can be decrypted with the DR's SK.
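To make the threshold semantics concrete, the following minimal sketch (ours; the node encoding and example policy are illustrative, not from the paper) checks whether an attribute set satisfies such an access tree:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Node:
    attribute: Optional[str] = None          # set on leaf nodes only
    threshold: int = 1                       # k of a "k of n" interior node
    children: List["Node"] = field(default_factory=list)

def satisfies(node: Node, attrs: Set[str]) -> bool:
    if node.attribute is not None:           # leaf: attribute must be held
        return node.attribute in attrs
    satisfied = sum(satisfies(c, attrs) for c in node.children)
    return satisfied >= node.threshold       # interior: k-of-n gate

# Policy "Beijing AND No. 1 Middle School" is a 2-of-2 interior node
tree = Node(threshold=2, children=[
    Node(attribute="Beijing"),
    Node(attribute="No. 1 Middle School"),
])
print(satisfies(tree, {"Beijing", "No. 1 Middle School", "student"}))  # True
print(satisfies(tree, {"Shanghai", "student"}))                        # False
```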
#### 3.3.2. Working Process
In CP-ABE, the plaintext is encrypted with a symmetric key, and that key is then shared through the access tree. In the process of decryption, if the DR's SK satisfies the access tree, the DR recovers the shared secret and the data can be decrypted.
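In other words, CP-ABE is used as a hybrid scheme: only the symmetric key travels through the access tree, while the bulk data is encrypted symmetrically. A minimal sketch of that split, assuming the third-party `cryptography` package for the symmetric layer (the CP-ABE sharing of `sym_key` is elided):

```python
from cryptography.fernet import Fernet

record = b"heart rate: 72 bpm"          # illustrative health record
sym_key = Fernet.generate_key()          # the secret that the access tree shares
ciphertext = Fernet(sym_key).encrypt(record)

# A DR whose attribute set satisfies the access tree recovers sym_key
# via CP-ABE (elided here) and then decrypts the bulk data:
assert Fernet(sym_key).decrypt(ciphertext) == record
```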
### 3.4. Assumptions
In this work, we make the following assumptions.

Assumption 1 (service providers (ESP, DSP, and SSP) are semitrusted).
That is, they will follow our proposed protocol in general but will try to find out as much secret information as possible. Moreover, the stored information may be accessed illegally by malicious insiders or external attackers. In particular, although ESP and DSP undertake most of the computing cost, they do not have enough information to deduce the plaintext.

Assumption 2 (SetS and TA are trusted).
Under no conditions will they leak information about the data and related keys.

In order to deduce more information about the encrypted data, service providers might combine their information to perform a collusion attack. In our scheme, collusion between service providers is taken into consideration.
## 4. MC-ABE
### 4.1. Overview
Our proposed scheme MC-ABE is shown in Figure 2. Seven algorithms are included in MC-ABE: Setup, EncryptDO, EncryptESP, KeyGen, CerGen, DecryptDSP, and DecryptDR.

Figure 2
System model.

For data outsourcing, DO encrypts M with algorithm EncryptDO, in which the signature is used to mask M. Then ESP encrypts T with algorithm EncryptESP to finish the encryption. The encrypted data is stored in SSP.

For data access, when a DR requests data from SSP, the request is sent to TA after verification. TA chooses a unique value to mask the certificate for the DR. Then, using the attribute set of the DR, TA computes SK with algorithm KeyGen. After that, SK is sent to DSP and the certificate is sent to the DR. At the same time, SSP sends the CT to DSP. With SK and CT, DSP can perform decryption and obtain M still masked by the signature. Once the DR receives the certificate, he decrypts the masked certificate with his unique value (TA sends the unique value to this DR when the first authorized request occurs; it will be used in the following requests until this DR is revoked) to get the certificate. Using the certificate, the DR can decrypt the masked M with the signatures in the certificate.

In addition, if a DR is revoked, TA will mark the DR as "revoked" and this DR's unique mask value will become invalid. No certificate will be granted to this DR any more.
### 4.2. Two Important Notions
#### 4.2.1. Authorization Certificate (Cert)
The authorization certificate is introduced in MC-ABE to grant data privileges to DRs. As shown in Structure of Authorization Certificate below, it includes five privilege-related items. DO provides the certificate-related information to TA, and TA then constructs a unique authorization certificate for each authorized DR.

Structure of Authorization Certificate
File ID list (f1, f2, …)
Valid Period (from the start time to the end time)
Signature ({sign_f1}, {sign_f2}, …)
Privilege ({p_f1}, {p_f2}, …)
PK, MK

File ID is the ID list of the authorized files. Valid Period denotes the validity period of the signature, from the start time to the end time. The Signature is used by DO to mask the plaintext during data encryption and by DR to recover the plaintext during data decryption. Privilege is the operation denoted by the signature, such as read, modify, or delete. PK and MK are the two keys noted in Notations in MC-ABE.
#### 4.2.2. Mask Value (MValue)
To achieve fine-grained access control over DRs, the mask value is introduced in MC-ABE. The mask value is maintained by TA. For each DR, TA sets a unique mask value. The mask value is used to blind the authorization certificate before the certificate is sent to the DR. Thus, each DR receives its own unique blinded certificate, since the mask value is unique. In the following, the process is described in detail.

After TA receives a data access request, it first checks the DRID. If the requester is a new user, TA generates a random number t_DRID ∈ Zp and inserts it into the mask value table. Otherwise, if this DRID already exists in the mask value table and the revocation item is "N" (the initial value of this item is "N"; only when the DRID is revoked is it set to "Y"), TA invokes algorithm CerGen to compute the masked certificate (see Table 1).

Table 1
Mask value table (maintained by TA).

| DRID | Mask value | Revocation |
|---|---|---|
| DR1 | MValueDR1 | N |
| DR2 | MValueDR2 | Y |
| DR3 | MValueDR3 | N |

DRID: ID of the DR. Mask value: unique mask value for each DR. Revocation: revocation mark; "Y" means this DR is revoked, "N" means this DR is authorized.

Algorithm (CerGen(t_DRID, PK) → MCert). Construct a certificate Cert as shown in Structure of Authorization Certificate. MCert is the masked Cert. Then, compute as follows:

(1) MValue = g^(t_DRID), MCert = Cert · e(g^θ, g^(t_DRID)) = Cert · e(g, g)^(θ·t_DRID).

If the DR is a new user, MValue and MCert will be sent to him. Otherwise, only MCert is sent to the DR.
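The masking and unmasking round trip can be sanity-checked in the same toy exponent model used in the pairing sketch of Section 3.2.1. Everything below is illustrative (θ, the integer encoding of the certificate, and the function names are ours, not the paper's):

```python
import secrets

P = 2**61 - 1
theta = 123456789                      # stands in for θ chosen at Setup

def cer_gen(cert, t_drid):
    """CerGen sketch: MValue = g^t_DRID, MCert = Cert * e(g^θ, g^t_DRID).
    In the toy model the pairing value is just (θ * t_DRID) mod P."""
    mask = (theta * t_drid) % P        # exponent form of e(g, g)^(θ·t_DRID)
    return (cert * mask) % P, mask     # (MCert, mask factor)

def unmask(mcert, mask):
    """The DR holding its unique value strips the mask (modular inverse)."""
    return (mcert * pow(mask, -1, P)) % P

t_drid = secrets.randbelow(P - 2) + 1  # unique per-DR random value
cert = 424242                          # stand-in integer encoding of Cert
mcert, mask = cer_gen(cert, t_drid)
assert unmask(mcert, mask) == cert
```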
### 4.3. Scheme Description
The whole process of MC-ABE is shown in Figure 3. In this section, we describe each step in detail.

Figure 3
Algorithms’ implementation in MC-ABE.
#### 4.3.1. Data Outsourcing
In C-BSN, DO usually uses mobile devices that lack computing power and storage space. To reduce the encryption overhead on DO, the encryption process is divided into two parts: EncryptDO and EncryptESP. EncryptDO is the encryption algorithm implemented by DO, and EncryptESP is carried out by ESP. Since ESP is semitrusted, we introduce the signature in EncryptDO to mask M. In general, there are three steps for data outsourcing.

Firstly, SetS generates PK and MK.

Algorithm 2 (Setup → PK, MK).
SetS performs the algorithm. Let G0 be a multiplicative cyclic group of prime order p, let g be its generator, and choose four random numbers α, β, ε, θ ∈ Zp (further details in [7]). Consider

(2) PK = (G0, g, h = g^β, e(g, g)^α, g^ε, g^θ), MK = (β, g^α).

Secondly, DO performs the first step of data encryption.

Algorithm 3 (EncryptDO(PK, M, K) → MM).
DO implements the algorithm. PK is obtained from SetS; M is DO's plaintext; MM is the masked M; K is the set of operation privileges, and k is one of the elements of K.

For k ∈ K, we choose a random number v_k ∈ Zp and then compute the signature:

(3) signature_k = e(g^ε, g^(v_k)) = e(g, g)^(ε·v_k).
For simplicity, letv denote the set of v
k
:
v
=
{
v
k
∣
k
∈
K
}; signature denotes the set of s
i
g
n
a
t
u
r
e
k
:
s
i
g
n
a
t
u
r
e
=
{
s
i
g
n
a
t
u
r
e
k
∣
k
∈
K
}.
Choose a random numbers
∈
Z
p; then(4)MM=C~=M·eg,gassignature=M·eg,gas·eg,gεv.
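Equations (3)-(4) amount to multiplicative blinding in the target group: MM is M multiplied by $e(g,g)^{\alpha s}$ and by the signature, and anyone holding both factors can divide them back out. The following toy check runs this arithmetic in the multiplicative group modulo a small Mersenne prime, with `GT_GEN` standing in for $e(g,g)$; the modulus and all parameter names are assumptions for illustration, not a secure instantiation.

```python
# Toy check of eq. (3)-(4): masking M with e(g,g)^{alpha*s} and the signature.
import secrets

Q = 2**61 - 1        # toy prime modulus for the target group GT (assumption)
GT_GEN = 3           # plays the role of e(g, g)

def rand_exp() -> int:
    return secrets.randbelow(Q - 2) + 1

alpha, eps = rand_exp(), rand_exp()   # master-key randomness
s, v_k = rand_exp(), rand_exp()       # per-ciphertext / per-privilege randomness

M = 123456789 % Q                     # plaintext encoded as a GT element

signature_k = pow(GT_GEN, eps * v_k, Q)      # eq. (3): e(g,g)^{eps * v_k}
blind = pow(GT_GEN, alpha * s, Q)            # e(g,g)^{alpha * s}
MM = (M * blind % Q) * signature_k % Q       # eq. (4): masked plaintext

# DSP's partial decryption (cf. eq. (13)) strips e(g,g)^{alpha*s}, leaving
# M * signature; DR then divides by the signature from his certificate.
m_times_sig = MM * pow(blind, -1, Q) % Q
recovered = m_times_sig * pow(signature_k, -1, Q) % Q
assert recovered == M
print("unmasked plaintext:", recovered)
```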
Lastly, ESP performs the last step of data encryption.

Algorithm 4 (EncryptESP(PK, s, T, MM) [7, 11] → CT). ESP implements the algorithm. The access tree T is encrypted from the root node R down to the leaf nodes. For each node x in T, choose a polynomial $q_x$. For a node x, the following notation is used:

- $k_x$: the threshold value of x.
- $d_x$: the degree of $q_x$, with $d_x = k_x - 1$.
- $parent(x)$: a function returning the parent node of x.
- $num_x$: the number of child nodes of x. Each child node y is uniquely identified by an index number $index(y)$, with $1 \leq index(y) \leq num_x$.

The polynomials satisfy

$$q_x(0) = q_{parent(x)}\left(index(x)\right). \tag{5}$$

For the root node R, set $q_R(0) = s$ and choose $d_R$ other points randomly to completely define $q_R$. For any other node x in T, let $q_x(0) = q_{parent(x)}(index(x))$ and choose $d_x$ other points randomly to completely define $q_x$.

Let Y be the set of leaf nodes in T. Compute

$$C = h^{s}, \qquad \forall y \in Y:\; C_y = g^{q_y(0)},\quad C'_y = H\left(att(y)\right)^{q_y(0)}. \tag{6}$$

Then,

$$CT = \left(T,\; \tilde{C} = M \cdot e(g,g)^{\alpha s} \cdot e(g,g)^{\varepsilon v},\; C = h^{s},\; \forall y \in Y:\; C_y = g^{q_y(0)},\; C'_y = H\left(att(y)\right)^{q_y(0)}\right). \tag{7}$$

CT is stored in SSP. Detailed communication information is shown in Figure 4.

Figure 4: Communication information in data outsourcing.
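Equation (5) is top-down secret sharing over the access tree: the root polynomial hides $s$ at $q_R(0)$, and each child's polynomial is pinned so that $q_x(0) = q_{parent(x)}(index(x))$. Below is a minimal sketch under assumed names (`Node`, `assign_polys`) and a toy prime; a 2-of-2 AND gate over two attributes serves as the example tree.

```python
# Sketch of eq. (5): assigning share polynomials down the access tree.
import secrets

P = 2**61 - 1  # toy prime group order (assumption)

class Node:
    def __init__(self, threshold, children=None, attr=None):
        self.k = threshold              # k_x: threshold value of the node
        self.children = children or []  # ordered; index(y) = position + 1
        self.attr = attr                # attribute name for leaf nodes
        self.poly = None                # coefficients of q_x, degree k_x - 1

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def assign_polys(node, secret):
    """Fix q_x(0) = secret, pick d_x = k_x - 1 random coefficients,
    and recurse so that q_child(0) = q_x(index(child))."""
    node.poly = [secret] + [secrets.randbelow(P) for _ in range(node.k - 1)]
    for i, child in enumerate(node.children, start=1):   # index(y) = i
        assign_polys(child, eval_poly(node.poly, i))     # eq. (5)

# Example tree: root is a 2-of-2 AND over attributes A and B.
root = Node(2, [Node(1, attr="A"), Node(1, attr="B")])
s = secrets.randbelow(P)
assign_polys(root, s)
shares = {c.attr: c.poly[0] for c in root.children}      # q_y(0) per leaf
assert root.poly[0] == s
print(shares)
```

The leaf values $q_y(0)$ are exactly what ends up in the exponents of $C_y$ and $C'_y$ in (6).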
#### 4.3.2. Data Request
When a DR requests data from SSP, TA generates SK and a certificate for the DR. Most of the decryption cost is borne by DSP, but DSP cannot obtain M. Building on DSP's work, DR finishes the last step of decryption and obtains M. Similar to data outsourcing, there are three steps for data request.

Firstly, TA generates SK for DR.

Algorithm 5 (KeyGen(MK, S) → SK). S is the attribute set of DR. Generate a random number $r \in Z_p$, and then a random number $r_j \in Z_p$ for each $j \in S$. Compute

$$SK = \left(D = g^{(\alpha + r)/\beta},\; \forall j \in S:\; D_j = g^{r} \cdot H(j)^{r_j},\; D'_j = g^{r_j}\right). \tag{8}$$

Then, TA sends SK to DSP.

Secondly, DSP performs the first step of data decryption: decrypt the access tree in CT to get MM.

Algorithm 6 (DecryptDSP(SK, CT) → MM). When x is a leaf node, let $i = att(x)$, where the function $att(x)$ denotes the attribute associated with the leaf node x in the tree. If $i \in S$,

$$\text{DecryptNodeL}(CT, SK, x) = \frac{e(D_i, C_x)}{e(D'_i, C'_x)} = \frac{e\left(g^{r} \cdot H(i)^{r_i},\; g^{q_x(0)}\right)}{e\left(g^{r_i},\; H(i)^{q_x(0)}\right)} = e(g,g)^{r \cdot q_x(0)}. \tag{9}$$

Otherwise,

$$i \notin S:\quad \text{DecryptNodeL}(CT, SK, x) = \bot. \tag{10}$$

When x is an interior node, call the algorithm DecryptNodeNL(CT, SK, x): for every child z of node x, call DecryptNodeL(CT, SK, z) and denote its output by $F_z$. Let $S_x$ be an arbitrary $k_x$-sized set of child nodes z such that $F_z \neq \bot$, where $k_x$ is the threshold value of the interior node. If no such set exists, the node cannot be satisfied, so return $\bot$. Otherwise, compute and return

$$F_x = \prod_{z \in S_x} F_z^{\Delta_{i, S'_x}(0)},\quad \text{where } i = index(z),\; S'_x = \{index(z) : z \in S_x\},$$
$$F_x = \prod_{z \in S_x} \left(e(g,g)^{r \cdot q_z(0)}\right)^{\Delta_{i, S'_x}(0)} = \prod_{z \in S_x} \left(e(g,g)^{r \cdot q_{parent(z)}(index(z))}\right)^{\Delta_{i, S'_x}(0)} = \prod_{z \in S_x} e(g,g)^{r \cdot q_x(i) \cdot \Delta_{i, S'_x}(0)} = e(g,g)^{r \cdot q_x(0)}. \tag{11}$$

In particular, for the root node R,

$$A = e(g,g)^{r \cdot q_R(0)} = e(g,g)^{r \cdot s}. \tag{12}$$

Finally,

$$\frac{\tilde{C}}{e(C, D)/A} = \frac{\tilde{C}}{e\left(h^{s}, g^{(\alpha + r)/\beta}\right) / e(g,g)^{r \cdot s}} = M \cdot \text{signature}. \tag{13}$$
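The recombination in (11)-(12) is Lagrange interpolation at zero: the coefficients $\Delta_{i,S'_x}(0)$ turn any $k_x$ valid child shares back into $q_x(0)$ in the exponent. The toy check below performs that interpolation directly on the exponents modulo an assumed prime, recovering the secret from a 2-of-3 threshold node.

```python
# Toy check of eq. (11)-(12): Lagrange interpolation at 0 over Z_p.
import secrets

P = 2**61 - 1  # toy prime (assumption)

def lagrange_at_zero(i, idx_set):
    """Delta_{i,S'}(0) = prod_{j in S', j != i} (0 - j) / (i - j) mod P."""
    num, den = 1, 1
    for j in idx_set:
        if j != i:
            num = num * (-j) % P
            den = den * (i - j) % P
    return num * pow(den, -1, P) % P

def recombine(shares):
    """shares: dict index -> q(index); returns q(0)."""
    idx = set(shares)
    return sum(q * lagrange_at_zero(i, idx) for i, q in shares.items()) % P

# A 2-of-3 threshold node: q(x) = s + a*x; children hold q(1), q(2), q(3).
s, a = secrets.randbelow(P), secrets.randbelow(P)
q = lambda x: (s + a * x) % P
shares = {1: q(1), 3: q(3)}            # any 2 of the 3 shares suffice
assert recombine(shares) == s           # q_x(0) recovered, cf. eq. (12)
print("recovered secret matches:", recombine(shares) == s)
```

In (11) the same interpolation happens in the exponent of $e(g,g)$, since raising $F_z$ to $\Delta_{i,S'_x}(0)$ multiplies the exponents.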
Then, $M \cdot \text{signature}$ is sent to DR. On receiving $M \cdot \text{signature}$ and MCert, DR runs the algorithm DecryptDR to finish data decryption.

Lastly, DR performs the last step of data decryption: remove the mask to get M.

Algorithm 7 (DecryptDR($M \cdot \text{signature}$, MCert) → M). DR first recovers Cert, and with it the related signatures:

$$\frac{\text{MCert}}{e\left(g^{\theta}, g^{t_{DRID}}\right)} = \frac{\text{Cert} \cdot e(g,g)^{\theta t_{DRID}}}{e(g,g)^{\theta t_{DRID}}} = \text{Cert}. \tag{14}$$

Then, DR obtains M with the signature:

$$\frac{M \cdot \text{signature}}{\text{signature}} = M. \tag{15}$$
#### 4.3.3. User Revocation
An invalid DR is one who is deemed malicious or whose certificate has expired; such a DR should be revoked from the authorized access list. In MC-ABE, a DR is revoked by invalidating his MValue record in Table 1. Firstly, TA changes the revoked DR's "Revocation" item from "N" to "Y" in the mask value table. Secondly, the current signature is updated to a new one (signature updating is shown in Figure 5). After these two steps, the invalid DR is revoked: when he requests new data, he is treated as a newcomer (the signature has been updated and he does not hold the new one), and TA refuses his request since he is marked as revoked. Valid DRs receive the new signature and access the system as usual.

Figure 5: Signature updating.
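To see arithmetically why the signature update locks out a revoked DR, note that dividing the re-masked data by the stale signature leaves the factor $e(g,g)^{\varepsilon(v'_k - v_k)}$, exactly the quantity analyzed in Section 5.4. A toy numeric check in the same assumed modular group as the earlier sketches:

```python
# Toy check: after the signature update v_k -> v_k', dividing by the stale
# signature leaves M blinded by e(g,g)^{eps*(v_k' - v_k)}, not M itself.
import secrets

Q = 2**61 - 1        # toy prime modulus for GT (assumption)
GT_GEN = 3           # stands in for e(g, g)

eps = secrets.randbelow(Q - 2) + 1
v_old = secrets.randbelow(Q - 2) + 1   # signature exponent held by revoked DR
v_new = secrets.randbelow(Q - 2) + 1   # updated exponent encrypted into CT

M = 987654321 % Q
sig_old = pow(GT_GEN, eps * v_old, Q)
sig_new = pow(GT_GEN, eps * v_new, Q)

masked = M * sig_new % Q                      # re-encrypted data uses sig_new
attempt = masked * pow(sig_old, -1, Q) % Q    # revoked DR divides by sig_old
assert attempt != M                           # still blinded by gt^{eps*(v_new-v_old)}
valid = masked * pow(sig_new, -1, Q) % Q      # valid DR holds the new signature
assert valid == M
print("revoked DR recovers M:", attempt == M)
```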
## 5. Security Analysis
### 5.1. Encryption and Decryption Outsourcing

In CP-ABE, both data encryption and data decryption are performed solely by the cloud users. In MC-ABE, by contrast, data encryption is done by DO and the cloud server collaboratively, and data decryption is undertaken by DR and the cloud server together. M is masked by DO before it is sent to ESP. Only DO and authorized DRs can obtain M; ESP and DSP can obtain MM (the masked M), but they cannot deduce M from MM.

Theorem 8. The security of encryption and decryption in MC-ABE is not weaker than that of CP-ABE.

Proof. In the algorithm EncryptESP, ESP encrypts the access tree T with the parameters s, T, and MM. Consider

$$\tilde{C} = M \cdot e(g,g)^{\alpha s} \cdot \text{signature} = M \cdot e(g,g)^{\alpha s} \cdot e(g,g)^{\varepsilon v}. \tag{16}$$

Using PK and s, ESP can compute $e(g,g)^{\alpha s}$, so the most ESP can recover is $M \cdot e(g,g)^{\varepsilon v}$.

The encrypted data in CP-ABE is $\tilde{C} = M \cdot e(g,g)^{\alpha s}$. Both $\alpha$ and $s$ are random; let $z = \alpha \cdot s$, which is also random. Then $\tilde{C} = M \cdot e(g,g)^{z}$ has the same form as $M \cdot e(g,g)^{\varepsilon v_k}$. According to the security proof in [7], the structure $\tilde{C} = M \cdot e(g,g)^{\alpha s}$ prevents an adversary from deducing M; thus $M \cdot e(g,g)^{\varepsilon v_k}$ in our scheme is equally secure. That is, ESP cannot deduce M from $M \cdot e(g,g)^{\varepsilon v_k}$, so encryption outsourcing is secure in MC-ABE.

As for DSP, it can decrypt CT using SK and obtain the masked plaintext $M \cdot \text{signature}$. The information DSP obtains is the same as that of ESP, so data decryption outsourcing in MC-ABE is secure by the same argument.
### 5.2. Certificate

From the above, the signature is vital to the security of our scheme. Since the signature is an item of the certificate, the security of the signature relies on the certificate. Each DR has his own unique masked certificate, and a DR can recover his certificate only with his own MValue. In the following, we prove that a malicious DR cannot recover another user's Cert from its MCert without the right MValue.

Theorem 9. MCert cannot be decrypted without the right MValue.

Suppose DR1 holds $MCert_1 = Cert_1 \cdot MValue_1 = Cert_1 \cdot e(g,g)^{\theta t_{DR1}}$, and DR2 wants to retrieve $Cert_1$ without knowing $e(g,g)^{\theta t_{DR1}}$.

Proof. DR2 forges $MValue'_1 = e(g,g)^{\theta t'_{DR1}}$ and attempts to recover $Cert_1$:

$$Cert'_1 = \frac{MCert_1}{MValue'_1} = \frac{Cert_1 \cdot MValue_1}{MValue'_1} = Cert_1 \cdot e(g,g)^{\theta \left(t_{DR1} - t'_{DR1}\right)}. \tag{17}$$

In other words, for the forged $MValue'_1$ to be right, we must have $t_{DR1} = t'_{DR1}$, which amounts to solving the DL problem. Since the DL problem is computationally infeasible, MValue is difficult to forge and MCert cannot be decrypted without the right MValue.
### 5.3. Collusion

Service providers might collude, combining their information to deduce M. As stated above, ESP and DSP hold similar information about M; even if ESP colluded with DSP, the most they could obtain is $M \cdot \text{signature}$, whose security is proved in Theorem 8. Thus, MC-ABE is well suited to resist collusion.

SSP is a semitrusted server that stores CT. Even if SSP colluded with ESP and DSP, it would contribute no useful information for deducing M. So, MC-ABE can defend against collusion among SSP, ESP, and DSP.
### 5.4. Revocation

If a DR is revealed to be malicious, he is revoked from the authorized user list, and the signature encrypted in CT is updated. After that, as shown below, the revoked DR can no longer obtain authorized data.

Revoked signature held by the DR: $\text{signature} = e(g,g)^{\varepsilon v_k}$.

Updated signature: $\text{signature}' = e(g,g)^{\varepsilon v'_k}$.

Masked data: $M' = M \cdot \text{signature}' = M \cdot e(g,g)^{\varepsilon v'_k}$.

Dividing by the stale signature: $M' / \text{signature} = M \cdot e(g,g)^{\varepsilon v'_k} / e(g,g)^{\varepsilon v_k} = M \cdot e(g,g)^{\varepsilon (v'_k - v_k)}$.

As in the proof of Theorem 9, recovering M from this value would require solving the DL problem, so MC-ABE is secure under revocation.
## 6. Performance Evaluation
In this section, we numerically analyze the communication and computation cost of MC-ABE. We also give the simulation results in detail.
### 6.1. Numerical Analysis
#### 6.1.1. Computation Cost
Setup. The setup procedure includes defining the multiplicative cyclic group and generating PK and MK, which are used in encryption and key generation. It involves four exponentiation operations and one pairing operation, so its time complexity is O(1); the computation cost is independent of the number of attributes.

EncryptDO. In this procedure, DO is responsible for generating the signature and masking M. Computing the signature involves generating a random number and evaluating a bilinear map; computing the mask involves generating a random number and three multiplication operations. Thus, DO performs two exponentiation operations, two multiplication operations, and one pairing operation for each file. If several privileges are granted at the same time, several signatures are computed; since the cost per privilege is fixed, the total cost is proportional to the number of privileges.

EncryptESP. ESP encrypts the access tree in this procedure. The computation cost is proportional to the number of attributes in the tree. If the universal attribute set in T is I (with |I| the total number of attributes in I), each element of I requires two exponentiation operations, so the overall computation complexity is O(|I|).

KeyGen. This procedure generates SK for DR. The computation cost is proportional to the number of attributes in SK: each attribute requires two pairing operations and one multiplication operation. If DR's attribute set is S (with |S| the total number of attributes in the set, |S| ≤ |I|), the time complexity of computing SK is O(|S|).

CerGen. In this procedure, the certificate is constructed and masked. The items in the certificate are designated by DO. TA performs one exponentiation operation, one multiplication operation, and one pairing operation. The computation cost is fixed, so the complexity is O(1).

DecryptDSP. In this procedure, DSP decrypts the ciphertext. The main overhead is incurred in decrypting every attribute, so the cost is proportional to the number of attributes in the access tree and the complexity is O(|I|).

DecryptDR. In this procedure, DR recovers M from the masked M with one division operation, so the complexity is O(1).
#### 6.1.2. Storage Cost
Compared to CP-ABE, MC-ABE incurs extra storage cost because the certificate and the unique mask value are introduced. As shown in Table 2, the items in certificates are related to data access privileges, so the storage space for certificates is proportional to the number of documents (data). For each DR, one record is kept in the mask value table (Table 1), so the storage space for the mask value table is proportional to the number of DRs. Since the items in the mask value table are quite simple, the total storage cost is not heavy.

Table 2: Impact factors of storage cost.

| | Number of docs | Number of DRs |
| --- | --- | --- |
| Certificate storage space | Related | No |
| Mask value storage space | No | Related |
### 6.2. Simulation Results
To evaluate the performance of MC-ABE, we developed simulation codes based on the CP-ABE toolkit [21]. We compare MC-ABE with two other popular models (CP-ABE and PP-CP-ABE [11]) in four aspects: computation cost for data encryption, for key generation, for data decryption, and for user revocation.

(1) Computation Cost for Data Encryption. Most of the computation cost in encryption is incurred by the encryption of the access tree, which is proportional to the number of leaf nodes. In CP-ABE, data encryption is done by DO. In PP-CP-ABE, data encryption/decryption is outsourced to service providers: the access tree is divided into two parts, one encrypted by DO and the other by ESP. In MC-ABE, the whole access tree is encrypted by ESP. Figure 6(a) compares the computation cost of the three schemes: the x-axis indicates the number of leaf nodes in T (the access tree), and the y-axis indicates the time to encrypt M (computation cost). For x, ten values are selected evenly (10, 20, …, 100). For each x value, we ran the simulation codes 10 times and took the average of the results as the final result. MC-ABE shows better performance than the other two schemes. In PP-CP-ABE, the number of leaf nodes in DO's subtree changes with different tree divisions, so for simplicity we set the number of leaves in DO's subtree to half of the total number of leaves. In Figure 6(b), we also show the confidence interval for the results in Figure 6(a) (only the results for DO's computation cost in MC-ABE are given, since the results for PP-CP-ABE and CP-ABE are consistent with MC-ABE). Figure 6(b) shows that all averaged results lie within the confidence interval.

Figure 6: (a) DO's computation cost for data encryption in CP-ABE, PP-CP-ABE, and MC-ABE. In PP-CP-ABE, part of the encryption computation is transferred to the cloud server to reduce DO's cost; in MC-ABE, further effort is made to reduce the computation cost undertaken by DO. (b) Computation cost of DO (95% confidence interval, assuming random data with normal distribution). (c) Computation cost of key generation (95% confidence interval, assuming random data with normal distribution). (d) Computation cost of DR in CP-ABE and MC-ABE. Similar to ESP in MC-ABE, DSP undertakes most of the computation in decryption; the cost is proportional to the number of attributes in the private key. (e) Computation cost for user revocation. With the authorization certificate in MC-ABE, the revocation cost is reduced markedly.

(2) Computation Cost for Key Generation. As with the data encryption simulation, we take the average key generation cost as the final result. As shown in Figure 6(c), the average values are very close to the lower and upper bounds of the confidence interval, so we also list the source data of the simulation results in Table 3. All averaged results lie within the confidence interval, so the simulation results are credible. The results show that the computation cost grows with the number of attributes in the private key. The algorithm KeyGen is implemented by TA, so it incurs no cost for DO.
Table 3: Computation cost of key generation (source data of Figure 6(c); 95% confidence interval, assuming random data with normal distribution). Att_num is the number of DR's attributes, CI the confidence interval, and Ave the average value.

| Att_num | CI | Ave |
| --- | --- | --- |
| 5 | [5.291941, 5.30779403] | 5.2998678 |
| 10 | [11.90953, 11.9336295] | 11.9215787 |
| 15 | [18.54277, 18.5893065] | 18.5660398 |
| 20 | [25.12693, 25.1588600] | 25.1428953 |
| 25 | [31.65159, 31.7310845] | 31.6913405 |
| 30 | [38.26554, 38.3662469] | 38.3158938 |
| 35 | [44.86881, 44.9555189] | 44.9121638 |
| 40 | [51.45481, 51.6333506] | 51.5440794 |
| 45 | [58.04029, 58.1582152] | 58.0992549 |
| 50 | [64.54168, 64.6776467] | 64.6096648 |

(3) Computation Cost for Data Decryption. In MC-ABE, most of the computation cost is shifted to DSP, so the computation cost of DR is constant. The comparison results are shown in Figure 6(d).

(4) Computation Cost for User Revocation. In MC-ABE, user revocation is simplified because the signature is introduced. When a user revocation happens, the revoked DR's "Revocation" item in the mask value table is set to "Y", his new data requests are no longer responded to, and his former signature encrypted in the ciphertext is changed. These operations require one multiplication operation and one exponentiation operation. The simulation results are shown in Figure 6(e).
## 7. Conclusion
C-BSN is a promising technology that can greatly improve people's healthcare experience. However, keeping data secure and private in C-BSN is an important and challenging issue, since patients' health-related data are highly sensitive. In this paper, we propose a novel encryption outsourcing scheme, MC-ABE, that meets the data security and data privacy requirements of C-BSN. In MC-ABE, a specific signature is constructed to mask the plaintext, a unique authorization certificate is constructed for each visitor, and a third-party trust authority is introduced to manage these signatures and certificates. The security analysis proves that MC-ABE meets the security requirements of C-BSN, and the performance evaluation shows that MC-ABE incurs lower computation and storage costs than other popular models. In future work, we plan to explore improving the scalability of MC-ABE.
---
*Source: 101287-2015-08-20.xml* | 101287-2015-08-20_101287-2015-08-20.md | 75,791 | Achieving Secure and Efficient Data Access Control for Cloud-Integrated Body Sensor Networks | Zhitao Guan; Tingting Yang; Xiaojiang Du | International Journal of Distributed Sensor Networks
(2015) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101287 | 101287-2015-08-20.xml | ---
## Abstract
Body sensor network has emerged as one of the most promising technologies for e-healthcare, which makes remote health monitoring and treatment to patients possible. With the support of mobile cloud computing, large number of health-related data collected from various body sensor networks can be managed efficiently. However, how to keep data security and data privacy in cloud-integrated body sensor network (C-BSN) is an important and challenging issue since the patients’ health-related data are quite sensitive. In this paper, we present a novel secure access control mechanism MC-ABE (Mask-Certificate Attribute-Based Encryption) for cloud-integrated body sensor networks. A specific signature is designed to mask the plaintext, and then the masked data can be securely outsourced to cloud severs. An authorization certificate composed of the signature and related privilege items is constructed which is used to grant privileges to data receivers. To ensure security,a unique value is chosen to mask the certificate for each data receiver. Thus, the certificate is unique for each user and user revocation can be easily completed by removing the mask value. The analysis shows that proposed scheme can meet the security requirement of C-BSN, and it also has less computation cost and storage cost compared with other popular models.
---
## Body
## 1. Introduction
Body sensor network (BSN) emerges recently with rapid development of wearable sensors, implantable sensors, and short range wireless communication, which make pervasive healthcare monitoring and management become increasingly popular [1, 2]. By the body sensor network, health-related data of the patient can be collected and transferred to the healthcare staff in real time, so the patient’s state of health can be under monitoring and precautions can be taken if something bad happened.In order to enhance the scalability of the body sensor network, some work focuses on combining cloud computing and body sensor network together. As shown in Figure1, with the support of mobile cloud computing, cloud-integrated body sensor network (C-BSN) can be constructed [3]. In C-BSN, massive local body sensor networks are integrated together and mass data are collected and stored in cloud servers; healthcare staffs will continually monitor their patients’ status and exchange views when it is difficult to make diagnosis; researchers can make data analysis to get some useful results such as regularity of disease development; government agencies also can take measures on disease prevention and control based on data analysis.Figure 1
Conceptual architecture of cloud-integrated body sensor network.However, there are still several problems and challenges in C-BSN [3, 4]. For example, data security and data privacy must be concerned since patient-related data is private and sensitive. In this paper, we propose a secure data access control scheme named MC-ABE, which can efficiently ensure data security and data privacy. For data security, data can be securely transferred from data owners to the cloud servers and securely stored; for data privacy, data can be only accessed by authorized users with fine-grained policies.For example, Bob (data owner) is a patient, and Alice (data requester) is his healthcare doctor. By C-BSN, Bob’s health-related data can be collected and sent to cloud server in real time; and Alice gets Bob’s information from cloud server to monitor his health status. Besides the authorized person, Bob does not want anyone else to know about his health data. However, his information may be leaked in many ways: the cloud operator/administrator may access his data; malicious user may intrude into the cloud server to steal user data; unauthorized DR may exceed to access others’ data. In summary, there are three key problems which need to be solved to ensure the users’ data security and data privacy in C-BSN. Firstly, the cloud is semitrusted; that is, although we outsource the data to the cloud, we still need to prevent cloud operators from accessing the data content; secondly, we must take measures to keep malicious users out of C-BSN system; lastly, it is also important to study how to avoid the unauthorized access of other users.In this paper, we propose a novel secure access control mechanism MC-ABE to tackle the aforementioned problems. And main contributions of this paper can be summarized as follows:(i)
We construct one specific signature to CP-ABE to mask the plaintext and then realize securely encryption/decryption outsourcing.
(ii)
We construct the unique authentication certificate for each visitor, which makes the system achieve more effective control on malicious visitors; in particular, it also leads to a low cost for user revocation.
(iii)
We introduce the third-party trust authority to manage above-mentioned signatures and certificates, which can guarantee data security even if the cloud server is semitrusted.
(iv)
In C-BSN, processing data in time is quite necessary. Our proposed scheme can meet such requirement. From the section of performance evaluation, our scheme takes less time than other compared methods to do data collecting, data transmission, and data acquisition.The rest of this paper is organized as follows. Section2 introduces the related work. Then, in Section 3, some preliminaries are given. Our scheme is stated in Section 4. In Section 5, security analysis is given. In Section 6, the performance of our scheme is evaluated. The paper is concluded in Section 7.
## 2. Related Work
Recently, various techniques have been proposed to address the problems of data security and data privacy in C-BSN. In [5], Sahai and Waters proposed Attribute-Based Encryption (ABE) to realize access control on encrypted data. In ABE, the ciphertext's encryption policy is associated with a set of attributes, and the data owner can go offline after the data is encrypted. One year later, Goyal et al. proposed a new type of ABE, Key-Policy Attribute-Based Encryption (KP-ABE) [6]. In KP-ABE, the ciphertext's encryption policy is also associated with a set of attributes, but the attributes are organized into a tree structure (named the access tree). The benefit of this approach is that a more flexible access control strategy can be obtained and fine-grained access control can be realized. However, the data owner lacks full control over the encryption policy; that is, he cannot decide who can access the data and who cannot. To solve this problem, Bethencourt et al. proposed Ciphertext-Policy Attribute-Based Encryption (CP-ABE) [7], in which the data owner constructs the access tree using the visitors' identity information. A user can decrypt the ciphertext if and only if the attributes in his private key match the access tree. Thus, in CP-ABE, the data owner can configure a more flexible access policy. In [8], Yu et al. aimed to achieve secure, scalable, and fine-grained access control in the cloud environment. Their scheme is based on KP-ABE and combines it with two other techniques, proxy re-encryption and lazy re-encryption. It is proved that the scheme meets the security requirements of the cloud quite well. Similarly, Wang et al. proposed an access control scheme based on CP-ABE that is also secure and efficient in the cloud environment [9].

In [10], Ahmad et al. proposed a multitoken authorization strategy to remedy the weaknesses of the authorization architecture in the mobile cloud. It reduces the probability of unauthorized access to cloud data and services when malicious activity happens, for example, when Identity Management (IdM) systems are compromised, network links are eavesdropped, or communication tokens are stolen. In [11], Yadav and Dave presented an access model based on CP-ABE that provides remote integrity checking by augmenting secure data storage operations. To reduce the computation overhead and achieve secure encryption/decryption outsourcing, the access tree is divided into two parts: one part is encrypted by the data owner and the other by the cloud server, so a portion of the computation overhead is transferred from the data owner to the cloud server. A similar method is adopted in the work of Zhou and Huang [12]. In addition to the access tree division, Zhou and Huang also propose an efficient data management model that balances communication and storage overhead to reduce the cost of data management operations. In [13], Li et al. presented a low-complexity multiauthority attribute-based encryption scheme for mobile cloud computing, which uses masked shared decryption keys to ensure the security of decryption outsourcing and adopts multiple authorities for authorization to enhance the security assurance. The above schemes are based on CP-ABE, in which complex bilinear map calculations are performed. In [14], Yao et al. proposed a novel access control mechanism in which data operation privileges are granted based on authorization certificates. The advantage of such a mechanism is that the computation cost can be decreased remarkably, since there is no bilinear map calculation; the disadvantage is that many operations, such as privilege designation, must be handled by the data owner, which requires the data owner to know all information about the visitors. In [15], the authors considered the problem of patient-self-controlled access privileges to highly sensitive Personal Health Information. They proposed a Secure Patient-Centric Access Control scheme that allows data requesters to have different access privileges based on their roles and assigns different attribute sets to them accordingly. However, they took the cloud server as trusted, and their scheme does not handle user revocation well. In [16], the authors proposed a novel CP-ABE scheme with constant-size decryption keys independent of the number of attributes. Their scheme is suitable for applications based on lightweight mobile devices but not for large-scale C-BSN.
## 3. Preliminaries
### 3.1. Notations
The notations used in MC-ABE are listed as follows:

DO: data owner
DR: data requester/receiver
ESP: encryption service provider
DSP: decryption service provider
SSP: storage service provider
TA: trust authority
SetS: setup server
PK: public key
MK: master key
SK: secret key
M: plaintext
CT: ciphertext
T: access tree
MM: masked plaintext
Cert: authorization certificate
MValue: mask value
MCert: masked certificate

DO and DR are cloud users. ESP is the cloud server that helps DO encrypt data. SSP is the cloud storage server. DSP is the server responsible for data decryption. TA is the third-party trust authority. SetS is the setup server whose responsibility is to generate PK and MK. PK and MK are the parameters used for data encryption/decryption. SK, which is generated using PK and MK, is held by DR and used to decrypt the ciphertext. The data before encryption is the plaintext, denoted as M, and CT is the ciphertext of M. T is the access policy (access tree). MM is the masked plaintext; in MC-ABE, the plaintext is masked to MM by a signature before being encrypted, to achieve "double protection." Cert is the authorization certificate (see Section 4.2.1 for details). The mask value is used to mask Cert to generate MCert (see Section 4.2.2 for details).
### 3.2. Basics
#### 3.2.1. Bilinear Pairing
Let $G_1$ and $G_2$ be two multiplicative cyclic groups of prime order $p$. Let $g$ be a generator of $G_1$ and let $e$ be a bilinear map, $e: G_1 \times G_1 \rightarrow G_2$. For $a, b \in \mathbb{Z}_p$, the bilinear map $e$ has the following properties [3, 10]:

(1) Bilinearity: for all $u, v \in G_1$, we have $e(u^a, v^b) = e(u, v)^{ab}$.

(2) Nondegeneracy: $e(g, g) \neq 1$.

(3) Symmetry: $e(g^a, g^b) = e(g, g)^{ab} = e(g^b, g^a)$.
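To make these properties concrete, the following toy sketch models the pairing symbolically: each element of $G_1$ is tracked by its exponent with respect to $g$, so $e(g^a, g^b)$ can be evaluated as the exponent $ab$. This is only an illustrative stand-in (a real deployment would use a pairing-friendly elliptic-curve library); all names in the snippet are hypothetical.

```python
# Toy, symbolic model of a bilinear pairing: elements of G1 are tracked
# by their exponent with respect to the generator g, so the "pairing"
# can cheat and multiply exponents. Not a real pairing implementation.
import random

P = 2**127 - 1   # a Mersenne prime, used as the toy group order

class G1Elem:
    """An element g^x of G1, stored as the exponent x mod P."""
    def __init__(self, x):
        self.x = x % P
    def __pow__(self, a):          # (g^x)^a = g^(x*a)
        return G1Elem(self.x * a)

def pairing(u: G1Elem, v: G1Elem) -> int:
    """e(g^x, g^y) = e(g,g)^(xy); we return the exponent xy mod P."""
    return (u.x * v.x) % P

g = G1Elem(1)                      # the generator, g = g^1
a, b = random.randrange(P), random.randrange(P)
u, v = g ** 5, g ** 7              # two arbitrary elements of G1

# Bilinearity: e(u^a, v^b) == e(u, v)^(ab) (exponents multiplied mod P)
assert pairing(u ** a, v ** b) == (pairing(u, v) * a * b) % P
# Symmetry: e(g^a, g^b) == e(g^b, g^a)
assert pairing(g ** a, g ** b) == pairing(g ** b, g ** a)
print("bilinearity and symmetry hold in the toy model")
```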
#### 3.2.2. Discrete Logarithm (DL) Problem
Definition 1 (discrete logarithm (DL) problem).
Let $G$ be a multiplicative cyclic group of prime order $p$ and let $g$ be its generator. For all $\alpha \in \mathbb{Z}_p$: given $g, g^{\alpha}$ as input, output $\alpha$.
The DL assumption holds in $G$ if it is computationally infeasible to solve the DL problem in $G$ [17].
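The assumption can be felt numerically: computing $g^{\alpha} \bmod p$ is cheap, while the naive inversion below costs $O(p)$ group operations, which is hopeless at cryptographic sizes ($p \approx 2^{256}$). A minimal sketch with toy parameters:

```python
# Brute-force discrete-log solver; feasible only for tiny groups, which
# illustrates why the DL assumption holds at cryptographic sizes.
def brute_force_dl(g: int, y: int, p: int) -> int:
    """Find alpha with g^alpha = y (mod p) by exhaustive search: O(p) time."""
    acc = 1
    for alpha in range(p):
        if acc == y:
            return alpha
        acc = (acc * g) % p
    raise ValueError("no solution")

p, g = 1009, 11                 # tiny toy parameters
alpha = 523
y = pow(g, alpha, p)            # the forward direction is cheap...
assert brute_force_dl(g, y, p) == alpha   # ...inverting takes O(p) steps
```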
### 3.3. Ciphertext-Policy Attribute-Based Encryption (CP-ABE)
#### 3.3.1. Access Structure
Let $P = \{P_1, P_2, \ldots, P_n\}$ be the universal set [18]. Each element in $P$ is an attribute, that is, the descriptive identification information of a visitor [19]. An access structure is a collection (resp., monotone collection) $T$ of nonempty subsets of $P$. For example, $P$ = {Beijing, Shanghai, No. 1 Middle School, No. 2 Middle School, student, teacher, administrator}, and visitor 1 has the attribute set $A$ = {Beijing, No. 1 Middle School, student} [20].

The access structure in CP-ABE is a tree structure, named the access tree [2]. In the access tree $T$, the leaf nodes are associated with descriptive attributes, and each interior node is a relation function, such as AND ($n$ of $n$), OR ($1$ of $n$), or a threshold ($n$ of $m$, $m > n$).

Each DR has a set of attributes, which are associated with DR's SK. If DR's attribute set satisfies the access tree, the encrypted data can be decrypted with DR's SK.
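A minimal sketch of this satisfaction check, with AND/OR expressed as $n$-of-$n$ and $1$-of-$n$ thresholds, is given below; the tree encoding and names are hypothetical, chosen to mirror the school example above.

```python
# Minimal access-tree satisfaction check: each interior node is a
# k-of-n threshold (AND = n-of-n, OR = 1-of-n), leaves are attributes.
from dataclasses import dataclass, field

@dataclass
class Node:
    attr: str = ""                       # set for leaves
    k: int = 0                           # threshold, set for interior nodes
    children: list = field(default_factory=list)

def satisfies(node: Node, attrs: set) -> bool:
    if not node.children:                # leaf: attribute must be present
        return node.attr in attrs
    hits = sum(satisfies(c, attrs) for c in node.children)
    return hits >= node.k                # k-of-n threshold at interior node

# Policy: Beijing AND No. 1 Middle School AND (student OR teacher)
policy = Node(k=3, children=[
    Node(attr="Beijing"),
    Node(attr="No. 1 Middle School"),
    Node(k=1, children=[Node(attr="student"), Node(attr="teacher")]),
])
visitor1 = {"Beijing", "No. 1 Middle School", "student"}
assert satisfies(policy, visitor1)       # visitor 1's attribute set qualifies
```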
#### 3.3.2. Working Process
In CP-ABE, the plaintext is encrypted with a symmetric key, and the key is then shared under the access tree. In decryption, if DR's SK satisfies the access tree, DR obtains the shared secret and the data can be recovered.
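This hybrid pattern, bulk data under a fresh symmetric key and the key released only to a satisfying attribute set, can be sketched as follows. Fernet from the `cryptography` package stands in for the symmetric layer, and the ABE key-sharing step is reduced to a plain attribute-subset check purely for illustration.

```python
# Hybrid-encryption sketch of the CP-ABE working process: the record is
# encrypted under a fresh symmetric key; releasing that key is gated by
# the access policy (reduced here to an attribute check for brevity).
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes):
    key = Fernet.generate_key()             # fresh symmetric key
    ct = Fernet(key).encrypt(plaintext)     # the bulk ciphertext
    return ct, key                          # in CP-ABE, `key` is shared in T

def decrypt_record(ct: bytes, key: bytes, attrs: set, required: set) -> bytes:
    # Stand-in for "DR's SK satisfies T"; real CP-ABE would make `key`
    # cryptographically unrecoverable unless the policy is satisfied.
    if not required.issubset(attrs):
        raise PermissionError("attribute set does not satisfy the policy")
    return Fernet(key).decrypt(ct)

ct, key = encrypt_record(b"heart-rate: 72 bpm")
data = decrypt_record(ct, key, {"doctor", "cardiology"}, {"doctor"})
assert data == b"heart-rate: 72 bpm"
```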
### 3.4. Assumptions
In this work, we make the following assumptions.

Assumption 1 (service providers (ESP, DSP, and SSP) are semitrusted). That is, they will follow our proposed protocol in general but will try to find out as much secret information as possible. Moreover, their information may be accessed illegally by internal malicious employees or external attackers. In particular, although ESP and DSP undertake most of the computing cost, they do not have enough information to deduce the plaintext.

Assumption 2 (SetS and TA are trusted). Under no circumstances will they leak information about the data and the related keys.

To deduce more information about the encrypted data, the service providers might combine their information to mount a collusion attack. In our scheme, collusion between service providers is taken into consideration.
## 4. MC-ABE
### 4.1. Overview
Our proposed scheme MC-ABE is shown in Figure 2. Seven algorithms are included in MC-ABE: Setup, EncryptDO, EncryptESP, KeyGen, CerGen, DecryptDSP, and DecryptDR.

Figure 2: System model.

For data outsourcing, DO encrypts M with algorithm EncryptDO, in which a signature is used to mask M. Then ESP encrypts T with algorithm EncryptESP to finish the encryption. The encrypted data is stored in SSP.

For data access, when DR requests data from SSP, the request is sent to TA after verification. TA chooses a unique value to mask the certificate for DR. Then, using the attribute set of DR, TA computes SK with algorithm KeyGen. After that, SK is sent to DSP and the certificate is sent to DR. At the same time, SSP sends the CT to DSP. With SK and CT, DSP can decrypt and obtain M masked by the signature. Once DR receives the certificate, he decrypts the masked certificate with his unique value (TA sends the unique value to the DR when the first authorized request occurs; it is used in subsequent requests until the DR is revoked) to get the certificate. Using the certificate, DR can decrypt the masked M with the signatures in the certificate.

In addition, if a DR is revoked, TA marks the DR as "revoked" and the DR's unique mask value becomes invalid. No certificate will be granted to this DR any more.
### 4.2. Two Important Notions
#### 4.2.1. Authorization Certificate (Cert)
The authorization certificate is introduced in MC-ABE to grant data privileges to DR. As shown below, it includes five items of privilege-related information. DO provides the certificate-related information to TA, and TA then constructs a unique authorization certificate for each authorized DR.

Structure of Authorization Certificate:

File ID list ($f_1, f_2, \ldots$)
Valid Period (from the start time to the end time)
Signature ($\{sign_{f_1}\}, \{sign_{f_2}\}, \ldots$)
Privilege ($\{p_{f_1}\}, \{p_{f_2}\}, \ldots$)
PK, MK

File ID is the ID list of the authorized files. Valid Period denotes the valid period of the signature, from the start time to the end time. Signature is used by DO to mask the plaintext in data encryption and by DR to recover the plaintext in data decryption. Privilege is the privilege denoted by the signature, such as read, modify, or delete. PK and MK are the two keys noted in Notations in MC-ABE.
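To make the five-item layout concrete, a plain record-type transcription follows; every field value shown is hypothetical. (Note that, as in the structure above, PK and MK travel inside the certificate.)

```python
# Direct transcription of the certificate structure into a record type;
# all field contents below are illustrative only.
from dataclasses import dataclass

@dataclass
class Cert:
    file_ids: list[str]            # File ID list (f1, f2, ...)
    valid_from: str                # Valid Period: start time
    valid_to: str                  # Valid Period: end time
    signatures: dict[str, int]     # per-file signatures {sign_f1, ...}
    privileges: dict[str, str]     # per-file privileges: read/modify/delete
    pk: object                     # PK, as generated by Setup
    mk: object                     # MK, as generated by Setup

cert = Cert(
    file_ids=["f1", "f2"],
    valid_from="2015-01-01", valid_to="2015-12-31",
    signatures={"f1": 12345, "f2": 67890},
    privileges={"f1": "read", "f2": "modify"},
    pk=None, mk=None,              # placeholders for the Setup outputs
)
```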
#### 4.2.2. Mask Value (MValue)
To achieve fine-grained access control over DR, the mask value is introduced in MC-ABE. The mask value is maintained by TA, which sets a unique mask value for each DR. The mask value is used to blind the authorization certificate before the certificate is sent to DR. Thus, each DR receives its own unique blinded certificate, since the mask value is unique. The process is described in detail below.

After TA receives a data access request, it first checks the DRID. If the requester is a new user, TA generates a random number $t_{DRID} \in \mathbb{Z}_p$ and inserts it into the mask value table. Otherwise, if this DRID already exists in the mask value table and the revocation item is "N" (the initial value of this item is "N"; it is set to "Y" only when the DRID is revoked), TA invokes algorithm CerGen to compute the masked certificate (see Table 1).

Table 1: Mask value table (maintained by TA).

| DRID | Mask value | Revocation |
|------|------------|------------|
| DR1  | MValueDR1  | N          |
| DR2  | MValueDR2  | Y          |
| DR3  | MValueDR3  | N          |

DRID: ID of DR. Mask value: unique mask value for each DR. Revocation: revocation mark; "Y" means this DR is revoked, "N" means this DR is authorized.

Algorithm (CerGen($t_{DRID}$, PK) → MCert). Construct a certificate Cert as the Structure of Authorization Certificate shows; MCert is the masked Cert. Then compute

$$\text{MValue} = g^{t_{DRID}}, \qquad \text{MCert} = \text{Cert} \cdot e(g^{\theta}, g^{t_{DRID}}) = \text{Cert} \cdot e(g,g)^{\theta t_{DRID}}. \tag{1}$$

If DR is a new user, MValue and MCert are sent to him; otherwise, only MCert is sent.
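A toy sketch of the masking and unmasking round-trip (equation (1) and its inverse) is shown below. The pairing value $e(g^{\theta}, g^{t_{DRID}})$ is modeled as a plain modular exponentiation, so the sketch reuses $\theta$ directly where the real scheme would let DR combine the public $g^{\theta}$ with his MValue via the pairing; this is a simplification, not the scheme itself.

```python
# Toy sketch of certificate masking/unmasking: the pairing value
# e(g^theta, g^t) is modeled as g^(theta*t) mod p, and Cert is encoded
# as an integer. Hypothetical stand-in arithmetic only.
import random

p = 2**127 - 1                      # toy prime modulus
g = 3
theta = random.randrange(2, p)      # TA's secret exponent from Setup

def cergen(t_drid: int, cert: int):
    mvalue = pow(g, t_drid, p)                       # MValue = g^t
    mask = pow(g, theta * t_drid, p)                 # models e(g^theta, g^t)
    return mvalue, (cert * mask) % p                 # MCert = Cert * mask

def unmask(mcert: int, t_drid: int) -> int:
    # Real scheme: DR computes the mask as e(g^theta, MValue) with the
    # pairing; the toy shortcuts this by reusing theta directly.
    mask = pow(g, theta * t_drid, p)
    return (mcert * pow(mask, -1, p)) % p            # MCert / mask = Cert

t1 = random.randrange(2, p)          # DR1's unique value t_DRID
cert = 424242                        # the (encoded) certificate
_, mcert = cergen(t1, cert)
assert unmask(mcert, t1) == cert                     # right MValue recovers Cert
assert unmask(mcert, random.randrange(2, p)) != cert # wrong one yields garbage
```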
### 4.3. Scheme Description
The whole process of MC-ABE is shown in Figure 3. In this section, we describe each step in detail.

Figure 3: Algorithms' implementation in MC-ABE.
#### 4.3.1. Data Outsourcing
In C-BSN, DO usually uses mobile devices that lack computing power and storage space. To reduce the encryption overhead of DO, the encryption process is divided into two parts: EncryptDO and EncryptESP. EncryptDO is the encryption algorithm implemented by DO, and EncryptESP is carried out by ESP. Since ESP is semitrusted, we introduce the signature in EncryptDO to mask M. In general, there are three steps for data outsourcing.

Firstly, SetS generates PK and MK.

Algorithm 2 (Setup → PK, MK). SetS performs the algorithm. Let $G_0$ be a multiplicative cyclic group of prime order $p$, let $g$ be its generator, and choose four random numbers $\alpha, \beta, \varepsilon, \theta \in \mathbb{Z}_p$ (further details in [7]). Consider

$$\text{PK} = \left(G_0, g, h = g^{\beta}, e(g,g)^{\alpha}, g^{\varepsilon}, g^{\theta}\right), \qquad \text{MK} = \left(\beta, g^{\alpha}\right). \tag{2}$$

Secondly, DO performs the first step of data encryption.

Algorithm 3 (EncryptDO(PK, M, K) → MM). DO implements the algorithm. PK is obtained from SetS; M is DO's plaintext; MM is the masked M; K is the set of operation privileges, and k is an element of K.

For each $k \in K$, choose a random number $v_k \in \mathbb{Z}_p$ and compute the signature

$$\text{signature}_k = e(g^{\varepsilon}, g^{v_k}) = e(g,g)^{\varepsilon v_k}. \tag{3}$$

For simplicity, let $v$ denote the set of the $v_k$, $v = \{v_k \mid k \in K\}$, and let signature denote the set of the $\text{signature}_k$, $\text{signature} = \{\text{signature}_k \mid k \in K\}$.

Choose a random number $s \in \mathbb{Z}_p$; then

$$\text{MM} = \tilde{C} = M \cdot e(g,g)^{\alpha s} \cdot \text{signature} = M \cdot e(g,g)^{\alpha s} \cdot e(g,g)^{\varepsilon v}. \tag{4}$$

Lastly, ESP performs the last step of data encryption.

Algorithm 4 (EncryptESP(PK, s, T, MM) [7, 11] → CT). ESP implements the algorithm. The access tree T is encrypted from the root node R down to the leaf nodes. For each node x in T, choose a polynomial $q_x$. For a node x, the following notation is used:

$k_x$: the threshold value of x.
$d_x$: the degree of $q_x$, with $d_x = k_x - 1$.
$parent(x)$: a function that returns the parent node of x.
$num_x$: the number of child nodes of x. Each child node y is uniquely identified by an index number $index(y)$, with $1 \leq index(y) \leq num_x$. Consider

$$q_x(0) = q_{parent(x)}(index(x)). \tag{5}$$

For the root node R, set $q_R(0) = s$ and choose $d_R$ other points randomly to completely define $q_R$. For any other node x in T, let $q_x(0) = q_{parent(x)}(index(x))$ and choose $d_x$ other points randomly to completely define $q_x$.

Let Y be the set of leaf nodes in T. Compute

$$C = h^s; \quad \forall y \in Y: C_y = g^{q_y(0)}, \; C_y' = H(att(y))^{q_y(0)}. \tag{6}$$

Then

$$\text{CT} = \left(T, \; \tilde{C} = M \cdot e(g,g)^{\alpha s} \cdot e(g,g)^{\varepsilon v}, \; C = h^s, \; \forall y \in Y: C_y = g^{q_y(0)}, \, C_y' = H(att(y))^{q_y(0)}\right). \tag{7}$$

CT is stored in SSP. Detailed communication information is shown in Figure 4.

Figure 4: Communication information in data outsourcing.
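The top-down sharing of $s$ in equations (5)-(6) can be sketched in a few lines: each node draws a random polynomial whose constant term is the share inherited from its parent, and the leaves end up holding the exponents that go into $C_y$. A toy-field illustration, with hypothetical attribute names:

```python
# Sketch of the top-down secret sharing in EncryptESP (equations (5)-(6)):
# each node x gets a random polynomial q_x of degree k_x - 1 with
# q_x(0) inherited from its parent; the root's constant term hides s.
import random

P = 2**31 - 1   # small prime field for the toy

def rand_poly(const: int, degree: int):
    """Random polynomial with q(0) = const; coefficients low-to-high."""
    return [const] + [random.randrange(P) for _ in range(degree)]

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def share(node, secret, shares):
    """node = (k, children) for interior nodes, or an attribute string."""
    if isinstance(node, str):
        shares[node] = secret            # leaf share q_y(0), exponent in C_y
        return
    k, children = node
    q = rand_poly(secret, k - 1)         # degree k-1, q(0) = secret
    for i, child in enumerate(children, start=1):
        share(child, eval_poly(q, i), shares)   # q_child(0) = q(i)

s = random.randrange(P)
tree = (2, ["doctor", (1, ["cardiology", "ICU"])])  # doctor AND (cardio OR ICU)
shares = {}
share(tree, s, shares)
print(shares)    # leaf shares; only threshold-many of them recombine to s
```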
#### 4.3.2. Data Request
When a DR requests data from SSP, TA generates SK and a certificate for the DR. Most of the decryption cost is borne by DSP, but DSP cannot obtain M. Building on the work done by DSP, DR finishes the last step of decryption and gets M. Similar to data outsourcing, there are three steps for data access.

Firstly, TA generates SK for DR.

Algorithm 5 (KeyGen(MK, S) → SK). S is the attribute set of DR. Generate a random number $r \in \mathbb{Z}_p$ and then a random number $r_j \in \mathbb{Z}_p$ for each $j \in S$. Compute

$$\text{SK} = \left(D = g^{(\alpha + r)/\beta}, \; \forall j \in S: D_j = g^{r} \cdot H(j)^{r_j}, \; D_j' = g^{r_j}\right). \tag{8}$$

Then, TA sends SK to DSP.

Secondly, DSP performs the first step of data decryption: decrypting the access tree in CT to get MM.

Algorithm 6 (DecryptDSP(SK, CT) → MM). When x is a leaf node, let $i = att(x)$, where the function $att(x)$ denotes the attribute associated with the leaf node x. If $i \in S$,

$$\text{DecryptNodeL}(\text{CT}, \text{SK}, x) = \frac{e(D_i, C_x)}{e(D_i', C_x')} = \frac{e(g^{r} \cdot H(i)^{r_i}, g^{q_x(0)})}{e(g^{r_i}, H(i)^{q_x(0)})} = e(g,g)^{r \cdot q_x(0)}. \tag{9}$$

Otherwise, if $i \notin S$,

$$\text{DecryptNodeL}(\text{CT}, \text{SK}, x) = \perp. \tag{10}$$

When x is an interior node, call the algorithm DecryptNodeNL(CT, SK, x): for each child z of node x, call DecryptNodeL(CT, SK, z) and denote the output by $F_z$. Let $S_x$ be an arbitrary $k_x$-sized set of child nodes z with $F_z \neq \perp$, where $k_x$ is the threshold value of the interior node. If no such set exists, the node is not satisfied, so return $\perp$. Otherwise, compute as follows and return the result:

$$F_x = \prod_{z \in S_x} F_z^{\Delta_{i, S_x'}(0)}, \quad \text{where } i = index(z), \; S_x' = \{index(z) : z \in S_x\}$$
$$= \prod_{z \in S_x} \left(e(g,g)^{r \cdot q_z(0)}\right)^{\Delta_{i, S_x'}(0)} = \prod_{z \in S_x} \left(e(g,g)^{r \cdot q_{parent(z)}(index(z))}\right)^{\Delta_{i, S_x'}(0)} = \prod_{z \in S_x} e(g,g)^{r \cdot q_x(i) \cdot \Delta_{i, S_x'}(0)} = e(g,g)^{r \cdot q_x(0)}. \tag{11}$$

In particular, for the root node R,

$$A = e(g,g)^{r \cdot q_R(0)} = e(g,g)^{r \cdot s}. \tag{12}$$

Finally,

$$\frac{\tilde{C}}{e(C, D)/A} = \frac{\tilde{C}}{e(h^{s}, g^{(\alpha + r)/\beta}) / e(g,g)^{r \cdot s}} = M \cdot \text{signature}. \tag{13}$$

Then, $M \cdot \text{signature}$ is sent to DR.

Receiving $M \cdot \text{signature}$ and MCert, DR implements algorithm DecryptDR to finish the data decryption.

Lastly, DR performs the last step of data decryption: removing the mask from MM to get M.

Algorithm 7 (DecryptDR($M \cdot \text{signature}$, MCert) → M). DR retrieves Cert to get the related signatures:

$$\frac{\text{MCert}}{e(g^{\theta}, g^{t_{DRID}})} = \frac{\text{Cert} \cdot e(g,g)^{\theta t_{DRID}}}{e(g,g)^{\theta t_{DRID}}} = \text{Cert}. \tag{14}$$

Then, DR gets M with the signature:

$$\frac{M \cdot \text{signature}}{\text{signature}} = M. \tag{15}$$
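The recombination step in equation (11) is ordinary Lagrange interpolation at $0$, carried out in the exponent of $e(g,g)$. Stripped of the pairing, the arithmetic looks as follows; this companion sketch to the sharing example above recovers $q(0)$ from threshold-many shares.

```python
# Companion to the sharing sketch: DecryptNodeNL's recombination
# (equation (11)) uses Lagrange coefficients at 0 to climb back to
# q_R(0) = s. Shown directly on the shares, in the same toy field.
P = 2**31 - 1

def lagrange_at_zero(i, idx_set):
    """Delta_{i,S}(0) = prod_{j in S, j != i} (0 - j)/(i - j)  mod P."""
    num = den = 1
    for j in idx_set:
        if j != i:
            num = num * (-j) % P
            den = den * (i - j) % P
    return num * pow(den, -1, P) % P

def recover(indexed_shares):
    """Combine k shares {(index, q(index))} into q(0)."""
    idx_set = [i for i, _ in indexed_shares]
    return sum(v * lagrange_at_zero(i, idx_set) for i, v in indexed_shares) % P

# A 2-of-2 node whose children at indices 1 and 2 hold q(1) and q(2):
q = [987654321, 123456789]            # q(x) = a0 + a1*x; a0 is the secret
shares = [(1, sum(c * 1**k for k, c in enumerate(q)) % P),
          (2, sum(c * 2**k for k, c in enumerate(q)) % P)]
assert recover(shares) == q[0]        # q(0) recovered exactly
```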
#### 4.3.3. User Revocation
An invalid DR is one who is thought to be malicious or whose certificate has expired. Such a DR should be revoked from the authorized access list. In MC-ABE, a DR is revoked through his MValue record in Table 1. Firstly, TA changes the revoked DR's "Revocation" item from "N" to "Y" in the mask value table. Secondly, the current signature is updated to a new one (signature updating is shown in Figure 5). After these two steps, the invalid DR is revoked. When he requests new data, he is treated as a newcomer (the signature has been updated and he does not have the new one), and TA refuses his request since he is marked as revoked. Valid DRs obtain the new signature and access the system as usual.

Figure 5: Signature updating.
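The two bookkeeping steps of revocation, flipping the Revocation flag and rotating the signature, can be sketched with a plain table, as below; the epoch value standing in for $v_k$ and all names are illustrative.

```python
# Minimal sketch of TA's revocation bookkeeping (Table 1): flip the
# Revocation flag to "Y" and rotate the signature epoch so the revoked
# DR's old signature no longer matches. Names are illustrative.
import random

mask_value_table = {}      # DRID -> {"mvalue": t_DRID, "revoked": "N"}
signature_epoch = {"vk": random.randrange(2**64)}   # current v_k in use

def authorize(drid: str) -> int:
    entry = mask_value_table.setdefault(
        drid, {"mvalue": random.randrange(2**64), "revoked": "N"})
    if entry["revoked"] == "Y":
        raise PermissionError(f"{drid} is revoked")
    return entry["mvalue"]

def revoke(drid: str):
    mask_value_table[drid]["revoked"] = "Y"          # step 1: mark revoked
    signature_epoch["vk"] = random.randrange(2**64)  # step 2: update signature

authorize("DR2")
revoke("DR2")
try:
    authorize("DR2")
except PermissionError as e:
    print(e)               # DR2's requests are refused from now on
```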
## 5. Security Analysis
### 5.1. Encryption and Decryption Outsource
In CP-ABE, both data encryption and data decryption are done solely by the cloud users. In MC-ABE, by contrast, data encryption is done by DO and the cloud server collaboratively, and data decryption is undertaken by DR and the cloud server together. M is masked by DO before it is sent to ESP. Only DO and authorized DRs can get M. ESP and DSP can get MM (masked M), but they cannot deduce M from MM.

Theorem 8. The security of encryption and decryption in MC-ABE is not weaker than that of CP-ABE.

Proof. In algorithm EncryptESP, ESP encrypts the access tree T with the parameters s, T, and MM. Consider

$$\tilde{C} = M \cdot e(g,g)^{\alpha s} \cdot \text{signature} = M \cdot e(g,g)^{\alpha s} \cdot e(g,g)^{\varepsilon v}. \tag{16}$$

Using PK and $s$, ESP can compute $e(g,g)^{\alpha s}$; what ESP obtains is $M \cdot e(g,g)^{\varepsilon v}$. The encrypted data in CP-ABE is $\tilde{C} = M \cdot e(g,g)^{\alpha s}$; since both $\alpha$ and $s$ are random, $z = \alpha \cdot s$ is also random, so $\tilde{C} = M \cdot e(g,g)^{z}$ has the same structure as $M \cdot e(g,g)^{\varepsilon v_k}$. According to the security proof in [7], the structure $\tilde{C} = M \cdot e(g,g)^{\alpha s}$ prevents the adversary from deducing M; thus, $M \cdot e(g,g)^{\varepsilon v_k}$ in our scheme is also secure. That is to say, ESP cannot deduce M from $M \cdot e(g,g)^{\varepsilon v_k}$, and encryption outsourcing is secure in MC-ABE.

DSP can decrypt CT using SK and obtain the masked plaintext $M \cdot \text{signature}$. The information DSP obtains is the same as that of ESP. So, in MC-ABE, data decryption outsourcing is also secure, by the same argument as for data encryption outsourcing.
### 5.2. Certificate
From the above statement, the signature is vitally important to the security of our scheme. Since the signature is an item of the certificate, the security of the signature relies on the certificate. Each DR has his own unique masked certificate, and a DR can retrieve his certificate only with his own MValue. In the following, we prove that a malicious DR cannot recover Cert from MCert without the right MValue.

Theorem 9. MCert cannot be decrypted without the right MValue.

Suppose DR1 has $\text{MCert}_1 = \text{Cert}_1 \cdot \text{MValue}_1 = \text{Cert}_1 \cdot e(g,g)^{\theta t_{DR1}}$, and an attacker wants to retrieve $\text{Cert}_1$ without $e(g,g)^{\theta t_{DR1}}$.

Proof. Suppose the attacker forges $\text{MValue}_1' = e(g,g)^{\theta t_{DR1}'}$ to recover $\text{Cert}_1$:

$$\frac{\text{MCert}_1}{\text{MValue}_1'} = \frac{\text{Cert}_1 \cdot \text{MValue}_1}{\text{MValue}_1'} = \text{Cert}_1 \cdot e(g,g)^{\theta (t_{DR1} - t_{DR1}')}. \tag{17}$$

In other words, for the forged $\text{MValue}_1'$ to be right, we must have $t_{DR1} = t_{DR1}'$, which amounts to solving the DL problem. The DL problem is computationally infeasible; thus, MValue is difficult to forge and MCert cannot be decrypted without the right MValue.
### 5.3. Collusion
Service providers might collude with each other, combining their information to deduce M. As stated above, ESP and DSP hold similar information about M. If ESP colluded with DSP, the most information they could get is $M \cdot \text{signature}$. We have given the security proof for $M \cdot \text{signature}$ in Theorem 8. Thus, MC-ABE is well qualified for anticollusion.

SSP is a semitrusted server that stores CT. If SSP colluded with ESP and DSP, it would provide no useful information for deducing M. So, MC-ABE can defend against collusion among SSP, ESP, and DSP.
### 5.4. Revocation
If a DR is revealed to be malicious, he is revoked from the authorized user list. We update the signature encrypted in CT; after that, as shown below, the revoked DR cannot obtain authorized data any more:

Revoked signature held by the DR: $\text{signature} = e(g,g)^{\varepsilon v_k}$.
Updated signature: $\text{signature}' = e(g,g)^{\varepsilon v_k'}$.
Masked plaintext: $M' = M \cdot \text{signature}' = M \cdot e(g,g)^{\varepsilon v_k'}$.
What the revoked DR computes: $M' / \text{signature} = M \cdot e(g,g)^{\varepsilon v_k'} / e(g,g)^{\varepsilon v_k} = M \cdot e(g,g)^{\varepsilon (v_k' - v_k)}$.

As in the proof of Theorem 9, removing the residual factor $e(g,g)^{\varepsilon (v_k' - v_k)}$ would require solving the DL problem, so MC-ABE remains secure under revocation.
## 6. Performance Evaluation
In this section, we numerically analyze the communication and computation cost of MC-ABE. We also give the simulation results in detail.
### 6.1. Numerical Analysis
#### 6.1.1. Computation Cost
Setup. The setup procedure includes defining the multiplicative cyclic group and generating PK and MK, which are used in encryption and key generation. There are four exponentiation operations and one pairing operation in the setup procedure, so its time complexity is $O(1)$; the computation cost is independent of the number of attributes.

EncryptDO. In this procedure, DO is responsible for generating the signature and masking M. Signature computation involves two operations, random number generation and a bilinear map computation, while mask computation involves random number generation and three multiplication operations. Thus, two exponentiation operations, two multiplication operations, and one pairing operation are needed for each file. If more privileges are granted at the same time, more signatures are computed; since the cost per privilege is fixed, the total cost is proportional to the number of privileges.

EncryptESP. ESP encrypts the access tree in this procedure. The computation cost is proportional to the number of attributes in the tree. If the universal attribute set in T is I (I also denotes the total number of attributes in the set), two exponentiation operations are needed for each element, so the overall computation complexity is $O(I)$.

KeyGen. This procedure generates SK for DR. The computation cost is proportional to the number of attributes in SK: for each attribute, two pairing operations and one multiplication operation are needed. If the attribute set is S (S also denotes the total number of attributes in the set, $S \leq I$), the time complexity of SK computation is $O(S)$.

CerGen. In this procedure, the certificate is constructed and masked. The items in the certificate are designated by DO. TA performs one exponentiation operation, one multiplication operation, and one pairing operation. The computation cost is fixed, so the complexity is $O(1)$.

DecryptDSP. In this procedure, DSP decrypts the ciphertext. The main overhead is incurred in the decryption of every attribute, so the cost is proportional to the number of attributes in the access tree and the complexity is $O(I)$.

DecryptDR. In this procedure, DR obtains M from the masked M with one division operation, so the complexity is $O(1)$.
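The linear growth behind the $O(I)$ and $O(S)$ bounds can be eyeballed with a rough timing sketch: modeling the per-attribute work as two modular exponentiations, the measured time should scale roughly linearly with the attribute count. This illustrates the scaling only, not absolute pairing costs.

```python
# Rough timing sketch backing the O(I)/O(S) claims: per-attribute work is
# modeled as two modular exponentiations, so total time grows linearly
# with the attribute count. Illustrative parameters, not real pairing cost.
import time, random

p = 2**521 - 1                 # a large Mersenne prime modulus
g = 65537

def encrypt_tree_cost(num_attrs: int) -> float:
    start = time.perf_counter()
    for _ in range(num_attrs):           # two exponentiations per attribute
        pow(g, random.randrange(p), p)
        pow(g, random.randrange(p), p)
    return time.perf_counter() - start

for n in (10, 20, 40, 80):
    print(f"{n:3d} attributes: {encrypt_tree_cost(n) * 1000:7.1f} ms")
```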
#### 6.1.2. Storage Cost
Compared to CP-ABE, MC-ABE incurs more storage cost because the certificate and the unique mask value are introduced. As shown in Table 2, the items in certificates are related to data access privileges, so the storage space for certificates is proportional to the number of documents (data). For each DR, one record is kept in the mask value table (Table 1), so the storage space for the mask value table is proportional to the number of DRs. Since the items in the mask value table are quite simple, the total storage cost is not heavy.

Table 2: Impact factors of storage cost.

|                          | Number of docs | Number of DRs |
|--------------------------|----------------|---------------|
| Certificate storage space | Related        | No            |
| Mask value storage space  | No             | Related       |
### 6.2. Simulation Results
To evaluate the performance of MC-ABE, we developed simulation code based on the CP-ABE toolkit [21]. We compare MC-ABE with two other popular models (CP-ABE and PP-CP-ABE [11]) in four aspects: computation cost for data encryption, computation cost for key generation, computation cost for data decryption, and computation cost for user revocation.

(1) Computation Cost for Data Encryption. Most of the computation cost in encryption is incurred by the encryption of the access tree, which is proportional to the number of leaf nodes. In CP-ABE, data encryption is done by DO. In PP-CP-ABE, data encryption/decryption is outsourced to service providers; the access tree is divided into two parts, one encrypted by DO and the other by ESP. In MC-ABE, the access tree is encrypted by ESP. In Figure 6(a), the computation cost of the three schemes is compared: the x-axis indicates the number of leaf nodes in T (the access tree), and the y-axis indicates the time to encrypt M (computation cost). Ten values of x are selected evenly (10, 20, …, 100). For each x value, we run the simulation code 10 times and take the average of the results as the final result. The results show that MC-ABE performs better than the other two schemes. In PP-CP-ABE, the number of leaf nodes in DO's subtree changes with different tree divisions; for simplicity, we set the number of leaves in DO's subtree to half of the total number of leaves. As shown in Figure 6(b), we also give the confidence intervals for the results in Figure 6(a) (only the results for DO's computation cost in MC-ABE are given, since the results for PP-CP-ABE and CP-ABE are consistent with MC-ABE). Figure 6(b) shows that all average results lie within the confidence interval.

Figure 6: (a) DO's computation cost for data encryption in CP-ABE, PP-CP-ABE, and MC-ABE. In PP-CP-ABE, part of the encryption computation is transferred to the cloud server to reduce DO's cost; in MC-ABE, further efforts are made to reduce the computation cost undertaken by DO. (b) Computation cost of DO (the 95% confidence interval assuming random data with normal distribution is shown). (c) Computation cost of key generation (the 95% confidence interval assuming random data with normal distribution is shown). (d) Computation cost of DR in CP-ABE and MC-ABE. Similar to ESP in MC-ABE, DSP undertakes most of the computation in decryption; the cost is proportional to the number of attributes in the private key. (e) Computation cost for user revocation. With the authorization certificate in MC-ABE, the revocation cost is reduced markedly.

(2) Computation Cost for Key Generation. As in the data encryption simulation, we take the average key generation cost as the final result. As shown in Figure 6(c), the average value is very close to the lower and upper bounds of the confidence interval, so we also list the source data of the simulation results in Table 3. All average results lie within the confidence interval, so the simulation results are reliable. The results show that the computation cost grows with the number of attributes in the private key. The algorithm KeyGen is implemented by TA, so there is no cost for DO.

Table 3: Computation cost of key generation (source data of Figure 6(c); the 95% confidence interval assuming random data with normal distribution is shown). Att_num indicates the number of DR's attributes, CI indicates the confidence interval, and Ave indicates the average value.

| Att_num | CI                        | Ave        |
|---------|---------------------------|------------|
| 5       | [5.291941, 5.30779403]    | 5.2998678  |
| 10      | [11.90953, 11.9336295]    | 11.9215787 |
| 15      | [18.54277, 18.5893065]    | 18.5660398 |
| 20      | [25.12693, 25.1588600]    | 25.1428953 |
| 25      | [31.65159, 31.7310845]    | 31.6913405 |
| 30      | [38.26554, 38.3662469]    | 38.3158938 |
| 35      | [44.86881, 44.9555189]    | 44.9121638 |
| 40      | [51.45481, 51.6333506]    | 51.5440794 |
| 45      | [58.04029, 58.1582152]    | 58.0992549 |
| 50      | [64.54168, 64.6776467]    | 64.6096648 |

(3) Computation Cost for Data Decryption. In MC-ABE, most of the computation cost is shifted to DSP, so the computation cost of DR is constant. The comparison results are shown in Figure 6(d).

(4) Computation Cost for User Revocation. In MC-ABE, user revocation is simplified because the signature is introduced. When user revocation happens, the revoked DR's "Revocation" item in the mask value table is set to "Y", his new data requests are not responded to, and his former signature encrypted in the ciphertext is changed. These operations require one multiplication operation and one exponentiation operation. The simulation results are shown in Figure 6(e).
## 6.1. Numerical Analysis
### 6.1.1. Computation Cost
Setup. The setup procedure includes defining multiplicative cyclic group and generating PK and MK that will be used in encryption and key generation. There are four exponentiation operations and one pairing operation in setup procedure. Time complexity of the procedure is O
(
1
). The computation cost has nothing to do with the number of attributes.E
n
c
r
y
p
t
D
O. In this procedure, DO is responsible for generating signature and masking M. Two operations are included in signature computation, which are random number generation and bilinear map computation. And operations performed in mask computation include random number generation and three multiplication operations. Thus, it needs to do two exponentiation operations, two multiplication operations, and one pairing operation for each file. But if more privileges are permitted at the same time, more signatures will be computed. For each privilege, computation cost is fixed, so the total cost is proportional to the number of privileges.E
n
c
r
y
p
t
E
S
P. ESP encrypts the access tree in this procedure. The computation cost is proportional to the number of attributes in the tree. If the universal attributes set in T is I (I denotes the total number of attributes in set I), for each element in I, it needs two exponentiation operations; totally, the computation complexity is O
(
I
).KeyGen. This procedure is carried out to generate SK for DR. Computation cost is proportional to the number of attributes in SK. For each attribute, two pairing operations and one multiplication operation are needed. If the universal attributes set is S (S is the total number of attributes in set, S
≤
I), the time complexity of SK computation is O
(
S
).CerGen. In this procedure, we construct the certificate and mask it. Items in certificate are denoted by DO. TA needs to do one exponentiation operation, one multiplication operation, and one pairing operation. Computation cost is fixed; the computation complexity is O
(
1
).D
e
c
r
y
p
t
D
S
P. In this procedure, DSP decrypts the ciphertext. The main overhead is incurred at the decryption of every attribute. The cost is proportional to the number of attributes in the access tree. Thus, the complexity is O
(
I
).D
e
c
r
y
p
t
D
R. In this procedure, DR gets M from the masked M by a divide operation. Thus, the complexity is O
(
1
).
### 6.1.2. Storage Cost
Compared to CP-ABE, more storage cost is incurred in MC-ABE because the certificate and the unique value are introduced. As shown in Table2, the items in certificates are related to data access privileges, so the storage space of the certificate is proportional to the number of the documents (data). For each DR, one record is kept in mask value table (Table 1). Thus, the storage space for mask value table is proportional to the number of DR. Since the items in mask value table are quite simple, the total storage cost is not heavy.Table 2
Impact factor of storage cost.
Number of docs
Number of DR
Certificate storage space
Related
No
Mask value storage space
No
Related
## 7. Conclusion
C-BSN is a promising technology that can greatly improve people's healthcare experience. However, keeping data secure and private in C-BSN is an important and challenging issue, since patients' health-related data are highly sensitive. In this paper, we propose MC-ABE, a novel encryption outsourcing scheme that meets the data security and privacy requirements of C-BSN. In MC-ABE, a specific signature is constructed to mask the plaintext, a unique authentication certificate is constructed for each visitor, and a third-party trust authority is introduced to manage these signatures and certificates. The security analysis shows that MC-ABE meets the security requirements of C-BSN, and the performance evaluation shows that it incurs lower computation and storage costs than other popular models. In future work, we plan to explore improving the scalability of MC-ABE.
---
*Source: 101287-2015-08-20.xml*
# Simulation and Formation Mechanisms of Urban Landscape Design Based on Discrete Dynamic Models Driven by Big Data
**Authors:** Ke Cao; Jing Xiao; Yan Wu
**Journal:** Discrete Dynamics in Nature and Society
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1012900
---
## Abstract
Urban landscape design, as a contemporary art, embodies postmodernist philosophical and aesthetic thinking, breaks with the traditional concept of art, and offers a new way of creating and presenting art. Big data technology, characterized by the large scale, velocity, variety, value, and uncertainty of data, is used to support urban landscape design. In this article, we strive to raise the discussion beyond the design level to the cross-fertilization and interaction between big data-driven discrete dynamic models and urban landscape design; we also show how, once such a model is applied, the interactive expression of the urban landscape yields benefits for urban development and harmonious life. This provides designers and related professionals with more detailed and novel design ideas at the theoretical level and enriches the theory of interactive landscape design methods based on big data-driven discrete dynamic models. Finally, this article offers reflections and an outlook on the role of the big data-driven discrete dynamic model in the interactivity of urban landscape design, in the hope that artists will strengthen its functional and material design elements in their creative work. Moreover, more design means from emerging modern technologies should be integrated so that the modern urban landscape can achieve both ordinary and uncommon benefits and promote the rapid development of big data-driven discrete dynamic models in urban landscape design.
---
## Body
## 1. Introduction
Historically, urban landscape design has been closely linked to the arts, each interacting with the other. While today's landscape design involves many scientific disciplines, it is still heavily influenced by the ever-changing field of art. The profound, subtle, inner beauty of traditional gardens and the simplicity, ecology [1], and realism of modern gardens are all closely related to the various styles of art; whenever the art world undergoes a radical change, the landscape design world is influenced by corresponding changes. When surrealist paintings emerged in large numbers in the twentieth century, they had a significant impact on designers, including landscape designers, who combined the formal language of cubism and surrealism to apply simple, flowing planes in landscape design, beginning to show dynamic and balanced forms. These artistic trends and forms have provided landscape designers with a formal language and creative inspiration [2].

In today's urban development, big data-driven discrete dynamic models are gaining unprecedented momentum in landscape design. More and more designers and artists are focusing on combining big data-driven discrete dynamic models with urban landscape design, making once neglected and aging urban landscape environments feel fresh again. In urban public space, the big data-driven discrete dynamic model is shaped by today's design trends, which tend to be ever more novel and humanized [3], forming a kind of interactive public art that communicates with urban residents and strengthening, in both visual and auditory terms, the close relationship between the model, the urban landscape environment, and public life within a holistic yet independent public space. Truly reflecting the aesthetic and social value of big data-driven discrete dynamic models and letting them enter public life in a more open and free manner [4] are great problems faced when constructing such models today; at the same time, they are realistic objectives for contemporary big data-driven discrete dynamic models to reach higher development goals. The trend of mutual influence and integration between big data-driven discrete dynamic models and the urban landscape is becoming more and more obvious [5].

The design of urban spaces influences our lifestyles, which in turn affects our health. Not all lifestyles are shaped by urban space design; however, lack of physical exercise, dependence on motorized travel, and sedentary habits are inextricably linked to it. Urban space plays an important role in the pathogenic mechanisms of certain diseases. The landscape environment, as a large component of urban space, plays an important role in creating a healthy environment and inducing positive behaviors, and it is closely linked to public health. A healthy city includes not only a clean, hygienic, natural, and harmonious living environment but also a good and positive social atmosphere; the former is the basis for the latter, which in turn sustains it.
As an important part of urban public space, the urban landscape plays a decisive role in the living environment and social atmosphere, both in the creation of space and in the construction of service facilities. Contemporary medical research confirms that a beautiful environment helps regulate and stimulate the cerebral cortex and the state of mind and body. Only by creating an environment conducive to healthy behavior can public health be thoroughly improved.
## 2. Related Work
The population expansion, landscape congestion, soaring housing prices, and environmental pollution of large cities are not conducive to developing enterprises and talent, and many domestic high-tech enterprises have chosen to move away from first-tier cities, while the countryside faces a large loss of land, abandoned residential land [6], large-scale population transfer, and many other problems; the "big city disease" and "countryside disease" caused by high-speed urbanization are increasingly aggravated. With the rapid development of information and communication technology, urban big data has brought great changes to the field of urban planning [7], and new urban spatial data can, to a certain extent, make up for the shortcomings of traditional data. Previous population density studies were mainly macro analyses; in the environment of new data, the research scale can be reduced to the township and road level, expanding the scope and accuracy of research and helping to discover new problems. Exploring how to effectively integrate traditional Depthmap-based spatial syntax analysis with the big data mining that has emerged in recent years, so as to enhance the explanatory power for complex urban spatial problems, will become one of the frontier branches of research in urban planning. Accordingly, the combined application of spatial syntax and big data in [8] provides new possibilities for research on the spatial morphology of characteristic towns and helps solve their construction problems [9].

In planning terms, urban morphology is defined as "the spatial and territorial distribution of the city as a whole and its internal components"; that is, urban morphology reflects the spatial structure formed by the various material [10], economic, and social components in urban space, expressed as combinations of these elements in planar or three-dimensional form. Kevin Lynch held that urban morphology is the orderly arrangement and overall organization of urban environmental elements (roads, boundaries, areas, nodes, and markers), so that the presentation of the overall shape of the city heightens the visual excitement of the observer [11]. Urban spatial morphology characterizes the city's spatial structure and developmental laws, while Mr. Qikang held that urban morphology results from the spatial characteristics that arise as the city develops and changes under internal and external conflicts [12].

The literature has found experimentally that the natural environment can influence and even change human mood, and the role of landscape in restoring physical and mental health has been demonstrated through post-occupancy evaluation of rehabilitation gardens, which concluded that natural landscape elements play a major role in the process of restoring health; guiding principles for rehabilitation landscape design are also presented there. The author of that work is regarded as the founder of the modern rehabilitation garden hypothesis and related theories, and it was followed by an experiment on the impact of the hospital's external environment on the effectiveness of patients' rehabilitation treatment.
After testing, the researchers found that the natural environment helps ease blood pressure, skin conductivity, and muscle tension and enables people to recover more quickly from stressful mental states. Based on these results, the natural benefit hypothesis is proposed concerning the health benefits that landscape elements in the natural environment, including water and plants, confer on the human body [13]. Through a survey of project practice across landscape types under the rehabilitation landscape concept, the literature argues that the rehabilitation landscape gives patients opportunities to actively recover their functions. That work is also the first to link rehabilitation landscapes with urban public space, arguing that expanding the scope of the rehabilitation landscape and combining it with urban public space can effectively promote the physical and mental health of urban residents. The literature further argues that both natural and artificial landscape environments promote the recovery of human physical and mental health and defines healing landscapes as places, facilities, buildings, and surroundings [14] that are beneficial to the recovery and maintenance of physical and mental health and happiness [15].
## 3. Simulation and Formation Mechanisms for Urban Landscape Design Based on Discrete Dynamic Models Driven by Big Data
### 3.1. Big Data-Driven Discrete Dynamic Models
Big data refers to collections of datasets so large and complex that storing and processing them with existing general-purpose databases is difficult; such datasets are what "big data" originally denoted. Real-time urban landscape data are an important basis for responding to signal control effects and a core indicator for calculating or optimizing control volumes, so modeling landscape flows is an essential step [16]. In the landscape engineering community, to study the mechanism and characteristics of landscape flow at the macro and micro levels, some researchers have established mathematical models of landscape flow change based on the conservation of urban landscape quantity and its spreading mechanism; these models perform well in predicting local landscape flow. Pre-planning analysis is the starting point of landscape planning work and the basis for planning and design. Common landscape data include natural information such as site topography, terrain, vegetation, water bodies, soil, and meteorology, and human and social information such as population, transportation, economy, and culture. Traditional landscape data exist mainly on paper and in pictures [17], such as topographic maps, planning drawings, textual information, hand drawings, and site photos. By mining and analyzing big data, it is possible to quantify text, speech [18], location information, and much else. Big data has thus greatly expanded the information available for pre-analysis, provided data support for landscape planning, and improved the scientific basis of planning and design. The conceptual diagram of the discrete dynamic model is shown in Figure 1.

Figure 1
Conceptual illustration of the discrete dynamic model.

The discrete industry is characterized by manufacturing units with high variety, small batch sizes, and flexible process routes and equipment use. In industrial production, companies pay close attention to production flexibility, so for process-control optimization they emphasize a high degree of automation of the manufacturing material flow, the interconnection and pervasive awareness of all elements involved in the manufacturing process within the factory, and the modeling and simulation of the cyber-physical system of the manufacturing process. Since its introduction, the discrete choice model has been widely used in fields such as marketing, economics, and transportation to study individual choice behavior. For example, when customers buy a mobile phone, they usually weigh its CPU, camera, resolution, and other attributes before choosing; all these attributes are discrete rather than continuous variables, which are difficult to handle with a traditional linear regression model, so a discrete choice model is needed for modeling and analysis [19].

A big data-driven discrete dynamic model is a feedback system of nonlinear components in which each neuron is connected to all other neurons; it is classified as discrete (DHNN) or continuous according to whether the network output is a discrete or a continuous quantity. For the discrete type (DHNN), if the system is stable, it can converge from any initial state to a stable state; if the system is unstable, then, since each node's output takes only the two states 1 and −1 (or 1 and 0), the system cannot diverge without bound but can only exhibit self-sustaining oscillations or limit cycles of limited amplitude, finally converging to an adjacent stable state. When the evaluation indices of the college students to be classified are input, the DHNN gradually converges to a stored equilibrium point through its associative memory; when the state no longer changes [20], that equilibrium point corresponds to the classification level being sought. The discrete model building process is shown in Figure 2 (a minimal DHNN sketch follows the figure caption).

Figure 2
Flowchart of discrete model building.
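As a concrete illustration of the associative recall just described, here is a minimal sketch of a discrete Hopfield network (DHNN) in Python; the stored patterns and the update rule shown are illustrative assumptions, not the article's implementation.

```python
import numpy as np

# Minimal discrete Hopfield network (DHNN): states are +/-1, patterns
# are stored with a Hebbian rule, and recall iterates sign(W @ x)
# until the state stops changing, i.e., reaches an equilibrium point.

def train_hopfield(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, x, max_iters=100):
    for _ in range(max_iters):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):  # converged to a stable state
            return x_new
        x = x_new
    return x

# Two stored prototypes standing in for "classification levels".
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [-1, -1, -1, 1, 1, 1]])
W = train_hopfield(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])  # corrupted copy of pattern 0
print(recall(W, noisy))                   # -> [ 1  1  1 -1 -1 -1]
```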
Random utility theory is the most basic principle of the discrete choice model: assume that the decision-maker has $n$ options, each corresponding to a utility $U_{ij}$. Utility is defined as a measure of the satisfaction of the consumer's needs and desires through consumption or the enjoyment of leisure. $U_{ij}$ is composed of a fixed utility $V_{ij}$ and a random utility $\varepsilon_{ij}$: the fixed part $V_{ij}$ is explained by the observable factors $x$, while the random part $\varepsilon_{ij}$ is the unobservable, unpredictable component, a random utility and error term. The decision-maker chooses the alternative with the maximum utility, the utility maximization (satisfaction maximization) criterion, and the fixed utility can be written as a function of the observable factors, that is,

$$U_{ij}=V_{ij}+\varepsilon_{ij},\qquad V_{ij}=\beta x,$$

where $\beta$ is the coefficient. The probability of choosing each alternative can then be expressed as a function of utility, $P=F(U)$. The multinomial logit (MNL) model is the simplest and most basic form of the discrete choice model; it assumes the random utilities follow independent extreme value distributions, so the choice probability function can be expressed as follows:

$$M_{ij}=\frac{e^{V_{ij}}}{\sum_{k=1}^{n}e^{V_{ik}}}.\qquad(1)$$

This formula is the core formula of this article: it specifies the algorithm the discrete model uses and underlies the formulas that follow (a numerical sketch of (1) appears at the end of this subsection). The MNL model is easy to operate and quick to use, so it is popular among scholars and applied in many fields. For example, in landscape design and travel, it can plan optimal routes according to people's habits and destinations; in education, it can be used in teaching practice; and in marketing, it is used for Internet positioning, pharmaceutical pricing, the selection of electronic products, and more.

The object of traditional sliding mode variable structure control is a continuous system model, while the computer systems in use today are all discrete. When sliding mode variable structure control is applied to a discrete control system, the sampling frequency means that the ideal sliding mode can no longer be produced; only a quasi-sliding mode arises, and the motion trajectory can only end up chattering along the sliding surface. The research and design of discrete sliding mode variable structure control has therefore become an important part of sliding mode variable structure control theory. The discrete structural control is defined as follows:

$$N_{ij}=\frac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}+\frac{\delta y}{\delta x}.\qquad(2)$$

The existence and reaching conditions of the discrete structure, where existence is the basis of the sliding mode motion, are given by the following expression:

$$f=\frac{\Delta y}{\Delta x}\cdot\frac{\partial^{2}\Omega}{\partial v^{2}}.\qquad(3)$$

The sliding mode motion consists of two processes, the convergence (reaching) motion and the sliding mode motion; the reaching condition only guarantees that the state arrives at the switching surface from a point in the state space, while no restriction is placed on the convergence trajectory. The proposed convergence law can improve the dynamic quality of the convergence motion.
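As promised above, a minimal numerical sketch of the MNL choice probability in (1), written in Python; the utility values are invented for illustration.

```python
import numpy as np

# MNL choice probabilities (formula (1)): a softmax over the fixed
# utilities V of the n alternatives faced by one decision-maker.

def mnl_probabilities(V):
    expV = np.exp(V - V.max())  # shift by the max for numerical stability
    return expV / expV.sum()

# Hypothetical fixed utilities of three route alternatives.
V = np.array([1.2, 0.4, -0.3])
P = mnl_probabilities(V)
print(P.round(3))  # -> [0.598 0.269 0.133]
# The highest-utility alternative receives the highest choice
# probability, consistent with the utility maximization criterion.
```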
### 3.2. Mechanisms of Urban Landscape Design Simulation and Formation
The extremely large volume of data also places demands on the tools used for field collection. The traditional methods of carrying paper topographic maps, image maps, and status maps, collecting information with cameras, and viewing site features manually are no longer sufficient for current data collection needs. In the context of big data, mobile GIS, GPS technology, smartphones, and other mobile devices are entering field research. Working from the existing database, planners can use mobile devices to collect vector data, photos with location information, action tracks, necessary text descriptions, and other information in the field and upload them directly to the database, supporting the preliminary planning and design of the site and creating convenient research and efficient working methods. Many forms of data are commonly used in landscape planning, such as drawings, text, pictures, dwg files, survey log sheets, verbal notes, and video images. The data currently widely used and available at the planning stage come primarily from the site itself. In addition to the basic information commonly used in planning, such as topographic and geomorphological maps, land use maps, remote sensing image maps, site photos, and multiplanning maps, specific sites need targeted analysis and surveys, such as soil bearing capacity, water quality, visitor use, vegetation cover, and housing quality and use surveys. Beyond these surveys of basic site information, there are also common economic and cultural data, such as visitor numbers, economic revenue, environmental monitoring, video viewing, brochure collection, and basic service facilities. As big data brings new types of data, such as web text, communication location, and social network data, the variety of data has increased substantially. The creation of databases is a requirement of the times for promoting resource sharing. With deepening data openness, more and more data can be obtained quickly through comprehensive and authoritative official websites, and people can use and analyze the data freely under common standards, giving full play to its economic and social value and creating a more convenient and comfortable living environment. The database is established as shown in Figure 3 (a minimal schema sketch follows the figure caption).

Figure 3
Design diagram of the landscape design database.
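As referenced above, a rough sketch of such a survey database using Python's built-in sqlite3 module; all table and column names are hypothetical, chosen only to mirror the data types listed in the text.

```python
import sqlite3

# Minimal landscape-survey database: one table for site features
# collected in the field, one for attached media such as geotagged
# photos or GPS tracks. Schema names are illustrative only.

conn = sqlite3.connect("landscape_survey.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS site_feature (
    id        INTEGER PRIMARY KEY,
    site_name TEXT NOT NULL,
    category  TEXT,   -- e.g. vegetation, water body, soil sample
    lat       REAL,
    lon       REAL,
    note      TEXT
);
CREATE TABLE IF NOT EXISTS media (
    id         INTEGER PRIMARY KEY,
    feature_id INTEGER REFERENCES site_feature(id),
    kind       TEXT,  -- photo, track, video
    path       TEXT
);
""")
conn.execute(
    "INSERT INTO site_feature (site_name, category, lat, lon, note) "
    "VALUES (?, ?, ?, ?, ?)",
    ("riverside park", "water body", 31.23, 121.47, "bank erosion visible"),
)
conn.commit()
for row in conn.execute("SELECT site_name, category, note FROM site_feature"):
    print(row)
conn.close()
```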
Analysis of the site's current conditions is the basis of design and reflects the important principle of "respect for the site" in landscape planning and design. The "nature-based" concept regards people as part of nature. By investigating and analyzing the current situation of the site, we can use its resources effectively, avoid waste, and reflect its culture; plan the spatial functions of the site reasonably to meet users' needs; design roads and water supply and drainage scientifically; and reduce the engineering volume while improving work efficiency. The analysis should not be limited to the site itself; surrounding resources are also a key point of examination, and their reasonable use can trigger a chain reaction across the whole area and drive regional development. In short, planning and design grounded in an analysis of the site's current situation conform to actual conditions and ecological requirements and yield scientific, economical, and rational results.

As one of the five elements of design, terrain embodies the unique skeleton of the site and is the basis of design. By analyzing the site's topography, the functional partitioning of the site and the locations of attractions can be delineated; for example, plazas and resting points can be placed in flat areas, while vertical activities such as slides and rock climbing can be placed on steep slopes. Studying the site's topography and geomorphology can effectively reduce damage to the status quo and the amount of earthwork; at the same time, it supports the analysis of landscape sightlines and view areas and the scientific planning of drainage and elevation. Terrain analysis is the foundation of planning and design and determines its overall style. In topographic investigation, elevation and slope analysis matter most (see the sketch at the end of this subsection), yet traditional terrain analysis relies mainly on human field surveys under site conditions, so the more complex topography of a site cannot be comprehensively investigated, which hinders a comprehensive understanding of the site and sound analysis and judgment.

As the most representative and attractive streets in a city, urban boulevards depend on certain qualities of the physical environment that are critical to the urban landscape. One or two of the many elements that make up a streetscape are not enough to create a competent streetscape. Good urban boulevards must meet several basic requirements, including accessibility, the ability to bring people together, public character, livability, safety, comfort, and the potential to encourage residents to participate in and take responsibility for the community. All of these qualities can be achieved by design.
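As referenced above, a minimal sketch of elevation and slope analysis on a gridded digital elevation model, in Python; the elevation values and cell size are hypothetical.

```python
import numpy as np

# Slope analysis on a gridded digital elevation model (DEM): slope is
# derived from finite-difference elevation gradients in each cell.

def slope_degrees(dem, cell_size):
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation gradients
    rise = np.hypot(dz_dx, dz_dy)               # steepest-ascent rate
    return np.degrees(np.arctan(rise))          # slope angle per cell

# Hypothetical 4x4 elevation grid (meters) on 10 m cells.
dem = np.array([[50.0, 50.5, 51.0, 51.5],
                [50.2, 50.8, 51.6, 52.4],
                [50.4, 51.2, 52.4, 53.6],
                [50.6, 51.6, 53.2, 55.0]])
slopes = slope_degrees(dem, cell_size=10.0)
flat = slopes < 3.0    # candidate cells for plazas and resting points
steep = slopes > 15.0  # candidate cells for slides or climbing walls
print(slopes.round(2))
print("flat cells:", int(flat.sum()), "steep cells:", int(steep.sum()))
```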
## 4. Experimental Results and Analysis
### 4.1. Results of an Urban Landscape Design Simulation
Vision is undoubtedly the primary way in which people perceive things. Studies have shown that when people move along a road, more than 80% of the information they receive about their surroundings comes from vision. In addition to making us aware of the physical form and color of things, vision helps us judge the position and movement of objects, allowing us to enjoy beautiful street scenes. On the other hand, since the turn of the century, modern transportation has gradually replaced walking, and most people move through road space at some speed. Compared with the slow travelers who once quietly viewed the streetscape, most people today are in a dynamic situation. An urban landscape design based on a big data-driven discrete dynamic model is more efficient, as shown in Figure 4.

Figure 4
The efficiency of discrete dynamic model experiments based on big data.

The study of people's dynamic visual properties is the basis of streetscape design, with several applications in the design process. For example, the shading relationship between the foreground landscape and the buildings behind it can guide the selection of street tree species that satisfy the required visible height; people's visual characteristics in plan can determine the design size of signs or scenery along the road; the relationship between vehicle speed and the recognizable distance of the road's horizontal landscape can determine the appropriate building setback on both sides, or the width of the roadside landscape zone, for different design speeds; and the relationship between vehicle speed and the discernible distance of the road's longitudinal landscape can help set the scale of the green landscape on both sides and an appropriate rhythm of streetscape change (a small numerical illustration follows Figure 5). Therefore, when creating a street landscape environment, matching the changes in the human visual field and respecting people's dynamic visual characteristics can have a multiplier effect, improving the efficiency and quality of the design. Satisfaction with landscape design assisted by a big data-driven discrete dynamic model is shown in Figure 5.

Figure 5
Satisfaction with landscape design based on big data-driven discrete dynamic models.
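As noted above, the speed-distance relationship can be made concrete with a small calculation; the viewing-time constant and design speeds below are illustrative assumptions, not values from this article.

```python
# Recognizable-distance sketch: at higher design speeds a driver has a
# roughly fixed viewing time, so roadside scenes must read clearly from
# farther away, pushing setbacks and landscape zones wider.

VIEWING_TIME_S = 3.0  # assumed time needed to take in a roadside scene

def recognizable_distance_m(speed_kmh, viewing_time_s=VIEWING_TIME_S):
    return speed_kmh / 3.6 * viewing_time_s  # km/h -> m/s, times seconds

for speed in (30, 50, 70):  # hypothetical design speeds in km/h
    d = recognizable_distance_m(speed)
    print(f"{speed} km/h -> scene must read clearly from ~{d:.0f} m")
# 30 km/h -> ~25 m, 50 km/h -> ~42 m, 70 km/h -> ~58 m
```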
### 4.2. Simulation of Urban Landscape Formation Mechanism
Urbanization is an inevitable law at a certain stage of economic and social development and inevitably brings many problems and contradictions that must be confronted openly. Urban green space design that incorporates an urban agricultural landscape is a solution to this set of urban problems, but it requires repositioning with the help of contemporary complex adaptive systems theory. That theory holds that the uncertain behavior of individuals in adaptive systems, which makes the features the system will present impossible to perceive in advance, is the dominant factor in the failure of related design projects, and this potential uncertainty cannot be eliminated. Landscaping based on a discrete dynamic model driven by big data can further reduce this uncertainty, with the results shown in Figure 6.

Figure 6
Map of the efficiency of the determination of landscaping to engineering based on discrete dynamic models driven by big data.

Introducing urban agricultural landscaping into urban green spaces, instead of simply importing existing ornamental landscaping, involves many complex factors: not only the selection of plants, the creation of the landscape, and the composition of the space but also the crop output of urban agricultural landscapes, most typically the planting management, harvesting, and distribution of productive agricultural plants. Landscape design based on a big data-driven discrete dynamic model can solve such management problems; its efficiency results are shown in Figure 7.

Figure 7
Management efficiency of landscape design based on discrete dynamic models driven by big data.
## 5. Conclusion
As a unique type of road, the urban landscape avenue must have functional properties and morphological characteristics different from those of general roads; current domestic and foreign studies investigate its specific concepts and characteristics, but most remain relatively general and one-sided, and some are still controversial. The varied functional characteristics and spatial requirements of the urban landscape avenue make its streetscape design very challenging. Some pioneering research and practice have already proposed streetscape designs for landscape avenues but have focused more on the level of principles, goals, or guidelines; for road landscapes in the general sense, design theories and innovative methods are numerous, yet few people use them. In this article, through analysis of the functional and spatial characteristics of urban landscape boulevards, the integration of classical and innovative theories of road landscape design, and the introduction of the concept of standardization, we propose a set of streetscape design methods applicable to urban landscape boulevards. The method takes "decomposition-reconstruction" as its core idea, road landscape segmentation as its premise, standard landscape section design as its basis, and the combination of standard sections as its means, finally realizing the streetscape design of urban landscape boulevards. In addition, this article applies the method, together with a series of landscape designs based on big data-driven discrete dynamic models, in a practical project, and the effect is obvious. In the future, this algorithm will be used in city simulation and in a series of simulation mechanisms, such as urban road planning.
---
*Source: 1012900-2022-01-10.xml* | 1012900-2022-01-10_1012900-2022-01-10.md | 47,324 | Simulation and Formation Mechanisms of Urban Landscape Design Based on Discrete Dynamic Models Driven by Big Data | Ke Cao; Jing Xiao; Yan Wu | Discrete Dynamics in Nature and Society
(2022) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1012900 | 1012900-2022-01-10.xml | ---
## Abstract
Urban landscape design as a contemporary art embodies postmodernist philosophical thinking, aesthetic thinking, and breaking the traditional concept of art, and it is a new way of creating and presenting art. Big data technology characterized by large scale, speed, variety, value, and uncertainty of data is used to achieve urban landscape design. In this article, during the research process, we strive to raise the revelation of the design layer rather than the brand new level of cross-fertilization and interaction between big data-driven discrete dynamic model and urban landscape design; we also reveal how the benefits of promoting urban development and harmonious life are achieved in the interactive expression of the urban landscape after the application of the big data-driven discrete dynamic model, which provides designers and related professionals with more detailed and novel design ideas at the theoretical level and makes the theory of big data-driven discrete dynamic models in landscape design interactive methods more enriched. Finally, this article puts forward its thinking and outlook on the design of the big data-driven discrete dynamic model in the interactivity of urban landscape design, hoping that artists will strengthen its functional and material design elements when creating performance. Moreover, more design means of emerging technologies of modern science and technology should be integrated so that modern urban landscape can achieve ordinary and uncommon benefits and promote the rapid development of the big data-driven discrete dynamic model in urban landscape design development.
---
## Body
## 1. Introduction
Historically, urban landscape design has been closely linked to the arts, with each influencing the other. While today's landscape design draws on many scientific disciplines, it is still heavily influenced by the ever-changing field of art. The profound, subtle, inner beauty of traditional gardens and the simplicity, ecology [1], and realism of modern gardens are all closely related to various styles of art; whenever the art world undergoes a radical change, the landscape design world is influenced by corresponding changes. When surrealist paintings emerged in large numbers in the 20th century, they had a significant impact on designers, including landscape designers, who combined the formal language of cubism and surrealism to apply simple, flowing planes in landscape design, which began to show dynamic and balanced forms. These artistic trends and forms have provided landscape designers with a formal language and creative inspiration [2].

In today's urban development, big data-driven discrete dynamic models are gaining unprecedented momentum in landscape design. More and more designers and artists are focusing on combining big data-driven discrete dynamic models with urban landscape design, making once neglected and aging urban landscape environments fresh and contemporary. In urban public space, the big data-driven discrete dynamic model is shaped by today's new design trends, which tend increasingly toward the novel and the humanized [3], forming a kind of interactive public-art communication with urban residents and strengthening, in both visual and auditory terms, the close relationship between such models, the urban landscape environment, and public life within a holistic yet independent public space. Truly reflecting the aesthetic and social value of big data-driven discrete dynamic models and letting them enter public life with a more even and free attitude [4] are great problems faced in constructing such models today; they are also realistic objectives for contemporary big data-driven discrete dynamic models to reach higher development goals and realize themselves. The trend of mutual influence and integration between big data-driven discrete dynamic models and the urban landscape is becoming more and more obvious [5].

The design of urban spaces influences our lifestyles, which in turn affect our health. Not all lifestyles are shaped by urban space design, but the lack of physical exercise, dependence on motorized travel, and sedentary habits are inextricably linked to it. Urban space plays an important role in the pathogenic mechanisms of certain diseases. The landscape environment, as a large component of urban space, plays an important role in creating a healthy environment and inducing positive behaviors and is closely linked to public health issues. A healthy city includes not only a clean, hygienic, natural, and harmonious living environment but also a good and positive social atmosphere. A natural and harmonious living environment is the basis for a healthy and positive social climate, which in turn sustains that environment.
As an important part of urban public space, the urban landscape largely determines the living environment and social atmosphere, whether through the creation of space or the construction of service facilities. Contemporary medical research confirms that a beautiful environment can regulate and stimulate the human cerebral cortex and promote a healthy state of mind and body. Only by creating an environment conducive to healthy behavior can public health be thoroughly improved.
## 2. Related Work
Population expansion, landscape congestion, soaring housing prices, and environmental pollution in large cities are not conducive to the development of enterprises and talent, and many domestic high-tech enterprises have chosen to move away from first-tier cities, while the countryside faces large losses of land, abandoned residential land [6], large-scale population transfer, and many other problems; the "big city disease" and "countryside disease" caused by high-speed urbanization are increasingly severe. Along with the rapid development of information and communication technology, urban big data has brought great changes to the field of urban planning [7], and new urban spatial data can, to a certain extent, make up for the shortcomings of traditional data. Previous population density studies were mainly macro-level analyses; in the new data environment, the research scale can be reduced to the township road level, expanding the scope and accuracy of research and helping to discover new problems. Exploring how to effectively integrate traditional Depthmap-based spatial syntax analysis with the big data mining that has emerged in recent years, so as to enhance the explanatory power for complex urban spatial problems, will become one of the frontier branches of research in urban planning. In [8], the combined application of spatial syntax and big data provides new possibilities for research on the spatial morphology of characteristic towns, helping to better solve their construction problems [9].

In terms of planning, urban morphology is defined as "the spatial and territorial distribution of the city as a whole and its internal components"; that is, urban morphology reflects the spatial structure formed by the various material [10], economic, and social components in urban space, expressed as combinations of these elements in planar or three-dimensional form. Kevin Lynch held that urban morphology is the orderly arrangement and overall organization of urban environmental elements (roads, boundaries, areas, nodes, and landmarks), so that the overall shape of the city makes a stronger visual impression on the observer [11]. Urban spatial morphology characterizes the structure and development laws of urban space, while Qi Kang held that urban morphology is the result of the spatial characteristics the city acquires as it develops and changes under internal and external pressures [12].

Experiments reported in the literature have found that the natural environment can influence and even change human mood, and the role of landscape in restoring physical and mental health has been demonstrated through post-occupancy evaluation of rehabilitation gardens, which concluded that natural landscape elements play a major role in the process of restoring health; guiding principles for rehabilitation landscape design were also presented. The work regarded as founding the modern hypothesis of rehabilitation gardens and related theories was followed by an experiment on the impact of the hospital's external environment on the effectiveness of patients' rehabilitation treatment.
After testing, the researchers found that the natural environment helps ease blood pressure, skin conductance, and muscle tension and enables people to recover more quickly from stressful mental states. Based on these results, the natural benefit hypothesis was proposed regarding the health benefits that landscape elements in the natural environment, including water and plants, confer on the human body [13]. Through a survey of project practice across landscape types under the rehabilitation landscape concept, the literature argues that rehabilitation landscapes provide opportunities for patients to actively recover their functions. It was also the first work to link rehabilitation landscape with urban public space, arguing that expanding the scope of rehabilitation landscape and using it in combination with urban public space can effectively promote the physical and mental health of urban residents. The literature further argues that both natural and artificial landscape environments can promote the recovery of human physical and mental health, and defines healing landscapes as places, facilities, buildings [14], and surroundings that are beneficial to the recovery and maintenance of physical and mental health and happiness [15].
## 3. Simulation and Formation Mechanisms for Urban Landscape Design Based on Discrete Dynamic Models Driven by Big Data
### 3.1. Big Data-Driven Discrete Dynamic Models
Big data refers to collections of datasets so large and complex that they are difficult to store and process with existing general-purpose databases; such datasets came to be called "big data." Real-time urban landscape data are an important basis for assessing the effects of signal control and a core indicator for calculating or optimizing control quantities, so modeling landscape flows is an essential step [16]. In the landscape engineering community, to study the mechanisms and characteristics of landscape flow at the macro and micro levels, some researchers have established mathematical models of landscape flow changes based on the conservation of urban landscape quantity and its spreading mechanism; these models give good results in predicting local landscape flow. Pre-planning analysis is the starting point of landscape planning work and the basis for planning and design. Common landscape data include natural information such as site topography, terrain, vegetation, water bodies, soil, and meteorology, and human and social information such as population, transportation, economy, and culture. Traditional landscape data exist mainly on paper and in pictures [17], such as topographic maps, planning drawings, textual information, hand drawings, and site photos. By mining and analyzing big data, it is possible to quantify text, speech [18], location information, and much else. Big data has thus greatly expanded the information available for pre-analysis, provided data support for landscape planning, and improved the scientific basis of planning and design. The conceptual diagram of the discrete dynamic model is shown in Figure 1.

Figure 1: Conceptual illustration of the discrete dynamic model.
The discrete industry is organized around manufacturing units and is characterized by high variety, small batch sizes, and flexible process routes and equipment use. In industrial production, companies emphasize flexibility, so for process control optimization they focus on highly automated material flow in manufacturing, on the interconnection and pervasive awareness of all elements involved in the manufacturing process within the factory, and on the modeling and simulation of the cyber-physical systems of the manufacturing process. Since its introduction, the discrete choice model has been widely used in fields such as marketing, economics, and transportation to study individual choice behavior. For example, when customers buy a mobile phone, they usually weigh its CPU, camera, display resolution, and other attributes before purchasing; these attributes are discrete rather than continuous variables, which traditional linear regression handles poorly, so a discrete choice model is needed for modeling and analysis [19].

A big data-driven discrete dynamic model is a feedback system composed of nonlinear components in which each neuron is connected to all the others; it is classified as discrete (DHNN) or continuous depending on whether the network output is a discrete or a continuous quantity. For the discrete type (DHNN), if the system is stable, it can converge from any initial state to a stable state; if the system is unstable, then, because each network node outputs only the two states 1 and −1 (or 1 and 0), the system cannot diverge without bound but can only sustain oscillations or limit cycles of bounded amplitude, finally converging to an adjacent stable state. When the evaluation indices of the college students to be classified are input, the DHNN gradually converges to a stored equilibrium point through its associative memory, and once the state no longer changes [20], that equilibrium point corresponds to the classification level sought. The discrete model building process is shown in Figure 2.

Figure 2: Flowchart of discrete model building.
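To make the associative-recall behavior described above concrete, the following is a minimal sketch of a discrete Hopfield network (DHNN) in Python. It is an illustration under stated assumptions (bipolar ±1 states, Hebbian storage, asynchronous updates), not the paper's implementation; the class and method names are our own.

```python
import numpy as np

class DHNN:
    """Minimal discrete Hopfield network with bipolar states in {-1, +1}."""

    def __init__(self, patterns):
        patterns = np.asarray(patterns, dtype=float)  # shape (p, n)
        n = patterns.shape[1]
        # Hebbian weight matrix; zero diagonal so no neuron feeds back on itself.
        self.W = patterns.T @ patterns / n
        np.fill_diagonal(self.W, 0.0)

    def recall(self, state, max_iters=100):
        s = np.asarray(state, dtype=float).copy()
        for _ in range(max_iters):
            prev = s.copy()
            # Asynchronous update: each neuron takes the sign of its net input.
            for i in np.random.permutation(len(s)):
                s[i] = 1.0 if self.W[i] @ s >= 0 else -1.0
            if np.array_equal(s, prev):  # state no longer changes: equilibrium
                break
        return s

net = DHNN([[1, -1, 1, -1], [-1, -1, 1, 1]])
print(net.recall([1, -1, -1, -1]))  # corrupted input settles on a stored pattern
```

Presenting a corrupted pattern to `recall()` drives the state to a nearby stored equilibrium, mirroring how the classification level is read off once the network state stops changing.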
Random utility theory is the most basic principle of the discrete choice model: assume that the decision-maker has n options, each of which yields a utility U_ij. Utility is defined as a measure of the satisfaction of the consumer's needs and desires through consumption or the enjoyment of leisure. U_ij is composed of a fixed utility V_ij and a random utility ε_ij: the fixed utility V_ij is the part explained by observable factors x, while the random utility ε_ij is the unobservable part, covering unpredictable, random variation and error. The decision-maker chooses the alternative with the maximum utility, the so-called utility-maximization (satisfaction-maximization) criterion, and the fixed utility can be expressed as a function of the observable factors, V_ij = f(x). The probability of choosing each alternative can then be expressed as a function of utility: P = F(U). The multinomial logit (MNL) model is the simplest and most basic form of the discrete choice model; it assumes that the random utilities follow independent extreme-value distributions, so the choice probability function becomes

$$P_{ij} = \frac{e^{V_{ij}}}{\sum_{k=1}^{n} e^{V_{ik}}}. \quad (1)$$

This is the core formula of this article: it specifies the algorithm the discrete model uses and underlies the formulas that follow (a minimal numerical sketch is given below). The MNL model is easy to operate and quick to apply, so it is popular among scholars and used in many fields. For example, in landscape design and travel, it can plan optimal routes according to people's habits and destinations; in education, it can be used in teaching practice; and in marketing, it is applied to Internet positioning, pharmaceutical pricing, the selection of electronic products, and other problems.

The traditional object of study in sliding mode variable structure control is a continuous system model, whereas the computer systems in use today are discrete. When sliding mode variable structure control is applied to a discrete control system, the effect of the sampling frequency means that the ideal sliding mode can no longer be produced; only a quasi-sliding mode exists, and the motion trajectory ends up chattering along the sliding surface. The research and design of discrete sliding mode variable structure control has therefore become an important part of sliding mode control theory. The definition of discrete structure control is described as

$$N_{ij} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2 + \frac{\delta y}{\delta x}. \quad (2)$$

The existence and reaching conditions of the discrete structure, existence being the basis of the sliding mode motion, are given by

$$f = \frac{\Delta y}{\Delta x} \cdot \frac{\partial^2 \Omega}{\partial v^2}. \quad (3)$$

The sliding mode motion consists of two processes, the reaching motion and the sliding mode motion; the reaching condition only guarantees that the state reaches the switching surface from any point in state space and places no restriction on the reaching trajectory. A suitably chosen reaching law can improve the dynamic quality of the reaching motion (a numerical sketch of a discrete reaching law also follows below).
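As a check on equation (1), here is a minimal Python sketch of MNL choice probabilities; the function name and the example utilities are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mnl_probabilities(V):
    """Multinomial logit: P_i = exp(V_i) / sum_k exp(V_k) over the alternatives.

    V holds the fixed (observable) utility of each alternative; subtracting the
    maximum before exponentiating keeps the computation numerically stable
    without changing the resulting probabilities.
    """
    V = np.asarray(V, dtype=float)
    e = np.exp(V - V.max())
    return e / e.sum()

# Hypothetical example: three candidate routes scored on scenery, length, shade.
print(mnl_probabilities([1.2, 0.4, 0.9]))  # probabilities summing to 1
```

Because only utility differences matter in (1), adding a constant to every V leaves the choice probabilities unchanged, which is why the max-subtraction trick is safe.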
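The chattering of the quasi-sliding mode can likewise be illustrated numerically. The sketch below iterates Gao's discrete exponential reaching law, a standard choice in discrete sliding mode design; it is an illustration we add here, with `q`, `eps`, and `T` as assumed tuning values rather than parameters given in the paper.

```python
import numpy as np

def reaching_law_trajectory(s0, q=5.0, eps=0.5, T=0.01, steps=300):
    """Iterate s(k+1) = (1 - q*T)*s(k) - eps*T*sign(s(k)).

    Starting from s0, the switching function s decays toward the surface
    s = 0 and then oscillates in a small band around it: the quasi-sliding
    mode produced by sampling, as described in the text.
    """
    s = [float(s0)]
    for _ in range(steps):
        sk = s[-1]
        s.append((1.0 - q * T) * sk - eps * T * np.sign(sk))
    return np.array(s)

traj = reaching_law_trajectory(2.0)
print(traj[:5], traj[-3:])  # fast decay, then bounded chatter near zero
```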
### 3.2. Mechanisms of Urban Landscape Design Simulation and Formation
The extremely large volume of data also places demands on the tools used for field collection. The traditional methods of carrying paper topographic maps, image maps, and status maps, collecting information with cameras, and viewing site features manually can no longer meet current data collection needs. In the context of big data, mobile GIS, GPS technology, smartphones, and other mobile devices enter the field research work of the site. Working from an existing database, planners can use mobile devices to collect vector data, geotagged photos, movement tracks, and the necessary text descriptions in the field and upload them directly to the database, supporting the preliminary planning and design of the site and creating convenient research and efficient working methods. Many forms of data are commonly used in landscape planning, such as drawings, text, pictures, dwg files, survey log sheets, verbal notes, and video images. The data currently widely used and available at the planning stage come primarily from the site itself. Beyond the basic information commonly used in planning, such as topographic and geomorphological maps, land use maps, remote sensing images, site photos, and multi-plan maps, specific sites call for targeted analysis and surveys, such as soil bearing capacity, water quality, visitor use, vegetation cover, and housing quality and use surveys. In addition to these surveys of basic site information, there are also common economic and cultural data, such as visitor numbers, economic revenue, environmental monitoring, video viewing, brochure collection, and basic service facilities. As big data brings new types of data, such as web text, communication location, and social network data, the variety of data has increased substantially. Creating databases is a requirement of the times for promoting resource sharing. As data openness deepens, more and more data can be obtained through fast, comprehensive, and authoritative official websites; people can use and analyze the data freely within standards, giving full play to its economic and social value and creating a more convenient and comfortable living environment. The database is established as shown in Figure 3.

Figure 3: Design diagram of the landscape design database.
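As one way to picture the database in Figure 3, here is a minimal sketch of a field-collection record in Python; the record type and every field name are our illustrative assumptions, not the schema used in the project.

```python
from dataclasses import dataclass, field

@dataclass
class SiteRecord:
    """Hypothetical field-collection record uploaded from a mobile device."""
    site_id: str
    location: tuple[float, float]  # (latitude, longitude) from mobile GIS/GPS
    photos: list[str] = field(default_factory=list)  # paths to geotagged photos
    tracks: list[list[tuple[float, float]]] = field(default_factory=list)  # survey traces
    notes: str = ""  # text descriptions taken on site
    surveys: dict[str, float] = field(default_factory=dict)  # targeted survey results

record = SiteRecord(site_id="S-001", location=(31.23, 121.47))
record.surveys["soil_bearing_kPa"] = 180.0   # e.g. soil bearing capacity survey
record.surveys["water_quality_index"] = 72.0
```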
Analysis of the site's current situation is the basis for design and reflects the important principle of "respect for the site" in landscape planning and design. The "nature-based" concept regards people as part of nature. By investigating and analyzing the current state of the site, we can use its resources effectively, avoid waste, and reflect the site's culture; plan the spatial functions of the site reasonably to meet users' needs; design roads, water supply, and drainage scientifically; and reduce engineering volume while improving work efficiency. The analysis should not be limited to the site itself: the surrounding resources are also key points to examine, and their reasonable use can set off a chain reaction across the whole area and drive regional development. In short, planning and design based on analysis of the site's current situation fits the actual conditions and ecological requirements and supports scientific, economical, and rational planning and design.

As one of the five elements of design, terrain embodies the unique skeleton of the site and is the basis of the design. By analyzing the site's topography, the functional partitioning of the site and the locations of attractions can be delineated; for example, plazas and resting points can be placed in flat areas, and vertical activities such as slides and rock climbing can be placed on steep slopes. Studying the site's topography and geomorphology can effectively reduce damage to the status quo and the amount of earthwork; at the same time, it aids the analysis of sightlines and view areas and benefits the scientific planning of drainage design and elevation analysis. Terrain analysis is the basis of planning and design and determines its overall style. In investigating topography and geomorphology, the most important analyses are of elevation and slope (a minimal sketch follows at the end of this subsection). Traditional terrain analysis relies mainly on human field surveys and site conditions, so the more complex topography of a site cannot be investigated comprehensively, hindering a full understanding of the site and sound analysis and judgment.

For the most representative and attractive street in a city, certain qualities of the physical environment are critical to the urban landscape. One or two of the many elements that make up a streetscape are not enough to create a competent streetscape. Good urban boulevards must meet several basic requirements, including accessibility, the ability to bring people together, public character, livability, safety, comfort, and the potential to encourage residents to participate in and take responsibility for the community. All of these qualities can be achieved by design.
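The elevation and slope analysis mentioned above can be automated once the terrain is gridded. The following is a minimal sketch, assuming a small numpy digital elevation model (DEM) with a fixed cell size; the function name and sample values are our own illustration.

```python
import numpy as np

def slope_degrees(dem, cell_size=10.0):
    """Slope of each DEM cell, in degrees, from finite-difference gradients."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)  # elevation change per metre
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical 3x3 DEM (elevations in metres), 10 m grid cells.
dem = np.array([[100.0, 102.0, 105.0],
                [101.0, 104.0, 108.0],
                [103.0, 107.0, 112.0]])
print(slope_degrees(dem))
# Low-slope cells could host plazas and resting points; steep cells suit
# vertical activities such as slides or climbing walls, as described above.
```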
## 4. Experimental Results and Analysis
### 4.1. Results of an Urban Landscape Design Simulation
Vision is undoubtedly the primary way in which people perceive their surroundings. Studies have shown that when people move along a road, more than 80% of the information they obtain about their surroundings comes from vision. Besides making us aware of the physical form and color of things, vision helps us judge the position and movement of objects, allowing us to enjoy beautiful street scenes. Since the start of the new century, however, modern transportation has gradually replaced walking, and most people move through road space at speed. Compared with earlier low-speed travel, when people could quietly view the streetscape, most people today experience it dynamically. An urban landscape design based on a big data-driven discrete dynamic model is more efficient, as shown in Figure 4.
Figure 4: The efficiency of discrete dynamic model experiments based on big data.

The study of people's dynamic visual properties is the basis of streetscape design and has several applications in the design process. For example, the shading relationship between the foreground landscape and the buildings behind it can guide the selection of street tree species that meet visible-height requirements; people's visual characteristics in plan can determine the design size of signs or scenery along the road; the relationship between vehicle speed and the recognizable distance of the road's transverse landscape can determine the appropriate setback of buildings on both sides or the width of the roadside landscape zone for different design speeds; and the relationship between vehicle speed and the discernible distance of the road's longitudinal landscape can help determine the scale of green landscape on both sides and an appropriate rhythm of streetscape change (a rough worked example follows at the end of this subsection). Therefore, when creating a street landscape environment, matching changes in the human visual field and respecting people's dynamic visual characteristics can have a multiplier effect, improving the efficiency and quality of the design. Satisfaction with landscape design aided by a big data-driven discrete dynamic model is shown in Figure 5.
Figure 5: Satisfaction with landscape design based on big data-driven discrete dynamic models.
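To make the speed-distance relationship above concrete, here is a back-of-the-envelope sketch in Python. The 10-second viewing window and the sample speeds are illustrative assumptions of ours, not design standards from the paper.

```python
def discernible_length(speed_kmh, viewing_time_s=10.0):
    """Metres of streetscape passing through view during one viewing window."""
    return speed_kmh / 3.6 * viewing_time_s

# A higher design speed stretches each landscape "section": elements must be
# larger, simpler, and spaced further apart to remain readable from a vehicle.
for v in (30, 50, 70):
    print(f"{v} km/h -> about {discernible_length(v):.0f} m per viewing window")
```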
### 4.2. Simulation of Urban Landscape Formation Mechanism
Urbanization is an inevitable law at a certain stage of economic and social development, and it inevitably brings many problems and contradictions that must be confronted openly. An urban green space design that incorporates an urban agricultural landscape, as a response to this set of urban problems, needs to be repositioned with the help of contemporary complex adaptive systems theory. That theory holds that uncertainty in the behavior of individuals within an adaptive system, which makes the features the system will present impossible to perceive in advance, is the dominant factor in the failure of related design projects, and this potential uncertainty cannot be eliminated. Landscaping based on discrete dynamic models driven by big data can further reduce this uncertainty, with the results shown in Figure 6.
Figure 6: Efficiency of the determination of landscaping to engineering based on discrete dynamic models driven by big data.

Introducing urban agricultural landscaping into urban green spaces, rather than simply installing existing ornamental landscaping, requires many complex factors to be considered: not only plant selection, landscape creation, and spatial composition but also the crop output involved in urban agricultural landscapes, most typically the planting management, harvesting, and distribution of productive agricultural plants. A landscape design based on a big data-driven discrete dynamic model can solve such management problems; its efficiency results are shown in Figure 7.
Figure 7: Management efficiency of landscape design based on discrete dynamic models driven by big data.
## 5. Conclusion
As a unique type of road, the urban landscape avenue must have functional properties and morphological characteristics that differ from those of general roads. Current domestic and foreign studies investigate its specific concepts and characteristics, but most remain relatively general and one-sided, and some points are still controversial. The varied functional characteristics and spatial-morphological requirements of the urban landscape avenue make its streetscape design very challenging. Some pioneering research and practice has already proposed streetscape designs for the landscape avenue, though focused mainly at the level of principles, goals, or guidelines; for road landscapes in the general sense, numerous design theories and innovative methods exist, but few are applied. In this article, through analysis of the functional and spatial characteristics of urban landscape boulevards, integration of classical and innovative theories of road landscape design, and introduction of the concept of standardization, we propose a set of streetscape design methods applicable to urban landscape boulevards. The method takes "decomposition-reconstruction" as its core idea, road landscape segmentation as its premise, standard landscape section design as its basis, and the combination of standard sections as its means, finally realizing the streetscape design of urban landscape boulevards. In addition, the method was applied to a series of landscape designs based on big data-driven discrete dynamic models in a practical project, with evident effect. In future work, the algorithm will be used in city simulation and in related simulation mechanisms, such as urban road planning.
---
*Source: 1012900-2022-01-10.xml* | 2022 |
# Yeast Methylotrophy: Metabolism, Gene Regulation and Peroxisome Homeostasis
**Authors:** Hiroya Yurimoto; Masahide Oku; Yasuyoshi Sakai
**Journal:** International Journal of Microbiology
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101298
---
## Abstract
Eukaryotic methylotrophs, which are able to obtain all the carbon and energy needed for growth from methanol, are restricted to a limited number of yeast species. When these yeasts are grown on methanol as the sole carbon and energy source, the enzymes involved in methanol metabolism are strongly induced, and the membrane-bound organelles, peroxisomes, which contain key enzymes of methanol metabolism, proliferate massively. These features have made methylotrophic yeasts attractive hosts for the production of heterologous proteins and useful model organisms for the study of peroxisome biogenesis and degradation. In this paper, we describe recent insights into the molecular basis of yeast methylotrophy.
---
## Body
## 1. Introduction
Reduced C1-compounds, such as methane and methanol, are relatively abundant in nature. Methylotrophs, which have the ability to utilize C1-compounds as the sole source of carbon and energy, also appear to be ubiquitous in nature. A diverse range of prokaryotes and eukaryotes can utilize C1-compounds for growth, and methylotrophs have a diverse range of metabolic pathways for assimilating and dissimilating C1-compounds [1–4]. Prokaryotic methylotrophs can utilize a variety of C1-compounds (e.g., methane, methanol, methylamine), while eukaryotic methylotrophs can use only methanol as a carbon source, and methylamine not as a carbon source but as a nitrogen source. The latter group of organisms is limited to a number of yeast genera including Candida, Pichia, and some genera that were recently separated from Pichia, that is, Ogataea, Kuraishia, and Komagataella [5].

Since the first isolation in 1969 [6], methylotrophic yeasts have been studied intensively in terms of both physiological activities and potential applications. In the early 1970s, production of single cell protein (SCP) using methanol as a carbon source was studied intensively [7, 8]. These studies established a high-cell-density cultivation method, although large-scale production of SCP from methanol was eventually found not to be economically feasible. The metabolic pathways involved in methanol assimilation and dissimilation, and the characterization of the associated enzymes, have been described in Hansenula polymorpha (Pichia angusta) and Candida boidinii [9–12]. One major finding was the strong inducibility of these enzymes by methanol. A variety of genes encoding enzymes and proteins involved in methanol metabolism have since been cloned, and the regulation of methanol-inducible gene expression has been studied [13, 14]. Methylotrophic yeasts have also been used as model organisms for peroxisome biogenesis and degradation, because methylotrophic growth in yeasts is accompanied by the massive proliferation of peroxisomes, membrane-bound organelles that contain several methanol-metabolizing enzymes [15–17].

Heterologous gene expression systems driven by strong methanol-inducible promoters have been developed in a number of methylotrophic yeast strains, including P. pastoris, H. polymorpha, P. methanolica, and C. boidinii [18–22]. Increasing industrial and academic use of these expression systems has led to the heterologous production of a large number of proteins including enzymes, antibodies, cytokines, plasma proteins, and hormones. Advantages of these systems include (i) cheap synthetic salt-based media for growing the yeast, (ii) strong and tightly regulated promoters induced by methanol and repressed by glucose or ethanol, and (iii) the fact that protein folding, secretion, and other processes in these yeasts resemble in many respects the corresponding processes in higher eukaryotes.

Recently, much attention has been paid to methanol as an alternative carbon source to replace coal and petroleum [23, 24]. Industrially, methanol is prepared from "syn-gas" (CO and H2) or by reductive conversion of atmospheric CO2 with H2. Syn-gas can be produced from an abundant natural resource, methane, which is a major component of natural gas and is also obtained from renewable biomass [25]. Because methanol is a cheap, nonfood substrate, it has become a promising feedstock for biotechnological and chemical processes.
In natural environments, methanol is produced by the oxidation of methane by methane-oxidizing bacteria and by the decomposition of plant pectins and lignins, which contain methyl ester and methoxyl groups, respectively [26, 27]. Methanol is oxidized to CO2 by methylotrophic bacteria and yeasts. Thus, methylotrophs play indispensable roles in the global carbon cycle between methane and CO2, called "the methane cycle." A thorough understanding of the molecular basis of methylotrophy is needed not only to better understand the global methane cycle but also to permit more efficient use of methanol as a renewable carbon source.
## 2. Outline of Yeast Methanol Metabolism and the Physiological Roles of Associated Enzymes
All methylotrophic yeasts use a common methanol-utilizing pathway, outlined in Figure 1 [13]. Methanol is first oxidized by alcohol oxidase (AOD) to form formaldehyde and hydrogen peroxide, which are both highly toxic compounds. Formaldehyde is a central intermediate situated at the branch point between the assimilation and dissimilation pathways [2]. A portion of the formaldehyde is fixed to xylulose 5-phosphate (Xu5P) by dihydroxyacetone synthase (DAS), forming dihydroxyacetone (DHA) and glyceraldehyde 3-phosphate (GAP), which are used for the synthesis of cell constituents and the regeneration of Xu5P. AOD and DAS are located in peroxisomes together with catalase (CTA), which breaks down hydrogen peroxide. DHA and GAP are further assimilated within the cytosol. DHA is phosphorylated by dihydroxyacetone kinase (DHAK), and subsequently dihydroxyacetone phosphate (DHAP) and GAP form fructose 1,6-bisphosphate, which is then utilized for the regeneration of Xu5P and for the biosynthesis of cell constituents.
Figure 1: Outline of methanol metabolism in methylotrophic yeasts. Enzymes: ADH (MFS): alcohol dehydrogenase (methylformate-synthesizing enzyme); AOD: alcohol oxidase; CTA: catalase; DAK: dihydroxyacetone kinase; DAS: dihydroxyacetone synthase; FDH: formate dehydrogenase; FGH: S-formylglutathione hydrolase; FLD: formaldehyde dehydrogenase; GLR: glutathione reductase; Pmp20: peroxisome membrane protein with glutathione peroxidase activity. Abbreviations: DHA: dihydroxyacetone; DHAP: dihydroxyacetone phosphate; F6P: fructose 6-phosphate; FBP: fructose 1,6-bisphosphate; GAP: glyceraldehyde 3-phosphate; GS-CH2OH: S-hydroxymethyl glutathione; GS-CHO: S-formylglutathione; GSH: reduced form of glutathione; GSSG: oxidized form of glutathione; RCOOOH: alkyl hydroperoxide; Xu5P: xylulose 5-phosphate.

Another portion of the formaldehyde is further oxidized to CO2 by the cytosolic dissimilation pathway. Formaldehyde generated by AOD reacts nonenzymatically with the reduced form of glutathione (GSH) to generate S-hydroxymethyl glutathione (S-HMG). S-HMG is then oxidized to CO2 through the cytosolic GSH-dependent oxidation pathway, which is ubiquitous in nature [36]. NAD+-linked, GSH-dependent formaldehyde dehydrogenase (FLD) uses S-HMG as a substrate to yield S-formylglutathione (S-FG) and NADH. S-FG is then hydrolyzed to formate and GSH by S-formylglutathione hydrolase (FGH). NAD+-linked formate dehydrogenase (FDH) is the last enzyme of the methanol dissimilation pathway and generates CO2 and NADH through the oxidation of formate.

We have studied the physiological roles of the methanol-metabolizing enzymes by cloning and disrupting the corresponding genes in C. boidinii [22, 37]. The cloned genes and the phenotypes of the associated mutants are summarized in Table 1. AOD and DAS are clearly both essential for methylotrophy in yeast [28, 29]. However, the enzymes of the GSH-dependent formaldehyde oxidation pathway have different roles in methanol metabolism. The FLD1-disrupted strain (fld1Δ) was unable to grow on methanol under chemostat conditions even at a low dilution rate, indicating that FLD is essential for growth on methanol [30]. The FGH1- and FDH1-disrupted strains (fgh1Δ and fdh1Δ, respectively), however, were able to grow on methanol as the sole source of carbon and energy, although their growth yields were only 10% and 25% of that of the wild-type strain, respectively [31, 32]. These results suggest that although FGH and FDH are not essential, they contribute to optimal growth in the presence of methanol. We further showed that methyl formate synthesis catalyzed by cytosolic alcohol dehydrogenase (ADH) contributes to formaldehyde detoxification through GSH-independent formaldehyde oxidation during growth on methanol [33].
Table 1: Phenotypes associated with disruptions in genes encoding methanol-metabolizing enzymes in C. boidinii.
| Enzyme | Gene name | Mutant (M)/disruptant (D) phenotype | Reference |
| --- | --- | --- | --- |
| Alcohol oxidase | AOD1 | no growth | [28] |
| Dihydroxyacetone synthase | DAS1 | no growth | [29] |
| Formaldehyde dehydrogenase | FLD1 | no growth | [30] |
| S-Formylglutathione hydrolase | FGH1 | weak growth | [31] |
| Formate dehydrogenase | FDH1 | weak growth | [32] |
| Alcohol dehydrogenase | ADH1 | weak growth | [33] |
| Catalase | CTA1 | weak growth | [34] |
| Pmp20 | PMP20 | no growth | [35] |
## 3. Roles and Function of Peroxisomes in Methanol Metabolism
Oxidation of methanol in methylotrophic yeasts results in the formation of two very reactive and toxic compounds, formaldehyde and hydrogen peroxide. The enzymes involved in the metabolism of these compounds, namely AOD, DAS, CTA, and Pmp20, are compartmentalized in peroxisomes [16]. The presence of hydrogen peroxide-producing oxidases and catalase in a shared compartment is a characteristic feature of peroxisomes in all eukaryotes which offers the advantage of tightly linking production and breakdown of this toxic compound, preventing its diffusion into the cytosol [15]. The physiological roles of peroxisomal antioxidant enzymes and the antioxidative responses underlying yeast methylotrophy are described below (see Section 5).Both AOD and DAS are major components of the peroxisomal matrix, suggesting that the generation and fixation of formaldehyde is primarily confined to this organelle. Compartmentalization of the formaldehyde conversion reactions in a membrane-bound organelle may be one cellular strategy to avoid formaldehyde toxicity. Furthermore, GSH was also shown to be present in the peroxisome at a physiologically significant level [35], indicating that S-HMG can be formed within peroxisomes and then exported to the cytosol [30, 31].The peroxisomal localization of peroxisome matrix enzymes is essential for allowing methylotrophic yeasts to grow on methanol [38, 39]. Intact peroxisomes are crucial to support growth of cells on methanol as a sole source of carbon and energy. The function of peroxisomes during methylotrophic growth is (i) to allow proper partitioning of formaldehyde generated from methanol via the assimilation and dissimilation pathways and (ii) to provide a site for scavenging hydrogen peroxide. These characteristics have been an important tool in the isolation of peroxisome-deficient mutants (pex) of methylotrophic yeast species, as such mutants lose the capacity to grow on methanol despite the fact that they have all the enzymes needed for methanol metabolism [40, 41]. Detailed physiological studies of yeast pex mutants have revealed why compartmentalization is essential for methylotrophic growth [42]. Many PEX genes encoding peroxins, which are proteins involved in peroxisome biogenesis, peroxisomal matrix protein import, membrane biogenesis, peroxisome proliferation, and peroxisome inheritance, have been identified and later characterized using methylotrophic yeasts as model organisms [16].
## 4. Regulation of Methanol-Inducible Gene Expression in Methylotrophic Yeast
The key enzymes of methanol metabolism mentioned above are highly induced by methanol and are virtually absent in cells growing on glucose. We have studied the regulation of the expression of the genes encoding these enzymes in cells grown on various carbon and nitrogen sources in C. boidinii. Furthermore, we have evaluated in detail the strength and regulation of the methanol-inducible promoters using a sensitive reporter system based on the acid phosphatase gene (PHO5) from Saccharomyces cerevisiae [43].

Maximal expression of the methanol-inducible genes is thought to be exerted through two modes of regulation, that is, derepression and methanol-specific gene activation [14, 22, 37]. The former refers to activation of genes in a manner independent of the availability of methanol, and the latter refers to activation of genes in response to methanol (Figure 2(a)). When grown on glycerol, C. boidinii and H. polymorpha exhibited ~10% and ~80%, respectively, of the methanol-induced maximum AOD expression level. Glucose-limited chemostat culture experiments also showed that the level of AOD in H. polymorpha gradually increased with a decreasing dilution rate, whereas the derepression of AOD was lower in C. boidinii than in H. polymorpha [44]. Therefore, the extent and mode of derepression differ among methylotrophic yeast species. In C. boidinii, the AOD1 promoter was shown to have a maximum level of expression in cells grown on methanol and a derepressed level of expression in cells grown on glycerol or oleate, and it was repressed in cells grown on glucose or ethanol (Figure 2(a)). The DAS1 promoter did not appear to be derepressed when cells were grown on any of the alternative carbon sources [43], indicating that the derepressed level of expression of methanol-inducible genes is gene-specific even in C. boidinii. Based on these observations, we were able to determine a number of genetic factors specific to methanol induction in C. boidinii, particularly in regard to methanol-specific gene activation as opposed to derepression.

Figure 2

Molecular mechanism of methanol-inducible gene expression. (a) Relative expression levels of H. polymorpha MOX (encoding AOD), C. boidinii AOD1, and C. boidinii DAS1 during growth on various carbon sources. On glucose-containing media, expression is completely repressed. When glucose is completely consumed or cells are shifted to glycerol medium, a derepressed level of expression of the AOD genes is induced (derepression), and the extent of derepression of the AOD genes differs between H. polymorpha and C. boidinii. When cells are grown on methanol, the maximum level of expression of the AOD genes is achieved not only by derepression but also by methanol-specific gene activation. The induction of DAS1 on methanol medium is achieved only by methanol-specific gene activation. (b) During growth on glucose, the expression of methanol-inducible genes is repressed. When cells are shifted to methanol, a Trm2p-related derepression event occurs first, followed by Trm1p-related methanol-specific gene activation.
(a)
(b)

In H. polymorpha, Mpp1p regulates the levels of peroxisomal matrix proteins and peroxins [45]. Further, the SWI/SNF complex (Swi1p and Snf2p) in H. polymorpha plays a role in the transcriptional control of methanol-inducible gene expression, suggesting that chromatin remodeling participates in the transcriptional regulation of methanol-inducible genes [46]. In P. pastoris, Mxr1p (a homologue of the C2H2-type transcriptional factor Adr1p in S. cerevisiae) was shown to control the transcription of genes involved in methanol metabolism, in particular AOD1, as well as the PEX genes [47]. All of these transcriptional regulators appear to be involved in glucose derepression of gene expression; however, we have also identified a novel gene, TRM1, as a putative regulator of methanol-specific gene activation [48].

Trm1p belongs to the Zn(II)2Cys6-type zinc cluster protein family, members of which are known to be transcriptional regulators in fungi [49]. Deletion of the TRM1 gene completely prevented growth on methanol, whereas it did not cause a defect in growth on glucose or other nonfermentative carbon sources, including glycerol, ethanol, or oleic acid. These results suggested that Trm1p is involved in the regulation of methanol-specific gene activation rather than in derepression. The transcriptional activities of all of the methanol-inducible promoters tested drastically decreased in the trm1Δ strain grown in methanol medium. Thus, Trm1p was shown to be a master transcriptional regulator of methanol-specific gene activation in the methylotrophic yeast C. boidinii.

With respect to derepression, we recently isolated and characterized Trm2p, which is homologous to P. pastoris Mxr1p and S. cerevisiae Adr1p [50]. A C. boidinii mutant (trm2Δ) could not grow on methanol or oleate but could grow on glucose or ethanol. Trm2p was necessary for the activation of the five methanol-inducible promoters tested. The derepressed level of expression of AOD1, which was observed in the trm1Δ strain, decreased in the trm1Δ trm2Δ strain to a level similar to that observed in the trm2Δ strain. These results suggest that Trm2p-dependent derepression is essential for Trm1p-dependent methanol-specific gene activation in C. boidinii (Figure 2(b)).

Recently, a transcriptome analysis of H. polymorpha cells shifted from glucose- to methanol-containing media was reported [51]. As expected, genes involved in methanol metabolism and several PEX genes were highly upregulated after 2 hours of incubation on methanol. Among the 1184 genes that were significantly upregulated at least two-fold, the highest upregulation (>300-fold) was observed for the genes encoding the dissimilation pathway enzyme FDH and the transcription factor Mpp1. Autophagy-related genes (ATG genes; see Section 6) were also upregulated. In P. pastoris, autophagy was shown to be induced at the lag phase of methylotrophic growth, as described below [52].
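As a reading aid, the two-layer logic of Figure 2 can be captured in a few lines of Python. This is a toy model, not the authors' analysis: the relative levels (0, 0.1, 1.0) are placeholders standing in for "repressed", "derepressed" (roughly 10% for C. boidinii AOD1 on glycerol), and "fully induced".

```python
# Toy model of the regulation in Figure 2: Trm2p provides a derepressed
# baseline on nonrepressing carbon sources, and Trm1p adds methanol-specific
# activation on top of it (which, per the text, also requires Trm2p).
def promoter_activity(carbon: str, trm1: bool = True, trm2: bool = True,
                      derepressible: bool = True) -> float:
    """Relative activity; AOD1-like promoters are derepressible, DAS1 is not."""
    if carbon in ("glucose", "ethanol"):
        return 0.0                                   # repressed
    if carbon == "methanol" and trm1 and trm2:
        return 1.0                                   # methanol-specific activation
    return 0.1 if (trm2 and derepressible) else 0.0  # Trm2p-dependent baseline

# AOD1-like behavior in C. boidinii:
print(promoter_activity("glycerol"))                         # 0.1 (derepression)
print(promoter_activity("methanol"))                         # 1.0 (full induction)
print(promoter_activity("methanol", trm1=False))             # 0.1 (trm1Δ: baseline only)
print(promoter_activity("methanol", trm1=False, trm2=False)) # 0.0 (trm1Δ trm2Δ)
# DAS1-like behavior: no derepressed baseline on alternative carbon sources
print(promoter_activity("glycerol", derepressible=False))    # 0.0
```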
## 5. Antioxidative Responses Underlying Methylotrophy in Yeasts
Because methanol metabolism in methylotrophic yeasts inevitably produces hydrogen peroxide, these organisms possess functions that protect against oxidative stress, including the induction of CTA and glutathione peroxidases. In C. boidinii, a CTA-deficient mutant (cta1Δ) was able to grow on methanol as a sole carbon source, although its growth rate was much lower than that of the wild-type strain. The peroxisomal localization of CTA is essential for its function [34]. A 20-kDa peroxisomal peripheral membrane protein of C. boidinii (Pmp20) was identified as a peroxiredoxin, an enzyme with antioxidant activity necessary for methylotrophic growth [35]. Interestingly, the pmp20Δ strain had a more severe growth defect than the cta1Δ strain. Pmp20 is localized to peroxisomes and has the ability to reduce alkyl hydroperoxide species, which suggests a role in the elimination of other reactive oxygen species derived from hydrogen peroxide (Figure 1).

In order for this system of protection against oxidative stress to function properly, sufficient GSH must be available to serve as a cosubstrate in the reactions mediated by glutathione peroxidases (Figure 3). GSH is also utilized to form S-HMG, the formaldehyde conjugate that serves as an intermediate metabolite in the catabolism of methanol. Thus, it is plausible that the amount of GSH available for the catabolic and antioxidative pathways plays a vital role in yeast methylotrophy.

Figure 3
Schematic drawing of GSH dynamics and its regulation by Yap1. The proteins enclosed within the boxes represent factors found to be induced by Yap1 at the level of transcription. Enzymes: Gsh1p: γ-glutamylcysteine synthetase; Gsh2p: glutathione synthetase; Glr1p: glutathione reductase; Gpx: glutathione peroxidase. S-HMG: S-hydroxymethyl glutathione; GSH: reduced form of glutathione; GSSG: oxidized form of glutathione; ROS: reactive oxygen species.

Upregulation of GSH is accomplished through induction of its de novo synthesis and regeneration. The synthesis pathway begins with the reaction catalyzed by Gsh1p (γ-glutamylcysteine synthetase) to form γ-glutamylcysteine. The second and final step, catalyzed by Gsh2p (glutathione synthetase), conjugates L-glycine to γ-glutamylcysteine. Because the Gsh1p-mediated reaction is rate limiting, induction of Gsh1p expression plays a central role in upregulating GSH levels. The regeneration of free GSH from its oxidized form is catalyzed by Glr1p (glutathione reductase) at the expense of NADPH. In addition, the GSH pool is also partially replenished from its conjugate form, S-HMG, via the activities of FLD and FGH. Thus, we anticipated that deciphering the molecular mechanisms governing the induction of these enzymes would be important for elucidating redox-related regulatory mechanisms in methylotrophic yeasts.

A recent study demonstrated that a transcription factor termed Yap1 plays an important role in the upregulation of GSH-related enzymes in P. pastoris [53]. Disruption of PpYAP1 led to defects in the transcriptional induction of the genes encoding PpGlr1p, PpGsh1p, and a glutathione peroxidase, PpGpx1p, and caused severe growth arrest in methanol culture. PpYap1p fused to a fluorescent tag was found to change localization from the cytoplasm to the nucleus several hours after the onset of methanol culture, validating the role of PpYap1 in the antioxidative response. Loss of genes required for GSH regeneration from S-HMG (PpFLD1 or PpFGH1) enhanced Yap1 translocation, which suggests an increased demand for GSH in these mutants. Of note, disruption of PpGLR1 caused a severe growth defect concomitant with an abnormal accumulation of formaldehyde in methanol culture, showing that regeneration of GSH from its oxidized form plays a vital role in methylotrophy in this organism.

The molecular details of the PpYap1p response were further investigated in another study [54]. PpYap1p has two cysteine-rich domains (CRDs), similar to its homologue in S. cerevisiae. Each of the CRDs has two cysteine residues essential for the induction of TRR1 (encoding thioredoxin reductase) in response to hydrogen peroxide treatment. This response was dependent on PpGpx1p, similar to the S. cerevisiae Yap1p response, which requires ScGpx3p. These functional characteristics of PpYap1 are thought to underlie antioxidant activities in methanol culture.
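The balance between GSH consumption (by peroxidases and formaldehyde conjugation) and replenishment (Gsh1p/Gsh2p synthesis, Glr1p reduction of GSSG, and release from S-HMG by FLD/FGH) can be sketched as a toy mass-balance model. All rate constants below are arbitrary illustrative values chosen only to show the qualitative effect of losing Glr1p; they are not measurements from the cited studies.

```python
# Toy Euler integration of the three glutathione pools discussed above.
# Stoichiometry: 2 GSH -> 1 GSSG on oxidation; Glr1p returns GSSG -> 2 GSH;
# GSH + HCHO -> S-HMG; FLD/FGH release GSH again. A small dilution term
# stands in for growth so that the pools settle to a steady state.
def simulate(hours=50.0, dt=0.01, v_syn=0.5, k_ox=0.3, k_glr=0.4,
             k_conj=0.2, k_rel=0.25, k_dil=0.1):
    gsh, gssg, shmg = 1.0, 0.0, 0.0
    for _ in range(int(hours / dt)):
        ox, red = k_ox * gsh, k_glr * gssg
        conj, rel = k_conj * gsh, k_rel * shmg
        gsh += (v_syn - ox + 2.0 * red - conj + rel - k_dil * gsh) * dt
        gssg += (0.5 * ox - red - k_dil * gssg) * dt
        shmg += (conj - rel - k_dil * shmg) * dt
    return round(gsh, 2), round(gssg, 2), round(shmg, 2)

print(simulate())           # wild-type-like: the GSH pool is maintained
print(simulate(k_glr=0.0))  # glr1Δ-like: GSSG accumulates and GSH drops
```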
## 6. Autophagic Activities in the Regulation of Methylotrophy
As mentioned above, one remarkable feature of methylotrophy in eukaryotes is that many of the enzymes required for methanol metabolism are located in peroxisomes. This means that the regulation of methanol metabolism can be accomplished, in part, through the control of organelle homeostasis. Methylotrophic yeasts have been utilized as valuable experimental systems for elucidating the molecular mechanisms of peroxisome biogenesis and degradation [55]. Studies in these yeasts in fact contributed to establishing terms now widely used for the genes involved in peroxisome biogenesis, the so-called PEX genes, and the autophagy-related (ATG) genes. Below, we briefly summarize what is known about the involvement of autophagy in regulating peroxisome homeostasis.

The term autophagy refers to the transport of cytoplasmic components to the lysosome/vacuole, followed by their degradation inside the lysosome/vacuole. Methylotrophic yeasts have peroxisome-selective autophagic machinery to reduce the peroxisome quantity. This activity, termed pexophagy, is induced when nutrient conditions change from those requiring peroxisome function (e.g., growth in methanol culture) to those irrelevant to peroxisomal activities (e.g., growth in glucose or ethanol culture) [56, 57].

Pexophagy has been classified into two modes according to differences in organelle dynamics (Figure 4) [58]. In macropexophagy, the peroxisome is encapsulated within a newly synthesized double-membrane structure termed the pexophagosome. After sequestering the peroxisome from the cytoplasm, the pexophagosome fuses with the limiting membrane of the vacuole, resulting in the release of its content and its inner membrane into the vacuole. In micropexophagy, a portion of the peroxisome surface is covered by a flat double-membrane structure termed the micropexophagy apparatus, or MIPA [59]. The remainder of the peroxisome surface is engulfed by extension of the vacuolar membrane, with fusion between the MIPA and the vacuolar membrane sealing the peroxisome within the vacuolar compartment. To date, micropexophagy has been observed only in P. pastoris.

Figure 4
Model of organelle dynamics observed during pexophagy in methylotrophic yeasts. The green portions of the figure (MIPA and pexophagosome) represent autophagic membrane structures newly synthesized by the action of Atg proteins.

Most proteins responsible for pexophagy are also needed for other autophagic pathways and are now designated Atg proteins, as noted above. This is because in many autophagic pathways, including the two modes of pexophagy, de novo synthesis of membrane structures plays a crucial role, requiring a shared molecular machinery for modulating membrane dynamics. Studies to elucidate the molecular mechanisms by which Atg proteins form membrane structures have been a major focus of research efforts [60].

A recent study demonstrated that autophagic activity also plays an important role in peroxisome biogenesis in methylotrophic yeasts [52]. Autophagy was reported to be induced at lag phase following a shift from growth on glucose to growth on methanol. This induction was observed only in minimal medium, as the addition of excess amino acids inhibits it. Loss of several ATG genes led to a prolonged lag phase, indicating a physiological function for autophagic activity during adaptation to growth on methanol. The carbon source shift forces cells to remodel their metabolic systems and to increase the peroxisome quantity, which requires a substantial supply of amino acids. The induced autophagy (termed lag-phase autophagy) is thought to increase amino acid pools through the degradation of cellular components in the vacuole.
## 7. Perspectives
Heterologous gene expression systems using methylotrophic yeasts are based on the unique methanol metabolism of these organisms and are characterized by strong and tightly regulated methanol-inducible gene expression. We anticipate that additional studies of the molecular basis of methylotrophy in yeasts, focusing on the transcriptional machinery and signal transduction pathways involved in methanol-inducible gene expression and on the mechanisms that regulate organelle homeostasis and redox state, will lead to improvements in these heterologous gene expression systems.

Methylotrophs play important roles in carbon recycling in natural environments. The symbiotic relationship between plants and methylotrophic bacteria has been the focus of recent intensive study [61–63]. In contrast, the relationship between plants and methylotrophic yeasts has not been extensively characterized, although methylotrophic yeasts have often been isolated from plant-related materials [5, 16]. We previously reported that C. boidinii can grow on pectin and that this ability depends on methylotrophy [26]. Further intensive analysis of the molecular basis of yeast methylotrophy is expected to reveal new physiological functions and their importance in natural environments.
---
*Source: 101298-2011-07-07.xml*
# Wind-Induced Vibration Response of an Inspection Vehicle for Main Cables Based on Computer Simulation
**Authors:** Lu Zhang; Shaohua Wang; Peng Guo; Qunsheng Wang
**Journal:** Shock and Vibration
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1012987
---
## Abstract
This paper presents a simulation approach based on the finite element method (FEM) to analyze the wind-induced vibration response of an inspection vehicle for main cables. First, two finite element (FE) models of a suspension bridge and a main cable-inspection vehicle coupled system are established using MIDAS Civil software and ANSYS software, respectively. Second, the mean wind speed distribution characteristics at a bridge site are analyzed, and the wind field is simulated based on the spectral representation method (SRM). Third, a modal analysis and a wind-induced vibration response transient analysis of the suspension bridge FE model are completed. Fourth, the vibration characteristics of the inspection vehicle are analyzed by applying fluctuating wind conditions and main cable vibration displacements in the main cable-inspection vehicle coupled FE model. Finally, based on the ISO 2631-1-1997 standard, a vehicle ride comfort evaluation is performed. The results of the suspension bridge FE modal analysis are in good accordance with those of the experimental modal test. The effects of the working height, number of nonworking compressing wheels, and number of nonworking driving wheels during driving are discussed. When the average wind speed is less than 13.3 m/s, the maximum total weighted root mean square acceleration (av) is 0.1646 m/s² and the vehicle ride comfort level is classified as "not uncomfortable." This approach provides a foundation for the design and application of inspection vehicles.
---
## Body
## 1. Introduction
As the most important component of a suspension bridge, the main cable is corroded by long-term exposure to natural factors (e.g., wind, rain, freezing temperatures, temperature changes, and humidity changes), which can endanger the safety and reduce the service life of the bridge [1–3]. Therefore, it is necessary to inspect and maintain the main cable regularly. The traditional method of inspecting the main cable is manual climbing, which has many shortcomings, such as blind inspection areas, difficulty in climbing at high altitudes, low efficiency, and potential safety hazards. To improve the efficiency of cable inspection, various types of crawling robots have been developed [4–6]. These light crawling robots can perform unmanned inspections of slings and cables, but they have difficulty inspecting the main cable of a suspension bridge with cable bands and other ancillary structures, and they cannot provide a suitable maintenance platform. Therefore, it is of great significance to develop an inspection vehicle suitable for large main cables.

To inspect the main cable of a suspension bridge, an inspection vehicle was designed and manufactured, as shown in Figure 1. The inspection vehicle adopts a wheeled walking scheme of "10 sets of driving wheels + 3 pairs of compressing wheels (driven wheels)" to surround the main cable. The vehicle body is a Π-shaped steel frame, and the other parts are aluminum alloy trusses. The whole vehicle weighs 8.5 t, including a 1 t live load. Three sets of lock devices are installed on each pair of revolving plates at the bottom of the vehicle body to increase the rigidity of the vehicle body. The inspection vehicle uses automatic control technology to perform the alternate lifting and lowering of the driving wheels, the advancing and retracting of the compressing wheels, the rotating of the flaps, and the switching of the lock devices by controlling hydraulic cylinders, allowing it to straddle obstacles such as cable bands and hangers. The vehicle can identify cable bands automatically, climb over cable bands actively, and crawl at a large angle (30°) when manned.

Figure 1
(a) An inspection vehicle for a main cable. (b) Structure of the vehicle.
(a)
(b)

The inspection vehicle drives on the main cable, which is a typical wind-sensitive structure. Wind-induced vibration directly affects the safety of the equipment and the efficiency of the workers [7]. Therefore, the dynamic properties of the main cable-inspection vehicle coupled system under wind loads should be studied when the inspection vehicle is designed. Researchers have conducted numerous field measurements to investigate the wind-induced vibration characteristics of suspension bridges [8–10]. Field measurement is the most direct and reliable method to obtain site wind data [11–13]. However, this approach requires a long period and is easily affected by environmental conditions. Li et al. [14], Chen et al. [15], and Li et al. [16] investigated the wind characteristics of different bridge sites through wind tunnel tests to provide a basis for bridge design. Although this method is not affected by the geographical environment, accurate models must be built, and the cost can be extremely high. With the development of theoretical studies and computer technology, simulation experiments have been increasingly used to study the effects of various factors on bridge components at the initial stage of research, reducing the number of field measurements and tunnel tests [17, 18]; field measurements and tunnel tests are then usually reserved for final validation. Bai et al. [19] and Helgedagsrud et al. [20] performed and validated simulation experiments by comparing different simulation results with those of wind tunnel tests. However, these studies focused on the wind-induced vibration of suspension bridges themselves, and few have addressed the wind-induced vibration of mobile devices on the main cable.

The inspection vehicle belongs to the broader field of mobile robots. Most previous studies in this field focused on robot design and the vibration caused by climbing. Kim et al. [4] and Cho et al. [5] manufactured a wheel-based robot and a caterpillar-based robot for the inspection of hanger ropes, and the climbing abilities of both robots were validated in indoor experimental environments. Xu et al. [6] designed a cable-climbing robot for cable-stayed bridges, and its obstacle-climbing performance was simulated and validated in the laboratory. However, these robots are all small unmanned robots for which only structural safety must be considered. For a manned inspection vehicle, vibration may cause discomfort, annoy the workers, and impair their performance [21]. These factors must be studied in the design process. The wind-induced vibration of the inspection vehicle is a complicated solid-wind-cable interaction problem, and the corresponding action mechanism has not yet been effectively studied.

The simulation of a random wind field is essential for the time-domain vibration analysis of the inspection vehicle under wind loads. Based on the Monte Carlo method, the digital filtering method and the spectral representation method (SRM) [22] are the two basic approaches used to simulate random wind fields. SRM, the method adopted in this paper, has been widely used in the engineering community due to its accuracy and simplicity [23, 24]. To improve the efficiency of SRM for the simulation of multidimensional wind fields, Tao et al. [23, 25], Huang et al. [24], and Xu et al. [26] proposed different optimization methods.
With the development of sensor technology, the study of wind fields has gradually been extended from stationary fields to nonstationary fields, especially for extreme winds [27, 28]. However, no empirical model is available for nonstationary winds due to the difficulties in mathematical treatment and their nonergodic characteristics [28]. Therefore, stationary wind is the focus of this paper.

The purpose of this paper is to propose a method of assessing the vibration response of a main cable-inspection vehicle coupled system under fluctuating wind conditions. This paper is organized as follows: In Section 2, the process for establishing a suspension bridge finite element (FE) model and a main cable-inspection vehicle coupled FE model is introduced in detail. Taking the Qingshui River Bridge as an example, the mean wind speed distribution characteristics at the bridge site are analyzed and the wind field is simulated in Section 3. Section 4 presents the ride comfort evaluation method based on the ISO 2631-1-1997 standard [21]. A modal analysis of the suspension bridge FE model, transient analyses of both FE models, and a ride comfort evaluation of the vehicle are conducted in Section 5. Finally, the conclusions and future work are discussed in Section 6.
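To illustrate the SRM idea before the detailed wind-field section, the following single-point Python sketch synthesizes a fluctuating wind speed record as a sum of cosines with random phases whose amplitudes follow a target spectrum. The Davenport-type spectrum and every parameter value here are illustrative assumptions; the paper's actual multi-point simulation additionally requires spatial coherence between loading nodes, which this sketch omits.

```python
# Minimal single-point spectral representation method (SRM):
#   u(t) = sum_k sqrt(2 * S(f_k) * df) * cos(2*pi*f_k*t + phi_k),
# with phi_k independent and uniform on [0, 2*pi).
import numpy as np

def davenport(f, U10=13.3, K=0.00215):
    """One-sided Davenport along-wind spectrum S_u(f) in (m/s)^2/Hz."""
    x = 1200.0 * f / U10
    return 4.0 * K * U10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

def srm_sample(T=600.0, dt=0.1, N=1024, f_max=2.0, seed=0):
    rng = np.random.default_rng(seed)
    df = f_max / N
    f = (np.arange(N) + 0.5) * df               # frequency grid (avoids f = 0)
    amp = np.sqrt(2.0 * davenport(f) * df)      # SRM amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, N)      # random phases
    t = np.arange(0.0, T, dt)
    u = (amp * np.cos(2.0 * np.pi * np.outer(t, f) + phi)).sum(axis=1)
    return t, u                                 # fluctuating component u(t)

t, u = srm_sample()
print(round(u.std(), 2))  # sample RMS, approximating the spectrum's target RMS
```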
## 2. Finite Element Model
The unit length mass of the inspection vehicle is much smaller than that of the suspension bridge, so the inspection vehicle has little influence on the vibration amplitude and acceleration of the cable [29]. Therefore, the influence of the inspection vehicle on the vibration response of the bridge is not considered in this study. To simplify the analysis, a suspension bridge FE model and a main cable-inspection vehicle coupled FE model are established separately. MIDAS Civil software is used for the suspension bridge FE model because of the accuracy and accessibility of its built-in formulas. For the main cable-inspection vehicle coupled FE model, ANSYS software is used due to the variety of its element types and the flexibility of the ANSYS Parametric Design Language (APDL). The dynamic displacement of the main cable, recorded over time from the suspension bridge FE model, is applied as the excitation source for the main cable-inspection vehicle coupled FE model, realizing a one-way coupling between the two FE models.
### 2.1. Suspension Bridge FE Model
The Qingshui River Bridge, located in Guizhou Province, China, is 27 m wide with a main span of 1130 m. The structure is a single-span steel truss suspension bridge located in a mountainous area, connecting the expressway from Guizhou to Weng'an, as shown in Figure 2, where xb is the axial direction of the bridge, yb is the direction of the bridge width, and zb is the vertical direction.

Figure 2
Configuration of the Qingshui River Bridge (unit: m).

The FE model of the Qingshui River Bridge was built using MIDAS Civil software, as presented in Figure 3. Beam elements are used for the bridge deck and towers. Truss elements, which are tension-only, are used for the main cable and hangers. The structural parameters of the bridge FE model are listed in Table 1. The bottom of each tower and each anchorage are fixed. The bridge deck is connected to the two main towers by elastic contacts in both the y and z directions, with a stiffness of 1,000,000 kN/m. One end of the bridge deck is constrained in the x direction. The weights of the bridge deck, the two main cables, and all hangers are added to the nodes of the bridge deck and main cables in the form of uniformly distributed mass loads. Fluctuating wind loads are applied simultaneously to the bridge deck and the main cables at the elasticity center nodes in a given time sequence. The damping ratio is defined as 0.005.

Figure 3
The FE model of the suspension bridge.

Table 1
Parameters of the suspension bridge.
| Part | A (m²) | Iy (m⁴) | Iz (m⁴) | ρ (kg/m³) | W (kN/m) | E (GPa) |
| --- | --- | --- | --- | --- | --- | --- |
| Main tower | 56.0 | 752.6 | 762.3 | 2500 | — | 34.5 |
| Bridge deck | 4.594 | 6.78 | 223.41 | — | 256.0 | 205 |
| Main cable | 0.3526 | — | — | — | 27.679 | 192 |
| Hanger | 0.0017 | — | — | — | 0.134 | 192 |

A, cross-sectional area; Iy, vertical section moment of inertia; Iz, transverse section moment of inertia; E, Young's modulus; ρ, density; W, weight per unit length.
### 2.2. Main Cable-Inspection Vehicle Coupled FE Model
The structure of the main cable-inspection vehicle coupled system is shown in Figure 4, where xv is the width direction of the bridge deck, yv is the direction perpendicular to the tangential direction of the main cable centerline, and zv is the tangential direction of the main cable centerline.

Figure 4
Configuration of the main cable-inspection vehicle coupled system (unit: mm).

The FE model of the main cable-inspection vehicle coupled system was established in ANSYS software, as shown in Figure 5. The main cable is defined as a rigid body. SHELL 63 elements are used for the vehicle body, and BEAM 188 elements are used for the working platforms, compressing wheel brackets, and equipment box truss. The guardrail, compressing wheels, driving wheels, power system, and control system are omitted, and the weight of each part is added to the model by adjusting the material density of the local structure. The material parameters are shown in Table 2. The contacts between the driving wheels and the main cable and between the compressing wheels and the main cable are viscoelastic and are modeled as spring-damping contacts. The parameters of the spring-damping contacts are shown in Table 3. Fluctuating wind loads are applied to the vehicle at the windward nodes in a given time sequence. The dynamic displacement of the main cable recorded over time from the suspension bridge FE model is taken as the excitation source.

Figure 5
The main cable-inspection vehicle coupled FE model.

Table 2
The parameters of the material used in the coupled FE model.
| Part | E (MPa) | ν | ρ (kg/m³) |
| --- | --- | --- | --- |
| Main cable | Rigid body | — | — |
| Vehicle body | 2.05 × 10⁵ | 0.3 | 7850 |
| Driving/compression wheel | 2.05 × 10⁵ | 0.3 | 60000 |
| Working platform | 6.9 × 10⁴ | 0.33 | 4000 |
| Equipment box | 6.9 × 10⁴ | 0.33 | 85000 |

E, Young's modulus; ν, Poisson's ratio; ρ, density.

Table 3
The parameters of the spring-damping contacts used in the coupled FE model.
| Parameter | kd (N/mm) | cd (N·s/mm) | kc (N/mm) | cc (N·s/mm) |
| --- | --- | --- | --- | --- |
| Value | 285.3 | 0.2 | 1500 | 0.2 |

kd and cd are the spring stiffness and viscous damping coefficient of the contact between the driving wheels and the main cable, respectively; kc and cc are the spring stiffness and viscous damping coefficient of the contact between the compressing wheels and the main cable, respectively.

For consistency in the comparison of results, the vibration of node P, located at the bottom of the middle of the working platform on the windward side (see Figure 5), is investigated in the following analysis.
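The spring-damping contact in Table 3 corresponds to a Kelvin-Voigt element, whose normal force is the sum of an elastic and a viscous term. The short sketch below is a hedged illustration of how such a contact force is evaluated; the compression-only clamp and the example deflection of 2 mm are assumptions for demonstration, not values from the paper.

```python
# Kelvin-Voigt contact force: F = k * delta + c * delta_dot, clamped at zero
# so the wheel-cable contact cannot transmit tension (an assumed convention).
def contact_force(delta_mm: float, delta_rate_mm_s: float,
                  k_n_per_mm: float, c_ns_per_mm: float) -> float:
    """Normal contact force in N for a given deflection and deflection rate."""
    return max(0.0, k_n_per_mm * delta_mm + c_ns_per_mm * delta_rate_mm_s)

# Table 3 values: driving wheel kd = 285.3 N/mm, cd = 0.2 N*s/mm;
# compressing wheel kc = 1500 N/mm, cc = 0.2 N*s/mm.
print(contact_force(2.0, 0.0, 285.3, 0.2))   # driving wheel, static: ~570.6 N
print(contact_force(2.0, 0.0, 1500.0, 0.2))  # compressing wheel, static: 3000 N
```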
### 2.3. Coordinate Relationship
The inspection vehicle drives along the main cable during work, which changes both the inclination angle (α) and the working height (H) of the vehicle. According to the different heights of the main cable above the bridge deck, H is set at 5 m, 35 m, 65 m, 95 m, and 113 m. The relation between H and α is shown in Table 4. According to the coordinate systems selected when establishing the FE models, the coordinate relationships between the suspension bridge and the inspection vehicle and between the vehicle and a worker are shown in Figure 6.

Table 4: The relation between H and α.

| H | 5 m | 35 m | 65 m | 95 m | 113 m |
| --- | --- | --- | --- | --- | --- |
| α | 0° | 12.1° | 16.8° | 20.2° | 21.3° |

Figure 6: Coordinate relationship. (a) Bridge and vehicle. (b) Vehicle and worker.

The displacements obtained from the transient analysis of the suspension bridge FE model need to be transferred to the main cable-inspection vehicle coupled FE model. The displacement relations in the two coordinate systems are as follows:

$$D_{xv}=-D_{yb},\qquad D_{yv}=D_{xb}\sin\alpha+D_{zb}\cos\alpha,\qquad D_{zv}=-D_{xb}\cos\alpha+D_{zb}\sin\alpha,\tag{1}$$

where Dxv, Dyv, and Dzv are the displacements of the main cable in the X, Y, and Z directions of the vehicle coordinate system Ov—XvYvZv, respectively, and Dxb, Dyb, and Dzb are the displacements of the main cable in the X, Y, and Z directions of the bridge coordinate system Ob—XbYbZb, respectively.

The acceleration obtained from the transient analysis of the main cable-inspection vehicle coupled FE model is transferred to the foot of the worker. The acceleration relations in the two coordinate systems are as follows:

$$a_x=a_{xv},\qquad a_y=a_{yv}\sin\alpha-a_{zv}\cos\alpha,\qquad a_z=a_{zv}\sin\alpha+a_{yv}\cos\alpha,\tag{2}$$

where axv, ayv, and azv are the accelerations of the inspection vehicle (node P) in the X, Y, and Z directions of the vehicle coordinate system Ov—XvYvZv, respectively, and ax, ay, and az are the accelerations transferred to the foot of the worker in the X, Y, and Z directions of the worker coordinate system O—XYZ, respectively.
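As a quick cross-check of equations (1) and (2), a minimal Python sketch of the two coordinate transformations is given below. The function names are our own; the angle comes from Table 4, and the example accelerations are the node P peaks reported later in Section 5.3.

```python
import numpy as np

def cable_disp_to_vehicle(Dxb, Dyb, Dzb, alpha):
    """Equation (1): main cable displacements, bridge system -> vehicle system."""
    s, c = np.sin(alpha), np.cos(alpha)
    return -Dyb, Dxb * s + Dzb * c, -Dxb * c + Dzb * s

def vehicle_acc_to_worker(axv, ayv, azv, alpha):
    """Equation (2): node P accelerations, vehicle system -> worker system."""
    s, c = np.sin(alpha), np.cos(alpha)
    return axv, ayv * s - azv * c, azv * s + ayv * c

# H = 35 m corresponds to alpha = 12.1 deg (Table 4); the acceleration peaks
# (7.86, 3.81, 0.55) m/s^2 are those reported for node P at H = 5 m.
alpha = np.deg2rad(12.1)
print(vehicle_acc_to_worker(7.86, 3.81, 0.55, alpha))
```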
## 3. Wind Speed and Wind Load Simulation
### 3.1. Wind Speed Simulation
The wind speed V(t) in nature over a given time interval is considered to be the sum of the mean wind speed V̄ and the fluctuating wind speed v(t):

$$V(t)=\bar V+v(t).\tag{3}$$

To obtain the mean wind speed distribution characteristics for the Qingshui River Bridge, a wind measurement tower was built approximately 100 m away from the main tower of the bridge on the Guizhou side. An NRG 40C anemometer (NRG Systems) was mounted 10 m above the bridge deck to record the 10-min mean wind speed V̄10. Based on the records from 33 months (January 2014 to September 2016), the frequency of each mean wind speed interval was calculated, as shown in Figure 7. The lognormal probability density function, which is extensively used to fit wind speed distributions over land [30], is fitted to these data [31]:

$$f(\bar V)=\frac{1}{\bar V\sigma\sqrt{2\pi}}\exp\left[-\frac{(\ln\bar V-\mu)^2}{2\sigma^2}\right],\tag{4}$$

where σ and μ are the shape and scale parameters, respectively.

Figure 7: Probability distribution of the maximum daily mean wind speed.

According to the maximum likelihood estimation (MLE) method, the shape and scale parameters are calculated as follows [31]:

$$\mu=\frac{1}{M}\sum_{i=1}^{M}\ln v_i,\qquad\sigma=\left[\frac{1}{M}\sum_{i=1}^{M}\left(\ln v_i-\mu\right)^2\right]^{1/2},\tag{5}$$

where M is the total number of wind speed values and vi is the wind speed at time step i (i = 1, 2, …, M).

Figure 7 shows that the maximum daily mean wind speed is 13.3 m/s. Because a real-time wind speed alarm system is installed on the inspection vehicle, the influence of height on wind speed is neglected. Therefore, the design maximum working mean wind speed of the inspection vehicle is taken as V̄max = 13.3 m/s.

Because the influence of height on wind speed is neglected, the Davenport power spectrum [32] is used to simulate the fluctuating wind field along the bridge. The Davenport power spectrum is expressed as follows [32]:

$$S_v(n)=\frac{4k\bar V_{10}^{\,2}x^2}{n\left(1+x^2\right)^{4/3}},\qquad x=\frac{1200n}{\bar V_{10}},\tag{6}$$

where k is a coefficient that depends on the roughness of the surface and n is the fluctuating wind frequency.

The wind field of a suspension bridge in each direction can be considered a one-dimensional m-variate (1D-mV) zero-mean homogeneous random process V(t) = [V1(t), V2(t), …, Vm(t)]T [23], where T denotes the transpose. The cross power spectral density (CPSD) matrix of V(t) is

$$\mathbf S(n)=\begin{bmatrix}S_{11}(n)&S_{12}(n)&\cdots&S_{1m}(n)\\S_{21}(n)&S_{22}(n)&\cdots&S_{2m}(n)\\\vdots&\vdots&\ddots&\vdots\\S_{m1}(n)&S_{m2}(n)&\cdots&S_{mm}(n)\end{bmatrix}.\tag{7}$$

The CPSD between any two components Vj(t) and Vq(t) can be expressed as

$$S_{jq}(n)=\sqrt{S_j(n)S_q(n)}\,\gamma_{jq}(n)e^{i\theta_{jq}(n)},\tag{8}$$

where γjq(n) is the coherence function, for which the Davenport empirical function [33] is used (j, q = 1, 2, …, m), and θjq(n) is the corresponding phase angle [34].

The CPSD matrix can be factorized with the Cholesky decomposition:

$$\mathbf S(n)=\mathbf H(n)\mathbf H^{T\ast}(n),\tag{9}$$

where the superscript ∗ denotes the complex conjugate and H(n) is a lower triangular matrix.

According to the SRM, the fluctuating wind speed at any point vj(t) is given as follows [22]:

$$v_j(t)=2\sqrt{\Delta n}\sum_{l=1}^{j}\sum_{k=1}^{N}\left|H_{jl}(n_{lk})\right|\cos\left[n_{lk}t-\theta(n_{lk})+\phi_{lk}\right],$$

$$n_{lk}=k\,\Delta n-\frac{m-l}{m}\,\Delta n,\qquad\theta(n_{lk})=\tan^{-1}\frac{\operatorname{Im}\left[H_{jl}(n_{lk})\right]}{\operatorname{Re}\left[H_{jl}(n_{lk})\right]},\tag{10}$$

where N is the total number of frequency intervals, Δn = nu/N, nu is the cutoff frequency, nlk is the double-index frequency, Hjl(nlk) are the elements of H(n), θ(nlk) is the phase angle of the complex element of H(n) at frequency nlk, and ϕlk is a random phase angle uniformly distributed over [0, 2π).

Referring to the JTG/T D60-01-2004 standard [35], the bridge site is classified as Class D, k is taken as 0.01291, and the ground roughness index is taken as 0.3. Using the fast Fourier transform (FFT) and the random number generator in MATLAB, wind speed curves in the time domain can be simulated. Thirty-seven simulation points are uniformly distributed along the length of the bridge deck. This paper considers only the effect of the cross-bridge wind, and the angle between the incoming wind direction and the primary flow plane is taken as 0°. The simulated fluctuating wind speed time history at the middle-span position (z = 10 m; V̄10 = 13.3 m/s) is presented in Figure 8.
Figure 8: Simulated wind speed time history at the middle-span position.
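For illustration, the computations in this subsection can be sketched in a few lines of code. The paper performs them in MATLAB; the versions below use Python/NumPy and are structural illustrations under stated assumptions, not reproductions of the paper's wind field. First, the MLE fit of equation (5), applied to hypothetical wind speed records:

```python
import numpy as np

v = np.array([3.2, 5.1, 7.4, 9.0, 13.3])  # hypothetical daily mean wind speeds (m/s)
mu = np.log(v).mean()                      # scale parameter mu, equation (5)
sigma = np.log(v).std()                    # shape parameter sigma, equation (5)
```

Next, the spectral-representation synthesis of equation (10). The number of points, their spacing, the cutoff frequency, and the exponential form of the Davenport coherence (with decay coefficient c_x) are all assumptions of this sketch:

```python
import numpy as np

V10 = 13.3    # design mean wind speed at deck height (m/s)
K = 0.01291   # surface roughness coefficient for the Class D site

def davenport_spectrum(n):
    """Davenport spectrum S_v(n) of equation (6); n in Hz."""
    x = 1200.0 * n / V10
    return 4.0 * K * V10 ** 2 * x ** 2 / (n * (1.0 + x ** 2) ** (4.0 / 3.0))

def coherence(n, dx, c_x=16.0):
    """Davenport-type exponential coherence; the decay coefficient c_x is assumed."""
    return np.exp(-c_x * n * abs(dx) / V10)

def simulate_fluctuating_wind(m=4, dx=30.0, N=512, nu=2.0, dt=0.25, T=600.0, seed=1):
    """Spectral-representation synthesis of equation (10) for m points spaced dx apart."""
    rng = np.random.default_rng(seed)
    dn = nu / N                                  # frequency step (Hz)
    t = np.arange(0.0, T, dt)
    x = np.arange(m) * dx                        # point coordinates along the deck
    phi = rng.uniform(0.0, 2.0 * np.pi, (m, N))  # random phases phi_lk
    v = np.zeros((m, t.size))
    for l in range(m):                           # source point index (zero-based)
        for k in range(1, N + 1):                # frequency line index
            n_lk = k * dn - (m - 1 - l) / m * dn # double-index frequency
            # CPSD matrix at n_lk; the phase angle theta_jq is taken as zero,
            # so the matrix is real and its Cholesky factor plays the role of H(n) in (9)
            S = davenport_spectrum(n_lk) * np.array(
                [[coherence(n_lk, xi - xj) for xj in x] for xi in x])
            H = np.linalg.cholesky(S)
            arg = 2.0 * np.pi * n_lk * t + phi[l, k - 1]  # n in Hz, hence the 2*pi
            for j in range(l, m):                # H is lower triangular
                v[j] += 2.0 * np.sqrt(dn) * H[j, l] * np.cos(arg)
    return t, v

t, v = simulate_fluctuating_wind()
print(v.shape, np.std(v, axis=1))                # one fluctuating series per point
```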
### 3.2. Wind Load Simulation
Wind load on a bridge can be classified into three parts: static wind load, buffeting load, and self-excited load. Chen et al. found that the effect of the self-excited force is limited when the wind speed is lower than the design basic wind speed [36]. Therefore, only the static wind load and the buffeting load on the bridge are considered in this paper.

The wind loads per unit length, including the static wind and buffeting loads, acting at the elasticity center of the bridge deck/cable or on the windward nodes of the inspection vehicle are defined as follows:

$$L_w(t)=\frac{1}{2}\rho BC_LV(t)^2,\qquad D_w(t)=\frac{1}{2}\rho HC_DV(t)^2,\qquad M_w(t)=\frac{1}{2}\rho B^2C_MV(t)^2,\tag{11}$$

where Lw(t), Dw(t), and Mw(t) are the lift, drag, and torque wind forces, respectively; ρ is the air density; B is the width of the bridge deck; H is the height of the bridge deck, the diameter of the main cable, or the windward projection height of the inspection vehicle; and CL, CD, and CM are the lift, drag, and torque wind force coefficients, respectively. All three forces are considered for the deck, but only the drag force is considered for the cables and the vehicle. According to the section model tests of the bridge deck in the wind tunnel completed by Wang et al. [37], CL, CD, and CM for the deck are taken as 0.2, 1.5, and 0.02, respectively. Referring to the JTG/T D60-01-2004 standard [35], CD is taken as 0.7 for the cables and 1.8 for the vehicle.
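A direct transcription of equation (11) is short enough to show. In the sketch below, the air density value and the stand-in speed history are assumptions for illustration; the deck dimensions and force coefficients are those given above and in Table 6 (B = 27 m, H = 7 m, CL = 0.2, CD = 1.5, CM = 0.02).

```python
import numpy as np

RHO = 1.225   # air density (kg/m^3); standard sea-level value assumed

def wind_loads(V, B, H, C_L, C_D, C_M):
    """Per-unit-length lift, drag, and torque of equation (11) for a speed history V(t)."""
    q = 0.5 * RHO * V ** 2            # dynamic pressure time history
    return q * B * C_L, q * H * C_D, q * B ** 2 * C_M

V = 13.3 + np.random.default_rng(0).normal(0.0, 1.5, 6000)   # stand-in for V(t)
L_w, D_w, M_w = wind_loads(V, B=27.0, H=7.0, C_L=0.2, C_D=1.5, C_M=0.02)
print(D_w.mean())   # mean drag per unit deck length (N/m)
```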
## 4. Comfort Evaluation
With the increasing awareness of human physiological and behavioral responses to vibration, the comfort problem in vibration environments has extended from the ride comfort of automobiles to the operating comfort of engineering equipment and has received increased attention [38, 39]. In this paper, the evaluation method recommended by the ISO 2631-1-1997 standard [21] is used to evaluate the perception and ride comfort of a worker driving an inspection vehicle under vibration. According to the standard, the acceleration time history signals in all directions transmitted from the working platform floor to a worker through the feet are transformed into the frequency domain using the FFT. The frequency-weighted root mean square (rms) acceleration aw is determined by weighting and appropriately adding the one-third octave band data as follows [21]:

$$a_w=\left[\sum_k\left(W_ka_k\right)^2\right]^{1/2},\tag{12}$$

where Wk is the weighting factor of the kth one-third octave band given by the ISO 2631-1-1997 standard and ak is the rms acceleration of the kth one-third octave band.

In orthogonal coordinates, the vibration total value of the weighted rms acceleration av is calculated from aw in the three directions as follows [21]:

$$a_v=\left(k_x^2a_{wx}^2+k_y^2a_{wy}^2+k_z^2a_{wz}^2\right)^{1/2},\tag{13}$$

where awx, awy, and awz are the frequency-weighted rms accelerations with respect to the orthogonal coordinate axes x, y, and z, respectively, and kx, ky, and kz are the multiplying factors given by the ISO 2631-1-1997 standard [21].

The relationship between av and subjective sensory comfort is shown in Table 5 [21]. After calculating av, it is convenient to quantitatively evaluate the ride comfort of the inspection vehicle.

Table 5: Subjective criteria for ride comfort.

| av (m/s²) | Comfort level |
| --- | --- |
| <0.315 | Not uncomfortable |
| 0.315∼0.63 | A little uncomfortable |
| 0.5∼1.0 | Fairly uncomfortable |
| 0.8∼1.6 | Uncomfortable |
| 1.25∼2.5 | Very uncomfortable |
| >2.0 | Extremely uncomfortable |
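The evaluation chain of equations (12) and (13) plus Table 5 can be sketched as follows. The one-third octave band rms values are estimated here from FFT band energies, which is a simplification of the filter banks prescribed by the standard, and the weighting factors Wk and multiplying factors kx, ky, kz must be taken from ISO 2631-1-1997; the placeholder values below are illustrative only.

```python
import numpy as np

def band_rms(signal, fs, centers):
    """rms acceleration a_k per one-third octave band, estimated from FFT band energy."""
    n = signal.size
    freq = np.fft.rfftfreq(n, d=1.0 / fs)
    power = 2.0 * (np.abs(np.fft.rfft(signal)) / n) ** 2   # single-sided power per bin
    edges = [(fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)) for fc in centers]
    return np.array([np.sqrt(power[(freq >= lo) & (freq < hi)].sum())
                     for lo, hi in edges])

def weighted_rms(signal, fs, centers, Wk):
    """Frequency-weighted rms acceleration a_w, equation (12)."""
    return np.sqrt(np.sum((Wk * band_rms(signal, fs, centers)) ** 2))

def total_weighted_rms(aw, k=(1.0, 1.0, 1.0)):
    """Vibration total value a_v, equation (13); aw and k hold the x, y, z values."""
    return np.sqrt(sum((ki * awi) ** 2 for ki, awi in zip(k, aw)))

def comfort_level(av):
    """Subjective comfort scale of Table 5 (the standard's bands overlap)."""
    for limit, label in [(0.315, "not uncomfortable"), (0.63, "a little uncomfortable"),
                         (1.0, "fairly uncomfortable"), (1.6, "uncomfortable"),
                         (2.5, "very uncomfortable")]:
        if av < limit:
            return label
    return "extremely uncomfortable"

# Synthetic 2 Hz acceleration signal; Wk here is a placeholder, not the ISO values.
sig = np.sin(2 * np.pi * 2.0 * np.arange(0.0, 60.0, 0.01))
aw = weighted_rms(sig, 100.0, centers=[1.0, 1.25, 1.6, 2.0, 2.5], Wk=np.full(5, 0.5))
print(comfort_level(total_weighted_rms((aw, aw, aw))))
print(comfort_level(0.1646))   # the paper's governing case -> "not uncomfortable"
```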
## 5. Results and Analysis
### 5.1. Suspension Bridge FE Model Verification
The block-Lanczos method was used in the modal analysis of the suspension bridge to verify the correctness of the proposed suspension bridge FE model. To measure the modal characteristics of the suspension bridge, a full-bridge aeroelastic model was constructed that was scaled to C = 1/100 of the original. According to the similarity criterion suggested in the JTG/T D60-01-2004 standard [35], the major design parameters of the model are listed in Table 6. The full-bridge aeroelastic model was composed of the bridge deck, main towers, main cables, hangers, and bases. The main beam of the model adopted aluminum chords and plastic diagonals. The bridge deck consisted of 37 sections with a 2 mm gap between adjacent sections and a U connection between two sections. Each tower contained a steel core beam and wooden boards that provided the aerodynamic shape. Steel strand provided the stiffness of each cable, and an iron block was wrapped around the outside of the cable to control its weight. Each hanger consisted of a steel wire without shear stiffness but with high tension stiffness. Lead blocks installed inside the bridge deck model were used to adjust its weight. Plastic boards were used to simulate the aeroelastic shape of the deck rail. The test was conducted in the Industrial Wind Tunnel (XNJD-3) of Southwest Jiaotong University, as shown in Figure 9. The dynamic response of the model was measured by the forced oscillation method [14]. Two laser displacement sensors were used to obtain the vibration signal in the test. A comparison of the results of the FE modal analysis and the experimental modal test for the first 4 vibration modes is presented in Table 7. The table shows that the modes of vibration are fully consistent and that the difference in frequency values is less than 5%. Therefore, the FE model of the suspension bridge has high accuracy and can effectively reflect the dynamic characteristics of the suspension bridge.

Table 6: Major design parameters of the full-bridge aeroelastic model.

| Parameter | Unit | Similarity ratio | Value of the real bridge | Value of the model |
| --- | --- | --- | --- | --- |
| Total length of bridge deck (L) | m | C | 1130 | 11.3 |
| Width of bridge deck (B) | m | C | 27 | 0.27 |
| Height of bridge deck (H) | m | C | 7 | 0.07 |
| Height of main tower (Ht) | m | C | 230 | 2.3 |
| Vertical stiffness of bridge deck, EIy | Nm² | C⁵ | 1.39 × 10¹² | 1.39 × 10² |
| Transverse stiffness of bridge deck, EIz | Nm² | C⁵ | 4.58 × 10¹³ | 4.58 × 10³ |
| Along-bridge stiffness of the bottom of main tower, EIy | Nm² | C⁵ | 2.63 × 10¹³ | 2.63 × 10³ |
| Transverse stiffness of the bottom of main tower, EIz | Nm² | C⁵ | 12.60 × 10¹³ | 12.60 × 10³ |

Figure 9: Test model of the suspension bridge.

Table 7: Comparison of vibration modes.

| Mode no. | Experimental modal test (Hz) | FE modal analysis (Hz) | Difference (%) | Mode of vibration |
| --- | --- | --- | --- | --- |
| 1 | 0.0844 | 0.0818 | 3.1 | Symmetric lateral vibration |
| 2 | 0.1660 | 0.1608 | 3.1 | Antisymmetric vertical vibration |
| 3 | 0.1777 | 0.1812 | 2.0 | Symmetric vertical vibration |
| 4 | 0.2435 | 0.2542 | 4.4 | Antisymmetric lateral vibration |
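The similarity ratios in Table 6 are easy to sanity-check numerically; the short helper below, written for illustration only, reproduces the model column from the real-bridge column (lengths scale by C and stiffness values by C⁵).

```python
C = 1 / 100   # geometric scale of the aeroelastic model

def model_value(real_value, exponent):
    """Scale a prototype quantity by the similarity ratio C**exponent (Table 6)."""
    return real_value * C ** exponent

print(model_value(1130, 1))      # deck length -> 11.3 m
print(model_value(1.39e12, 5))   # deck vertical stiffness EIy -> 139.0 N*m^2
```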
### 5.2. Wind-Induced Vibration Response of the Main Cable
After validation, the suspension bridge FE model can be used to analyze the vibration characteristics of the main cable with fluctuating winds as the excitation source. Figure 10 shows the vibration displacement time history of the main cable node at the middle-span position, 5 m above the bridge deck surface, in the bridge coordinate system Ob—XbYbZb. The vibration frequency of the main cable is 0.0815 Hz, which is consistent with the first-order frequency of the suspension bridge. The results are also consistent with the conclusion of Huang et al. [40]. The main vibration direction of the main cable is the Y direction (lateral direction). The peak value of the Y-direction vibration displacement is 188.7 mm, and the peak values of the X-direction and Z-direction vibration displacements are less than 0.1 mm. Therefore, only the Y-direction vibration of the main cable node is considered in the study of the wind-induced vibration response in the main cable-inspection vehicle coupled FE model. As the height of the main cable node increases, the constraint on the node from the cable saddle becomes stronger and the peak value of the Y-direction vibration displacement of the node gradually decreases, as shown in Figure 11.

Figure 10: Vibration displacement time history of the main cable node 5 m above the bridge deck.

Figure 11: The peak Y-direction displacement vs. the height of the main cable node.
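The dominant frequency quoted above can be recovered from any simulated displacement history by peak-picking its power spectral density. A minimal sketch, using SciPy's Welch estimator on a synthetic stand-in signal with the reported amplitude and frequency (the sampling rate is assumed), is:

```python
import numpy as np
from scipy.signal import welch

fs = 10.0                                   # sampling rate of the transient output (assumed)
t = np.arange(0.0, 600.0, 1.0 / fs)
y = 188.7 * np.sin(2 * np.pi * 0.0815 * t)  # stand-in for the Y-displacement history (mm)

f, Pyy = welch(y, fs=fs, nperseg=4096)
print(f[np.argmax(Pyy)])                    # ~0.081 Hz, the bridge's first lateral mode
```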
### 5.3. Wind-Induced Vibration Response of the Inspection Vehicle
The transient analysis of the main cable-inspection vehicle coupled FE model was performed with the fluctuating wind and the displacement of the main cable node as the excitation sources. Figure 12 shows the vibration acceleration time history of the inspection vehicle node P at a height of 5 m in the moving coordinate system Ov—XvYvZv. In this case, all the driving wheels and compressing wheels of the inspection vehicle are in contact with the main cable. The vibration of the inspection vehicle shows a large impact from 0 s to 10 s, which is due to the application of gravity during the simulation. This phenomenon will not occur during the operation of the inspection vehicle. Therefore, all subsequent acceleration statistics start from 10 s to maintain accuracy. The peak value of the X-direction vibration acceleration (axv) at node P at a height of 5 m is the largest at 7.86 m/s², and the peak value of the Z-direction vibration acceleration (azv) is the smallest at 0.55 m/s². The peak value of the Y-direction vibration acceleration (ayv) is 3.81 m/s². Therefore, the main vibrations of the inspection vehicle are the X-direction and Y-direction vibrations.

Figure 12: Vibration acceleration time history of the inspection vehicle at a height of 5 m.

When the inspection vehicle climbs along the main cable, the working height (H) and inclination angle (α) of the inspection vehicle increase simultaneously. Due to the influence of the constraint on the cable nodes from the main cable saddle and the increase in the gravity component in the Z direction of the inspection vehicle, the peak values of the X-direction and Y-direction vibration acceleration of the inspection vehicle gradually decrease and the peak value of the Z-direction vibration acceleration initially increases and then decreases, as shown in Figure 13.

Figure 13: Amplitude of vibration acceleration vs. the working height of the inspection vehicle.

When the inspection vehicle crosses a cable band, the driving wheels need to be lifted and the compressing wheels need to be alternately opened. Lifting the driving wheels or opening the compressing wheels changes the coupling relationship between the inspection vehicle and the main cable and reduces the contact stiffness between them, resulting in simultaneous reductions in the main frequency and the amplitude at the main frequency, as shown in Figure 14. As the number of nonworking compressing wheels increases, the vertical and lateral contact stiffness values between the inspection vehicle and the main cable decrease simultaneously, resulting in a gradual decrease in the peak values of both the X-direction and Y-direction vibration acceleration of the inspection vehicle and an initial increase and subsequent decrease in the peak value of the Z-direction vibration acceleration, as shown in Figure 15. As the number of nonworking driving wheels increases, the vertical contact stiffness between the inspection vehicle and the main cable decreases, resulting in a gradual decrease in the peak value of the X-direction vibration acceleration of the inspection vehicle, no obvious change in the peak value of the Y-direction vibration acceleration, and a slow increase in the peak value of the Z-direction vibration acceleration, as shown in Figure 16.

Figure 14: One-sided power spectral density (PSD) of the vibration acceleration of the inspection vehicle vs. (a) the number of nonworking compressing wheels and (b) the number of nonworking driving wheels.

Figure 15: Amplitude of displacement vs. the number of nonworking compressing wheels of the inspection vehicle.

Figure 16: Amplitude of displacement vs. the number of nonworking driving wheels of the inspection vehicle.
### 5.4. Ride Comfort Evaluation of the Inspection Vehicle
The inspection vehicle is a type of aerial work equipment. Vibration of the vehicle can not only induce psychological panic in workers but also affect their work efficiency and health [21]. Therefore, it is necessary to analyze the ride comfort of the vehicle. Through the coordinate transformation, the vibration acceleration transmitted to the foot of a worker in the inspection vehicle is obtained and the vehicle ride comfort is then analyzed. Figure 17 illustrates the variation curves of the vibration total value of the weighted rms acceleration (av).

Figure 17: The variation curves of the av value vs. (a) the working height and (b) the number of nonworking compressing/driving wheels.

Figure 17(a) shows that as the working height of the inspection vehicle increases, the value of av decreases slowly at first and then sharply; the maximum reduction ratio of av is 99.0%. The ride comfort of the inspection vehicle improves because of the change in vehicle acceleration described in Section 5.3. When the inspection vehicle works at the middle-span position (H = 5 m), the value of av reaches a maximum of 0.1478 m/s², which is less than 0.315 m/s², so the vehicle ride comfort level is "not uncomfortable."

Figure 17(b) shows that when one pair of compressing wheels at the end of the vehicle is opened, the value of av reaches a maximum of 0.1646 m/s², which is still less than 0.315 m/s², so the vehicle comfort level remains "not uncomfortable." As the number of open compressing wheel pairs increases, the main frequency of the vehicle vibration decreases and gradually approaches the frequency range to which the human body is sensitive, while the vehicle acceleration changes as described in Section 5.3; together these effects explain the initial deterioration and subsequent improvement in comfort. As the number of nonworking driving wheels increases, the value of av slowly decreases and the vehicle ride comfort improves due to the decreases in the main frequency and the amplitude at the main frequency described in Section 5.3. When all the driving wheels contact the main cable, the value of av is the largest at 0.1478 m/s², which is less than 0.315 m/s². Therefore, the vehicle ride comfort level is "not uncomfortable."
## 6. Conclusions
In this paper, an FE model of a suspension bridge and a coupled FE model of a main cable-inspection vehicle system were established in MIDAS Civil software and ANSYS software, respectively, and the two systems were connected through the vibration displacement of the main cable. An experimental modal test of the suspension bridge was completed, and the dynamic responses of both the suspension bridge and the inspection vehicle were analyzed using the two FE models under fluctuating wind conditions simulated in MATLAB. The main conclusions drawn from the study results are as follows:

(1) Comparing the results of the FE modal analysis and the modal test for the first 4 vibration modes shows that the vibration modes are fully consistent and that the difference in frequency values between them is less than 5%. The validity of the FE model of the suspension bridge is thus verified.

(2) The transient analysis under fluctuating wind conditions shows that the main vibration direction of the main cable is the lateral direction and that the main vibration directions of the inspection vehicle are the X direction and Y direction. An increase in the working height leads to a significant reduction in the vibration displacement amplitude of the main cable node and a significant reduction in the vibration response of the inspection vehicle. An increase in the number of nonworking compressing wheels or nonworking driving wheels results in a decrease in the main frequency and slowly reduces the vibration response of the inspection vehicle.

(3) The vehicle ride comfort evaluation under different working conditions shows that an increase in the working height improves the vehicle ride comfort due to the reduction in the vibration response of the inspection vehicle. An increase in the number of nonworking compressing wheels affects both the vibration amplitude and the frequency of the vehicle, which results in an initial deterioration and a subsequent improvement in vehicle ride comfort. An increase in the number of nonworking driving wheels leads to improved vehicle ride comfort.

(4) The vehicle ride comfort analysis shows that when the mean wind speed at the inspection vehicle is below 13.3 m/s, the maximum value of av for the vehicle is 0.1646 m/s² and the vehicle ride comfort level is "not uncomfortable," which meets the users' requirements.

The simulation results demonstrate that the numerical analysis method proposed in this paper can be used to evaluate the wind-induced vibration characteristics of other similar devices working on main cables. However, comparisons and experimental tests on the influence of the long-term driving of the inspection vehicle on the main cable protective layer have yet to be carried out. These issues remain potential topics for future work.
### 2.1. Suspension Bridge FE Model
The Qingshui River Bridge, located in Guizhou Province, China, is 27 m wide with a main span of 1130 m. The structure is a single-span steel truss suspension bridge located in a mountainous area, connecting the expressway from Guizhou to Weng’an, as shown in Figure2 xb is the axial direction of the bridge, yb is the direction of the bridge width, and zb is the vertical direction.Figure 2
Configuration of the Qingshui River Bridge (unit: m).The FE model of the Qingshui River Bridge was built using MIDAS Civil software, as presented in Figure3. Beam elements are used for the bridge deck and towers. Truss elements are used for the main cable and hangers and are tension-only elements. The structural parameters of the bridge FE model are listed in Table 1. The bottom of each tower and each anchorage are fixed. The bridge deck is connected to two main towers by elastic contacts in both the y and z directions, with ratings of 1000000 kN/m. One end of the bridge deck is constrained in the x direction. The weight of the bridge deck, two main cables, and all hangers are added to the nodes of the bridge deck and main cables in the form of a uniformly distributed mass load. Fluctuating wind loads are applied to the bridge deck and the main cables simultaneously on the elasticity center nodes in a given time sequence. The damping ratio is defined as 0.005.Figure 3
The FE model of the suspension bridge.Table 1
Parameters of the suspension bridge.
Part
A (m2)
I
y (m4)
I
z (m4)
ρ (kg/m3)
W (kN/m)
E (GPa)
Main tower
56.0
752.6
762.3
2500
—
34.5
Bridge deck
4.594
6.78
223.41
—
256.0
205
Main cable
0.3526
—
—
—
27.679
192
Hanger
0.0017
—
—
—
0.134
192
A, cross-sectional area; Iy, vertical section moment of inertia; Iz, transverse section moment of inertia; E, Young’s modulus; ρ, density; W, weight per unit length.
### 2.2. Main Cable-Inspection Vehicle Coupled FE Model
The structure of the main cable-inspection vehicle coupled system is shown in Figure 4, where xv is the width direction of the bridge deck, yv is the direction perpendicular to the tangential direction of the main cable centerline, and zv is the tangential direction of the main cable centerline.

Figure 4
Configuration of the main cable-inspection vehicle coupled system (unit: mm).

The FE model of the main cable-inspection vehicle coupled system was established in ANSYS software, as shown in Figure 5. The main cable is defined as a rigid body. SHELL63 elements are used for the vehicle body, and BEAM188 elements are used for the working platforms, the compressing wheel brackets, and the equipment box truss. The guardrail, compressing wheels, driving wheels, power system, and control system are omitted, and the weight of each omitted part is added to the model by adjusting the material density of the local structure. The material parameters are given in Table 2. The contacts between the driving wheels and the main cable and between the compressing wheels and the main cable are viscoelastic and are modeled as spring-damping contacts, with the parameters given in Table 3. Fluctuating wind loads are applied to the windward nodes of the vehicle in a given time sequence, and the dynamic displacement time history of the main cable recorded from the suspension bridge FE model is taken as the excitation source.

Figure 5
The main cable-inspection vehicle coupled FE model.

Table 2
The parameters of the material used in the coupled FE model.

| Part | E (MPa) | ν | ρ (kg/m³) |
| --- | --- | --- | --- |
| Main cable | Rigid body | — | — |
| Vehicle body | 2.05 × 10⁵ | 0.3 | 7850 |
| Driving/compression wheel | 2.05 × 10⁵ | 0.3 | 60000 |
| Working platform | 6.9 × 10⁴ | 0.33 | 4000 |
| Equipment box | 6.9 × 10⁴ | 0.33 | 85000 |

E, Young's modulus; ν, Poisson's ratio; ρ, density.

Table 3
The parameters of the spring-damping contacts used in the coupled FE model.

| Parameter | kd (N/mm) | cd (N·s/mm) | kc (N/mm) | cc (N·s/mm) |
| --- | --- | --- | --- | --- |
| Value | 285.3 | 0.2 | 1500 | 0.2 |

kd and cd are the spring stiffness and viscous damping coefficient of the contact between the driving wheels and the main cable, respectively; kc and cc are the spring stiffness and viscous damping coefficient of the contact between the compressing wheels and the main cable, respectively.

For consistency in the comparison of results, the vibration of node P, located at the bottom of the middle of the working platform on the windward side (see Figure 5), is investigated in the following analysis.
### 2.3. Coordinate Relationship
The inspection vehicle travels along the main cable during work, which changes both its inclination angle (α) and its working height (H). According to the height of the main cable above the bridge deck, H is set to 5 m, 35 m, 65 m, 95 m, and 113 m. The relation between H and α is shown in Table 4. For the coordinate systems selected when establishing the FE models, the coordinate relationships between the suspension bridge and the inspection vehicle and between the vehicle and a worker are shown in Figure 6.

Table 4
The relation between H and α.

| H | 5 m | 35 m | 65 m | 95 m | 113 m |
| --- | --- | --- | --- | --- | --- |
| α | 0° | 12.1° | 16.8° | 20.2° | 21.3° |

Figure 6
Coordinate relationship. (a) Bridge and vehicle. (b) Vehicle and worker.

The displacements obtained from the transient analysis of the suspension bridge FE model need to be transferred to the main cable-inspection vehicle coupled FE model. The displacement relations in the two coordinate systems are as follows:

$$D_{xv} = -D_{yb}, \quad D_{yv} = D_{xb}\sin\alpha + D_{zb}\cos\alpha, \quad D_{zv} = -D_{xb}\cos\alpha + D_{zb}\sin\alpha, \tag{1}$$

where $D_{xv}$, $D_{yv}$, and $D_{zv}$ are the displacements of the main cable in the X, Y, and Z directions of the vehicle coordinate system $O_v\text{–}X_vY_vZ_v$, respectively, and $D_{xb}$, $D_{yb}$, and $D_{zb}$ are the displacements of the main cable in the X, Y, and Z directions of the bridge coordinate system $O_b\text{–}X_bY_bZ_b$, respectively.

The acceleration obtained from the transient analysis of the main cable-inspection vehicle coupled FE model is transferred to the foot of the worker. The acceleration relations in the two coordinate systems are as follows:

$$a_x = a_{xv}, \quad a_y = a_{yv}\sin\alpha - a_{zv}\cos\alpha, \quad a_z = a_{zv}\sin\alpha + a_{yv}\cos\alpha, \tag{2}$$

where $a_{xv}$, $a_{yv}$, and $a_{zv}$ are the accelerations of the inspection vehicle (node P) in the X, Y, and Z directions of the vehicle coordinate system, and $a_x$, $a_y$, and $a_z$ are the accelerations transferred to the foot of the worker in the X, Y, and Z directions of the worker coordinate system O–XYZ, respectively.
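As a small illustration of equations (1) and (2), the following Python sketch (function names are ours, not the paper's) applies the two frame transformations:

```python
import numpy as np

def cable_disp_to_vehicle(d_xb, d_yb, d_zb, alpha_deg):
    """Transform main-cable displacements from the bridge frame
    Ob-XbYbZb to the vehicle frame Ov-XvYvZv, equation (1)."""
    a = np.radians(alpha_deg)
    d_xv = -d_yb
    d_yv = d_xb * np.sin(a) + d_zb * np.cos(a)
    d_zv = -d_xb * np.cos(a) + d_zb * np.sin(a)
    return d_xv, d_yv, d_zv

def vehicle_acc_to_worker(a_xv, a_yv, a_zv, alpha_deg):
    """Transform platform accelerations at node P from the vehicle
    frame to the worker frame O-XYZ, equation (2)."""
    a = np.radians(alpha_deg)
    ax = a_xv
    ay = a_yv * np.sin(a) - a_zv * np.cos(a)
    az = a_zv * np.sin(a) + a_yv * np.cos(a)
    return ax, ay, az

# Example: the 188.7 mm lateral cable peak of Section 5.2 occurs at
# H = 5 m, where alpha = 0 deg (Table 4).
print(cable_disp_to_vehicle(0.0, 0.1887, 0.0, 0.0))  # -> (-0.1887, 0.0, 0.0)
```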
## 3. Wind Speed and Wind Load Simulation
### 3.1. Wind Speed Simulation
The wind speed $V(t)$ in nature over a given time interval is considered to be the sum of the mean wind speed $\bar{V}$ and the fluctuating wind speed $v(t)$:

$$V(t) = \bar{V} + v(t). \tag{3}$$

To obtain the mean wind speed distribution characteristics for the Qingshui River Bridge, a wind measurement tower was built approximately 100 m away from the main tower of the bridge on the Guizhou side. An NRG 40C anemometer (NRG Systems) was mounted 10 m above the bridge deck to record 10-min mean wind speeds ($\bar{V}_{10}$). Based on the records from 33 months (January 2014 to September 2016), the frequency of each mean wind speed interval was calculated, as shown in Figure 7. The lognormal probability density distribution, which is extensively used to fit wind speed distributions over land [30], is fitted to these data [31]:

$$f(\bar{V}) = \frac{1}{\bar{V}\sigma\sqrt{2\pi}} \exp\left[-\frac{(\ln \bar{V} - \mu)^2}{2\sigma^2}\right], \tag{4}$$

where σ and μ are the shape and scale parameters, respectively.

Figure 7
Probability distribution of the maximum daily mean wind speed.

According to the maximum likelihood estimation (MLE) method, the parameters are calculated as follows [31]:

$$\mu = \frac{1}{M}\sum_{i=1}^{M} \ln v_i, \qquad \sigma = \sqrt{\frac{1}{M}\sum_{i=1}^{M} \left(\ln v_i - \mu\right)^2}, \tag{5}$$

where M is the total number of wind speed values and $v_i$ is the wind speed at time step i (i = 1, 2, …, M).

Figure 7 shows that the maximum daily mean wind speed is 13.3 m/s. Because a real-time wind speed alarm system is installed on the inspection vehicle, the influence of height on wind speed is neglected. Therefore, the design maximum working mean wind speed of the inspection vehicle is taken as $\bar{V}_{\max} = 13.3$ m/s.

The Davenport power spectrum [32] is used to simulate the fluctuating wind field along the bridge because the influence of height on wind speed is neglected. It is expressed as [32]

$$S_v(n) = \frac{4 k \bar{V}_{10}^2 x^2}{n \left(1 + x^2\right)^{4/3}}, \tag{6}$$

where $x = 1200 n / \bar{V}_{10}$, k is a coefficient that depends on the surface roughness, and n is the fluctuating wind frequency.

The wind field of a suspension bridge in each direction can be considered a one-dimensional m-variable (1D-mV) zero-mean homogeneous random process $\mathbf{V}(t) = [V_1(t), V_2(t), \ldots, V_m(t)]^T$ [23], where T denotes the transpose. The cross power spectral density (CPSD) matrix of $\mathbf{V}(t)$ can be described as

$$\mathbf{S}(n) = \begin{bmatrix} S_{11}(n) & S_{12}(n) & \cdots & S_{1m}(n) \\ S_{21}(n) & S_{22}(n) & \cdots & S_{2m}(n) \\ \vdots & \vdots & \ddots & \vdots \\ S_{m1}(n) & S_{m2}(n) & \cdots & S_{mm}(n) \end{bmatrix}. \tag{7}$$

The CPSD between any two components $V_j(t)$ and $V_q(t)$ can be expressed as

$$S_{jq}(n) = \sqrt{S_j(n)\, S_q(n)}\; \gamma_{jq}(n)\, e^{i\theta_{jq}(n)}, \tag{8}$$

where $\gamma_{jq}(n)$ is the coherence function, for which the Davenport empirical function [33] is used (j, q = 1, 2, …, m), and $\theta_{jq}(n)$ is the corresponding phase angle [34].

The CPSD matrix can be factorized by Cholesky decomposition into

$$\mathbf{S}(n) = \mathbf{H}(n)\, \mathbf{H}^{*T}(n), \tag{9}$$

where the superscript ∗ denotes the complex conjugate and $\mathbf{H}(n)$ is a lower triangular matrix.

According to the spectral representation method (SRM), the fluctuating wind speed at any point $v_j(t)$ is given as [22]

$$v_j(t) = \sqrt{2\Delta n} \sum_{l=1}^{j} \sum_{k=1}^{N} \left|H_{jl}(n_{lk})\right| \cos\left(n_{lk} t - \theta(n_{lk}) + \phi_{lk}\right), \qquad n_{lk} = k\Delta n - \frac{1-l}{m}\Delta n, \qquad \theta(n_{lk}) = \tan^{-1} \frac{\operatorname{Im}\left[H_{lk}(n_{lk})\right]}{\operatorname{Re}\left[H_{lk}(n_{lk})\right]}, \tag{10}$$

where N is the total number of frequency intervals, $\Delta n = n_u/N$, $n_u$ is the cutoff frequency, $n_{lk}$ is the double-index frequency, $H_{jl}(n_{lk})$ are the elements of the matrix $\mathbf{H}(n)$, $\theta(n_{lk})$ is the phase angle of the complex element of $\mathbf{H}(n)$ at frequency $n_{lk}$, and $\phi_{lk}$ is a random phase angle uniformly distributed in [0, 2π].

Referring to the JTG/T D60-01-2004 standard [35], the bridge site is classified as Class D, k is taken as 0.01291, and the ground roughness index is taken as 0.3. Using the fast Fourier transform (FFT) and the random number generator in MATLAB, wind speed curves in the time domain can be simulated. Thirty-seven simulation points are uniformly distributed along the length of the bridge deck. This paper only considers the effect of the cross-bridge wind, and the angle between the incoming wind direction and the primary flow plane is taken as 0°. The simulated fluctuating wind speed time history at the mid-span position (z = 10 m; $\bar{V}_{10}$ = 13.3 m/s) is presented in Figure 8.

Figure 8
Simulated wind speed time history at the middle-span position.
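As a rough illustration of this procedure, the following Python sketch fits the lognormal parameters of equation (5) and generates a multi-point fluctuating wind field from the Davenport spectrum with the SRM of equation (10). The coherence decay coefficient Cx, the cutoff frequency, the sampling step, and all function names are assumptions for illustration (the paper uses MATLAB and the Davenport coherence of [33]); frequencies are in Hz, hence the 2π factor in the cosine.

```python
import numpy as np

def lognormal_mle(v):
    """MLE of the lognormal shape/scale parameters, equation (5)."""
    ln_v = np.log(np.asarray(v, dtype=float))
    mu = ln_v.mean()
    sigma = np.sqrt(((ln_v - mu) ** 2).mean())
    return mu, sigma

def davenport_spectrum(n, V10=13.3, k=0.01291):
    """One-sided Davenport PSD of the fluctuating wind speed, equation (6)."""
    x = 1200.0 * n / V10
    return 4.0 * k * V10 ** 2 * x ** 2 / (n * (1.0 + x ** 2) ** (4.0 / 3.0))

def davenport_coherence(n, dx, V10=13.3, Cx=16.0):
    """Exponential-decay coherence; Cx is an assumed decay coefficient
    (the paper cites [33] without quoting a value)."""
    return np.exp(-Cx * n * np.abs(dx) / V10)

def simulate_wind_field(x_coords, V10=13.3, nu=2.0, N=256, T=300.0, dt=0.25, seed=1):
    """Spectral representation method, equation (10): simulate the
    zero-mean fluctuating wind v_j(t) at m points along the deck."""
    rng = np.random.default_rng(seed)
    m = len(x_coords)
    dn = nu / N                      # frequency step, nu = cutoff frequency [Hz]
    t = np.arange(0.0, T, dt)
    v = np.zeros((m, t.size))
    dx = np.subtract.outer(x_coords, x_coords)
    for k in range(1, N + 1):
        for l in range(1, m + 1):
            nlk = k * dn - (1 - l) / m * dn          # double-index frequency
            # CPSD matrix at nlk; it is real for a real coherence function,
            # so the phase angle theta of equation (10) vanishes here.
            S = davenport_spectrum(nlk, V10) * davenport_coherence(nlk, dx, V10)
            H = np.linalg.cholesky(S)                # lower triangular factor
            phi = rng.uniform(0.0, 2.0 * np.pi)      # random phase phi_lk
            v += np.sqrt(2.0 * dn) * np.outer(
                H[:, l - 1], np.cos(2.0 * np.pi * nlk * t + phi))
    return t, v

# Example: 37 points spaced evenly over the 1130 m main span
t, v = simulate_wind_field(np.linspace(0.0, 1130.0, 37))
V_total = 13.3 + v                   # equation (3): V(t) = mean + fluctuation
```

The double loop is deliberately unoptimized; it mirrors the double-index sum of equation (10) one term at a time.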
### 3.2. Wind Load Simulation
Wind load on a bridge can be classified into three parts: static wind load, buffeting load, and self-excited load. Chen et al. found that the effect of the self-excited force is limited when the wind speed is lower than the design basic wind speed [36]. Therefore, only the static wind load and the buffeting load on the bridge are considered in this paper.

The wind loads per unit length, including the static wind and buffeting components, acting at the elasticity center of the bridge deck or cable, or on the windward nodes of the inspection vehicle, are defined as follows:

$$L_w(t) = \frac{1}{2}\rho B C_L V(t)^2, \qquad D_w(t) = \frac{1}{2}\rho H C_D V(t)^2, \qquad M_w(t) = \frac{1}{2}\rho B^2 C_M V(t)^2, \tag{11}$$

where $L_w(t)$, $D_w(t)$, and $M_w(t)$ are the lift, drag, and torque wind forces per unit length, respectively; ρ is the air density; B is the width of the bridge deck; H is the height of the bridge deck, the diameter of the main cable, or the windward projection height of the inspection vehicle; and $C_L$, $C_D$, and $C_M$ are the lift, drag, and torque wind force coefficients, respectively. All three forces are considered for the deck, but only the drag force is considered for the cables and the vehicle. According to the section model tests of the bridge deck in the wind tunnel completed by Wang et al. [37], $C_L$, $C_D$, and $C_M$ for the deck are taken as 0.2, 1.5, and 0.02, respectively. Referring to the JTG/T D60-01-2004 standard [35], $C_D$ for the cables and for the vehicle is taken as 0.7 and 1.8, respectively.
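A minimal sketch of equation (11) follows, assuming a standard air density of 1.225 kg/m³ (not stated in the paper); the cable diameter below is inferred from the cable area in Table 1 and is likewise an assumption:

```python
import numpy as np

RHO_AIR = 1.225   # air density [kg/m^3]; assumed standard value

def wind_loads_per_unit_length(V, H, B, CL, CD, CM, rho=RHO_AIR):
    """Static + buffeting wind loads per unit length, equation (11).
    V may be a scalar mean speed or the simulated time history V(t)."""
    q = 0.5 * rho * np.asarray(V, dtype=float) ** 2   # dynamic pressure
    drag = q * H * CD        # referenced to the projected height H
    lift = q * B * CL        # referenced to the deck width B
    moment = q * B ** 2 * CM
    return lift, drag, moment

# Deck: CL = 0.2, CD = 1.5, CM = 0.02 from the wind tunnel tests [37];
# deck width B = 27 m and deck height H = 7 m (Table 6).
L, D, M = wind_loads_per_unit_length(13.3, H=7.0, B=27.0, CL=0.2, CD=1.5, CM=0.02)

# Main cable: drag only, CD = 0.7; diameter inferred from the cable area
# of Table 1, 2 * sqrt(0.3526 / pi) ~ 0.67 m (an assumption).
_, D_cable, _ = wind_loads_per_unit_length(13.3, H=0.67, B=0.0,
                                           CL=0.0, CD=0.7, CM=0.0)
```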
## 4. Comfort Evaluation
With increasing awareness of human physiological and behavioral responses to vibration, the comfort problem in vibration environments has extended from the ride comfort of automobiles to the operating comfort of engineering equipment and has received increased attention [38, 39]. In this paper, the evaluation method recommended by the ISO 2631-1-1997 standard [21] is used to evaluate the perception and ride comfort of a worker driving an inspection vehicle under vibration. According to the standard, the acceleration time history signals in all directions transmitted from the working platform floor to a worker through the feet are transformed into the frequency domain using the FFT. The frequency-weighted root mean square (rms) acceleration $a_w$ is determined by weighting and appropriate addition of the one-third octave band data as follows [21]:

$$a_w = \left[\sum_k \left(W_k a_k\right)^2\right]^{1/2}, \tag{12}$$

where $W_k$ is the weighting factor of the kth one-third octave band given by the ISO 2631-1-1997 standard and $a_k$ is the rms acceleration of the kth one-third octave band.

In orthogonal coordinates, the vibration total value of the weighted rms acceleration $a_v$ is calculated from $a_w$ in the three directions as follows [21]:

$$a_v = \left(k_x^2 a_{wx}^2 + k_y^2 a_{wy}^2 + k_z^2 a_{wz}^2\right)^{1/2}, \tag{13}$$

where $a_{wx}$, $a_{wy}$, and $a_{wz}$ are the frequency-weighted accelerations with respect to the orthogonal coordinate axes x, y, and z, respectively, and $k_x$, $k_y$, and $k_z$ are the multiplying factors given by the ISO 2631-1-1997 standard [21].

The relationship between $a_v$ and subjective sensory comfort is shown in Table 5 [21]. After calculating $a_v$, the ride comfort of the inspection vehicle can be evaluated quantitatively; a short computational sketch of this evaluation follows Table 5.

Table 5
Subjective criteria for ride comfort.

| av (m/s²) | Comfort level |
| --- | --- |
| <0.315 | Not uncomfortable |
| 0.315–0.63 | A little uncomfortable |
| 0.5–1.0 | Fairly uncomfortable |
| 0.8–1.6 | Uncomfortable |
| 1.25–2.5 | Very uncomfortable |
| >2.0 | Extremely uncomfortable |

(The overlapping ranges are as given in the ISO 2631-1-1997 standard.)
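To make equations (12) and (13) concrete, here is a minimal Python sketch. Function names are illustrative; the weighting factors Wk and the multiplying factors kx, ky, kz must be taken from ISO 2631-1-1997 (the multiplying factors default to 1.0 here as an assumption):

```python
import numpy as np

def weighted_rms(a_rms_third_octave, Wk):
    """Frequency-weighted rms acceleration a_w, equation (12):
    a_w = sqrt(sum_k (W_k * a_k)^2), where W_k are the ISO 2631-1
    one-third-octave weighting factors and a_k the band rms values."""
    a = np.asarray(a_rms_third_octave, dtype=float)
    W = np.asarray(Wk, dtype=float)
    return float(np.sqrt(np.sum((W * a) ** 2)))

def vibration_total_value(awx, awy, awz, kx=1.0, ky=1.0, kz=1.0):
    """Vibration total value a_v, equation (13)."""
    return float(np.sqrt((kx * awx) ** 2 + (ky * awy) ** 2 + (kz * awz) ** 2))

def comfort_level(av):
    """Map a_v to the subjective scale of Table 5. The ranges overlap in
    the standard itself; the first matching label is returned here."""
    if av < 0.315:
        return "not uncomfortable"
    if av < 0.63:
        return "a little uncomfortable"
    if av < 1.0:
        return "fairly uncomfortable"
    if av < 1.6:
        return "uncomfortable"
    if av < 2.5:
        return "very uncomfortable"
    return "extremely uncomfortable"

print(comfort_level(0.1646))  # the paper's worst case -> "not uncomfortable"
```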
## 5. Results and Analysis
### 5.1. Suspension Bridge FE Model Verification
The block-Lanczos method was used in the modal analysis of the suspension bridge to verify the correctness of the proposed suspension bridge FE model. To measure the modal characteristics of the suspension bridge, a full-bridge aeroelastic model was constructed, scaled to C = 1/100 of the original. According to the similarity criterion suggested in the JTG/T D60-01-2004 standard [35], the major design parameters of the model are listed in Table 6. The full-bridge aeroelastic model was composed of the bridge deck, main towers, main cables, hangers, and bases. The main beam of the model used aluminum chords and plastic diagonals. The bridge deck consisted of 37 sections with a 2 mm gap between adjacent sections and U-shaped connections between them. Each tower contained a steel core beam and wooden boards that provided the aerodynamic shape. Steel strands provided the cable stiffness, and iron blocks were wrapped around the cables to control their weight. Each hanger consisted of a steel wire without shear stiffness but with high tension stiffness. Lead blocks installed inside the bridge deck model were used to adjust its weight. Plastic boards were used to simulate the aeroelastic shape of the deck rail. The test was conducted in the Industrial Wind Tunnel (XNJD-3) of Southwest Jiaotong University, as shown in Figure 9. The dynamic response of the model was measured by the forced oscillation method [14], and two laser displacement sensors were used to obtain the vibration signal. A comparison of the FE modal analysis and the experimental modal test for the first 4 vibration modes is presented in Table 7. The table shows that the mode shapes are fully consistent and the difference in frequency values is less than 5%. Therefore, the FE model of the suspension bridge has high accuracy and can effectively reflect the dynamic characteristics of the suspension bridge.

Table 6
Major design parameters of the full-bridge aeroelastic model.

| Group | Parameter | Unit | Similarity ratio | Value of the real bridge | Value of the model |
| --- | --- | --- | --- | --- | --- |
| Length | Total length of bridge deck (L) | m | C | 1130 | 11.3 |
| Length | Width of bridge deck (B) | m | C | 27 | 0.27 |
| Length | Height of bridge deck (H) | m | C | 7 | 0.07 |
| Length | Height of main tower (Ht) | m | C | 230 | 2.3 |
| Stiffness | Vertical stiffness of bridge deck, EIy | N·m² | C⁵ | 1.39 × 10¹² | 1.39 × 10² |
| Stiffness | Transverse stiffness of bridge deck, EIz | N·m² | C⁵ | 4.58 × 10¹³ | 4.58 × 10³ |
| Stiffness | Along-bridge stiffness of the bottom of main tower, EIy | N·m² | C⁵ | 2.63 × 10¹³ | 2.63 × 10³ |
| Stiffness | Transverse stiffness of the bottom of main tower, EIz | N·m² | C⁵ | 12.60 × 10¹³ | 12.60 × 10³ |

Figure 9
Test model of the suspension bridge.

Table 7
Comparison of vibration modes.

| Mode no. | Experimental modal test (Hz) | FE modal analysis (Hz) | Difference (%) | Mode of vibration |
| --- | --- | --- | --- | --- |
| 1 | 0.0844 | 0.0818 | 3.1 | Symmetric lateral vibration |
| 2 | 0.1660 | 0.1608 | 3.1 | Antisymmetric vertical vibration |
| 3 | 0.1777 | 0.1812 | 2.0 | Symmetric vertical vibration |
| 4 | 0.2435 | 0.2542 | 4.4 | Antisymmetric lateral vibration |
### 5.2. Wind-Induced Vibration Response of the Main Cable
After validation, the suspension bridge FE model can be used to analyze the vibration characteristics of the main cable with fluctuating wind as the excitation source. Figure 10 shows the vibration displacement time history, in the bridge coordinate system Ob–XbYbZb, of the main cable node 5 m above the bridge deck surface at the mid-span position. The vibration frequency of the main cable is 0.0815 Hz, which is consistent with the first-order frequency of the suspension bridge; this result is also consistent with the conclusion of Huang et al. [40]. The main vibration direction of the main cable is the Y direction (lateral direction). The peak Y-direction vibration displacement is 188.7 mm, whereas the peak X- and Z-direction vibration displacements are less than 0.1 mm. Therefore, only the Y-direction vibration of the main cable node is considered in the study of the wind-induced vibration response of the main cable-inspection vehicle coupled FE model. As the height of the main cable node increases, the constraint on the node from the cable saddle becomes stronger and the peak Y-direction vibration displacement of the node gradually decreases, as shown in Figure 11.

Figure 10
Vibration displacement time history of the main cable node 5 m above the bridge deck.

Figure 11
The peak Y-direction displacement vs. the height of the main cable node.
### 5.3. Wind-Induced Vibration Response of the Inspection Vehicle
The transient analysis of the main cable-inspection vehicle coupled FE model was performed with the fluctuating wind and the displacement of the main cable node as the excitation sources. Figure 12 shows the vibration acceleration time history of the inspection vehicle node P at a height of 5 m in the moving coordinate system Ov–XvYvZv. In this case, all the driving wheels and compressing wheels of the inspection vehicle are in contact with the main cable. The vehicle response shows a large transient from 0 s to 10 s, caused by the sudden application of gravity at the start of the simulation; this transient does not occur during actual operation. Therefore, all subsequent acceleration statistics start from 10 s to maintain accuracy. The peak X-direction vibration acceleration (axv) at node P at a height of 5 m is the largest at 7.86 m/s², and the peak Z-direction vibration acceleration (azv) is the smallest at 0.55 m/s². The peak Y-direction vibration acceleration (ayv) is 3.81 m/s². Therefore, the main vibrations of the inspection vehicle are in the X and Y directions.

Figure 12
Vibration acceleration time history of the inspection vehicle at a height of 5 m.

When the inspection vehicle climbs along the main cable, its working height (H) and inclination angle (α) increase simultaneously. Owing to the strengthening constraint of the cable saddle on the cable nodes and the increase in the gravity component in the Z direction of the vehicle, the peak X- and Y-direction vibration accelerations of the vehicle gradually decrease, while the peak Z-direction vibration acceleration first increases and then decreases, as shown in Figure 13.

Figure 13
Amplitude of vibration acceleration vs. the working height of the inspection vehicle.

When the inspection vehicle crosses a cable band, the driving wheels need to be lifted and the compressing wheels need to be opened alternately. Lifting the driving wheels or opening the compressing wheels changes the coupling between the vehicle and the main cable and reduces the contact stiffness between them, which simultaneously lowers the main frequency and its amplitude, as shown in Figure 14. As the number of nonworking compression wheels increases, the vertical and lateral contact stiffnesses between the vehicle and the main cable decrease simultaneously, so the peak X- and Y-direction vibration accelerations gradually decrease and the peak Z-direction vibration acceleration first increases and then decreases, as shown in Figure 15. As the number of nonworking driving wheels increases, the vertical contact stiffness decreases, so the peak X-direction vibration acceleration gradually decreases, the peak Y-direction vibration acceleration shows no obvious change, and the peak Z-direction vibration acceleration slowly increases, as shown in Figure 16.

Figure 14
One-sided power spectral density (PSD) of the vibration acceleration of the inspection vehicle vs. (a) the number of nonworking compression wheels and (b) the number of nonworking driving wheels.

Figure 15
Amplitude of displacement vs. the number of nonworking compression wheels of the inspection vehicle.

Figure 16
Amplitude of displacement vs. the number of nonworking driving wheels of the inspection vehicle.
### 5.4. Ride Comfort Evaluation of the Inspection Vehicle
The inspection vehicle is a type of aerial work equipment. Vehicle vibration can not only cause panic in workers but also affect their work efficiency and health [21]. Therefore, it is necessary to analyze the ride comfort of the vehicle. Through coordinate transformation, the vibration acceleration transmitted to the foot of a worker in the inspection vehicle is obtained, and the vehicle ride comfort is then analyzed. Figure 17 illustrates the variation curves of the vibration total value of the weighted rms acceleration (av).

Figure 17
The variation curves of the av value vs. (a) the working height and (b) the number of nonworking compressing/driving wheels.

Figure 17(a) shows that as the working height of the inspection vehicle increases, av decreases slowly at first and then sharply; the maximum reduction ratio of av is 99.0%. The ride comfort of the inspection vehicle improves accordingly, owing to the change in vehicle acceleration described in Section 5.3. When the inspection vehicle works at the mid-span position (H = 5 m), av reaches its maximum of 0.1478 m/s², which is less than 0.315 m/s², and the vehicle ride comfort level is "not uncomfortable."

Figure 17(b) shows that when one pair of compressing wheels at the end of the vehicle is opened, av reaches its maximum of 0.1646 m/s², which is still less than 0.315 m/s², so the comfort level remains "not uncomfortable." As the number of open compressing wheel pairs increases, the main frequency of vehicle vibration decreases and gradually approaches the frequency range to which the human body is sensitive, while the vehicle acceleration changes as described in Section 5.3; as a result, av first rises and then falls. As the number of nonworking driving wheels increases, av slowly decreases and the ride comfort improves, owing to the decreases in the main frequency and its amplitude described in Section 5.3. When all the driving wheels contact the main cable, av is largest at 0.1478 m/s², which is less than 0.315 m/s². Therefore, the vehicle ride comfort level is "not uncomfortable."
## 6. Conclusions
In this paper, an FE model of a suspension bridge and a coupled FE model of a main cable-inspection vehicle system were established in MIDAS Civil and ANSYS, respectively, and the two models were connected through the vibration displacement of the main cable. An experimental modal test of the suspension bridge was completed, and the dynamic responses of both the suspension bridge and the inspection vehicle under fluctuating wind conditions simulated in MATLAB were analyzed using the two FE models. The main conclusions are as follows:

(1) Comparing the FE modal analysis with the modal test for the first 4 vibration modes, the mode shapes are fully consistent and the difference in frequency values is less than 5%. The validity of the suspension bridge FE model is thus verified.

(2) The transient analysis under fluctuating wind shows that the main vibration direction of the main cable is lateral, and the main vibration directions of the inspection vehicle are the X and Y directions. An increase in working height leads to a significant reduction in the vibration displacement amplitude of the main cable node and in the vibration response of the inspection vehicle. An increase in the number of nonworking compressing or driving wheels lowers the main frequency and slowly reduces the vibration response of the inspection vehicle.

(3) The ride comfort evaluation under different working conditions shows that an increase in working height improves ride comfort because the vibration response of the vehicle decreases. An increase in the number of nonworking compressing wheels affects both the vibration amplitude and frequency of the vehicle, first worsening and then improving ride comfort. An increase in the number of nonworking driving wheels improves ride comfort.

(4) When the mean wind speed at the inspection vehicle is below 13.3 m/s, the maximum av of the vehicle is 0.1646 m/s² and the ride comfort level is "not uncomfortable," which meets the users' requirements.

The simulation results demonstrate that the numerical analysis method proposed in this paper can be used to evaluate the wind-induced vibration characteristics of other similar devices working on main cables. However, comparisons and experimental tests of the influence of long-term driving of the inspection vehicle on the main cable protective layer have yet to be performed. These issues remain topics for future work.
---
*Source: 1012987-2019-10-30.xml* | 2019 |
# Flutter Performance of the Emergency Bridge with New-Type Cable-Girder
**Authors:** Lei Yang; Fei Shao; Qian Xu; Ke-bin Jiang
**Journal:** Mathematical Problems in Engineering
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1013025
---
## Abstract
Based on the proposed emergency bridge scheme, the flutter performance of an emergency bridge with a new-type cable-girder has been investigated through wind tunnel tests and numerical simulation analyses. Four aerodynamic optimization schemes have been developed in consideration of the structural characteristics of the emergency bridge, and their flutter performances and flutter derivatives have been analyzed. According to the results, the optimal scheme has been determined. Based on bridge flutter theory, the differential equations of flutter for the emergency bridge with the new-type cable-girder have been established and solved by an iterative method. A flutter analysis program has been compiled in the APDL language of ANSYS and used to determine the bridge flutter critical wind speed of the optimal scheme as well as of different wind-resistance cable schemes. The results indicate that the bridge flutter critical wind speed of the original emergency bridge scheme is lower than the flutter checking wind speed. The combined aerodynamic measures of a central slot and wind fairings constitute the optimal scheme, with safety coefficients larger than 1.2 at wind attack angles of −3°, 0°, and +3°. The bridge flutter critical wind speed of the optimal scheme determined by the flutter analysis program agrees well with the wind tunnel test results. The wind-resistance cable scheme at 90° is the optimal wind cable scheme, increasing the bridge flutter critical wind speed by 31.4%. However, in consideration of convenience in construction and effectiveness in erection, the scheme with the wind-resistance cables in the horizontal direction has been selected for the emergency bridge with the new-type cable-girder.
---
## Body
## 1. Introduction
As compared to normal bridges, an emergency bridge has low stiffness and damping. It is a wind-sensitive structure prone to a variety of wind-induced vibrations. Generally, the torsional stiffness of the combined plate beam used in a suspension bridge is smaller than that of a box girder or a truss girder. The Tacoma Narrows Bridge used a combined plate beam, and its collapse was the result of not considering flutter stability in the design [1]. In consideration of the demands of transportation and erection, the combined plate beam is still widely used in emergency bridge structures. In China, the maximum single span of an emergency bridge is 51 m, which cannot satisfy the demands of rescue and relief work in mountainous areas. To satisfy this demand, an emergency bridge with a new-type cable-girder that can span 150 m has been developed in this study. A light, high-strength fiber cable is used as the main cable, a combined plate beam as the main girder, and a splicing structure as the pylon. The stiffness of the emergency bridge with the new-type cable-girder is therefore relatively low, and since the wind resistance of the combined plate beam is not strong, the flutter of the emergency bridge is worth studying.

Dung [2] and Ge [3] developed the mode superposition method for flutter analysis of a suspension bridge. The advantage of this method is that all participating natural vibration modes can be considered, but the calculation is time-consuming; nevertheless, it is commonly used because of its accuracy and efficiency. Jones and Scanlan [4], Jain [5], and Tanaka [6] used the determinant search method directly to predict the critical flutter condition of bridges. In recent years, with the development of computers, numerical simulation has been widely used in flutter analysis. Flutter derivatives are the key parameters for numerical flutter analysis, and different turbulence models have been used to obtain the flutter derivatives of bridge cross-sections. Vairo [7, 8] proposed a numerical model based on a finite volume ALE formulation employing a k-ε turbulence model; its accuracy and applicability to wind engineering problems were successfully assessed by computing the aerodynamic behaviour of simple cross-section shapes and typical cross-sections. The effectiveness of the shear-stress-transport (SST) and standard (std) RANS-based turbulence models in predicting flutter derivatives has been compared, and the SST formulation proved to be more accurate than the standard one [9]. A 2D unsteady Reynolds-averaged Navier-Stokes (URANS) approach adopting Menter's SST turbulence model was employed to compute the flutter and static aerodynamic characteristics, and the results agreed well with the experimental data [10]. The performances of the standard Smagorinsky-Lilly and Kinetic Energy Transport turbulence models were assessed in a study of the unsteady flow field around a rectangular cylinder [11]. The accuracy of standard computational fluid dynamics techniques and turbulence models in predicting the critical flutter speed of streamlined and bluff deck sections was investigated, and the results showed that the flutter onset velocity was mostly underestimated, although some cases showed the opposite behavior [12].

By considering nonlinear wind-structure interactions based on linear theory, Zhang [13] developed an approach for aerostatic and aerodynamic analysis. Based on ANSYS, Hua [14] developed an approach for full-mode aerodynamic flutter analysis, which was later applied to a suspension bridge with double main spans [15]. A simple analytical approach to the aeroelastic stability problem was proposed and proved consistent and effective in capturing the main wind-bridge interaction mechanisms [16]. Bai [17] studied the flutter stability of a steel truss girder suspension bridge, performing wind tunnel tests to investigate the effects of different aerodynamic measures. The PC slab stiffening girder section is similar to the combined plate beam, and neither section type offers strong wind resistance. Zhu [18] analyzed the flutter stability of a suspension bridge with a PC slab stiffening girder; the results showed that it was sensitive to the wind attack angle. Based on a series of wind tunnel tests, Yang [19] investigated the influence of vertical central stabilizers on the flutter performance of twin-box girders. Based on wind tunnel tests and computational fluid dynamics (CFD) simulations, the flutter performance of twin-box bridge girders at large angles of attack was studied [20].

From the energy viewpoint of flutter, a bridge structure can absorb energy from wind-induced vibration. Energy harvesting from the wind-induced vibrations of long-span bridges through electromagnetic devices has been studied [21]. Coupled vibrations have also attracted attention: the phenomena of RIWVs (rain-wind-induced vibrations) were reproduced using a high-precision simulator, and the effects of wind speed and rain were considered in wind tunnel tests [22]. The accuracy of wind tunnel tests is a key question for the wind-induced vibration of bridges; Fabio [23] investigated experimental error propagation using three different experimental data sets to study the effects on the critical flutter speeds of a pedestrian suspension bridge.

There are only a few studies on the wind resistance of emergency bridges [24]. In this study, the wind-resistance performance of the emergency bridge has been investigated using wind tunnel tests and numerical simulation analyses. The results can serve as a reference for other similar studies.
## 2. Emergency Bridge with New-Type Cable-Girder
### 2.1. Description of the Emergency Bridge with New-Type Cable-Girder
The emergency bridge with the new-type cable-girder comprises the cable system, the girder, the pylon, and the anchorage system. The bridge is designed to carry a 35-ton tracked (pedrail) load and a 13-ton wheeled load. It has a span of 150 m, the height of the pylon is 15 m, the sag-span ratio is 1/12, and the height of the girder is 0.75 m. The cable system comprises the cables and the suspenders. The bridge has two main cables arranged in the straddle form. The suspenders are made of round steel and are anchored to the main beam; their lateral spacing is 6 m and their longitudinal spacing is 10 m. The pylon is assembled from H-type aluminum alloy profiles of grade 7005. The main girder consists of three parts: the main girder, the cross beams, and the spandrel beams. Curbs are installed outside the main girder. A sketch of the bridge is shown in Figures 1 and 2, and the girder sections are shown in Figures 3 and 4.

Figure 1
General arrangement of emergency bridge. (a) Lateral view. (b) Vertical view.

Figure 2
General arrangement of emergency bridge deck. (a) Lateral view. (b) Vertical view.

Figure 3
Cross-section of main girder (mm).

Figure 4
Cross-section of main cross girder (mm).
### 2.2. Dynamic Characteristic of the Emergency Bridge
Based on a quasi-secant large-displacement formulation, a nonlinear continuous model for the analysis of long-span cable-stayed bridges was proposed; the model opens the possibility of developing more refined closed-form solutions for the analysis of cable-stayed structures [25]. A closed-form refined model was also proposed to account for the nonlinear response of cable-stayed structures [26]. To simplify the dynamic analysis of the emergency bridge, the equivalent modulus of elasticity (Ernst 1965) [27–30] was used to account for the sag effect of the main cable (a small worked sketch of this correction is given after Figure 7). The finite element model of the emergency bridge was established in ANSYS. The main girders and cross beams were modeled with BEAM4 elements. The mass and rotational inertia of the middle plate were lumped at the middle of the cross beam and modeled with MASS21 elements. The pylon, which has several cross-sections, was modeled with BEAM188 elements, and the main cables and suspenders were modeled with LINK10 elements. The boundary conditions of the finite element model are shown in Table 1, and the three-dimensional model is shown in Figure 5. The hinge connections between the main girders were modeled by constraint coupling. A dynamic finite element analysis was performed using the Lanczos method in ANSYS; the resulting dynamic characteristics are shown in Table 2. The flutter stability of the bridge is mainly governed by the first-order vertical bending and torsion modes. The first-order vertical bending frequency is 0.509 Hz, and the first-order torsion frequency is 0.846 Hz. The first antisymmetric vertical bending mode and the first antisymmetric torsion mode of the stiffening girder are shown in Figures 6 and 7, respectively.

Table 1
Boundary conditions of finite element model of the emergency bridge.
| Location | UX | UY | UZ | ROTZ | ROTX | ROTY |
| --- | --- | --- | --- | --- | --- | --- |
| Beam end | ⊕ | × | × | ⊕ | ⊕ | ⊕ |
| Bottom of pylon | × | × | × | × | × | × |
| Between main cable and top of pylon | CP | CP | CP | CP | CP | CP |
| Main cable at the anchor end | × | × | × | × | × | × |

Notations: UX: longitudinal direction; UY: vertical direction; UZ: lateral direction; ROTX: torsion about the longitudinal axis; ROTY: torsion about the vertical axis; ROTZ: torsion about the lateral axis; ⊕: degree of freedom released; ×: degree of freedom constrained; CP: degrees of freedom coupled.

Table 2
Dynamic characteristics results.
| Mode No. | Frequency (Hz) | Mode shape description |
| --- | --- | --- |
| 1 | 0.509 | 1st-A-VB (MG) |
| 2 | 0.635 | 1st-S-VB (MG) |
| 3 | 0.846 | 1st-A-T (MG) |
| 4 | 1.028 | 2nd-S-VB (MG) |
| 5 | 1.056 | 1st-S-T (MG) |
| 6 | 1.469 | 2nd-S-T (MG) |
| 7 | 1.476 | B (MC) |
| 8 | 1.670 | 2nd-A-VB (MG) |
| 9 | 2.075 | 2nd-A-T (MG) |
| 10 | 2.146 | B (MC) |

Notations: V: vertical; B: bending; VB: vertical bending; T: torsion; S: symmetric; A: antisymmetric; MG: main girder; MC: main cables.
Figure 5
Finite element model of the emergency bridge.

Figure 6
The first antisymmetric vertical bending mode of the stiffening girder.

Figure 7
The first antisymmetric torsion mode of the stiffening girder.
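As a side note on the Ernst correction mentioned above, the following minimal sketch (in Python, not the authors' code) shows how the equivalent modulus is computed; the cable properties used are purely illustrative assumptions, not the actual properties of the fiber cable.

```python
# Minimal sketch of the Ernst (1965) equivalent elastic modulus used to
# account for cable sag in a linearized cable element. All numbers below
# are illustrative assumptions, not the bridge's actual cable properties.

def ernst_equivalent_modulus(E, gamma, l_h, sigma):
    """Ernst formula: E_eq = E / (1 + (gamma * l_h)**2 * E / (12 * sigma**3)).

    E     -- elastic modulus of the straight cable material (Pa)
    gamma -- cable weight per unit volume (N/m^3)
    l_h   -- horizontal projected length of the cable (m)
    sigma -- tensile stress in the cable (Pa)
    """
    return E / (1.0 + (gamma * l_h) ** 2 * E / (12.0 * sigma ** 3))

# Hypothetical fiber-cable values, for illustration only.
E = 1.6e11      # Pa
gamma = 2.0e4   # N/m^3
l_h = 150.0     # m, horizontal projection taken as the full span (worst case)
sigma = 6.0e8   # Pa, cable stress under dead load

print(f"E_eq = {ernst_equivalent_modulus(E, gamma, l_h, sigma):.3e} Pa")
```

The correction shrinks toward 1 as the cable stress grows, which is why a single equivalent-modulus cable element is an acceptable simplification for a taut main cable.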
## 3. Wind Tunnel Tests
### 3.1. Design of the Sectional Model
Wind tunnel tests of the section models were carried out in the TJ-2 wind tunnel laboratory at Tongji University in China. The model scale is 1:10, and the parameters of the section model were derived from the corresponding similarity requirements (a scaling sketch is given after Figure 8). The major parameters of the section model are listed in Table 3. The framework of the section model uses a stringer and horizontal girder system welded from steel. The bridge deck of the section model was carved from timber, while the wind fairing and baseplate are made of ABS. The wind tunnel test of the section model considers only two degrees of freedom, the vertical and the torsional. The section model of the emergency bridge is shown in Figure 8.

Table 3
Parameters of the section model.
| Parameter | Units | Bridge value | Scale ratio | Section model value |
| --- | --- | --- | --- | --- |
| Height of girder | m | 0.75 | 1:10 | 0.075 |
| Width of girder | m | 4.23 | 1:10 | 0.423 |
| Mass per unit length | kg/m | 680 | 1:10² | 6.7042 |
| Mass moment of inertia per unit length | kg·m²/m | 1800 | 1:10⁴ | 0.252 |
| Radius of gyration | m | 1.63 | 1:10 | 0.18 |
| Fundamental frequency of vertical bending | Hz | 0.509 | 2.776 | 1.413 |
| Fundamental frequency of torsion | Hz | 0.846 | 2.777 | 2.349 |
| Frequency ratio of torsion and bending | / | 1.662 | / | 1.655 |
| Wind speed scale | m/s | / | 1:3.6 | / |
| Damping ratio of bending | % | 0.5 | / | 0.2 |
| Damping ratio of torsion | % | 0.5 | / | 0.3 |

Figure 8
Section model of the original emergency bridge scheme.
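The following sketch illustrates the standard sectional-model similarity laws behind Table 3. It is an assumed reconstruction, not the authors' scaling script, and the as-built model values in Table 3 deviate slightly from these ideal targets.

```python
# Sketch of the similarity scaling behind Table 3 (standard sectional-model
# laws, not the authors' script). lam is the geometric scale of 1:10.

lam = 1.0 / 10.0  # length scale (model / prototype)

bridge = {
    "height_m": 0.75,                  # scales with lam
    "width_m": 4.23,                   # scales with lam
    "mass_per_len_kg_m": 680.0,        # scales with lam**2
    "inertia_per_len_kgm2_m": 1800.0,  # scales with lam**4
}

model = {
    "height_m": bridge["height_m"] * lam,
    "width_m": bridge["width_m"] * lam,
    "mass_per_len_kg_m": bridge["mass_per_len_kg_m"] * lam ** 2,
    "inertia_per_len_kgm2_m": bridge["inertia_per_len_kgm2_m"] * lam ** 4,
}

# The frequency scale follows from the target wind speed scale of 1:3.6:
# n_model / n_bridge = (U_model / U_bridge) / lam = (1/3.6) / (1/10) = 2.78,
# matching the 2.776/2.777 ratios listed in Table 3.
freq_scale = (1.0 / 3.6) / lam
print(model, f"frequency scale = {freq_scale:.3f}")
```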
### 3.2. Flutter Performance of Original Emergency Bridge Scheme
Under smooth flow, the flutter critical wind speed of the original emergency bridge scheme was measured at wind attack angles of −3°, 0°, and +3°. The data acquisition system displays the displacement of the section model in real time, and the flutter critical wind speed is taken as the wind speed at which the section model system changes from a stable to an unstable state. The wind speed scale, the ratio between the wind speed in the wind tunnel test and the wind speed for the actual bridge, equals the length scale multiplied by the frequency scale (1:10 × 2.777 ≈ 1:3.6, Table 3); it was used to convert the measured speeds to the flutter critical wind speeds of the actual bridge (a small conversion sketch is given after Figure 9). The flutter critical wind speeds of the original scheme at wind attack angles of −3°, 0°, and +3° are shown in Table 4.

Table 4
Flutter critical wind speeds of original emergency bridge scheme.
| Wind attack angle (°) | Flutter critical wind speed, section model (m/s) | Flutter critical wind speed, bridge (m/s) | Flutter checking wind speed (m/s) |
| --- | --- | --- | --- |
| −3 | 2.85 | 10.26 | 20.1 |
| 0 | 2.625 | 9.45 | 20.1 |
| +3 | 2.50 | 9.0 | 20.1 |

Based on the design requirements of the emergency bridge, the flutter checking wind speed is 20.1 m/s. The variation of the torsional damping ratio of the original scheme's section model system with wind speed is shown in Figure 9. The results show that the flutter critical wind speeds of the original cross-section are lower than the corresponding flutter checking wind speed at all three wind attack angles, so flutter may occur; the flutter stability of the original scheme is insufficient. Therefore, the flutter critical wind speed of the emergency bridge needs to be improved by structural measures or aerodynamic optimization schemes.

Figure 9
Variation of torsional damping ratio of the original scheme’s section model system with wind speed.
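The conversion from measured model speeds to full-scale flutter critical wind speeds takes only a few lines; the sketch below simply applies the 1:3.6 wind speed scale to the Table 4 measurements and reproduces the tabulated full-scale values.

```python
# Sketch of the model-to-prototype wind speed conversion described above.
# U_bridge = U_model / lam_U, with lam_U = lam_L * lam_f (length scale
# times frequency scale): (1/10) * 2.777 ~= 1/3.6.

lam_L = 1.0 / 10.0     # length scale
lam_f = 2.777          # frequency scale (model / prototype), Table 3
lam_U = lam_L * lam_f  # wind speed scale ~= 1/3.6

# Measured section-model flutter speeds of the original scheme (Table 4).
U_model = {-3: 2.85, 0: 2.625, 3: 2.50}  # m/s, keyed by attack angle (deg)

for angle, u in U_model.items():
    print(f"attack angle {angle:+d} deg: U_bridge = {u / lam_U:.2f} m/s")
```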
### 3.3. Flutter Optimization of the Emergency Bridge
In order to improve the flutter performance of the emergency bridge, different aerodynamic optimization schemes were developed, as shown in Figure 10. Because the section model in the wind tunnel tests considers only two degrees of freedom (vertical and torsional vibration), the test results yield only the flutter derivatives associated with the vertical and torsional motions. The flutter derivatives of the different aerodynamic schemes are shown in Figure 11. The influence of the different aerodynamic measures on the flutter performance was investigated; the four aerodynamic optimization schemes are summarized in Table 5. The bridge flutter critical wind speeds of the four schemes at wind attack angles of −3°, 0°, and +3° were measured in the wind tunnel tests. Owing to limited space, only the damping ratio and frequency results of Scheme 3 are shown in Figure 12.

Table 5
Descriptions of four aerodynamic optimization schemes.
| Scheme | Aerodynamic optimization measure | Description |
| --- | --- | --- |
| 1 | Central-slotted | Dismantle the board between the girder sides |
| 2 | Wind fairing | Install wind fairings on both girder sides |
| 3 | Central-slotted + wind fairing | Combination of Schemes 1 and 2 |
| 4 | Bottom board + wind fairing | Add a board under the girder and install wind fairings |

Figure 10
Section models of four aerodynamic optimization schemes.

Figure 11
Bridge flutter derivatives of different schemes.

Figure 12
Variation of torsional damping ratio of the four schemes' section model systems with wind speed.

In order to analyze the flutter characteristics of the emergency bridge in light of the flutter mechanism, the major flutter derivatives of the different schemes at a wind attack angle of 0° are shown in Figure 11. The flutter derivative A2∗ is related to the aerodynamic damping generated by the torsional motion: a positive A2∗ indicates aerodynamic negative damping, and vice versa. Figure 11(1) shows that A2∗ of the original scheme and Scheme 1 changes from negative to positive as the wind speed increases. This indicates that aerodynamic negative damping is produced, so these configurations are less stable against flutter. For the other aerodynamic optimization schemes, A2∗ remains negative as the wind speed increases; Scheme 3 is therefore more stable against flutter than the other schemes. The flutter derivative H2∗ is related to the stiffness and damping of the vertical bending motion induced by the torsional velocity, and an increasing H2∗ is favorable for flutter stability. Figure 11(2) shows that H2∗ of the different schemes increases with wind speed and that H2∗ of Scheme 3 is larger than those of the other schemes. The flutter derivative A3∗ is related to the torsional stiffness affected by the torsional motion; its effect on the flutter critical wind speed is generally small, and Figure 11(3) shows that the different schemes have similar variation tendencies as the wind speed increases. Therefore, based on the analysis of the major flutter derivatives, the flutter stability of Scheme 3 is the best among the four aerodynamic optimization schemes.

The variations of the torsional damping ratio of the different schemes' section model systems with wind speed are shown in Figure 12. According to the wind-resistant design code for highway bridges (JTG/T D60-01-2004) [31], the required safety factor of the emergency bridge is 1.2. Table 6 shows the bridge flutter critical wind speeds of the different aerodynamic optimization schemes at wind attack angles of −3°, 0°, and +3°. The flutter critical wind speed at +3° is smaller than those at −3° and 0°, so the value at +3° is used for comparison. The bridge flutter critical wind speed of the central-slotted scheme is 5.8 m/s, so this measure alone cannot improve the flutter critical wind speed. The wind fairing scheme reaches 19.0 m/s, a significant improvement, but it still cannot meet the design requirement. The combined aerodynamic measures of central slotting and wind fairing reach 24.3 m/s, which makes the flutter performance of the emergency bridge meet the requirements. The scheme with a bottom board under the girder plus wind fairings reaches 20.3 m/s, which exceeds the checking wind speed but gives a safety coefficient of less than 1.2 (a short acceptance-check sketch is given after Table 6).

Table 6
Test results of bridge flutter critical wind speed.
| Aerodynamic optimization scheme | Wind attack angle (°) | Flutter critical wind speed, section model (m/s) | Flutter critical wind speed, bridge (m/s) |
| --- | --- | --- | --- |
| 1 | −3 | 2.25 | 8.1 |
| 1 | 0 | 1.75 | 6.3 |
| 1 | +3 | 1.62 | 5.8 |
| 2 | −3 | 5.50 | 19.8 |
| 2 | 0 | 5.35 | 19.3 |
| 2 | +3 | 5.28 | 19.0 |
| 3 | −3 | 9.40 | 34.2 |
| 3 | 0 | 7.95 | 28.8 |
| 3 | +3 | 6.45 | 24.3 |
| 4 | −3 | 5.85 | 21.1 |
| 4 | 0 | 5.69 | 20.5 |
| 4 | +3 | 5.65 | 20.3 |

Summarizing the above analysis, the combined aerodynamic measures of central slotting and wind fairing make the flutter performance of the emergency bridge meet the requirements; this combination is therefore the optimal scheme among the four aerodynamic optimization schemes.
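The acceptance criterion applied to Table 6 (full-scale critical speed at the worst attack angle of +3° at least 1.2 times the 20.1 m/s checking wind speed) can be expressed as a short check; the sketch below reproduces the pass/fail outcome discussed above.

```python
# Sketch of the acceptance check applied to Table 6: the full-scale flutter
# critical wind speed at the worst attack angle (+3 deg) must exceed the
# flutter checking wind speed (20.1 m/s) with a safety factor of 1.2
# per JTG/T D60-01-2004.

U_check = 20.1       # m/s, flutter checking wind speed
safety_factor = 1.2

# Worst-case (+3 deg) full-scale critical speeds from Table 6.
schemes = {
    "1 central-slotted": 5.8,
    "2 wind fairing": 19.0,
    "3 central-slotted + wind fairing": 24.3,
    "4 bottom board + wind fairing": 20.3,  # exceeds 20.1 but fails the 1.2 factor
}

for name, u_cr in schemes.items():
    ok = u_cr >= safety_factor * U_check
    print(f"Scheme {name}: U_cr = {u_cr} m/s, "
          f"required {safety_factor * U_check:.2f} m/s -> {'PASS' if ok else 'FAIL'}")
```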
## 4. Flutter Analysis of the Emergency Bridge
### 4.1. Fundamental Theory of Flutter
Based on structural vibration theory, the equation of motion of a bridge structure in steady airflow can be expressed as

$$M\ddot{X} + C\dot{X} + KX = F_a \tag{1}$$

where $M$, $C$, and $K$ are the mass, damping, and stiffness matrices of the bridge; $X$, $\dot{X}$, and $\ddot{X}$ are the displacement, velocity, and acceleration vectors of the bridge; and $F_a$ is the vector of aeroelastic forces acting on the bridge.

Based on Scanlan's flutter theory, for a bridge structure in steady airflow the self-excited lift force $L_{se}$, drag force $D_{se}$, and pitching moment $M_{se}$ per unit length are defined by Eqs. (2a)-(2c), which fully account for the lateral motion of the bridge structure:

$$L_{se} = \frac{1}{2}\rho U^2 (2B)\left[KH_1^*\frac{\dot{h}}{U} + KH_2^*\frac{B\dot{\alpha}}{U} + K^2H_3^*\alpha + K^2H_4^*\frac{h}{B} + KH_5^*\frac{\dot{p}}{U} + K^2H_6^*\frac{p}{B}\right] \tag{2a}$$

$$D_{se} = \frac{1}{2}\rho U^2 (2B)\left[KP_1^*\frac{\dot{p}}{U} + KP_2^*\frac{B\dot{\alpha}}{U} + K^2P_3^*\alpha + K^2P_4^*\frac{p}{B} + KP_5^*\frac{\dot{h}}{U} + K^2P_6^*\frac{h}{B}\right] \tag{2b}$$

$$M_{se} = \frac{1}{2}\rho U^2 (2B^2)\left[KA_1^*\frac{\dot{h}}{U} + KA_2^*\frac{B\dot{\alpha}}{U} + K^2A_3^*\alpha + K^2A_4^*\frac{h}{B} + KA_5^*\frac{\dot{p}}{U} + K^2A_6^*\frac{p}{B}\right] \tag{2c}$$

where $\rho$, $U$, and $B$ are the air density, the mean wind speed, and the width of the bridge deck, respectively; $K = \omega B/U$ is the reduced circular frequency; $h$, $p$, and $\alpha$ are the vertical, lateral, and torsional displacements of the bridge; the dot denotes differentiation with respect to time; and $H_i^*$, $P_i^*$, and $A_i^*$ ($i = 1, \dots, 6$) are the flutter derivatives associated with the vertical, lateral, and torsional directions. The flutter derivatives depend only on the shape of the main girder and can be obtained from wind tunnel tests or CFD.

Following the finite element approach, the distributed aerodynamic forces can be converted into equivalent nodal loads acting on the bridge elements, and the aeroelastic forces for element $e$ can be expressed as

$$F_{ae}^e = K_{ae}^e X_e + C_{ae}^e \dot{X}_e \tag{3}$$

where $X_e$ and $\dot{X}_e$ are the nodal displacement and velocity vectors, and $K_{ae}^e$ and $C_{ae}^e$ are the local aeroelastic stiffness and damping matrices. The Matrix27 element in ANSYS can represent either a stiffness or a damping component, so the element aeroelastic stiffness and damping matrices can be written as

$$K_{ae}^e = \begin{bmatrix} K_{ae1}^e & 0 \\ 0 & K_{ae1}^e \end{bmatrix}, \qquad C_{ae}^e = \begin{bmatrix} C_{ae1}^e & 0 \\ 0 & C_{ae1}^e \end{bmatrix} \tag{4}$$

$$K_{ae1}^e = a\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_6^* & P_4^* & BP_3^* & 0 & 0 \\ 0 & H_6^* & H_4^* & BH_3^* & 0 & 0 \\ 0 & BA_6^* & BA_4^* & B^2A_3^* & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{5}$$

$$C_{ae1}^e = b\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_5^* & P_1^* & BP_2^* & 0 & 0 \\ 0 & H_5^* & H_1^* & BH_2^* & 0 & 0 \\ 0 & BA_5^* & BA_1^* & B^2A_2^* & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{6}$$

where $a = \rho U^2 K^2 L_e/2$, $b = \rho U B K L_e/2$, and $L_e$ is the length of element $e$.

Assembling the element matrices gives the global aeroelastic force vector

$$F_{ae} = K_{ae} X + C_{ae} \dot{X} \tag{7}$$

where $K_{ae}$ and $C_{ae}$ are the global aeroelastic stiffness and damping matrices. Substituting Eq. (7) into Eq. (1) leads to the equation of motion of the bridge structure:

$$M\ddot{X} + (C - C_{ae})\dot{X} + (K - K_{ae})X = 0 \tag{8}$$

Incorporating the Rayleigh structural damping assumption $C = \alpha M + \beta K$ gives the equation of motion used for flutter analysis:

$$M\ddot{X} + (C' - C'_{ae})\dot{X} + (K - K_{ae})X = 0 \tag{9}$$

where $C'$ and $C'_{ae}$ are the modified structural and aeroelastic damping matrices, respectively:

$$C' = \alpha M + \beta(K - K_{ae}) \tag{10}$$

$$C'_{ae} = C_{ae} - \beta K_{ae} \tag{11}$$

The proportionality coefficients $\alpha$ and $\beta$ for Rayleigh damping are obtained by least-squares fitting:

$$\min_{\alpha,\beta} \sum_{j=1}^{m} \left(2\xi_j\omega_j - \alpha - \beta\omega_j^2\right)^2 \tag{12}$$

where $\xi_j$ is the damping ratio of the $j$th mode and $m$ is the total number of modes considered.

Eq. (9) represents an integrated system that includes the effect of aeroelasticity, parameterized by wind speed and response frequency, and can be solved by the damped complex eigenvalue analysis method. If the bridge system has $n$ degrees of freedom, solving Eq. (9) yields $n$ conjugate pairs of complex eigenvalues and eigenvectors. The $j$th conjugate pair of eigenvalues can be expressed as

$$\lambda_j = \sigma_j \pm i\omega_j \tag{13}$$

where $i = \sqrt{-1}$, and $\sigma_j$ and $\omega_j$, the real and imaginary parts of the $j$th eigenvalue pair, are the damping and the vibration frequency of the bridge system, respectively. When the real parts of all eigenvalues are negative, the bridge system is dynamically stable; otherwise it is unstable. When a real part becomes zero, the corresponding wind speed is the flutter critical wind speed, and the bridge system is in the critical state.

As shown in Eqs. (5) and (6), the aeroelastic stiffness and damping matrices are expressed in terms of the wind speed, the response frequency, and the reduced frequency, of which only two are independent. Therefore, a sweep-and-iteration procedure must be employed to identify the flutter instability state (a minimal numerical sketch is given below). In this study, the flutter program for the emergency bridge was implemented in ANSYS using the APDL scripting language.
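To make the sweep-and-iteration procedure concrete, the following minimal two-degree-of-freedom (heave/torsion) sketch in Python reproduces the logic of Eqs. (8)-(13) on a sectional model: Rayleigh coefficients are fitted as in Eq. (12), the aeroelastic matrices are built from flutter derivatives as in Eqs. (5)-(6) reduced to two DOFs, and the wind speed is swept while iterating on the response frequency. It is not the authors' APDL program; the flutter derivative functions are illustrative placeholders standing in for the wind-tunnel-fitted polynomials.

```python
# Minimal 2-DOF (heave h, torsion alpha) sketch of the wind-speed sweep with
# damped complex eigenvalue analysis. Structural values per unit length are
# taken from Table 3; the flutter derivatives are illustrative placeholders.

import numpy as np

rho, B = 1.225, 4.23              # air density (kg/m^3), deck width (m)
m, I = 680.0, 1800.0              # mass (kg/m), mass inertia (kg*m^2/m)
f_h, f_a, xi = 0.509, 0.846, 0.005
w_h, w_a = 2 * np.pi * f_h, 2 * np.pi * f_a

M = np.diag([m, I])
K = np.diag([m * w_h ** 2, I * w_a ** 2])

# Rayleigh coefficients from Eq. (12); with exactly two modes the
# least-squares fit is exact: solve 2*xi*w_j = alpha + beta*w_j**2.
alpha, beta = np.linalg.solve([[1.0, w_h ** 2], [1.0, w_a ** 2]],
                              [2 * xi * w_h, 2 * xi * w_a])
C = alpha * M + beta * K

def flutter_derivatives(u_red):
    """Placeholder derivatives vs reduced velocity u_red = U/(f*B)."""
    H1, H2, H3, H4 = -0.5 * u_red, -0.1 * u_red, 0.06 * u_red ** 2, 0.0
    A1, A4 = 0.1 * u_red, 0.0
    A2 = 0.002 * u_red * (u_red - 10.0)  # sign change drives torsional flutter
    A3 = 0.008 * u_red ** 2
    return H1, H2, H3, H4, A1, A2, A3, A4

def least_damped_eig(U, w):
    """Eigenvalue with largest real part at wind speed U, frequency guess w."""
    k = w * B / U                        # reduced circular frequency K
    u_red = 2 * np.pi / k
    H1, H2, H3, H4, A1, A2, A3, A4 = flutter_derivatives(u_red)
    # Heave/torsion blocks of Eqs. (5)-(6), assembled per unit length.
    Cae = rho * U * B * k * np.array([[H1, B * H2], [B * A1, B ** 2 * A2]])
    Kae = rho * U ** 2 * k ** 2 * np.array([[H4, B * H3], [B * A4, B ** 2 * A3]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K - Kae), -np.linalg.solve(M, C - Cae)]])
    lam = np.linalg.eigvals(A)
    return lam[np.argmax(lam.real)]

for U in np.arange(5.0, 80.0, 0.5):      # sweep wind speed
    w = w_a                               # start from the torsional frequency
    for _ in range(20):                   # fixed-point iteration on omega
        lam = least_damped_eig(U, w)
        if abs(lam.imag) > 1e-6:
            w = abs(lam.imag)
    if lam.real >= 0.0:                   # Eq. (13): real part crosses zero
        print(f"flutter onset near U = {U:.1f} m/s, f = {w / (2 * np.pi):.3f} Hz")
        break
```

The inner loop resolves the interdependence of wind speed, response frequency, and reduced frequency noted above; the outer sweep stops at the first speed where the least-damped eigenvalue's real part turns non-negative.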
### 4.2. Flutter Critical Wind Speed of the Emergency Bridge Based on Numerical Calculation
Assuming a structural damping ratio of 0.5% for the emergency bridge and using the flutter derivatives obtained from the wind tunnel tests, the flutter program was written in the APDL language in ANSYS, and the bridge flutter critical wind speeds of the optimal aerodynamic scheme at attack angles of −3°, 0°, and +3° were calculated. Polynomial fitting was applied to the flutter derivative data of the optimal scheme so that the derivatives can be evaluated conveniently by the flutter analysis program (a small fitting sketch is given at the end of this section); the fitted derivative curves are shown in Figure 13.

Figure 13
Flutter derivatives of the optimal aerodynamic scheme at attack angle of 0°.

As shown in Figure 14, the damping ratio and frequency at the different attack angles change as the wind speed increases, and the damping ratio of the third-order mode is the first to decrease to zero. The control mode of flutter for the emergency bridge is therefore the first-order antisymmetric torsion mode. For the optimal scheme, the bridge flutter critical wind speeds at wind attack angles of −3°, 0°, and +3° were calculated by the flutter program; the results, shown in Table 7, agree well with the wind tunnel test results, with a maximum error of 9.5%. The flutter program can therefore be used for flutter analysis of the emergency bridge.

Table 7
Bridge flutter critical wind speeds.
| Wind attack angle (°) | Numerical calculation (m/s) | Wind tunnel test (m/s) | Error (%) |
| --- | --- | --- | --- |
| −3 | 31.3 | 34.2 | 8.48 |
| 0 | 27.2 | 28.8 | 5.56 |
| +3 | 26.85 | 24.3 | 9.50 |

Figure 14
Variation of damping ratio of the emergency bridge with wind speed at attack angles of −3°, 0°, and +3°.

The flutter critical wind speed of the emergency bridge at an attack angle of +3° is lower than those at attack angles of −3° and 0°. Although the flutter critical wind speed exceeds the corresponding flutter checking wind speed, the margin is small, so the bridge cannot yet be considered sufficiently safe. Other structural measures should therefore be applied to further improve the flutter stability of the emergency bridge.
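The polynomial fitting step mentioned above can be sketched as follows; the data points are made-up placeholders rather than the measured derivatives, and the cubic degree is one arbitrary choice.

```python
# Sketch of the polynomial fitting step in Section 4.2: each measured flutter
# derivative is fitted against reduced velocity so the flutter program can
# evaluate it at arbitrary wind speeds. The data points are placeholders.

import numpy as np

u_red = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])                # reduced velocity
A2_measured = np.array([-0.03, -0.05, -0.05, -0.02, 0.03, 0.10])  # placeholder data

coeffs = np.polyfit(u_red, A2_measured, deg=3)  # cubic fit, one possible choice
A2_fit = np.poly1d(coeffs)                      # callable fitted curve

print("fitted A2*(8.5) =", round(float(A2_fit(8.5)), 4))
```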
### 4.3. Effect of Wind-Resistance Cable Structure on the Flutter Critical Wind Speed of the Emergency Bridge
The flutter derivatives are dimensionless parameters that relate only to the sectional shape [32, 33] and reflect the aerodynamic characteristics of the section. If the influence of the wind-resistance cables on the flow field is ignored, the flutter derivatives remain unchanged as long as the section shape is unchanged. In order to improve the safety of the emergency bridge in regions of complex topography, four different wind-resistance cable schemes were developed, as shown in Figure 15. Using the flutter program, the bridge flutter critical wind speeds of the different wind-resistance cable schemes were calculated, and the optimal scheme was determined.

Scheme A: the wind-resistance cables are added symmetrically in the horizontal direction at the locations L/4, L/8, and 3L/16 of the emergency bridge (L is the span of the emergency bridge). The initial strain of the wind-resistance cable is 0.003, and the cables are anchored to the rock.

Schemes B, C, and D: wind-resistance cables in an arch shape are installed on both sides of the emergency bridge and connected to the main girder by tension rods. At mid-span, the tension rod between the wind-resistance cable and the main girder is 1 m long; at other locations, the rod length depends on the rise-span ratio of the emergency bridge. The initial strain of the wind-resistance cable is 0.003. The angles between the wind-resistance cable plane and the emergency bridge deck for Schemes B, C, and D are 0°, 45°, and 90°, respectively.

Figure 15
Finite element model of different wind-resistance cable schemes.

The dynamic characteristics and flutter critical wind speeds of the different wind-resistance cable schemes at an attack angle of 0° are shown in Table 8. The results indicate that the flutter critical wind speed is positively correlated with the torsional frequency. The critical wind speed of Scheme D is 41.98 m/s, larger than those of the other three schemes; compared with the emergency bridge without wind-resistance cables, the bridge flutter critical wind speed increased by 31.4%. Nevertheless, in consideration of convenience of construction and effectiveness of erection, Scheme A was selected. The flutter critical wind speed of Scheme A meets the requirements of flutter stability, so the emergency bridge possesses sufficient safety against flutter.

Table 8
Dynamic characteristics, flutter critical wind speed, and flutter critical frequency of four wind-resistance cable schemes at attack angle of 0°.
| Scheme | Fundamental frequency of vertical bending (Hz) | Fundamental frequency of lateral bending (Hz) | Fundamental frequency of torsion (Hz) | Torsion-bending frequency ratio | Flutter critical wind speed (m/s) | Flutter critical frequency (Hz) |
| --- | --- | --- | --- | --- | --- | --- |
| A | 0.702 | 1.332 | 1.040 | 1.481 | 37.22 | 0.8796 |
| B | 0.542 | 1.355 | 0.848 | 1.565 | 29.17 | 0.6513 |
| C | 0.506 | 1.347 | 0.768 | 1.512 | 38.13 | 0.8563 |
| D | 0.736 | 1.345 | 1.12 | 1.522 | 41.98 | 0.9338 |
## 5. Conclusions
The flutter critical wind speed of the original emergency bridge scheme is lower than the corresponding flutter checking wind speed at wind attack angles of −3°, 0°, and +3°, so flutter may occur. Based on the original scheme, four aerodynamic optimization schemes were developed. According to the wind tunnel test results, the combined aerodynamic measures of central slotting and wind fairing constitute the optimal scheme, making the flutter critical wind speed and the safety coefficient of the bridge meet the design requirements at all three wind attack angles. A flutter program was written in the APDL language; the flutter critical wind speeds it calculates agree well with the wind tunnel test results, with a maximum error of only 9.5%. To further improve the flutter stability of the emergency bridge, the critical wind speeds of different wind-resistance cable schemes were calculated with the flutter analysis program. The results show that the scheme with wind-resistance cables in the horizontal direction meets the requirement of flutter stability and is also convenient to construct and effective for the rapid erection of an emergency bridge.
---
*Source: 1013025-2019-03-17.xml* | 1013025-2019-03-17_1013025-2019-03-17.md | 58,204 | Flutter Performance of the Emergency Bridge with New-Type Cable-Girder | Lei Yang; Fei Shao; Qian Xu; Ke-bin Jiang | Mathematical Problems in Engineering
(2019) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2019/1013025 | 1013025-2019-03-17.xml | ---
## Abstract
Based on the proposed emergency bridge scheme, the flutter performance of the emergency bridge with the new-type cable-girder has been investigated through wind tunnel tests and numerical simulation analyses. Four aerodynamic optimization schemes have been developed in consideration of structure characteristics of the emergency bridge. The flutter performances of the aerodynamic optimization schemes have been investigated. The flutter derivatives of four aerodynamic optimization schemes have been analyzed. According to the results, the optimal scheme has been determined. Based on flutter theory of bridge, the differential equations of flutter of the emergency bridge with new-type cable-girder have been established. Iterative method has been used for solving the differential equations. The flutter analysis program has been compiled using the APDL language in ANSYS, and the bridge flutter critical wind speed of the optimal scheme has been determined by the program. The flutter analysis program has also been used to determine the bridge flutter critical wind speed of different wind-resistance cable schemes. The results indicate that the bridge flutter critical wind speed of the original emergency bridge scheme is lower than the flutter checking wind speed. The aerodynamic combined measurements of central-slotted and wind fairing are the optimal scheme, with the safety coefficients larger than 1.2 at the wind attack angles of −3°, 0°, and +3°. The bridge flutter critical wind speed of the optimal scheme has been determined using the flutter analysis program, and the numerical results agree well with the wind tunnel test results. The wind-resistance cable scheme of 90° is the optimal wind cable scheme, and the bridge flutter critical wind speed increased 31.4%. However, in consideration of the convenience in construction and the effectiveness in erection, the scheme of wind-resistance cable in the horizontal direction has been selected to be used in the emergency bridge with new-type cable-girder.
---
## Body
## 1. Introduction
As compared to normal bridges, the emergency bridge has the characteristics of small stiffness and damping. It is a wind-sensitive structure, which is prone to a variety of wind-induced vibrations. Generally, the torsional stiffness of the combined plate beam in a suspension bridge is smaller than that of a box girder or a truss girder. For the Tacoma Narrows Bridge, the combined plate beam was used, and the collapse of the bridge was the result of not considering the flutter stability in the design [1]. In consideration of the demand in transportation and erection, the combined plate beam type is still used widely in emergency bridge structure. In China, the maximum single span of an emergency bridge is 51 m, which cannot satisfy the demand of rescue and relief works in mountainous areas. In order to satisfy the demand, in this study, using a new-type cable-girder, an emergency bridge which can span 150 m has been developed. The light weight, high strength fiber cable has been used as the main cable. The combined plate beam has been used as the main girder, and the splicing structure has been used as the pylon. Therefore, the stiffness of the emergency bridge with new-type cable-girder is relatively low. Further, as the wind resistibility of the combined plate beam is not strong, therefore the flutter of the emergency bridge is worth studying.Dung [2] and Ge [3] developed the mode superposition method for flutter analysis of a suspension bridge. The advantage of this method is that the full participating natural modes of vibration can be considered, but the calculation is time-consuming. However, the mode superposition method is commonly used due to its accuracy and efficiency. Jones and Scanlan [4], Jain [5], and Tanaka [6] utilized the determinant search method directly to predict the bridge flutter critical condition. In recent years, with the development of computers, numerical simulation studies are widely used in flutter analysis. Flutter derivatives are the key parameter for numerical analysis of flutter, and different turbulence models had been used for obtaining the flutter derivatives of bridge cross-section. Vairo [7, 8] proposed a numerical model based on a finite volume ALE formulation and employs ak-ε turbulence model; the accuracy and applicability of the model to wind engineering problems were successfully assessed by computing the aerodynamic behaviour of simple cross-section shapes and typical cross-sections. The effectiveness of the sst (shear-stress-transport) and the standard (std) RANS-based turbulence models in predicting flutter derivatives had been compared, and thek-ε sst formulation proved to be more accurate than thek-ε std [9]. A 2D unsteady Reynolds-averaged Navier-Stokes (URANS) approach adopting Menter’s SSTk-ε turbulence model was employed for computing the flutter and the static aerodynamic characteristics, and the conclusions indicated that the results provided by the proposed methodology agree well with the experimental data [10]. The performances of standard Smagorinsky-Lilly and Kinetic Energy Transport turbulence models were applied to study the unsteady flow field around a rectangular cylinder [11]. The accuracy of standard computational fluid dynamics techniques and turbulence models in predicting the critical flutter speed of streamlined and bluff deck sections was investigated, and the results showed that the flutter onset velocity had mainly been underestimated but cases showing opposite behavior [12]. 
By considering the nonlinear wind-structure interactions based on the linear theory, Zhang [13] developed an approach for the aerostatic and aerodynamic analysis. Based on ANSYS, Hua [14] developed an approach for the full-mode aerodynamic flutter analysis. The method of full-mode aerodynamic flutter analysis was used to analyze the aerodynamic flutter analysis of a suspension with double main spans [15]. A simple analytical approach to aeroelastic stability problem was proposed and had been proved to be consistent and effective for successfully capturing the main wind-bridge interaction mechanisms [16]. Bai [17] carried out a study on the flutter stability of a steel truss girder suspension bridge. Wind tunnel tests were performed to investigate the effects of different aerodynamic measures on the flutter stability of a steel truss girder suspension bridge. PC slab stiffening girder section is similar to the combined plate beam. These section types are not strong in wind resistibility of structure. Zhu [18] analyzed the flutter stability of a suspension bridge with PC slab stiffening girder. The results showed that the suspension bridge with PC slab stiffening girder was sensitive to the wind attack angles. Based on a series of wind tunnel tests, Yang [19] investigated the influence of vertical central stabilizers on the flutter performance of twin-box girders. Based on wind tunnel tests and computational fluid dynamics (CFD) simulations, a study on the flutter performance of twin-box bridge girders at large angles of attack was presented [20].On the energy viewpoint of flutter, bridge structure can absorb energy from the wind-induced vibration. Energy harvesting from wind-induced vibrations of long-span bridges through electromagnetic devices was studied [21]. The coupling vibrations have attracted the researchers’ attention. The phenomena of RIWVs were reproduced using a high-precision simulator, and the effects of wind speed and rain were considered by wind tunnel tests [22]. The accuracy of wind tunnel test is a key question for wind-induced vibration of bridge. Fabio [23] investigated experimental error propagation, and three different experimental data sets had been used in studying the effects on critical flutter speeds of pedestrian suspension bridge.There are only a few studies on the wind-resistance of emergency bridges [24]. In this study, using wind tunnel tests and numerical simulation analyses, the wind-resistance performance of the emergency bridge has been investigated. The results can be used as a reference for other similar studies.
## 2. Emergency Bridge with New-Type Cable-Girder
### 2.1. Description of the Emergency Bridge with New-Type Cable-Girder
The emergency bridge with the new-type cable-girder comprises of the cable system, the girder, the pylon, and the anchorage system. The emergency bridge is allowed to carry 35 tons of pedrail deck load and 13 tons of wheel load. The emergency bridge has a span of 150 m. The height of the pylon is 15 m. The sag-span ratio is 1/12, and the height of the girder is 0.75 m. The cable system comprises of cable and suspender. The entire bridge has two cables which are in the straddle form. The material of the suspender is round steel. The lateral distance between suspenders is 6 m, and the suspender is anchored to the main beam. The longitudinal distance between suspenders is 10 m. The pylon is assembled by the aluminum alloy profile of H-type, and the type of aluminum alloy is 7005. The main girder mainly consists of three parts: main girder, cross beam, and spandrel beam. Curbs are installed outside of the main girder. A sketch of the bridge is shown in Figures1 and 2. The section of main girder is shown in Figures 3 and 4.Figure 1
General arrangement of emergency bridge.
(a) Lateral view
(b) Vertical view
Figure 2
General arrangement of emergency bridge deck.
(a) Lateral view
(b) Vertical view
Figure 3
Cross-section of main girder (mm).
Figure 4
Cross-section of main cross girder (mm).
### 2.2. Dynamic Characteristic of the Emergency Bridge
Based on a quasi-secant large-displacement formulation, a nonlinear continuous model for the analysis of long-span cable-stayed bridges was proposed; the model opens the possibility of developing more refined closed-form solutions for the analysis of cable-stayed structures [25]. A closed-form refined model was also proposed to consider the nonlinear response of cable-stayed structures [26]. To simplify the dynamic analysis of the emergency bridge, the equivalent modulus of elasticity (Ernst 1965) [27–30] was used to account for the sag effect of the main cable. The finite element model of the emergency bridge was established using ANSYS. In the model, the main girders and cross beams were modeled with BEAM4 elements. The mass and rotational inertia of the middle plate were lumped at the middle of the cross beam using MASS21 elements. The pylon, which has several cross-sections, was modeled with BEAM188 elements, and the main cables and suspenders were modeled with LINK10 elements. The boundary conditions of the finite element model are given in Table 1, and the three-dimensional model is shown in Figure 5. The hinge connections between the main girders were modeled by constraint coupling. A dynamic finite element analysis was performed using the Lanczos method in ANSYS; the resulting dynamic characteristics are listed in Table 2. The flutter stability of the bridge is governed mainly by the first-order vertical bending and torsion modes. The first-order vertical bending frequency is 0.509 Hz, and the first-order torsion frequency is 0.846 Hz. The first antisymmetric vertical bending mode and the first antisymmetric torsion mode of the stiffening girder are shown in Figures 6 and 7, respectively.
Table 1
Boundary conditions of finite element model of the emergency bridge.
| Location | UX | UY | UZ | ROTZ | ROTX | ROTY |
|---|---|---|---|---|---|---|
| Beam end | ⊕ | × | × | ⊕ | ⊕ | ⊕ |
| Bottom of pylon | × | × | × | × | × | × |
| Between main cable and top of pylon | CP | CP | CP | CP | CP | CP |
| Main cable at the anchor end | × | × | × | × | × | × |

Notations: UX: longitudinal direction; UY: vertical direction; UZ: lateral direction; ROTX: torsion about the longitudinal axis; ROTY: torsion about the vertical axis; ROTZ: torsion about the lateral axis; ⊕: degree of freedom released; ×: degree of freedom constrained; CP: degrees of freedom coupled.
Table 2
Dynamic characteristics results.
| Mode no. | Frequency (Hz) | Mode shape description |
|---|---|---|
| 1 | 0.509 | 1st-A-VB (MG) |
| 2 | 0.635 | 1st-S-VB (MG) |
| 3 | 0.846 | 1st-A-T (MG) |
| 4 | 1.028 | 2nd-S-VB (MG) |
| 5 | 1.056 | 1st-S-T (MG) |
| 6 | 1.469 | 2nd-S-T (MG) |
| 7 | 1.476 | B (MC) |
| 8 | 1.670 | 2nd-A-VB (MG) |
| 9 | 2.075 | 2nd-A-T (MG) |
| 10 | 2.146 | B (MC) |

Notations: H: horizontal; V: vertical; L: longitudinal; B: bending; T: torsion; F: floating; S: symmetric; A: antisymmetric; VB: vertical bending; MG: main girder; MC: main cables; P: pylon.
Figure 5
Finite element model of the emergency bridge.
Figure 6
The first antisymmetric vertical bending mode of the stiffening girder.
Figure 7
The first antisymmetric torsion mode of the stiffening girder.
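The sag correction applied to the main cable above can be reproduced directly from Ernst's formula, $E_{eq} = E / [1 + (\gamma l)^2 E / (12\sigma^3)]$. The sketch below is a minimal illustration; the cable modulus, weight density, projected length, and working stress are assumed values for demonstration, not properties reported for this bridge.

```python
def ernst_equivalent_modulus(E, gamma, l_h, sigma):
    """Ernst (1965) equivalent modulus of elasticity for a sagging cable.

    E     -- elastic modulus of the straight cable material (Pa)
    gamma -- weight of the cable per unit volume (N/m^3)
    l_h   -- horizontal projected length of the cable (m)
    sigma -- tensile stress in the cable (Pa)
    """
    return E / (1.0 + (gamma * l_h) ** 2 * E / (12.0 * sigma ** 3))

# Illustrative values only (not taken from the paper):
E = 1.95e11      # Pa, typical steel strand modulus
gamma = 7.85e4   # N/m^3, steel weight density
l_h = 75.0       # m, e.g., half of the 150 m span per cable segment
sigma = 6.0e8    # Pa, assumed working stress
print(f"E_eq = {ernst_equivalent_modulus(E, gamma, l_h, sigma):.3e} Pa")
```

The equivalent modulus decreases as the projected length grows or the stress drops, which is why the correction matters most for long, lightly tensioned cables.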
## 3. Wind Tunnel Tests
### 3.1. Design of the Sectional Model
Wind tunnel tests of the section model were carried out in the TJ-2 wind tunnel laboratory at Tongji University, China. The model scale is 1:10, and the parameters of the section model were obtained from the similarity requirements. The major parameters of the section model are listed in Table 3. The framework of the section model uses a stringer and cross-girder system welded from steel. The bridge deck of the section model was carved from timber, and the wind fairing and baseplate were made of ABS. The section-model test considers only two degrees of freedom, the vertical and the torsional. The section model of the emergency bridge is shown in Figure 8.
Table 3
Parameters of the section model.
| Parameter | Units | Bridge value | Scale ratio | Section model value |
|---|---|---|---|---|
| Height of girder | m | 0.75 | 1:10 | 0.075 |
| Width of girder | m | 4.23 | 1:10 | 0.423 |
| Mass per unit length | kg/m | 680 | 1:10² | 6.7042 |
| Mass moment of inertia per unit length | kg·m²/m | 1800 | 1:10⁴ | 0.252 |
| Radius of gyration | m | 1.63 | 1:10 | 0.18 |
| Fundamental frequency of vertical bending | Hz | 0.509 | 2.776 | 1.413 |
| Fundamental frequency of torsion | Hz | 0.846 | 2.777 | 2.349 |
| Frequency ratio of torsion to bending | / | 1.662 | / | 1.655 |
| Wind speed scale | m/s | / | 1:3.6 | / |
| Damping ratio of bending | % | 0.5 | / | 0.2 |
| Damping ratio of torsion | % | 0.5 | / | 0.3 |
Figure 8
Section model of the original emergency bridge scheme.
### 3.2. Flutter Performance of Original Emergency Bridge Scheme
In smooth flow, the flutter critical wind speed of the original emergency bridge scheme was measured at wind attack angles of −3°, 0°, and +3°. The data acquisition system displays the displacement of the section model in real time. The flutter critical wind speed is the wind speed at which the state of the section-model system changes from stable to unstable. The wind speed scale is the ratio of the wind speed in the wind tunnel test to the wind speed for the actual bridge; it equals the length scale divided by the bridge-to-model torsional frequency ratio (1:3.6 here) and was used to convert the measured speeds to the flutter critical wind speeds of the actual emergency bridge. The flutter critical wind speeds of the original scheme at the three attack angles are given in Table 4.
Table 4
Flutter critical wind speeds of original emergency bridge scheme.
| Wind attack angle (°) | Flutter critical wind speed, section model (m/s) | Flutter critical wind speed, bridge (m/s) | Flutter checking wind speed (m/s) |
|---|---|---|---|
| −3 | 2.85 | 10.26 | 20.1 |
| 0 | 2.625 | 9.45 | 20.1 |
| +3 | 2.50 | 9.0 | 20.1 |

Based on the design requirements of the emergency bridge, the flutter checking wind speed is 20.1 m/s. The variation of the torsional damping ratio of the original scheme's section-model system with wind speed is shown in Figure 9. The results show that the flutter critical wind speeds of the original cross-section are lower than the corresponding flutter checking wind speed at all three attack angles, indicating a risk of flutter. The flutter stability of the original scheme is therefore insufficient, and the flutter critical wind speed of the emergency bridge needs to be improved by structural measures or aerodynamic optimization schemes.
Figure 9
Variation of torsional damping ratio of the original scheme’s section model system with wind speed.
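The conversion from model to full-scale flutter speed described above can be reproduced in a few lines. This is a minimal sketch; the frequencies and measured speeds are the values reported in Tables 3 and 4.

```python
# Wind speed scale: length scale divided by the bridge-to-model torsional
# frequency ratio (equivalently, length scale times the model-to-bridge
# frequency ratio). Frequencies from Table 3.
length_scale = 1.0 / 10.0
f_torsion_bridge, f_torsion_model = 0.846, 2.349   # Hz
speed_scale = length_scale * (f_torsion_model / f_torsion_bridge)
print(f"wind speed scale = 1:{1.0 / speed_scale:.1f}")   # -> 1:3.6

# Convert the measured model flutter speeds (Table 4) to full scale
measured = {-3: 2.85, 0: 2.625, 3: 2.50}                 # m/s, section model
for angle, u_model in measured.items():
    print(f"attack angle {angle:+d} deg: {u_model / speed_scale:.2f} m/s")
```

Running this recovers the full-scale values of Table 4 (10.26, 9.45, and 9.00 m/s), confirming the 1:3.6 scale listed in Table 3.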
### 3.3. Flutter Optimization of the Emergency Bridge
In order to improve the flutter performance of the emergency bridge, different aerodynamic optimization schemes were developed, as shown in Figure 10. Since the section model considers only two degrees of freedom (vertical and torsional vibration), the tests yield only the flutter derivatives associated with these two motions. The flutter derivatives of the different aerodynamic schemes are shown in Figure 11. The influence of the different aerodynamic measures on the flutter performance was investigated for the four optimization schemes listed in Table 5. The bridge flutter critical wind speeds of the four schemes at wind attack angles of −3°, 0°, and +3° were measured in wind tunnel tests. Owing to space limitations, only the damping ratio and frequency of Scheme 3 are shown in Figure 12.
Table 5
Descriptions of four aerodynamic optimization schemes.
| Scheme | Aerodynamic measure | Description |
|---|---|---|
| 1 | Central slot | Remove the board between the girder sides |
| 2 | Wind fairing | Install wind fairings on both girder sides |
| 3 | Central slot + wind fairing | Combination of Schemes 1 and 2 |
| 4 | Bottom board + wind fairing | Add a board under the girder and install wind fairings |
Figure 10
Section models of four aerodynamic optimization schemes.
Figure 11
Bridge flutter derivatives of different schemes.
Figure 12
Variation of torsional damping ratio of the four schemes' section-model systems with wind speed.
To analyze the flutter characteristics of the emergency bridge in light of the flutter mechanism, the major flutter derivatives of the different schemes at a wind attack angle of 0° are shown in Figure 11. The flutter derivative $A_2^*$ is related to the aerodynamic damping generated by the torsional motion: a positive $A_2^*$ indicates aerodynamic negative damping, and vice versa. Figure 11(1) shows that $A_2^*$ of the original scheme and of Scheme 1 changes from negative to positive as the wind speed increases, indicating that aerodynamic negative damping is produced and that the emergency bridge is less stable against flutter. For the other aerodynamic optimization schemes, $A_2^*$ remains negative as the wind speed increases, so Scheme 3 is more stable against flutter than the other schemes. The flutter derivative $H_2^*$ is related to the stiffness and damping of the vertical bending motion induced by the torsional velocity, and an increase in $H_2^*$ correlates positively with flutter stability. Figure 11(2) shows that $H_2^*$ of all schemes increases as the wind speed increases, and that $H_2^*$ of Scheme 3 is larger than those of the other schemes. The flutter derivative $A_3^*$ is related to the torsional stiffness induced by the torsional motion; its effect on the flutter critical wind speed is generally small, and Figure 11(3) shows that all schemes follow similar trends as the wind speed increases. Based on this analysis of the major flutter derivatives, Scheme 3 has the best flutter stability among the four aerodynamic optimization schemes.
The variations of the torsional damping ratio of the different schemes' section-model systems with wind speed are shown in Figure 12. According to the wind-resistance design code for highway bridges (JTG/T D60-01-2004) [31], the safety factor of the emergency bridge is 1.2. Table 6 lists the bridge flutter critical wind speeds of the different aerodynamic optimization schemes at wind attack angles of −3°, 0°, and +3°. The flutter critical wind speed at the attack angle of +3° is smaller than those at −3° and 0°, so the value at +3° is used for comparison. The bridge flutter critical wind speed of the central-slot scheme is 5.8 m/s, so this measure alone does not improve the flutter critical wind speed. The wind fairing scheme reaches 19.0 m/s, a significant improvement that still cannot meet the design requirement. The combined measure of the central slot and wind fairings reaches 24.3 m/s, which enables the flutter performance of the emergency bridge to meet the requirements. The scheme that adds a board under the girder together with wind fairings reaches 20.3 m/s; it meets the checking wind speed, but its safety coefficient is less than 1.2.
Table 6
Test results of bridge flutter critical wind speed.
| Scheme | Wind attack angle (°) | Flutter critical wind speed, section model (m/s) | Flutter critical wind speed, bridge (m/s) |
|---|---|---|---|
| 1 | −3 | 2.25 | 8.1 |
| 1 | 0 | 1.75 | 6.3 |
| 1 | +3 | 1.62 | 5.8 |
| 2 | −3 | 5.50 | 19.8 |
| 2 | 0 | 5.35 | 19.3 |
| 2 | +3 | 5.28 | 19.0 |
| 3 | −3 | 9.40 | 34.2 |
| 3 | 0 | 7.95 | 28.8 |
| 3 | +3 | 6.45 | 24.3 |
| 4 | −3 | 5.85 | 21.1 |
| 4 | 0 | 5.69 | 20.5 |
| 4 | +3 | 5.65 | 20.3 |

Summarizing the above analysis, the combined aerodynamic measure of a central slot and wind fairings enables the flutter performance of the emergency bridge to meet the requirements, and it is therefore the optimal scheme among the four aerodynamic optimization schemes.
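The pass/fail judgements above follow from comparing each scheme's worst-case (+3°) full-scale flutter speed against the checking wind speed amplified by the code safety factor of 1.2. A short check, using only the numbers from Table 6:

```python
checking_speed = 20.1    # m/s, flutter checking wind speed
safety_factor = 1.2      # per JTG/T D60-01-2004
required = checking_speed * safety_factor   # 24.12 m/s

# Worst-case (+3 deg) full-scale flutter speeds from Table 6
schemes = {"1 (central slot)": 5.8,
           "2 (wind fairing)": 19.0,
           "3 (slot + fairing)": 24.3,
           "4 (bottom board + fairing)": 20.3}
for name, u_cr in schemes.items():
    if u_cr >= required:
        verdict = "meets the 1.2 safety factor"
    elif u_cr >= checking_speed:
        verdict = "meets checking speed only"
    else:
        verdict = "fails"
    print(f"Scheme {name}: {u_cr} m/s -> {verdict}")
```

Only Scheme 3 clears the 24.12 m/s threshold; Scheme 4 exceeds the bare checking speed but falls short of the 1.2 factor, matching the conclusion above.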
## 4. Flutter Analysis of the Emergency Bridge
### 4.1. Fundamental Theory of Flutter
Based on structural vibration theory, the equation of motion of a bridge structure in a steady airflow can be expressed as

$$M\ddot{X} + C\dot{X} + KX = F_a \tag{1}$$

where $M$, $C$, and $K$ are the mass, damping, and stiffness matrices of the bridge; $X$, $\dot{X}$, and $\ddot{X}$ are the displacement, velocity, and acceleration vectors of the bridge; and $F_a$ is the vector of aeroelastic forces on the bridge.

Following the flutter theory of R. H. Scanlan, for a bridge structure in a steady airflow the self-excited lift force $L_{se}$, drag force $D_{se}$, and pitching moment $M_{se}$ per unit length are defined by Eqs. (2a)-(2c), which fully consider the lateral motion of the bridge structure:

$$L_{se} = \frac{1}{2}\rho U^2 (2B)\left[ K H_1^* \frac{\dot h}{U} + K H_2^* \frac{B\dot\alpha}{U} + K^2 H_3^* \alpha + K^2 H_4^* \frac{h}{B} + K H_5^* \frac{\dot p}{U} + K^2 H_6^* \frac{p}{B} \right] \tag{2a}$$

$$D_{se} = \frac{1}{2}\rho U^2 (2B)\left[ K P_1^* \frac{\dot p}{U} + K P_2^* \frac{B\dot\alpha}{U} + K^2 P_3^* \alpha + K^2 P_4^* \frac{p}{B} + K P_5^* \frac{\dot h}{U} + K^2 P_6^* \frac{h}{B} \right] \tag{2b}$$

$$M_{se} = \frac{1}{2}\rho U^2 (2B^2)\left[ K A_1^* \frac{\dot h}{U} + K A_2^* \frac{B\dot\alpha}{U} + K^2 A_3^* \alpha + K^2 A_4^* \frac{h}{B} + K A_5^* \frac{\dot p}{U} + K^2 A_6^* \frac{p}{B} \right] \tag{2c}$$

where $\rho$, $U$, and $B$ are the air density, the mean wind speed, and the width of the bridge deck, respectively; $K = \omega B / U$ is the reduced circular frequency; $h$, $p$, and $\alpha$ are the vertical, lateral, and torsional displacements of the bridge; the dot denotes differentiation with respect to time; and $H_i^*$, $P_i^*$, and $A_i^*$ ($i = 1, \dots, 6$) are the flutter derivatives related to the vertical, lateral, and torsional directions, respectively. The flutter derivatives depend only on the shape of the main girder and can be obtained from wind tunnel tests or CFD.

In the finite element framework, the distributed aerodynamic forces can be converted into equivalent nodal loads acting on the bridge element, so the aeroelastic forces for element $e$ can be expressed as

$$F_{ae}^e = K_{ae}^e X_e + C_{ae}^e \dot X_e \tag{3}$$

where $X_e$ and $\dot X_e$ are the nodal displacement and velocity vectors, and $K_{ae}^e$ and $C_{ae}^e$ are the local aeroelastic stiffness and damping matrices. The Matrix27 element of ANSYS can model either a stiffness or a damping component of the bridge structure, so the element aeroelastic stiffness and damping matrices can be expressed as

$$K_{ae}^e = \begin{bmatrix} K_{ae1}^e & 0 \\ 0 & K_{ae1}^e \end{bmatrix}, \qquad C_{ae}^e = \begin{bmatrix} C_{ae1}^e & 0 \\ 0 & C_{ae1}^e \end{bmatrix} \tag{4}$$

$$K_{ae1}^e = a \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_6^* & P_4^* & B P_3^* & 0 & 0 \\ 0 & H_6^* & H_4^* & B H_3^* & 0 & 0 \\ 0 & B A_6^* & B A_4^* & B^2 A_3^* & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{5}$$

$$C_{ae1}^e = b \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_5^* & P_1^* & B P_2^* & 0 & 0 \\ 0 & H_5^* & H_1^* & B H_2^* & 0 & 0 \\ 0 & B A_5^* & B A_1^* & B^2 A_2^* & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \tag{6}$$

where $a = \rho U^2 K^2 L_e / 2$, $b = \rho U B K L_e / 2$, and $L_e$ is the length of element $e$.

Assembling the element matrices gives the global aeroelastic forces

$$F_{ae} = K_{ae} X + C_{ae} \dot X \tag{7}$$

where $K_{ae}$ and $C_{ae}$ are the global aeroelastic stiffness and damping matrices. Substituting Eq. (7) into Eq. (1) leads to the equation of motion of the bridge structure:

$$M\ddot X + (C - C_{ae})\dot X + (K - K_{ae})X = 0 \tag{8}$$

Incorporating the Rayleigh structural damping assumption $C = \alpha M + \beta K$ yields the equation of motion used for flutter analysis:

$$M\ddot X + (C' - C'_{ae})\dot X + (K - K_{ae})X = 0 \tag{9}$$

where $C'$ and $C'_{ae}$ are the modified structural and aeroelastic damping matrices, respectively:

$$C' = \alpha M + \beta (K - K_{ae}) \tag{10}$$

$$C'_{ae} = C_{ae} - \beta K_{ae} \tag{11}$$

Here $\alpha$ and $\beta$ are the proportionality coefficients for Rayleigh damping, obtained by the least squares fit

$$\min_{\alpha,\beta} \; \sum_{j=1}^{m} \left( 2\xi_j \omega_j - \alpha - \beta \omega_j^2 \right)^2 \tag{12}$$

where $\xi_j$ is the damping ratio of the $j$th mode and $m$ is the total number of modes considered.

Eq. (9) represents an integrated system that includes the effect of aeroelasticity, parameterized by wind speed and response frequency, and it can be solved by the damped complex eigenvalue analysis method. If the bridge system has $n$ degrees of freedom, solving Eq. (9) yields $n$ conjugate pairs of complex eigenvalues and eigenvectors. The $j$th conjugate pair of complex eigenvalues can be expressed as

$$\lambda_j = \sigma_j \pm i\omega_j \tag{13}$$

where $i = \sqrt{-1}$, and $\sigma_j$ and $\omega_j$, the real and imaginary parts of the $j$th pair, are the damping and the vibration frequency of the bridge system, respectively. When the real parts of all eigenvalues are negative, the bridge system is dynamically stable; otherwise it is unstable. When a real part becomes zero, the corresponding wind speed is the flutter critical wind speed and the bridge system is in the critical state. As shown in Eqs. (5) and (6), the aeroelastic stiffness and damping matrices depend on wind speed, response frequency, and reduced frequency, only two of which are independent. Therefore, a sweep-and-iterate procedure must be employed to identify the flutter instability state. In this study, the flutter program for the emergency bridge was implemented in ANSYS using the APDL scripting language.
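The sweep-and-iterate procedure can be illustrated on a reduced two-degree-of-freedom (heave-torsion) section system. The sketch below is a schematic of the method only, not the paper's APDL implementation: the frequencies, mass properties, and deck width follow Tables 2 and 3 where available, but the flutter-derivative functions are invented placeholders standing in for the wind-tunnel-identified curves, so the printed speed is illustrative rather than the bridge's actual result.

```python
import numpy as np

m, I = 680.0, 1800.0               # mass, mass moment of inertia per metre
f_h, f_a = 0.509, 0.846            # first vertical-bending / torsion freqs (Hz)
B, rho, zeta = 4.23, 1.225, 0.005  # deck width, air density, damping ratio

w_h, w_a = 2 * np.pi * f_h, 2 * np.pi * f_a
M = np.diag([m, I])
K = np.diag([m * w_h**2, I * w_a**2])

# Rayleigh coefficients from Eq. (12): fit 2*xi*w = alpha + beta*w**2
A_fit = np.column_stack([np.ones(2), np.array([w_h, w_a]) ** 2])
alpha, beta = np.linalg.lstsq(A_fit, 2 * zeta * np.array([w_h, w_a]),
                              rcond=None)[0]
C = alpha * M + beta * K

def derivs(Kr):
    """Placeholder H1*-H4*, A1*-A4*; a real run interpolates test data."""
    H1, H2, H3, H4 = -2.0 / Kr, -0.5 / Kr, -2.0 / Kr, 0.0
    A1, A2, A3, A4 = 0.4 / Kr, 0.05 * (1.0 / Kr - 1.5), 0.3 / Kr, 0.0
    return H1, H2, H3, H4, A1, A2, A3, A4

def least_damped(U, w):
    """Least-damped complex eigenvalue at speed U, response frequency w."""
    Kr = w * B / U                           # reduced frequency K = wB/U
    H1, H2, H3, H4, A1, A2, A3, A4 = derivs(Kr)
    # Force coefficients of Eqs. (2a)/(2c) with K = wB/U substituted
    Cae = rho * w * np.array([[B**2 * H1, B**3 * H2],
                              [B**3 * A1, B**4 * A2]])
    Kae = rho * w**2 * np.array([[B**2 * H4, B**3 * H3],
                                 [B**3 * A4, B**4 * A3]])
    Minv = np.linalg.inv(M)
    A_ss = np.block([[np.zeros((2, 2)), np.eye(2)],
                     [-Minv @ (K - Kae), -Minv @ (C - Cae)]])
    lam = np.linalg.eigvals(A_ss)
    lam = lam[lam.imag > 0]                  # one of each conjugate pair
    return lam[np.argmax(lam.real)]

U_cr = None
for U in np.arange(5.0, 80.0, 0.5):          # sweep the wind speed
    w, lam = w_a, None
    for _ in range(100):                     # iterate on response frequency
        lam = least_damped(U, w)
        if abs(lam.imag - w) < 1e-8:
            break
        w = lam.imag
    if lam.real >= 0.0:                      # Eq. (13): damping crosses zero
        U_cr = U
        break
print("flutter critical wind speed ~", U_cr, "m/s")
```

A production run would replace `derivs` with interpolation of the fitted curves of Figure 13 and assemble the full BEAM4/LINK10 system matrices with Matrix27 aeroelastic elements instead of this 2-DOF reduction.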
### 4.2. Flutter Critical Wind Speed of the Emergency Bridge Based on Numerical Calculation
Assuming a structural damping ratio of 0.5% for the emergency bridge, with the flutter derivatives obtained from the wind tunnel tests, the flutter program was compiled in ANSYS using the APDL language. The bridge flutter critical wind speeds of the optimal aerodynamic scheme at attack angles of −3°, 0°, and +3° were calculated. Polynomial fitting was used to process the flutter derivative data of the optimal scheme so that the derivatives can conveniently be invoked by the flutter analysis program; the fitted derivative curves are shown in Figure 13.
Figure 13
Flutter derivatives of the optimal aerodynamic scheme at attack angle of 0°.
As shown in Figure 14, the damping ratio and frequency at the different attack angles change as the wind speed increases, and the damping ratio of the third-order mode is the first to decrease to zero. The flutter of the emergency bridge is therefore controlled by the first-order antisymmetric torsion mode. For the optimal scheme, the bridge flutter critical wind speeds at wind attack angles of −3°, 0°, and +3° were calculated by the flutter program; the results, given in Table 7, agree well with the wind tunnel test results, with a maximum error of 9.5%. The flutter program can therefore be used for the flutter analysis of an emergency bridge.
Table 7
Bridge flutter critical wind speeds.
| Wind attack angle (°) | Flutter critical wind speed, numerical (m/s) | Flutter critical wind speed, wind tunnel (m/s) | Error (%) |
|---|---|---|---|
| −3 | 31.3 | 34.2 | 8.48 |
| 0 | 27.2 | 28.8 | 5.56 |
| +3 | 26.85 | 24.3 | 9.50 |
Figure 14
Variation of damping ratio of the emergency bridge with wind speed at attack angles of −3°, 0°, and +3°.
The flutter critical wind speed of the emergency bridge at the attack angle of +3° is lower than those at −3° and 0°. Although the flutter critical wind speed exceeds the corresponding flutter checking wind speed, the emergency bridge may still be unsafe in regions of complex topography, so other structural measures should be applied to further improve its flutter stability.
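The polynomial fitting of the flutter derivative data mentioned above is straightforward to reproduce. In this sketch the sample points are invented for illustration; the actual identified derivatives are those plotted in Figure 13.

```python
import numpy as np

# Illustrative identified values of A2* versus reduced wind speed
# U_red = U / (f * B); invented numbers, not the Figure 13 data.
U_red = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
A2_star = np.array([-0.01, -0.03, -0.06, -0.10, -0.15, -0.21])

coeffs = np.polyfit(U_red, A2_star, deg=3)   # cubic least-squares fit
A2_fit = np.poly1d(coeffs)

# The flutter program then evaluates the smooth fit at any reduced
# wind speed encountered during the sweep-and-iterate procedure.
print(f"A2*(U_red = 7.3) ~ {A2_fit(7.3):.4f}")
```

A smooth fit avoids the convergence problems that raw, scattered derivative data can cause during the frequency iteration.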
### 4.3. Effect of Wind-Resistance Cable Structure on the Flutter Critical Wind Speed of the Emergency Bridge
The flutter derivatives are dimensionless parameters that depend only on the sectional shape [32, 33] and reflect the aerodynamic characteristics of the section. If the influence of the wind-resistance cables on the flow field is neglected, the flutter derivatives remain unchanged as long as the section shape is unchanged. In order to improve the safety of the emergency bridge in regions of complex topography, four wind-resistance cable schemes were developed, as shown in Figure 15. Using the flutter program, the bridge flutter critical wind speeds of the different wind-resistance cable schemes were calculated, and the optimal scheme was determined.
Scheme A: wind-resistance cables are added symmetrically in the horizontal direction at the locations L/4, L/8, and 3L/16 of the emergency bridge (L is the span). The initial strain of the wind-resistance cable is 0.003, and the cables are anchored to the rock.
Schemes B, C, and D: an arch-shaped wind-resistance cable is installed on both sides of the emergency bridge and connected to the main girder by tension rods. At mid-span, the tension rod between the wind-resistance cable and the main girder is 1 m long; elsewhere, the rod length depends on the rise-span ratio of the emergency bridge. The initial strain of the wind-resistance cable is 0.003. The angles between the wind-resistance cable plane and the bridge deck for Schemes B, C, and D are 0°, 45°, and 90°, respectively.
Figure 15
Finite element model of different wind-resistance cable schemes.
The dynamic characteristics and flutter critical wind speeds of the different wind-resistance cable schemes at an attack angle of 0° are given in Table 8. The results indicate that the flutter critical wind speed is positively correlated with the torsional frequency. The critical wind speed of Scheme D, 41.98 m/s, is larger than those of the other three wind-resistance cable schemes; compared with the emergency bridge without wind-resistance cables, the bridge flutter critical wind speed increased by 31.4%. Considering convenience of construction and effectiveness of erection, Scheme A was selected. The flutter critical wind speed of Scheme A meets the requirements of flutter stability, so the emergency bridge possesses sufficient security against flutter.
Table 8
Dynamic characteristics, flutter critical wind speed, and flutter critical frequency of four wind-resistance cable schemes at attack angle of 0°.
| Scheme | Fundamental frequency of vertical bending (Hz) | Fundamental frequency of lateral bending (Hz) | Fundamental frequency of torsion (Hz) | Torsion-to-bending frequency ratio | Flutter critical wind speed (m/s) | Flutter critical frequency (Hz) |
|---|---|---|---|---|---|---|
| A | 0.702 | 1.332 | 1.040 | 1.481 | 37.22 | 0.8796 |
| B | 0.542 | 1.355 | 0.848 | 1.565 | 29.17 | 0.6513 |
| C | 0.506 | 1.347 | 0.768 | 1.512 | 38.13 | 0.8563 |
| D | 0.736 | 1.345 | 1.12 | 1.522 | 41.98 | 0.9338 |
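The initial strain of 0.003 assigned to the wind-resistance cables corresponds to a pretension of $\varepsilon E A$ (the LINK10 element accepts this initial strain as a real constant). A small sketch of the conversion follows; the cable modulus and cross-sectional area are assumed values, since the paper does not report them.

```python
def cable_pretension(strain, E, A):
    """Pretension force (N) implied by an initial strain in a cable."""
    return strain * E * A

# Assumed values for illustration only:
E = 1.95e11   # Pa, typical steel strand modulus
A = 1.0e-3    # m^2, roughly a 36 mm diameter solid section
print(f"pretension ~ {cable_pretension(0.003, E, A) / 1e3:.0f} kN")
```

With these assumptions, each cable would carry roughly 585 kN of pretension, which is what stiffens the deck laterally and raises the torsional frequency in Table 8.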
## 5. Conclusions
The flutter critical wind speed of the original emergency bridge scheme is lower than the corresponding flutter checking wind speed at wind attack angles of −3°, 0°, and +3°, indicating a risk of flutter. Four aerodynamic optimization schemes were therefore developed from the original scheme. According to the wind tunnel test results, the combined aerodynamic measure of a central slot and wind fairings is the optimal scheme: it enables the flutter critical wind speed and the safety coefficient of the bridge to meet the design requirements of the emergency bridge at all three attack angles. A flutter program was compiled using the APDL language, and the flutter critical wind speeds it calculates agree well with the wind tunnel test results, with a maximum error of only 9.5%. To further improve the flutter stability of the emergency bridge, the critical wind speeds of different wind-resistance cable schemes were calculated with the flutter analysis program. The results show that the scheme with horizontal wind-resistance cables meets the requirement of flutter stability and is also convenient for the construction and erection of an emergency bridge.
---
*Source: 1013025-2019-03-17.xml* | 2019 |
# Nuclear Nox4 Role in Stemness Power of Human Amniotic Fluid Stem Cells
**Authors:** Tullia Maraldi; Marianna Guida; Manuela Zavatti; Elisa Resca; Laura Bertoni; Giovanni B. La Sala; Anto De Pol
**Journal:** Oxidative Medicine and Cellular Longevity
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101304
---
## Abstract
Human amniotic fluid stem cells (AFSC) are an attractive source for cell therapy due to their multilineage differentiation potential and accessibility advantages. However, the clinical application of human stem cells largely depends on their capacity to expand in vitro, since there is extensive donor-to-donor heterogeneity. Reactive oxygen species (ROS) and cellular oxidative stress are involved in many physiological and pathophysiological processes of stem cells, including pluripotency, proliferation, differentiation, and stress resistance. The mode of action of ROS also depends on the localization of their target molecules, so the modifications induced by ROS can be classified by the cellular compartments they affect. The NAD(P)H oxidase family, particularly Nox4, is known to produce ROS in the nucleus. In the present study we show that Nox4 nuclear expression (nNox4) depends on the donor and correlates with the expression of transcription factors involved in stemness regulation, such as Oct4, SSEA-4, and Sox2. Moreover, nNox4 is linked with the nuclear localization of redox-sensitive transcription factors, such as Nrf2 and NF-κB, and with the differentiation potential. Taken together, these results suggest that nNox4 regulation may have important effects on stem cell capability through the modulation of transcription factors and DNA damage.
---
## Body
## 1. Introduction
Numerous studies have demonstrated that MSC populations exhibit donor-to-donor heterogeneity. This fact could be attributed to several factors, including the methods used to culture, select, and expand the population and the age of the donor [1]. For amniotic fluid stem cells (AFSC), the harvesting protocol is well established in clinical practice, as is the selection method, based on the expression of the c-Kit surface marker [2]. Moreover, the donor age range can be considered quite restricted, since the sample is usually obtained in clinical practice for cytogenetic analysis between the 16th and the 20th week of pregnancy. However, like other MSCs [1], AFSC may display heterogeneity among donors.
Regulation of ROS has a vital role in maintaining the "stemness" and the differentiation potential of stem cells, as well as in the progression of stem-cell-associated diseases [3]. ROS-mediated proliferation and senescence in stem/progenitor cells may be determined by the amount, duration, and location of ROS generation, which activates specific redox-signaling pathways [4]. Indeed, redox changes in different areas and the resulting changes in ROS levels may represent an important mechanism of intracellular communication between different cellular compartments [5]. The nucleus itself contains a number of proteins with oxidizable thiols that are essential for transcription, chromatin stability, and nuclear protein import and export, as well as DNA replication and repair [5]. Several transcription factors are thought to be involved in the redox-dependent modulation of gene expression [5].
Recent advances indicate that the ROS-producing reduced nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (Nox) system is an important trigger for differentiating ESCs toward the cardiomyocyte lineage [6–10]. Nox4 plays an important role in the differentiation of mouse ESCs toward the smooth muscle cell (SMC) lineage when it translocates to the nucleus and generates H2O2 [11]. The subcellular localization of Nox4 is likely to be especially important, given its constitutive activity, unlike isoforms such as Nox1 or Nox2 that require agonist activation. Nox4 has been reported to be variably present in the endoplasmic reticulum [12, 13], mitochondria [14], cytoskeleton [15], plasma membrane [16], and nucleus [17] in different cell types. Recently we demonstrated that Nox4 can be detected in the nuclei of human AFSC, depending on the cell metabolism status [18].
It is therefore of interest to better understand how ROS homeostasis modulates stem cell self-renewal and differentiation. Certain proteins can act as "redox sensors" through redox modifications of their cysteine residues, which are critically important in the control of protein function. Signaling molecules such as FoxOs, APE1/Ref-1, Nrf2, ATM, HIFs, NF-κB, p38, and p53 are subject to redox modifications and could be involved in the regulation of stem cell self-renewal and differentiation [19].
The aim of this study was to assess whether nuclear Nox4-generated ROS can modulate the presence and the localization in the nuclear domain of transcription factors crucial for stemness capability. For this purpose we performed confocal analysis of immunofluorescence experiments and coimmunoprecipitation assays.
Furthermore, we investigated whether the variation in nuclear Nox4 (nNox4) levels observed among the AFSC samples correlated with the expression of typical stem cell markers and with the differentiation potential. These data indicate that nNox4-derived ROS are involved in AFSC stemness regulation and that nNox4 could be considered a marker of stem potential.
## 2. Materials and Methods
### 2.1. Cell Culture
Amniocentesis samples (6 backup flasks obtained from different donors) were provided by the Laboratorio di Genetica, Ospedale Santa Maria Nuova (Reggio Emilia, Italy). All samples were collected with the informed consent of the patients (mother's age ≥ 35) according to Italian law and the Ethical Committee guidelines.
Human AFSC were isolated as previously described by De Coppi et al. [2]. Human amniocentesis cultures were harvested by trypsinization and subjected to c-Kit immunoselection using MACS technology (Miltenyi Biotec, Germany). AFSC were subcultured routinely at a 1:3 dilution and were not allowed to expand beyond 70% confluence. AFSC were grown in culture medium (αMEM supplemented with 20% fetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin) (all reagents from EuroClone Spa, Italy) at 37°C and 5% CO2 [20].
### 2.2. Nox4 Silencing
Retroviral supernatants were produced according to the HuSH shRNA plasmid panels (29-mer) application guide; AM12 cells were transfected with an empty vector (pRS vector, TR20003), a scrambled vector (HuSH 29-mer noneffective pRS vector, TR30012), and four NOX4 gene-specific shRNA expression pRS vectors (TI311637, TI311638, TI311639, and TI311640) for 48 h [21]. Retroviral supernatants were then centrifuged at 2000 ×g for 5 minutes and used to infect the target cells (AFSC). Where indicated, cells were infected with NOX4 shRNA retroviral vectors, the empty vector, or the scrambled vector. Forty-eight hours after infection, cells were exposed to 2 μg/mL puromycin (Sigma-Aldrich, St. Louis, MO, USA) for 24 hours and subjected to evaluation of Nox4 expression by western blotting and confocal analysis and to detection of intracellular ROS levels.
### 2.3. Differentiation Protocols
Osteogenic differentiation was obtained by maintaining cells for 3 weeks at 37°C and 5% CO2 in osteogenic medium: culture medium supplemented with 100 nM dexamethasone, 10 mM β-glycerophosphate, and 50 μg/mL ascorbic acid-2-phosphate (Sigma-Aldrich, St. Louis, MO, USA). Coverslips were then stained with alizarin red S for light microscopic observation.
Chondrogenic differentiation: cells were cultured as a monolayer using a medium containing DMEM high glucose, 100 nM dexamethasone and 10 ng/mL TGFβ1 (Sigma-Aldrich, St. Louis, USA), 10 μM 2P-ascorbic acid, 1% v/v sodium pyruvate (Invitrogen, Italy), and 50 mg/mL ITS premix (BD, Franklin Lakes, NJ, USA) for 3 weeks.
Neural differentiation protocol [22]: cells were seeded at 60% confluence and maintained in neural differentiation medium (culture medium supplemented with 10% FBS and 20 μM retinoic acid (RA) in dimethyl sulfoxide (DMSO), both from Sigma-Aldrich, St. Louis, MO, USA) for up to 4 weeks at 37°C and 5% CO2.
### 2.4. Preparation of Cell Extracts
Cell extracts were obtained as described by Maraldi et al. [23]. Briefly, subconfluent cells were extracted by addition of AT lysis buffer (20 mM Tris-Cl, pH 7.0; 1% Nonidet P-40; 150 mM NaCl; 10% glycerol; 10 mM EDTA; 20 mM NaF; 5 mM sodium pyrophosphate; and 1 mM Na3VO4) and freshly added Sigma-Aldrich protease inhibitor cocktail at 4°C for 30 min. Lysates were sonicated, cleared by centrifugation, and immediately boiled in SDS sample buffer or used for immunoprecipitation experiments, as described below.
### 2.5. Immunoprecipitation and Electrophoresis
Immunoprecipitation was performed as reported by Cenni et al. [24]. Equal amounts of precleared lysates (pcl), whose protein concentration was determined by the Bradford method, were incubated overnight with rabbit anti-Nox4 (Novus Biologicals, CO, USA) and mouse anti-sc-35 (Sigma-Aldrich) (3 μg each). The two samples were then treated with 30 μL of 50% (v/v) protein A/G agarose slurry (GE Healthcare Bio-sciences, Uppsala, Sweden) at 4°C with gentle rocking for 1 h. Pellets were washed twice with 20 mM Tris-Cl, pH 7.0; 1% Nonidet P-40; 150 mM NaCl; 10% glycerol; 10 mM EDTA; 20 mM NaF; and 5 mM sodium pyrophosphate, once with 10 mM Tris-Cl, pH 7.4, boiled in SDS sample buffer, and centrifuged. Supernatants were loaded onto an SDS-polyacrylamide gel, blotted onto Immobilon-P membranes (Millipore, Waltham, MA, USA), processed by western blot with the indicated antibodies, and detected with the Supersignal substrate chemiluminescence detection kit (Pierce, Rockford, IL, USA). Signal quantification was obtained by chemiluminescence detection on a Kodak Image Station 440CF and analysis with the Kodak 1D Image software.
### 2.6. Nuclei Purification
Human AFSC nuclei were purified as reported by Cenni et al. [25]. Briefly, 400 μL of nuclear isolation buffer (10 mM Tris-HCl, pH 7.8, 1% Nonidet P-40, 10 mM β-mercaptoethanol, 0.5 mM phenylmethylsulfonyl fluoride, 1 μg/mL aprotinin and leupeptin, and 5 mM NaF) was added to 5 × 106 cells for 8 min on ice. Milli-Q water (400 μL) was then added to swell cells for 3 min. Cells were sheared by passages through a 22-gauge needle. Nuclei were recovered by centrifugation at 400 ×g at 4°C for 6 min and washed once in 400 μL of washing buffer (10 mM Tris-HCl, pH 7.4, and 2 mM MgCl2, plus inhibitors as described earlier in the text). Supernatants (containing the cytosolic fractions) were further centrifuged for 30 min at 4000 ×g. Isolated nuclear and cytoplasmic extracts were finally lysed in AT lysis buffer, sonicated, and cleared by centrifugation.
### 2.7. Western Blot
Western blotting was performed as described by Hanson et al. [26]. Protein extracts, quantified by the Bradford protein assay (Bio-Rad Laboratories, CA, USA), underwent SDS-polyacrylamide gel electrophoresis and were transferred to Immobilon-P membranes. The following antibodies were used: rabbit anti-NF-κB, rabbit anti-β-catenin, goat anti-matrin3, and goat anti-actin (Santa Cruz Biotechnology, Santa Cruz, CA, USA), diluted 1 : 500; rabbit anti-cyclin E2, anti-cyclin D1, anti-cyclin B1, anti-p21, anti-pMyt1, and anti-Oct4, and mouse anti-cyclin A1 and anti-SSEA-4 (Cell Signalling Technology, Beverly, MA, USA), mouse anti-tubulin and mouse anti-sc-35 (Sigma-Aldrich, St. Louis, MO, USA), rabbit anti-Nrf2 (Abcam, Cambridge, UK), rabbit anti-Nox4 (Novus Biologicals, CO, USA), mouse anti-pH2A (Ser139), mouse anti-CD90, and mouse anti-CD105 (Millipore, Billerica, MA, USA), and rabbit anti-CD73 (Genetex, Irvine, CA, USA), diluted 1 : 1000; peroxidase-labelled anti-rabbit, anti-mouse, and anti-goat secondary antibodies (Pierce Antibodies, Thermo Scientific, Rockford, IL, USA), diluted 1 : 3000. Antibody dilutions were performed in TBS-T, pH 7.6, containing 3% BSA. The membranes were visualized using the SuperSignal substrate chemiluminescence detection kit (Pierce, Rockford, IL, USA). The anti-actin antibody was used as a control of protein loading.
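To make the loading-control step concrete, the following is a minimal sketch of the normalization it implies; the band intensities and the choice of sample s1 as reference are illustrative assumptions, not the study's densitometry readouts.

```python
# Hypothetical densitometry values: each target band is divided by the actin
# band of the same lane, then expressed relative to donor s1.
target = {"s1": 1520.0, "s2": 1310.0, "s3": 880.0, "s4": 610.0}  # target-band signal per lane (assumed)
actin = {"s1": 1010.0, "s2": 990.0, "s3": 1020.0, "s4": 1000.0}  # actin loading-control signal per lane (assumed)

normalized = {s: target[s] / actin[s] for s in target}  # correct for unequal loading
reference = normalized["s1"]
for s, v in normalized.items():
    print(f"{s}: {v / reference:.2f} x s1")
```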
### 2.8. Senescence Assay
Senescent cells were visualized in 45-day cultures with the Senescence β-Galactosidase Staining Kit (Cell Signalling Technology, Beverly, MA, USA) following the manufacturer’s instructions. This test detects β-galactosidase activity at pH 6, a known characteristic of senescent cells that is not found in presenescent, quiescent, or immortal cells.
### 2.9. Confocal Microscopy
Undifferentiated AFSC were fixed for 20 min in 4% ice-cold paraformaldehyde and then permeabilized with 0.1% Triton X-100 in ice-cold phosphate-buffered saline (PBS) for 5 min. Permeabilized samples were then blocked with 3% bovine serum albumin (BSA) in PBS for 30 min at room temperature (RT) and incubated with primary antibodies (Ab). Mouse anti-sc-35 and mouse anti-glial fibrillary acidic protein (GFAP) (Sigma-Aldrich, St. Louis, MO, USA), rabbit anti-human collagen type II (Genetex, Irvine, CA, USA), rabbit anti-coilin (Abcam, Cambridge, UK), goat anti-aggrecan, rabbit anti-Nox4, rabbit anti-Oct4, goat anti-FoxO1, and goat anti-Sox2 (Santa Cruz Biotechnology, Santa Cruz, CA, USA) (diluted 1 : 50), and mouse anti-Oct4 (Millipore, Billerica, MA, USA), mouse anti-βtubulin III (Cell Signalling Technology, Beverly, MA, USA), and mouse anti-pH2A (Ser139) (Millipore, Billerica, MA, USA) (diluted 1 : 100), in PBS containing 3% BSA for 1 h at RT, were used as primary antibodies. Secondary Ab were diluted 1 : 200 in PBS containing 3% BSA (goat anti-mouse Alexa 647, goat anti-rabbit Alexa 488, and donkey anti-goat Alexa 488). After washing in PBS, samples were stained with 1 μg/mL DAPI in H2O for 1 min and then mounted with antifading medium (0.21 M DABCO and 90% glycerol in 0.02 M Tris, pH 8.0). Negative controls consisted of samples incubated with the secondary antibody only, without the primary antibody. In the case of a double staining with the sc-35 antibody and, for example, Nox4, we performed a first overnight incubation with anti-Nox4 and then, separately, a 1 h incubation with anti-sc-35, in order to avoid nonspecific antibody interactions. Confocal imaging was performed using a Nikon A1 confocal laser scanning microscope as previously described [27]. Spectral analysis was performed to exclude overlap between the two signals or the influence of autofluorescence background on the fluorochrome signals, as previously shown [28]. The confocal serial sections were processed with ImageJ software to obtain three-dimensional projections, as previously described [29]. Image rendering was performed using Adobe Photoshop software.
### 2.10. Nuclear ROS Imaging
Nuclear ROS were detected with the nuclear-localized fluorescent probe for H2O2, nuclear peroxy emerald 1 (NucPE1) [30–33]. For all experiments, 5 μM solutions of NucPE1 (from 5 mM stocks in DMSO) were made in PBS/glucose. The cells were kept in an incubator (37°C, 5% CO2) during the course of all experiments. The probe was incubated for a total of 30 min. Confocal fluorescence imaging studies were performed with a Nikon A1 confocal laser scanning microscope. Excitation of NucPE1-loaded cells at 488 nm was carried out with an Ar laser, and emission was collected at 535 nm. All images in an experiment were collected using identical microscope settings. Image analysis was performed in ImageJ.
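The authors performed this analysis in ImageJ; as a sketch of how a per-nucleus NucPE1 readout can be scripted, the following is a hypothetical scikit-image equivalent (the file names, Otsu segmentation on a DAPI channel, and size filter are all assumptions, not the study's macro).

```python
# Minimal sketch: segment nuclei on a DAPI channel, then measure the mean
# NucPE1 (535 nm) intensity inside each nucleus as a nuclear H2O2 readout.
import numpy as np
from skimage import io, filters, measure, morphology

dapi = io.imread("dapi.tif")      # hypothetical registered single-channel images
nucpe1 = io.imread("nucpe1.tif")

mask = dapi > filters.threshold_otsu(dapi)            # threshold the nuclear stain
mask = morphology.remove_small_objects(mask, min_size=100)  # drop debris (assumed cutoff)
labels = measure.label(mask)                          # one label per nucleus

props = measure.regionprops(labels, intensity_image=nucpe1)
means = np.array([p.mean_intensity for p in props])   # mean probe signal per nucleus
print(f"{len(means)} nuclei, mean NucPE1 intensity {means.mean():.1f} ± {means.std():.1f}")
```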
### 2.11. Statistical Analysis
In vitro experiments were performed in triplicate. For quantitative comparisons, values are expressed as mean ± SD (standard deviation) based on triplicate analyses for each sample. To test the significance of the observed differences among the study groups, a one-way analysis of variance (ANOVA) with post hoc Bonferroni correction was applied. A P value of <0.05 was considered statistically significant.
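The analysis described above can be sketched as follows; this is a minimal illustration, assuming triplicate measurements per donor sample, with placeholder values rather than the study's data.

```python
# One-way ANOVA across groups, followed by Bonferroni-corrected pairwise t-tests.
from itertools import combinations
from scipy import stats

groups = {
    "s1": [1.00, 1.05, 0.98],  # hypothetical triplicates, not the study's values
    "s2": [1.20, 1.15, 1.22],
    "s3": [1.60, 1.55, 1.63],
    "s4": [2.10, 2.05, 2.15],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4g}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)  # Bonferroni: divide alpha by the number of comparisons
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: P = {p:.4g} ({verdict} at Bonferroni alpha {alpha_corrected:.4f})")
```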
## 3. Results
### 3.1. Nox4 in the Nucleus of AFSC
We recently showed, using antibodies from Santa Cruz, Abcam, or Novus, that the Nox4 signal is mostly localized inside the nuclei of AFSC [18]. In particular, AFSC expressing Nox4 in the nucleus show a spotted, punctate pattern similar to that observed for nuclear domains such as speckles or Cajal bodies. To test whether nuclear Nox4 (nNox4) resides inside nuclear domains, colocalization assays were performed using antibodies directed against sc-35, a speckle marker, or coilin, a Cajal body marker. Confocal analysis (Figure 1(a)) of double staining with anti-Nox4 (green) and anti-sc-35 (red) or anti-coilin (red) demonstrates that Nox4 associates with nuclear speckle domains rather than with Cajal bodies, as shown by the values of Pearson’s correlation coefficient (Rp) and of the overlap coefficient (R), which provide information about the similarity of shape between the two patterns (Nox4 and sc-35). The correlation coefficient can range from −1 to 1: a value of 1 means that the patterns are perfectly similar, while a value of −1 means that they are perfectly opposite. An overlap coefficient around 0.8 indicates very good colocalization of the two signals.
Figure 1: Nox4 nuclear localization and interaction in AFSC. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and coilin (red) or sc-35 (red) signals. Colocalization graph reporting Pearson’s and overlap coefficients. Scale bar: 10 μm. (b) Total lysates (TL) were immunoprecipitated with the sc-35 antibody and then revealed with anti-sc-35 and anti-Nox4 (right) or were immunoprecipitated with anti-Nox4 and then revealed with anti-Nox4 and anti-sc-35 (left). Signals of the preclearing sample (pcl) are shown in the middle lane. (c) First row: representative images showing staining with the nuclear ROS probe (nuclear peroxy emerald 1) of AFSC treated or not treated with shRNA. Second row: Nox4 signal (green) in the same samples. Scale bar: 10 μm. (d) Western blot revealed with anti-Nox4 of AFSC treated with the empty vector (ev) or shRNA vector TI311640, the most effective of the four silencing vectors reported in the Materials and Methods section. All presented data are representative of three independent experiments.
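For concreteness, the two colocalization metrics named above can be computed as in the minimal sketch below; this is not the authors' code, and the arrays stand in for background-subtracted Nox4 and sc-35 channel images.

```python
# Pearson's coefficient Rp uses mean-centered intensities; the overlap
# coefficient R does not, so R stays near 1 when the two patterns have
# similar shape even if their absolute intensities differ.
import numpy as np

def pearson_rp(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def overlap_r(a: np.ndarray, b: np.ndarray) -> float:
    a = a.astype(float)
    b = b.astype(float)
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

# Illustrative arrays standing in for the green (Nox4) and red (sc-35) channels.
rng = np.random.default_rng(0)
nox4 = rng.poisson(50, (256, 256)).astype(float)
sc35 = 0.8 * nox4 + rng.poisson(10, (256, 256))  # partially colocalized by construction
print(f"Rp = {pearson_rp(nox4, sc35):.2f}, R = {overlap_r(nox4, sc35):.2f}")
```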
To demonstrate that this colocalization also reflects a direct interaction of these proteins, coimmunoprecipitation experiments (IP with anti-sc-35 and IP with anti-Nox4) were performed; they show that Nox4 interacts with nuclear speckle domains (Figure 1(b)). In order to investigate the NADPH oxidase activity inside the nuclei, we used a nucleus-selective probe for H2O2, nuclear peroxy emerald 1 (Figure 1(c)). The immunofluorescence assay (Figure 1(c)) shows that the decrease in Nox4 expression, demonstrated by western blot (Figure 1(d)), occurs in both the cytoplasmic and nuclear compartments. Overall, AFSC treated with shRNA show a significant decrease in nuclear ROS levels. Forkhead box O (FoxO) transcription factors act in adult stem cells to preserve their regenerative potential. FoxO1 is essential for the maintenance of human ESC pluripotency. This function is probably mediated through the direct control exerted by FoxO1 on Oct4 and Sox2 gene expression, through occupation and activation of their respective promoters [34]. FoxO1 is distributed in both the cytosol and the nucleus of AFSC, but its signal matches that of Nox4 only in the cytosol, as shown in Figure 2(a). In contrast, the pluripotent stem cell marker Oct4 colocalizes in some spots with the nNox4 staining in the nucleus (Figure 2(a)). Interestingly, Oct4 is detectable in speckle domains, as shown by labeling with sc-35 (Figure 2(b)) and by a coimmunoprecipitation assay (Figure 2(c)). The signal of coilin, a Cajal body marker, does not match the Oct4 one (Figure 2(b)).
Figure 2: Nox4 interaction with transcription factors in nuclei of AFSC. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and FoxO1 (red) or Oct4 (red) signals. (b) Representative images showing the superimposition of DAPI (blue), Oct4 (green), and coilin (red) or sc-35 (red) signals. Scale bar: 10 μm. (c) Western blot analysis of a nuclear lysate (NL) and immunoprecipitation of the NL with the Nox4 antibody, then revealed with anti-Oct4 and anti-sc-35. Signals of the preclearing sample (pcl) are shown in the middle lane. Presented data are representative of three independent experiments.
### 3.2. AFSC Heterogeneity and nNox4
Stem cells isolated from different amniotic fluids (six collected samples) exhibit different behaviors, including different proliferation rates. Only the four most representative samples are shown in the figures. Figure 3(a) shows representative images, for four of the six samples, of the different cellular distribution of Nox4. In sample s1, Nox4 is expressed at a higher level and is detectable mostly in the cytosol. Conversely, in sample s4, the slowest-growing one, Nox4 localizes to the nuclei, while its cytosolic expression is low. This evidence is confirmed by western blot analysis of Nox4 in nuclear extracts, as shown in Figure 3(b). Moreover, the use of the nuclear ROS probe demonstrates that ROS production in the nuclei significantly increases from donor s1 to donor s4 (Figure 3(c)).
Figure 3: Effect of donor heterogeneity on Nox4 localization and nuclear ROS production. (a) Representative images showing the superimposition of DAPI (blue) and Nox4 (green) signals in 4 different AFSC cultures. Scale bar: 10 μm. (b) Representative western blot analysis of nuclei of AFSC samples 1–4 revealed with anti-Nox4. Actin detection was performed to show the amount of protein loaded in each lane. Presented data are representative of three independent experiments. (c) Representative graph showing the fluorescence obtained with the nuclear ROS probe (nuclear peroxy emerald 1) normalized to the protein content of the AFSC samples. ***P < 0.0001; **P < 0.01, significantly different from sample 1.
Since ROS can cause DNA damage, we tested the phosphorylation level of H2AX, since it is crucial in determining whether cells will survive after DNA damage [35]. As expected, looking at nuclear pH2A foci, we found that samples s3 and s4 exhibit a much higher level of H2A phosphorylation than s1 and s2 (Figure 4(a)). The double staining for Nox4 and pH2AX, even if not overlapping in all nuclei, suggests that nNox4-generated ROS can induce nuclear DNA damage.
Figure 4: AFSC sample heterogeneity in DNA damage, senescence, and cell cycle. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and pH2A (red) signals in AFSC samples 1 to 4. (b) Representative images of total lysates of AFSC samples 1–4 separated by SDS-PAGE. Western blot was then performed with the indicated antibodies. Actin detection was performed to show the amount of protein loaded in each lane. The analysis of NF-κB and Nrf2 was performed on nuclear lysates, and matrin3 detection was performed to show the amount of nuclear protein loaded in each lane. Presented data are representative of three independent experiments.
In parallel, we looked for a senescence marker, β-galactosidase activity (data not shown), but only a nonsignificant increase could be noticed in sample 4. Indeed, regarding the proliferation rate, the fastest sample (s1) cultured in vitro reaches confluence every 48 h, while the slowest one (s4), seeded at the same density, takes more than 3 days. A deeper analysis of the cell cycle is reported below (Figure 4(b)). The positivity for c-Kit in the selected population is around 98% for all samples (data not shown). In order to investigate cell cycle checkpoints, we analyzed the expression of different cyclins and other related proteins (Figure 4(b)). Cyclins A1, B1, and E2, usually upregulated in proliferating cells, decrease from s1 to s4, as do p21 and β-catenin. On the other hand, pMyt1 and cyclin D1 increase, since they are expressed during the G0/G1 phase, confirming the low growth rate of these samples (s3 and s4). Analyzing nuclear extracts, the level of the cell cycle-regulating transcription factor NF-κB decreases in the slower samples, suggesting that the oxidation status in the nuclei leads to its destabilization and nuclear export. On the other hand, the nuclear presence of Nrf2 increases from s1 to s4, consistent with Nrf2 acting as a negative regulator of cell cycle entry in hematopoietic stem cells [36]. The expression profiles of pluripotent stem cell and mesenchymal stem cell markers were also analyzed. Figures 5(a) and 5(b) show that the nuclear pluripotency markers Oct4, Sox2, and SSEA-4 decrease from s1 to s4, as does the presence of the mesenchymal stem cell markers CD73, CD90, and CD105. Therefore, the stemness capability could decline.
Figure 5: Effect of donor heterogeneity on stem cell markers. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and Oct4 (red) signals or of DAPI (blue) and Sox2 (green) signals in AFSC samples 1 and 4. Scale bar: 10 μm. (b) Representative images of total lysates of AFSC samples 1–4 separated by SDS-PAGE. Western blot was then performed with the indicated antibodies. Presented data are representative of three independent experiments.
We then treated AFSC with three different differentiation protocols and tested for the presence of calcified matrix (alizarin red) for the osteogenic one, of collagen II and aggrecan for the chondrogenic one, and of GFAP and βtubulin III for the neurogenic one. The differentiation potential analysis demonstrated that osteogenic (Figure 6(a)) and neurogenic (Figure 6(c)) differentiations were easier for sample 1 than for sample 4. On the other hand, the presence of cartilage matrix proteins is higher in sample 4 than in sample 1 (Figure 6(b)).
Figure 6: Effect of donor heterogeneity on differentiation potential. (a) Representative images showing alizarin red staining of AFSC samples 1 and 4 after three weeks of culture in osteogenic medium. (b) Representative images showing the superimposition of DAPI (blue), collagen II (green), and aggrecan (red) signals in AFSC samples 1–4 after three weeks of culture in chondrogenic medium. (c) Representative images showing the superimposition of DAPI (blue), GFAP (green), and βtubulin III (red) signals in AFSC samples 1 and 4 after three weeks of culture in neurogenic medium. Scale bar: 10 μm.
## 4. Discussion
The current effort in regenerative medicine is to use human stem cells that are easy to collect, highly proliferating, and broadly plastic, and that pose no ethical problems. Amniotic fluid stem cells show all these characteristics, but there is a donor-to-donor heterogeneity that can influence their proliferation and differentiation capacities. This is evident starting from the initial phase of culture, before the selection for c-Kit. The difference may be due to the fact that amniotic fluid contains mixed cell populations derived from the fetus and the amnion. Nevertheless, this growth difference is maintained also after c-Kit+ cell selection. Therefore, it is important to identify alternative factors involved in cell fate changes, such as ROS, and to discuss their roles in the pluripotency and differentiation of stem cells in order to improve directed culture protocols [3]. Recently, it has become evident that nuclear redox signaling is an important signaling mechanism regulating a variety of cellular functions [37]. The NADPH oxidase (Nox) family is one of the most important sources of ROS in several cellular compartments, including the nucleus. Recently, we demonstrated that in AFSC Nox4 can be detected inside nuclear domains [18]. In the present study we shed light on the type of nuclear domain where Nox4 localizes, namely, speckle domains. Speckles are subnuclear structures that are enriched in pre-messenger RNA splicing factors and are located in the interchromatin regions of the nucleoplasm of mammalian cells. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations. Several kinases and phosphatases that can regulate the splicing machinery have also been localized to nuclear speckles. Speckles might also contain transcription factors, together with splicing factors [38]. Indeed, transcription factors, as well as kinases and phosphatases, have been described as redox regulated in the nucleus, through modulation of their DNA binding capacity [37]. The diversity in transcriptional control is achieved through a complex network of combinatorial protein-protein and protein-DNA interactions affecting the stability and subnuclear localization of these transcriptional regulators. The forkhead box O (FoxO) transcription factors have an essential role in maintaining stem cell identity [3]. FoxO1, FoxO3a, and FoxO4 are critical mediators of the cellular responses to oxidative stress and can also be viewed as sensors of oxidative stress, since their activity is regulated by H2O2; depending on the cellular context, they relay these stresses to induce apoptosis, stress resistance, or senescence [39, 40]. An increase in intracellular ROS facilitates the localization of FoxO in the nucleus, where it is transcriptionally active [40]. Therefore, we first investigated the localization of FoxO proteins in AFSC that express Nox4 also in the nuclei. The FoxO1 signal corresponds with that of Nox4, but only in the cytosolic compartment. Since our interest is to elucidate the role of Nox4 in the nucleus, we examined its nuclear interaction with other transcription factors. For example, the transcription factor Oct4 plays essential functions in the maintenance of pluripotent embryonic and germ cells of mammals [41]. Moreover, Oct4 protein has previously been reported to be associated, in human oocytes, with splicing speckles and Cajal bodies [42].
Here we showed that, in AFSC nuclei, Oct4 colocalizes with Nox4 and with sc-35, a speckle marker. Moreover, confocal and coimmunoprecipitation analyses demonstrated that Nox4 interacts with speckle domains, suggesting that Nox4 could be involved in the regulation of the transcription/pre-mRNA processing machinery through ROS production in these specific nuclear areas. In fact, immunofluorescent localization of Nox4 demonstrated a punctate pattern of staining in stem cell nuclei, matching that of Oct4, a stemness-regulating protein. It is possible that Oct4, modulated by Nox4-derived ROS, could coordinate with other speckle proteins to regulate RNA processing. Stem cells isolated from different amniotic fluids exhibit a proliferation rate inversely coupled with the Nox4-derived ROS level in the nuclei, as shown by the cell cycle protein analysis. In support of this, it has recently been reported that accumulation of oxidative DNA damage restricts the self-renewal capacity of human HSCs [43]. Therefore, we analyzed the presence of H2A foci in different AFSC samples as a marker of DNA damage. As expected, in samples where Nox4 was mostly nuclear, higher DNA damage occurred. Therefore, one potential role of nNox4 could be the regulation of the response to DNA damage or of DNA repair. The study of the cell cycle clarified that the slower AFSC samples are blocked in the G0/G1 phase. Among transcription factors, NF-κB and Nrf2 are redox-sensitive cell cycle regulators. In unstimulated cells, NF-κB is sequestered in an inactive form in the cytosol. It can be released from these cytosolic pools by two main pathways (for review, see [44]), resulting in nuclear translocation of NF-κB complexes. Under our experimental conditions, nNox4-derived ROS seem to cause a decrease in NF-κB expression also in the nuclei. Nrf2 is a transcription factor implicated in the cellular responses to oxidative stress. This heterodimer binds to antioxidant-response elements (AREs) and thereby upregulates numerous genes coding for detoxification enzymes, antioxidants, and the enzymes required for de novo GSH synthesis [45]. Interestingly, Nrf2 acts as a negative regulator of cell-cycle entry in HSCs, maintaining the balance between HSC quiescence and self-renewal [36]. In effect, the Nrf2 level increases in the AFSC samples where the cell cycle is blocked. Analyzing the cell cultures obtained from different donors, we noticed that the expression of Oct4, as well as that of Sox2, also declines in low-growth-rate samples. In fact, increasing evidence suggests that Oct4 does not activate transcription of target genes alone but requires DNA-dependent heterodimerization with another DNA-binding transcription factor, the HMG-box protein Sox2 [41]. As far as the differentiation capability of the different AFSC samples is concerned, we noticed that the higher expression of stemness markers (samples 1 and 2) parallels an easier differentiation towards the osteogenic and neurogenic lineages. On the other hand, the chondrogenic commitment was better achieved with the AFSC population of sample 4, but this result may be explained by the low oxygen conditions that favor this differentiation. Understanding the possible mechanisms by which ROS influence stem cell fate may provide insights into how the aging of stem cells could be implicated in diseases of aging [46]. Moreover, it may indicate new markers of stemness capability with which to readily discriminate active MSC produced for clinical use, with the final outcome that patients are treated only with effective cells and a waste of public funds is prevented. Our findings not only show the effects of nuclear Nox4-derived ROS on AFSC but also suggest the mechanisms involved in the regulation of their proliferation and differentiation capacity. Moreover, targeting the increased levels of nuclear ROS associated with nonactive stem cells may reverse their decreased stem capacity, since slight variations in ROS content may have important effects on stem cell fate.
---
*Source: 101304-2015-07-26.xml*
(2015) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101304 | 101304-2015-07-26.xml | ---
## Abstract
Human amniotic fluid stem cells (AFSC) are an attractive source for cell therapy due to their multilineage differentiation potential and accessibility advantages. However the clinical application of human stem cells largely depends on their capacity to expandin vitro, since there is an extensive donor-to-donor heterogeneity. Reactive oxygen species (ROS) and cellular oxidative stress are involved in many physiological and pathophysiological processes of stem cells, including pluripotency, proliferation, differentiation, and stress resistance. The mode of action of ROS is also dependent on the localization of their target molecules. Thus, the modifications induced by ROS can be separated depending on the cellular compartments they affect. NAD(P)H oxidase family, particularly Nox4, has been known to produce ROS in the nucleus. In the present study we show that Nox4 nuclear expression (nNox4) depends on the donor and it correlates with the expression of transcription factors involved in stemness regulation, such as Oct4, SSEA-4, and Sox2. Moreover nNox4 is linked with the nuclear localization of redox sensitive transcription factors, as Nrf2 and NF-κB, and with the differentiation potential. Taken together, these results suggest that nNox4 regulation may have important effects in stem cell capability through modulation of transcription factors and DNA damage.
---
## Body
## 1. Introduction
Numerous studies have demonstrated that the MSC populations exhibit donor-to-donor heterogeneity. This fact could be attributed to several factors, including the methods used to culture, select, and expand the population and the age of the donor [1].About amniotic fluid stem cells (AFSC), the harvesting protocol is well established in the clinical practice as well as the selection method, based on the c-Kit surface marker expression [2]. Moreover the donor age range has to be considered quite restricted since the sample is usually obtained in clinical practice for cytogenetic analysis between the 16th week and the 20th week of pregnancy. However, as well as other MSCs [1], AFSC could display heterogeneity among the donors.Regulation of ROS has a vital role in maintaining the “stemness” and the differentiation potential of the stem cells, as well as in the progression of stem-cell-associated diseases [3]. ROS-mediated proliferation and senescence in stem/progenitor cells may be determined by the amount, duration, and location of ROS generation, which activates specific redox-signaling pathways [4]. In fact redox changes in different areas and resulting changes in ROS levels may represent an important mechanism of intracellular communication between different cellular compartments [5]. The nucleus itself contains a number of proteins with oxidizable thiols that are essential for transcription, chromatin stability, and nuclear protein import and export, as well as DNA replication and repair [5]. Several transcription factors have been thought to be involved in the redox-dependent modulation of gene expression [5].Recent advances indicate that the participation of ROS-producing nicotinamide adenine dinucleotide phosphate reduced oxidase (NADPH, Nox) system is an important trigger for differentiating ESCs toward the cardiomyocyte lineage [6–10]. Nox4 plays an important role in the differentiation of mouse ESCs toward the smooth muscle cell (SMC) lineage when translocating to the nucleus and generating H2O2 [11]. In fact the subcellular localization of Nox4 is likely to be especially important, given its constitutive activity, unlike isoforms, such as Nox1 or Nox2, that require agonist activation. Nox4 has been reported to be variably present in the endoplasmic reticulum [12, 13], mitochondria [14], cytoskeleton [15], plasma membrane [16], and nucleus [17] in different cell types. Recently we demonstrated that Nox4 can be detected in nuclei of human AFSC, depending on the cell metabolism status [18].It is interesting to better understand how ROS homeostasis is an important modulator in stem cell self-renewal and differentiation. Certain proteins can act as “redox sensors” due to the redox modifications of their cysteine residues, which are critically important in the control of protein function. Signaling molecules such as FoxOs, APE1/Ref-1, Nrf2, ATM, HIFs, NF-κB, p38, and p53 are subjected to redox modifications and could be involved in the regulation of stem cell self-renewal and differentiation [19].The aim of this study was to assess whether nuclear Nox4-generated ROS can modulate the presence and the localization in nuclear domain of transcription factors crucial for stemness capability. For this purpose we performed confocal analysis of immunofluorescence experiments and coimmunoprecipitation assays. 
Furthermore we investigated whether the different nuclear Nox4 (nNox4) presence, observed among the AFSC samples, was correlated with the expression of typical stem cell markers and the differentiation potential. These data indicate that nNox4 derived ROS are involved in AFSC stemness regulation and could be considered as marker of stem potential.
## 2. Materials and Methods
### 2.1. Cell Culture
Amniocentesis samples (6 backup flasks obtained from different donors) were provided by the Laboratorio di Genetica, Ospedale Santa Maria Nuova (Reggio Emilia, Italy). All samples were collected with the informed consent of the patients (mother’s age≥ 35) according to Italian law and the Ethical Committee guideline.Human AFSC (AFSC) were isolated as previously described by De Coppi et al. [2]. Human amniocentesis cultures were harvested by trypsinization and subjected to c-Kit immunoselection using MACS technology (Miltenyi Biotec, Germany). AFSC were subcultured routinely at 1 : 3 dilution and were not allowed to expand beyond the 70% of confluence. AFSC were grown in a culture medium (αMEM supplemented with 20% fetal bovine serum (FBS), 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin) (all reagents from EuroClone Spa, Italy) at 37°C and 5% CO2 [20].
### 2.2. Nox4 Silencing
Retroviral supernatants were produced according to HuSH shRNA plasmid panels (29-mer) application guide; AM12 cells were transfected with an empty vector (pRS vector, TR20003), a scrambled vector (HuSH 29-mer noneffective pRS vector, TR30012), and four NOX4 gene specific shRNA expression pRS vectors (TI311637, TI311638, TI311639, and TI311640) for 48 h [21]. Retroviral supernatants were then centrifuged at 2000 ×g for 5 minutes and used for target cells (AFSC) infection. Where indicated, cells were infected with NOX4 shRNA retroviral vectors, empty vector, or scrambled vector. Forty-eight hours after infection, cells were exposed to 2 μg/mL puromycin (Sigma-Aldrich, St. Louis, MO, USA) for 24 hours and subjected to evaluation of Nox4 expression by western blotting and confocal analysis and detection of intracellular ROS levels.
### 2.3. Differentiation Protocols
Osteogenic differentiation was obtained maintaining cells for 3 weeks at 37°C and 5% CO2 in osteogenic medium: culture medium supplemented with 100 nM dexamethasone, 10 mM β-glycerophosphate, and 50 μg/mL ascorbic acid-2-phosphate (Sigma-Aldrich, St. Louis, MO, USA). Coverslips were then stained with alizarin red S staining for light microscopic observation.Chondrogenic differentiation: cells were cultured as a monolayer using a medium containing DMEM high glucose, 100 nM dexamethasone and 10 ng/mL TGFβ1 (Sigma-Aldrich, St. Louis, USA), 10 μM 2P-ascorbic acid, 1% v/v sodium pyruvate (Invitrogen, Italy), and 50 mg/mL ITS premix (BD, Franklin Lakes, NJ, USA) for 3 weeks.Neural differentiation protocol [22]: cells were seeded at 60% confluence and maintained in neural differentiation medium (culture medium supplemented with 10% FBS and 20 μM retinoic acid (RA) in dimethyl sulfoxide (DMSO), both from Sigma-Aldrich, St. Louis, MO, USA) for up to 4 weeks at 37°C and 5% CO2.
### 2.4. Preparation of Cell Extracts
Cell extracts were obtained as described by Maraldi et al. [23]. Briefly, subconfluent cells were extracted by addition of AT lysis buffer (20 mM Tris-Cl, pH 7.0; 1% Nonidet P-40; 150 mM NaCl; 10% glycerol; 10 mM EDTA; 20 mM NaF; 5 mM sodium pyrophosphate; and 1 mM Na3VO4) and freshly added Sigma-Aldrich protease inhibitor cocktail at 4°C for 30 min. Lysates were sonicated, cleared by centrifugation, and immediately boiled in SDS sample buffer or used for immunoprecipitation experiments, as described below.
### 2.5. Immunoprecipitation and Electrophoresis
Immunoprecipitation was performed as reported by Cenni et al. [24]. Equal amounts of precleared lysates (pcl), whose protein concentration was determined by the Bradford method, were incubated overnight with rabbit anti-Nox4 (Novus Biologicals, CO, USA) and mouse anti-sc-35 (Sigma-Aldrich) (3 μg all). Then the two samples were treated with 30 μL of 50% (v/v) of protein A/G agarose slurry (GE Healthcare Bio-sciences, Uppsala, Sweden) at 4°C with gentle rocking for 1 h. Pellets were washed twice with 20 mM Tris-Cl, pH 7.0; 1% Nonidet P-40; 150 mM NaCl; 10% glycerol; 10 mM EDTA; 20 mM NaF; and 5 mM sodium pyrophosphate, once with 10 mM Tris-Cl, pH 7.4, boiled in SDS sample buffer, and centrifuged. Supernatants were loaded onto SDS-polyacrylamide gel, blotted on Immobilon-P membranes (Millipore, Waltham, MA, USA), processed by western blot with the indicated antibodies and detected by Supersignal substrate chemiluminescence detection kit (Pierce, Rockford, IL, USA). Signal quantification was obtained by chemiluminescence detection on a Kodak Image Station 440CF and the analysis with the Kodak 1D Image software.
### 2.6. Nuclei Purification
Human AFSC nuclei were purified as reported by Cenni et al. [25]. Briefly, 400 μL of nuclear isolation buffer (10 mM Tris-HCl, pH 7.8, 1% Nonidet P-40, 10 mM β-mercaptoethanol, 0.5 mM phenylmethylsulfonyl fluoride, 1 μg/mL aprotinin and leupeptin, and 5 mM NaF) was added to 5 × 106 cells for 8 min on ice. Milli-Q water (400 μL) was then added to swell cells for 3 min. Cells were sheared by passages through a 22-gauge needle. Nuclei were recovered by centrifugation at 400 ×g at 4°C for 6 min and washed once in 400 μL of washing buffer (10 mM Tris-HCl, pH 7.4, and 2 mM MgCl2, plus inhibitors as described earlier in the text). Supernatants (containing the cytosolic fractions) were further centrifuged for 30 min at 4000 ×g. Isolated nuclear and cytoplasmic extracts were finally lysed in AT lysis buffer, sonicated, and cleared by centrifugation.
### 2.7. Western Blot
The protocols of the western blot were performed as described by Hanson et al. [26].Protein extracts, quantified by a Bradford Protein Assay (Bio-Rad Laboratories, CA, USA), underwent SDS-polyacrylamide gel electrophoresis and were transferred to Immobilon-P membranes. The following antibodies were used: rabbit anti-NF-κB, rabbit anti-βcatenin, goat anti-matrin3, goat anti-actin (Santa Cruz Biotechnology, Santa Cruz, CA, USA) diluted 1 : 500; rabbit anti-cyclin E2, cyclin D1, cyclin B1, p21, Pmyt1, Oct4, and mouse anti-cyclin A1, and SSEA-4 (Cell Signalling Technology, Beverly, MA, USA), mouse anti-tubulin, and mouse anti-sc-35 (Sigma-Aldrich St. Louis, MO, USA), rabbit anti-Nrf2 (Abcam, Cambridge, UK), rabbit anti-Nox4 (Novus Biologicals, CO, USA), and mouse anti-pH2A (Ser139), mouse anti-CD90 and anti-CD105 (Millipore, Billerica, MA, USA) rabbit anti-CD73 (Genetex, Irvine, CA, USA), diluted 1 : 1000; peroxidase-labelled anti-rabbit, mouse, and goat secondary antibodies diluted 1 : 3000 (Pierce Antibodies, Thermo Scientific; Rockford, IL, USA). Ab dilution was performed in TBS-T pH 7.6 containing 3% BSA. The membranes were visualized using Supersignal substrate chemiluminescence detection kit (Pierce, Rockford, IL, USA). Anti-actin antibody was used as control of protein loading.
### 2.8. Senescence Assay
Senescent cells were visualized in 45-day cultures with the Senescence β-Galactosidase Staining Kit (Cell Signalling Technology, Beverly, MA, USA) following the manufacturer's instructions. This test detects β-galactosidase activity at pH 6, a known characteristic of senescent cells that is not found in presenescent, quiescent, or immortal cells.
### 2.9. Confocal Microscopy
Undifferentiated AFSC were fixed for 20 min in 4% ice-cold paraformaldehyde and then permeabilized with 0.1% Triton X-100 in ice-cold phosphate-buffered saline (PBS) for 5 min. Permeabilized samples were blocked with 3% bovine serum albumin (BSA) in PBS for 30 min at room temperature (RT) and then incubated with primary antibodies (Ab) in PBS containing 3% BSA for 1 h at RT. The primary Ab were mouse anti-sc-35 and mouse anti-glial fibrillary acidic protein (GFAP) (Sigma-Aldrich, St. Louis, MO, USA), rabbit anti-human collagen type II (Genetex, Irvine, CA, USA), rabbit anti-coilin (Abcam, Cambridge, UK), goat anti-aggrecan, rabbit anti-Nox4, rabbit anti-Oct4, goat anti-Foxo1, and goat anti-Sox2 (Santa Cruz Biotechnology, Santa Cruz, CA, USA) (diluted 1 : 50), and mouse anti-Oct4 (Millipore, Billerica, MA, USA), mouse anti-βtubulin III (Cell Signalling Technology, Beverly, MA, USA), and mouse anti-pH2A (Ser139) (Millipore, Billerica, MA, USA) (diluted 1 : 100). Secondary Ab (goat anti-mouse Alexa 647, goat anti-rabbit Alexa 488, and donkey anti-goat Alexa 488) were diluted 1 : 200 in PBS containing 3% BSA. After washing in PBS, samples were stained with 1 μg/mL DAPI in H2O for 1 min and then mounted with antifading medium (0.21 M DABCO and 90% glycerol in 0.02 M Tris, pH 8.0). Negative controls consisted of samples incubated with the secondary antibody only. For double staining with the sc-35 antibody and, for example, Nox4, we performed a first overnight incubation with anti-Nox4 and then, separately, a 1 h incubation with anti-sc-35, in order to avoid nonspecific antibody interactions. Confocal imaging was performed using a Nikon A1 confocal laser scanning microscope as previously described [27]. Spectral analysis was performed to exclude overlap between the two signals or the influence of autofluorescence background on the fluorochrome signals, as previously shown [28]. The confocal serial sections were processed with ImageJ software to obtain three-dimensional projections, as previously described [29]. Image rendering was performed using Adobe Photoshop software.
### 2.10. Nuclear ROS Imaging
Nuclear ROS were detected with the nuclear-localized fluorescent probe for H2O2, nuclear peroxy emerald 1 (NucPE1) [30–33]. For all experiments, 5 μM solutions of NucPE1 (from 5 mM stocks in DMSO) were made in PBS/glucose. The cells were kept in an incubator (37°C, 5% CO2) during the course of all experiments, and the probe was incubated for a total of 30 min. Confocal fluorescence imaging studies were performed with a Nikon A1 confocal laser scanning microscope. Excitation of NucPE1-loaded cells at 488 nm was carried out with an Ar laser, and emission was collected at 535 nm. All images in an experiment were collected simultaneously using identical microscope settings. Image analysis was performed in ImageJ.
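The ImageJ measurement described above amounts to averaging probe intensity inside nuclear masks. As a rough, purely illustrative equivalent (not the authors' pipeline), the Python sketch below uses scikit-image; the input arrays and the choice of Otsu thresholding are our own assumptions.

```python
import numpy as np
from skimage import filters, measure

def mean_nuclear_nucpe1(nucpe1_img, dapi_img):
    """Mean NucPE1 fluorescence within DAPI-segmented nuclei.

    Both arguments are 2D intensity arrays from the same field of view;
    Otsu thresholding of the DAPI channel is an illustrative choice.
    """
    nuclei_mask = dapi_img > filters.threshold_otsu(dapi_img)
    labels = measure.label(nuclei_mask)  # one label per nucleus
    regions = measure.regionprops(labels, intensity_image=nucpe1_img)
    return float(np.mean([r.mean_intensity for r in regions]))
```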
### 2.11. Statistical Analysis
In vitro experiments were performed in triplicate. For quantitative comparisons, values are expressed as the mean ± SD (standard deviation) based on triplicate analyses for each sample. To test the significance of observed differences among the study groups, one-way analysis of variance (ANOVA) with post hoc Bonferroni correction was applied. A P value of <0.05 was considered statistically significant.
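For concreteness, a minimal sketch of this testing scheme in Python with SciPy is shown below, using made-up triplicate values; the Bonferroni step simply multiplies each pairwise P value by the number of comparisons. The function name and data are illustrative, not the study's own.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def anova_bonferroni(groups, alpha=0.05):
    """One-way ANOVA followed by Bonferroni-corrected pairwise t-tests."""
    f_stat, p_overall = stats.f_oneway(*groups)
    pairs = list(combinations(range(len(groups)), 2))
    pairwise = []
    for i, j in pairs:
        _, p = stats.ttest_ind(groups[i], groups[j])
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
        pairwise.append((i, j, p_adj, p_adj < alpha))
    return p_overall, pairwise

# Hypothetical triplicates for three experimental groups
rng = np.random.default_rng(1)
data = [rng.normal(loc, 0.05, size=3) for loc in (1.0, 1.1, 1.6)]
print(anova_bonferroni(data))
```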
## 3. Results
### 3.1. Nox4 in the Nucleus of AFSC
Recently, we showed that, using antibodies from Santa Cruz, Abcam, or Novus, a Nox4 signal can be seen mostly localized inside the nuclei of AFSC [18]. In particular, AFSC expressing Nox4 in the nucleus show a punctate, spot-like distribution similar to that observed for nuclear domains such as speckles or Cajal bodies. To test whether nuclear Nox4 (nNox4) resides inside nuclear domains, colocalization assays were performed using antibodies directed against sc-35, a speckle marker, or coilin, a Cajal body marker. Confocal analysis of double staining with anti-Nox4 (green) and anti-sc-35 (red) or anti-coilin (red) (Figure 1(a)) demonstrates that Nox4 associates with nuclear speckle domains rather than with Cajal bodies, as shown by the values of Pearson's correlation coefficient (Rp) and the overlap coefficient (R), which quantify the similarity between the two staining patterns (Nox4 and sc-35). The correlation coefficient ranges from −1 to 1: a value of 1 means that the two patterns are perfectly similar, while a value of −1 means that they are perfectly opposite. An overlap coefficient around 0.8 indicates very good colocalization of the two signals.

Figure 1
Nox4 nuclear localization and interaction in AFSC. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and coilin (red) or sc-35 (red) signals; the colocalization graph reports Pearson's and overlap coefficients. Scale bar: 10 μm. (b) Total lysates (TL) were immunoprecipitated with the sc-35 antibody and then revealed with anti-sc-35 and anti-Nox4 (right) or immunoprecipitated with anti-Nox4 and then revealed with anti-Nox4 and anti-sc-35 (left). Signals of the preclearing sample (pcl) are shown in the middle lane. (c) First row: representative images showing staining with the nuclear ROS probe (nuclear peroxy emerald 1) of AFSC treated or not treated with shRNA. Second row: Nox4 signal (green) in the same samples. Scale bar: 10 μm. (d) Western blot probed with anti-Nox4 of AFSC treated with the empty vector (ev) or the shRNA vector TI311640, the most effective of the four silencing vectors reported in the Materials and Methods section. All presented data are representative of three independent experiments.
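As a point of reference for the coefficients reported in Figure 1(a), both measures can be computed directly from paired channel intensities. The Python sketch below is a minimal illustration under our own naming (the image arrays are hypothetical stand-ins, not the study's data); note that the mean-subtracted Pearson coefficient spans −1 to 1, whereas the overlap coefficient computed on raw intensities spans 0 to 1.

```python
import numpy as np

def pearson_rp(ch1, ch2):
    """Pearson's correlation coefficient (Rp) between two channels."""
    a = ch1.astype(float).ravel()
    b = ch2.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def overlap_r(ch1, ch2):
    """Manders' overlap coefficient (R), computed on raw intensities."""
    a = ch1.astype(float).ravel()
    b = ch2.astype(float).ravel()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Hypothetical 2D intensity arrays for the Nox4 and sc-35 channels
nox4 = np.random.rand(64, 64)
sc35 = 0.8 * nox4 + 0.2 * np.random.rand(64, 64)  # partly colocalized
print(pearson_rp(nox4, sc35), overlap_r(nox4, sc35))
```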
To demonstrate that this colocalization also reflects a direct interaction between these proteins, coimmunoprecipitation experiments (IP with anti-sc-35 and IP with anti-Nox4) were performed; they show that Nox4 interacts with nuclear speckle domains (Figure 1(b)). To investigate NADPH oxidase activity inside the nuclei, we used a nucleus-selective probe for H2O2, nuclear peroxy emerald 1 (Figure 1(c)). The immunofluorescence assay (Figure 1(c)) shows that the decrease in Nox4 expression, demonstrated by western blot (Figure 1(d)), occurs in both the cytoplasmic and nuclear compartments. Overall, AFSC treated with the silencing vector show a significant decrease in nuclear ROS levels.

Forkhead box O (FoxO) transcription factors act in adult stem cells to preserve their regenerative potential. FoxO1 is essential for the maintenance of human ESC pluripotency, a function probably mediated through direct control by FoxO1 of Oct4 and Sox2 gene expression through occupation and activation of their respective promoters [34]. FoxO1 is distributed both in the cytosol and in the nucleus of AFSC, but its signal matches that of Nox4 only in the cytosol, as shown in Figure 2(a). In contrast, the pluripotent stem cell marker Oct4 colocalizes in some spots with nNox4 staining in the nucleus (Figure 2(a)). Interestingly, Oct4 is detectable in speckle domains, as shown by labeling with sc-35 (Figure 2(b)) and by coimmunoprecipitation (Figure 2(c)). The signal of coilin, a Cajal body marker, does not match that of Oct4 (Figure 2(b)).

Figure 2
Nox4 interaction with transcription factors in nuclei of AFSC. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and FoxO1 (red) or Oct4 (red) signals. (b) Representative images showing the superimposition of DAPI (blue), Oct4 (green), and coilin (red) or sc-35 (red) signals. Scale bar: 10 μm. (c) Western blot analysis of nuclear lysate (NL) and immunoprecipitation of NL with the Nox4 antibody, then revealed with anti-Oct4 and anti-sc-35. Signals of the preclearing sample (pcl) are shown in the middle lane. Presented data are representative of three independent experiments.
### 3.2. AFSC Heterogeneity and nNox4
Stem cells isolated from different amniotic fluids (samples 1–6) exhibit different behaviors and proliferation rates; only the four most representative samples are shown in the figures. Figure 3(a) shows representative images of the different cellular distribution of Nox4 in 4 of the 6 samples. In sample s1, Nox4 is more highly expressed and is detectable mostly in the cytosol. Conversely, sample s4, the slowest-growing one, shows Nox4 localization in the nuclei, while its cytosolic expression is low. This evidence is confirmed by western blot analysis of Nox4 in nuclear extracts (Figure 3(b)). Moreover, the nuclear ROS probe demonstrates that ROS production in the nuclei increases significantly from donor s1 to donor s4 (Figure 3(c)).

Figure 3
Effect of donor heterogeneity on Nox4 localization and nuclear ROS production. (a) Representative images showing the superimposition of DAPI (blue) and Nox4 (green) signals in 4 different AFSC cultures. Scale bar: 10 μm. (b) Representative western blot analysis of nuclei of AFSC samples 1–4 probed for Nox4. Actin was detected to show the amount of protein loaded in each lane. Presented data are representative of three independent experiments. (c) Representative graph showing fluorescence obtained with the nuclear ROS probe (nuclear peroxy emerald 1) normalized to the protein content of the AFSC samples. ***P < 0.0001; **P < 0.01, significantly different from sample 1.
Since ROS can cause DNA damage, we tested the phosphorylation level of H2AX, as it is crucial in determining whether cells will survive after DNA damage [35]. As expected, looking at nuclear pH2A foci, we found that samples s3 and s4 exhibit much stronger H2A phosphorylation than s1 and s2 (Figure 4(a)). The double staining for Nox4 and pH2AX, even though it is not present in all nuclei, suggests that nNox4-generated ROS can induce nuclear DNA damage.

Figure 4
AFSC sample heterogeneity in DNA damage, senescence, and cell cycle. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and pH2A (red) signals in samples 1 to 4 of AFSC. (b) Representative western blot of total lysates of AFSC samples 1–4 separated by SDS-PAGE and probed with the indicated antibodies. Actin was detected to show the amount of protein loaded in each lane. The analysis of NF-κB and Nrf2 was performed on nuclear lysates, and matrin3 was detected to show the amount of nuclear protein loaded in each lane. Presented data are representative of three independent experiments.
In parallel, we looked for a senescence marker, β-galactosidase activity (data not shown), but only a nonsignificant increase was noticed in sample 4. Indeed, regarding the proliferation rate, the fastest sample (s1) cultured in vitro reaches confluence every 48 h, while the slowest one (s4), seeded at the same density, requires more than 3 days. A deeper analysis of the cell cycle is reported below (Figure 4(b)). The positivity for c-Kit in the selected population is around 98% for all samples (data not shown). To investigate cell cycle checkpoints, we analyzed the expression of different cyclins and related proteins (Figure 4(b)). Cyclins A1, B1, and E2, usually upregulated in proliferating cells, decrease from s1 to s4, as do p21 and β-catenin. On the other hand, pMyt1 and cyclin D1 increase, since they are expressed during the G0/G1 phase, confirming the low growth rate of these samples (s3 and s4). Analyzing nuclear extracts, the level of the cell cycle-regulating transcription factor NF-κB decreases in the slower samples, suggesting that the oxidation status in the nuclei leads to its destabilization and nuclear export. On the other hand, the nuclear level of Nrf2 increases from s1 to s4, consistent with Nrf2 acting as a negative regulator of cell cycle entry in hematopoietic stem cells [36]. The expression profiles of pluripotent stem cell and mesenchymal stem cell markers were also analyzed. Figures 5(a) and 5(b) show that nuclear pluripotency markers such as Oct4, Sox2, and SSEA-4 decrease from s1 to s4, as does the presence of the mesenchymal stem cell markers CD73, CD90, and CD105. Therefore, the stemness capability may decline.

Figure 5
Effect of donor heterogeneity on stem cell markers. (a) Representative images showing the superimposition of DAPI (blue), Nox4 (green), and Oct4 (red) signals or of DAPI (blue) and Sox2 (green) signals in AFSC samples 1 and 4. Scale bar: 10 μm. (b) Representative western blot of total lysates of AFSC samples 1–4 separated by SDS-PAGE and probed with the indicated antibodies. Presented data are representative of three independent experiments.
We then subjected AFSC to 3 different differentiation protocols and tested for the presence of calcified matrix (alizarin red) for the osteogenic protocol, of collagen II and aggrecan for the chondrogenic one, and of GFAP and βtubulin III for the neurogenic one. The differentiation potential analysis demonstrated that osteogenic (Figure 6(a)) and neurogenic (Figure 6(c)) differentiations were achieved more readily for sample 1 than for sample 4. On the other hand, the presence of cartilage matrix proteins is higher in sample 4 than in sample 1 (Figure 6(b)).

Figure 6
Effect of donor heterogeneity on differentiation potential. (a) Representative images showing alizarin red staining of AFSC samples 1 and 4 after three weeks of culture in osteogenic medium. (b) Representative images showing the superimposition of DAPI (blue), collagen II (green), and aggrecan (red) signals in AFSC samples 1–4 after three weeks of culture in chondrogenic medium. (c) Representative images showing the superimposition of DAPI (blue), GFAP (green), and βtubulin III (red) signals in AFSC samples 1 and 4 after three weeks of culture in neurogenic medium. Scale bar: 10 μm.
## 4. Discussion
The current effort in regenerative medicine is to use human stem cells that are easy to collect, highly proliferative, and widely plastic, and that pose no ethical problems. Amniotic fluid stem cells show all these characteristics, but there is donor-to-donor heterogeneity that can influence their proliferation and differentiation capacities. This is evident from the initial phase of culture, before the selection for c-Kit. The difference may be due to the fact that amniotic fluid contains mixed cell populations derived from the fetus and the amnion. Nevertheless, this growth difference is maintained after c-Kit+ cell selection. Therefore, it is important to identify alternative factors involved in cell fate changes, such as ROS, and to clarify their roles in the pluripotency and differentiation of stem cells in order to improve directed culture protocols [3].

Recently, it has become evident that nuclear redox signaling is an important mechanism regulating a variety of cellular functions [37]. The NADPH oxidase (Nox) family is one of the most important sources of ROS in several cellular compartments, including the nucleus. We recently demonstrated that in AFSC Nox4 can be detected inside nuclear domains [18]. In the present study we shed light on the type of nuclear domain where Nox4 localizes, namely, speckle domains. Speckles are subnuclear structures enriched in pre-messenger RNA splicing factors, located in the interchromatin regions of the nucleoplasm of mammalian cells. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations. Several kinases and phosphatases that can regulate the splicing machinery have also been localized to nuclear speckles, which may additionally contain transcription factors together with splicing factors [38].

Indeed, transcription factors, as well as kinases and phosphatases, have been described as redox regulated in the nucleus through modulation of their DNA-binding capacity [37]. The diversity in transcriptional control is achieved through a complex network of combinatorial protein-protein and protein-DNA interactions affecting the stability and subnuclear localization of these transcriptional regulators. The forkhead box O (FoxO) transcription factors have an essential role in maintaining stem cell identity [3]. FoxO1, FoxO3a, and FoxO4 are critical mediators of the cellular response to oxidative stress and can also be viewed as sensors of oxidative stress, since their activity is regulated by H2O2; depending on the cellular context, they relay these stresses to induce apoptosis, stress resistance, or senescence [39, 40]. An increase in intracellular ROS facilitates the localization of FoxO in the nucleus, where it is transcriptionally active [40]. Therefore, we first investigated the localization of FoxO proteins in AFSC that express Nox4 also in the nuclei. The FoxO1 signal corresponds with that of Nox4, but only in the cytosolic compartment.

Since our interest is to elucidate the role of Nox4 in the nucleus, we examined its nuclear interaction with other transcription factors. For example, the transcription factor Oct4 plays essential functions in the maintenance of pluripotent embryonic and germ cells of mammals [41]. Moreover, Oct4 protein has previously been reported to be associated, in human oocytes, with splicing speckles and Cajal bodies [42].
Here we showed that in AFSC nuclei Oct4 colocalizes with Nox4 and with the speckle marker sc-35. Moreover, confocal and coimmunoprecipitation analyses demonstrated that Nox4 interacts with speckle domains, suggesting that Nox4 could be involved in the regulation of the transcription/pre-mRNA processing machinery through ROS production in these specific nuclear areas. In fact, immunofluorescent localization of Nox4 demonstrated a punctate pattern of staining in stem cell nuclei, matching that of Oct4, a stemness-regulating protein. It is possible that Oct4, modulated by Nox4-derived ROS, coordinates with other speckle proteins to regulate RNA processing.

Stem cells isolated from different amniotic fluids exhibit a proliferation rate inversely coupled with the Nox4-derived ROS level in the nuclei, as shown by the cell cycle protein analysis. In support of this, it has recently been reported that accumulation of oxidative DNA damage restricts the self-renewal capacity of human HSCs [43]. Therefore, we analyzed different AFSC samples for the presence of pH2A foci as a marker of DNA damage. As expected, in samples where Nox4 was mostly nuclear, more DNA damage occurred. One potential role of nNox4 could therefore be the regulation of the response to DNA damage or of DNA repair.

The study of the cell cycle clarified that the slower AFSC samples are blocked in the G0/G1 phase. Among transcription factors, NF-κB and Nrf2 are redox sensitive and regulate the cell cycle. In unstimulated cells, NF-κB is sequestered in an inactive form in the cytosol; it can be released from these cytosolic pools by two main pathways (for review, see [44]), resulting in nuclear translocation of NF-κB complexes. Under our experimental conditions, nNox4-derived ROS seem to cause a decrease in NF-κB expression also in the nuclei. Nrf2 is a transcription factor implicated in the cellular responses to oxidative stress. This heterodimer binds to antioxidant-response elements (AREs), thereby upregulating numerous genes coding for detoxification enzymes, antioxidants, and the enzymes required for de novo GSH synthesis [45]. Interestingly, Nrf2 acts as a negative regulator of cell cycle entry in HSCs, maintaining the balance between HSC quiescence and self-renewal [36]. In effect, the Nrf2 level increases in the AFSC samples whose cell cycle is blocked. Analyzing the cell cultures obtained from different donors, we noticed that the expression of Oct4, as well as that of Sox2, also declines in the low-growth-rate samples. In fact, increasing evidence suggests that Oct4 does not activate transcription of target genes alone but requires DNA-dependent heterodimerization with another DNA-binding transcription factor, the HMG-box protein Sox2 [41].

Regarding the differentiation capability of the different AFSC samples, we noticed that higher expression of stemness markers (samples 1 and 2) parallels an easier differentiation potential towards the osteogenic and neurogenic lineages. On the other hand, chondrogenic commitment was better achieved with the AFSC population of sample 4; this result may be explained by the low-oxygen condition that favors this differentiation.

Understanding the possible mechanisms by which ROS influence stem cell fate may provide insights into how the aging of stem cells could be implicated in diseases of aging [46]. Moreover, it may indicate new markers of stemness capability that could easily discriminate active MSC produced for clinical use, so that patients are treated only with effective cells and a waste of public funds is prevented. Our findings not only show the effects of nuclear Nox4-derived ROS on AFSC but also suggest mechanisms involved in the regulation of their proliferation and differentiation capacity. Moreover, targeting the increased levels of nuclear ROS associated with nonactive stem cells may reverse their decreased stem capacity, as slight variations in ROS content may have important effects on stem cell fate.
---
*Source: 101304-2015-07-26.xml* | 2015 |
# CD44, Sonic Hedgehog, and Gli1 Expression Are Prognostic Biomarkers in Gastric Cancer Patients after Radical Resection
**Authors:** Chen Jian-Hui; Zhai Er-Tao; Chen Si-Le; Wu Hui; Wu Kai-Ming; Zhang Xin-Hua; Chen Chuang-Qi; Cai Shi-Rong; He Yu-Long
**Journal:** Gastroenterology Research and Practice
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1013045
---
## Abstract
Aim. CD44 and Sonic Hedgehog (Shh) signaling are important for gastric cancer (GC). However, the clinical impact, survival, and recurrence outcome of CD44, Shh, and Gli1 expression in GC patients following radical resection have not been elucidated. Patients and Methods. CD44, Shh, and Gli1 protein levels were quantified by immunohistochemistry (IHC). The association between CD44, Shh, and Gli1 expression and the clinicopathological features or prognosis of GC patients was determined. The biomarker risk score was calculated from the IHC staining scores of the CD44, Shh, and Gli1 proteins. Results. Positive IHC staining of CD44, Shh, and Gli1 proteins was correlated with larger tumour size, worse gross and histological type, and advanced TNM stage, and it also predicted shorter overall survival (OS) and disease-free survival (DFS) after radical resection. Multivariate analysis indicated that the Gli1 protein and the Gli1 and CD44 proteins were predictive biomarkers for OS and DFS, respectively. When the biomarker risk score was included in the analysis, it was an independent prognostic factor for OS and DFS. Conclusions. CD44 and Shh signaling are important biomarkers for tumour aggressiveness, survival, and recurrence in GC.
---
## Body
## 1. Introduction
Due to an increased early detection rate and therapeutic advancements, the survival of gastric cancer (GC) patients has improved worldwide over the past 3 decades. However, GC remains the second leading cause of cancer death in China [1], mainly because of the disappointing early detection rate, early tumour recurrence, and high chemotherapy resistance. Hence, it is essential for gastroenterologists to identify effective biomarkers for the early detection of GC, which may also be targets for novel therapies for this deadly disease.

Cancer stem-like cells (CSCs) are defined as rare cells in malignant tumours with the ability to self-renew and to differentiate into various heterogeneous cancer cell lineages [2]. Abnormal gene expression in CSCs might be responsible for the acquisition of various genetic and epigenetic events and may play a critical role in tumour initiation, maintenance, progression, lymphatic involvement, distant metastasis, and chemoradiotherapy resistance [3]. Therefore, CSCs are considered promising tumour-specific biomarkers with potential clinical application. CD44, widely accepted as a CSC marker for gastric cancer in many studies [4–6], is involved in cell-cell adhesion, cell-matrix interactions, and tumour metastasis [4]. However, most studies exploring the role of CD44 protein in gastric cancer included patients who received either radical resection or palliative surgery, which introduced bias. Hence, it is necessary to reevaluate the relationship between CD44 expression and the clinicopathological features and long-term survival of GC patients who received radical resection. Activation of the Sonic Hedgehog (Shh) pathway affects numerous human stem cell markers in prostate [7], breast [8], and pancreatic [9] cancer. Several studies have demonstrated that increased CD44 expression activates several signalling pathways related to cancer progression and metastasis, including the Shh pathway [10]. Song et al. [10] demonstrated that the Shh pathway is essential for the maintenance of human gastric cancer CSCs in vitro. However, the clinical impact of, and the interaction between, the Shh pathway and CD44 expression in gastric cancer patients are still uncertain. Here, we aimed to determine the correlations between CD44, Gli1, and Shh expression and clinicopathological features, long-term survival, and recurrence.
## 2. Methods
### 2.1. Ethics Statement
The study was approved by the Institutional Review Board of the 1st Affiliated Hospital of Sun Yat-sen University and informed consent was obtained according to institutional regulations.
### 2.2. Clinical Samples
A total of 101 primary gastric cancer tissues were obtained at the 1st Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, between January 2006 and June 2006. Patients who underwent radical gastrectomy were included, while patients who received neoadjuvant chemotherapy or chemoradiotherapy were excluded from the study. The clinicopathological parameters evaluated included age, gender, tumour location, tumour size, gross tumour type, tumour histological type, depth of invasion, lymph node involvement, distant metastasis, and TNM stage. Tumour gross types were classified as either infiltrating or noninfiltrating. Tumour histological types were classified as either well differentiated (well and moderately differentiated adenocarcinomas) or undifferentiated (poorly differentiated adenocarcinomas, signet ring cell carcinomas, and mucinous adenocarcinomas). Depth of tumour invasion, lymph node involvement, and distant metastasis were assessed according to the 7th edition of the Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) guidelines. Patients eligible for radical resection received gastrectomy with D2 lymphadenectomy, followed by postoperative chemotherapy with an epirubicin, cisplatin, and 5-fluorouracil regimen as indicated by the concurrent UICC/AJCC guidelines. In addition, 20 adjacent normal human gastric tissues were obtained; chronic atrophic gastritis, ulcers, and erosion were not detected microscopically in these adjacent gastric tissue samples.
### 2.3. Immunohistochemical Staining
Formalin-fixed, paraffin-embedded human gastric cancer specimens were prepared according to classical methods. The sections (5 μm thickness) were treated with protein-blocking solution for 30 min at room temperature before being incubated with primary antibodies against human CD44 (mouse monoclonal, diluted 1 : 50), Shh (rabbit polyclonal, diluted 1 : 100), and Gli1 (mouse polyclonal, diluted 1 : 100) overnight at 4°C. All antibodies were obtained from Novus Biologicals (USA). Following incubation with the appropriate peroxidase-conjugated secondary antibody, the samples were treated with diaminobenzidine and counterstained with hematoxylin. Using bright-field microscopy, the percentage of positive cancer cells and the staining intensity were quantified independently by 2 pathologists. The mean percentage of positive tumour cells was quantified in at least 5 fields at 400x magnification and classified into one of the following 5 grades: 0 (<5% of cells with positive staining), 1 (5–25%), 2 (26–50%), 3 (51–75%), and 4 (>75%). The staining intensity of CD44, Shh, and Gli1 was scored as follows: 0 (no staining), 1 (light brown), 2 (brown), and 3 (dark brown). The percentage score and the staining intensity score were multiplied to obtain the final staining score for each tumour specimen, and the overall score was categorised into 2 groups: negative (0–4) and positive (5–12). We defined positive IHC staining of a biomarker as biomarker overexpression.
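Because the scoring rule above is fully deterministic, it can be stated compactly in code. The following Python sketch simply restates the published rule; the function names are ours and purely illustrative.

```python
def staining_score(percent_positive: float, intensity: int) -> int:
    """Final IHC score = percentage grade (0-4) x intensity grade (0-3)."""
    if percent_positive < 5:
        grade = 0
    elif percent_positive <= 25:
        grade = 1
    elif percent_positive <= 50:
        grade = 2
    elif percent_positive <= 75:
        grade = 3
    else:
        grade = 4
    return grade * intensity

def overall_category(score: int) -> str:
    """Overall staining category: negative (0-4) or positive (5-12)."""
    return "positive" if score >= 5 else "negative"

# Example: 60% positive cells (grade 3) with brown staining (intensity 2)
print(overall_category(staining_score(60, 2)))  # score 6 -> "positive"
```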
### 2.4. Statistical Analysis
The biomarker risk score for gastric cancer in this study was the sum of the IHC scores of the CD44, Shh, and Gli1 proteins (positive: score 1; negative: score 0), and the patients were divided into four groups according to the biomarker risk score (groups 1–4: scores 0–3). Continuous variables are presented as the mean ± SEM, and categorical variables are presented as percentages (%). The two-tailed Chi-square test and Fisher's exact test for categorical variables were performed to determine the statistical significance of associations between clinicopathological parameters and the level of CD44, Shh, and Gli1 expression. Overall survival and disease-free survival rates were calculated according to the Kaplan-Meier method and were compared by log-rank tests. Cox proportional hazards models were used for both univariate and multivariate analyses to determine prognostic significance. Spearman's rank order correlation was used to determine the correlations between the expression of CD44, Shh, and Gli1. A P value of less than 0.05 was considered statistically significant. SPSS software (version 17.0, SPSS Inc., Chicago, IL) was used for all statistical analyses.
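As an illustration only (the original analyses were run in SPSS), the sketch below reproduces the risk-score construction and a Kaplan-Meier/log-rank comparison in Python with the `lifelines` package, using a handful of fabricated patient records.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Fabricated example records: IHC positivity (1/0) per marker,
# follow-up in months, and event indicator (1 = death/recurrence).
df = pd.DataFrame({
    "cd44":   [1, 0, 1, 1, 0, 0, 1, 0],
    "shh":    [1, 1, 0, 1, 0, 1, 1, 0],
    "gli1":   [1, 0, 0, 1, 0, 0, 1, 0],
    "months": [14, 60, 32, 9, 60, 41, 12, 58],
    "event":  [1, 0, 1, 1, 0, 1, 1, 0],
})

# Biomarker risk score = number of positive markers (0-3), i.e. groups 1-4
df["risk"] = df[["cd44", "shh", "gli1"]].sum(axis=1)

kmf = KaplanMeierFitter()
for score, grp in df.groupby("risk"):
    kmf.fit(grp["months"], grp["event"], label=f"risk score {score}")
    print(kmf.median_survival_time_)

# Log-rank comparison across the risk-score groups
result = multivariate_logrank_test(df["months"], df["risk"], df["event"])
print(result.p_value)
```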
## 3. Results
### 3.1. Correlations between CD44, Shh, and Gli1 Expression and Clinicopathological Characteristics of Gastric Cancer
To investigate the role of the tumour stem cell biomarker CD44 and of the Shh signaling pathway in GC, we evaluated the levels of CD44, Shh, and Gli1 protein in tumour tissues using immunohistochemistry (IHC) (Figure 1); positive staining of Gli1, Shh, and CD44 protein was mainly localized in the nucleus, cytoplasm, and cell membrane, respectively. We found that 57.8% (59/101), 71.3% (72/101), and 57.8% (59/101) of GC tumour specimens stained positively for CD44, Shh, and Gli1 protein, respectively. To further investigate the effect of CD44, Shh, and Gli1 on gastric cancer progression, we analysed the correlations between the levels of CD44, Shh, and Gli1 protein and the clinicopathological characteristics of GC. There were no statistically significant correlations between CD44, Shh, and Gli1 expression levels and age, gender, or tumour location (Table 1). Overexpression of CD44, Shh, and Gli1 protein was significantly associated with larger tumour size, aggressive gross type, and less differentiated histological type, all of which are clinicopathological features associated with high metastatic potential. Tumours with high CD44, Shh, and Gli1 expression also showed more advanced tumour invasion, an increased likelihood of lymph node metastasis, and a more advanced TNM stage (Table 1).

Table 1
Clinicopathological characteristics of CD44, Shh, and Gli1 in gastric cancer after radical resection.

| Factors | Cases | CD44 positive (n = 59) | CD44 negative (n = 42) | P value | Shh positive (n = 72) | Shh negative (n = 29) | P value | Gli1 positive (n = 59) | Gli1 negative (n = 42) | P value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Age |  |  |  | 0.089 |  |  | 0.777 |  |  | 0.936 |
| <60 years | 51 | 34 (57.6%) | 17 (40.5%) |  | 37 (51.4%) | 14 (48.3%) |  | 31 (50.8%) | 20 (50.0%) |  |
| ≥60 years | 50 | 25 (42.4%) | 25 (59.5%) |  | 35 (48.6%) | 15 (51.7%) |  | 30 (49.2%) | 20 (50.0%) |  |
| Gender |  |  |  | 0.460 |  |  | 0.929 |  |  | 0.852 |
| Male | 62 | 38 (64.4%) | 24 (57.1%) |  | 44 (61.1%) | 18 (62.1%) |  | 37 (60.7%) | 25 (62.5%) |  |
| Female | 39 | 21 (35.6%) | 18 (42.9%) |  | 28 (38.9%) | 11 (37.9%) |  | 24 (39.3%) | 15 (37.5%) |  |
| Tumor location |  |  |  | 0.684 |  |  | 0.891 |  |  | 0.440 |
| Upper 1/3 | 21 | 13 (22.0%) | 8 (19.0%) |  | 15 (20.8%) | 6 (20.7%) |  | 13 (21.3%) | 8 (20.0%) |  |
| Middle 1/3 | 25 | 13 (22.0%) | 12 (28.6%) |  | 19 (26.4%) | 6 (20.7%) |  | 12 (19.7%) | 13 (32.5%) |  |
| Lower 1/3 | 50 | 29 (49.2%) | 21 (50.0%) |  | 35 (48.6%) | 15 (51.7%) |  | 32 (52.5%) | 18 (45.0%) |  |
| Whole | 5 | 4 (6.8%) | 1 (2.4%) |  | 3 (4.2%) | 2 (6.9%) |  | 4 (6.6%) | 1 (2.5%) |  |
| Tumor size |  |  |  | 0.038 |  |  | <0.001 |  |  | <0.001 |
| <5 cm | 34 | 15 (25.4%) | 19 (45.2%) |  | 16 (22.2%) | 18 (62.1%) |  | 11 (18.0%) | 23 (57.5%) |  |
| ≥5 cm | 67 | 44 (74.6%) | 23 (54.8%) |  | 56 (77.8%) | 11 (37.9%) |  | 50 (82.0%) | 17 (42.5%) |  |
| Histological type |  |  |  | 0.004 |  |  | 0.002 |  |  | 0.002 |
| Differentiated | 30 | 11 (18.6%) | 19 (45.2%) |  | 15 (20.8%) | 15 (51.7%) |  | 11 (18.0%) | 19 (47.5%) |  |
| Undifferentiated | 71 | 48 (81.4%) | 23 (54.8%) |  | 57 (79.2%) | 14 (48.3%) |  | 50 (82.0%) | 21 (52.5%) |  |
| Gross type |  |  |  | 0.501 |  |  | <0.001 |  |  | <0.001 |
| Noninfiltration | 30 | 16 (27.1%) | 14 (33.3%) |  | 13 (18.1%) | 17 (58.6%) |  | 10 (16.4%) | 20 (50.0%) |  |
| Infiltration | 71 | 43 (72.9%) | 28 (66.7%) |  | 59 (81.9%) | 12 (41.4%) |  | 51 (83.6%) | 20 (50.0%) |  |
| T stage (7th) |  |  |  | <0.001 |  |  | <0.001 |  |  | <0.001 |
| I | 10 | 2 (3.4%) | 8 (19.0%) |  | 3 (4.2%) | 7 (24.1%) |  | 2 (3.3%) | 8 (20.0%) |  |
| II | 21 | 7 (11.9%) | 14 (33.3%) |  | 7 (9.7%) | 14 (48.3%) |  | 3 (4.9%) | 18 (45.0%) |  |
| III | 20 | 10 (16.9%) | 10 (23.8%) |  | 13 (18.1%) | 7 (24.1%) |  | 10 (16.4%) | 10 (25.0%) |  |
| IVa | 30 | 23 (39.0%) | 7 (16.7%) |  | 29 (40.3%) | 1 (3.4%) |  | 26 (42.6%) | 4 (10.0%) |  |
| IVb | 20 | 17 (28.8%) | 3 (7.1%) |  | 20 (27.8%) | 0 (0.0%) |  | 20 (32.8%) | 0 (0.0%) |  |
| N stage (7th) |  |  |  | 0.005 |  |  | 0.010 |  |  | <0.001 |
| N0 | 39 | 15 (25.4%) | 24 (57.1%) |  | 21 (29.2%) | 18 (62.1%) |  | 14 (23.0%) | 25 (62.5%) |  |
| N1 | 16 | 9 (15.3%) | 7 (16.7%) |  | 11 (15.3%) | 5 (17.2%) |  | 11 (18.0%) | 5 (12.5%) |  |
| N2 | 24 | 17 (28.8%) | 7 (16.7%) |  | 21 (29.2%) | 3 (10.3%) |  | 16 (26.2%) | 8 (20.0%) |  |
| N3 | 22 | 18 (30.5%) | 4 (9.5%) |  | 19 (26.4%) | 3 (10.3%) |  | 20 (32.8%) | 2 (5.0%) |  |
| TNM stage (7th) |  |  |  | <0.001 |  |  | <0.001 |  |  | <0.001 |
| I | 22 | 4 (6.8%) | 18 (42.9%) |  | 5 (6.9%) | 17 (58.6%) |  | 2 (3.3%) | 20 (50.0%) |  |
| II | 21 | 10 (16.9%) | 11 (26.2%) |  | 15 (20.8%) | 6 (20.7%) |  | 9 (14.8%) | 12 (30.0%) |  |
| III | 58 | 45 (76.3%) | 13 (31.0%) |  | 52 (72.2%) | 6 (20.7%) |  | 50 (82.0%) | 8 (20.0%) |  |

Figure 1
Immunohistochemical expressions of CD44, Shh, and Gli1 markers.
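The associations in Table 1 were tested with contingency-table statistics (the Chi-square or Fisher's exact test, as described in Section 2.4). As a minimal illustrative sketch, not the authors' actual code, the CD44-by-tumour-size comparison from Table 1 can be reproduced in Python with SciPy; `correction=False` requests the uncorrected Pearson Chi-square, which matches the reported P = 0.038.

```python
from scipy.stats import chi2_contingency

# Counts from Table 1: rows = tumour size (<5 cm, >=5 cm),
# columns = CD44 staining (positive, negative)
table = [[15, 19],
         [44, 23]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.3f}")  # chi2 ~ 4.31, P ~ 0.038
```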
### 3.2. The Overexpression of CD44, Shh, and Gli1 Proteins Indicated Poor Clinical Outcome
Using Kaplan-Meier analysis and the log-rank test, we found that gastric cancer patients with CD44-positive staining had poorer overall survival: 73.8% of patients with CD44-negative tumours survived 5 years, compared with only 27.1% of patients with CD44-positive tumours (Figure 2(a)) (P < 0.001). A similar result was observed when CD44 expression status and recurrence-free survival were compared: the 5-year recurrence-free survival rate of patients with CD44-positive tumours was lower than that of patients with CD44-negative tumours (39.0% versus 79.5%, P = 0.001) (Figure 2(b)). Similarly, cases with Shh- and Gli1-positive staining had poorer overall survival (Shh: 33.3% versus 79.3%, P < 0.001; Gli1: 21.3% versus 85.0%, P < 0.001) and recurrence-free survival (Shh: 44.6% versus 84.9%, P < 0.001; Gli1: 35.8% versus 86.5%, P < 0.001) (Figures 2(c)–2(f)).

Figure 2
Prognostic impact of CD44, Shh, and Gli1 markers. (a) CD44 and overall survival, (b) CD44 and recurrence-free survival, (c) Shh and overall survival, (d) Shh and recurrence-free survival, (e) Gli1 and overall survival, and (f) Gli1 and recurrence-free survival.
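For readers who want to reproduce this kind of analysis, the sketch below shows how Kaplan-Meier curves and a log-rank comparison are typically computed with the Python lifelines package (an assumption of this example; the paper used SPSS). The follow-up times and event indicators are hypothetical placeholders, not the study data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and death indicators (1 = died)
t_pos = np.array([12, 18, 24, 30, 40, 60, 60])  # marker-positive group
e_pos = np.array([1, 1, 1, 1, 1, 0, 0])
t_neg = np.array([36, 48, 60, 60, 60, 60, 60])  # marker-negative group
e_neg = np.array([1, 1, 0, 0, 0, 0, 0])

# Fit and plot one survival curve per group
kmf = KaplanMeierFitter()
kmf.fit(t_pos, event_observed=e_pos, label="positive")
ax = kmf.plot_survival_function()
kmf.fit(t_neg, event_observed=e_neg, label="negative")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two curves
res = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print(res.p_value)
```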
In accordance with these results, univariate Cox regression analysis also showed that CD44, Shh, and Gli1 status were associated with the prognosis of gastric cancer in our study (Table 2). However, in the multivariate analysis only TNM stage and Gli1 expression level, not CD44 and Shh expression levels, were independent prognostic factors for overall survival of patients with GC (Table 2). Similar to the results of the prognostic analysis for overall survival, CD44, Shh, and Gli1 status also affected the recurrence of gastric cancer (Table 3). The multivariate analysis showed that, besides TNM stage and nodal classification, Gli1 status was an independent factor for recurrence-free survival (Table 3).

Table 2
Univariate and multivariate analysis for overall survival in gastric cancer after radical resection.
| Factors | Univariate χ² | Univariate OR | Univariate 95% CI | Univariate P value | Multivariate χ² | Multivariate OR | Multivariate 95% CI | Multivariate P value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 0.001 | 0.995 | 0.589–1.679 | 0.984 |  |  |  |  |
| Gender | 0.076 | 0.927 | 0.540–1.592 | 0.783 |  |  |  |  |
| Tumor location | 0.052 | 0.966 | 0.713–1.307 | 0.820 |  |  |  |  |
| Tumor size | 10.301 | 2.957 | 1.525–5.733 | 0.001 |  |  |  |  |
| Histological type | 7.097 | 2.457 | 1.268–4.760 | 0.008 |  |  |  |  |
| Gross type | 5.811 | 2.252 | 1.164–4.357 | 0.016 |  |  |  |  |
| T stage | 11.943 | 3.370 | 1.692–6.713 | 0.001 |  |  |  |  |
| N stage | 5.334 | 2.473 | 1.147–5.333 | 0.021 |  |  |  |  |
| TNM stage | 21.978 | 3.070 | 1.921–4.906 | <0.001 | 11.856 | 1.346 | 1.137–1.594 | 0.001 |
| CD44 expression | 16.049 | 3.589 | 1.921–6.706 | <0.001 |  |  |  |  |
| Shh expression | 13.707 | 4.490 | 2.028–9.945 | <0.001 |  |  |  |  |
| Gli1 expression | 28.800 | 8.927 | 4.013–19.858 | <0.001 | 9.970 | 4.247 | 1.731–10.423 | 0.002 |

Table 3
Univariate and multivariate analysis for disease-free survival in gastric cancer after radical resection.
| Factors | Univariate χ² | Univariate OR | Univariate 95% CI | Univariate P value | Multivariate χ² | Multivariate OR | Multivariate 95% CI | Multivariate P value |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Age | 0.461 | — | — | 0.497 |  |  |  |  |
| Gender | 0.323 | — | — | 0.570 |  |  |  |  |
| Tumor location | 0.706 | — | — | 0.401 |  |  |  |  |
| Tumor size | 8.632 | 3.187 | 1.471–6.904 | 0.003 |  |  |  |  |
| Histological type | 2.391 | — | — | 0.122 |  |  |  |  |
| Gross type | 4.816 | 2.372 | 1.097–5.130 | 0.028 |  |  |  |  |
| T stage | 20.271 | 1.912 | 1.442–2.535 | <0.001 |  |  |  |  |
| N stage | 20.429 | 1.841 | 1.413–2.398 | <0.001 | 4.368 | 1.334 | 1.018–1.747 | 0.037 |
| TNM stage | 27.076 | 2.663 | 1.841–3.851 | <0.001 | 7.473 | 1.940 | 1.206–3.121 | 0.006 |
| CD44 expression | 11.981 | 3.545 | 1.731–7.258 | 0.001 |  |  |  |  |
| Shh expression | 10.853 | 4.836 | 1.893–12.352 | 0.001 |  |  |  |  |
| Gli1 expression | 21.233 | 7.806 | 3.257–18.707 | <0.001 | 6.387 | 3.403 | 1.316–8.796 | 0.011 |
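As a sketch of how the Cox proportional hazards models behind Tables 2 and 3 can be fitted in practice, the snippet below uses the CoxPHFitter from the lifelines package (an assumption; the paper used SPSS) on a small hypothetical data frame. The column names and values are illustrative only. A univariate model includes a single covariate; the multivariate model includes them all.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: follow-up time, event flag, covariates
df = pd.DataFrame({
    "months": [12, 18, 24, 30, 40, 60, 60, 36, 48, 60],
    "died":   [1,  1,  1,  1,  1,  0,  0,  1,  1,  0],
    "tnm":    [3,  3,  2,  3,  2,  1,  1,  2,  3,  1],
    "gli1":   [1,  1,  1,  0,  1,  0,  0,  1,  0,  0],
})

cph = CoxPHFitter()

# Univariate analysis: one covariate at a time
cph.fit(df[["months", "died", "gli1"]], duration_col="months", event_col="died")
cph.print_summary()  # exp(coef) is the hazard ratio, with 95% CI and P value

# Multivariate analysis: all covariates together
cph.fit(df, duration_col="months", event_col="died")
cph.print_summary()
```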
### 3.3. The Correlation of CD44 Expression with the Shh Signalling Pathway in Gastric Cancer
The Shh signalling pathway regulates tumour development via cell proliferation and is involved in the progression and metastasis of a wide variety of human cancers. Hence, abnormal activation of the Shh pathway could be essential for the maintenance and regulation of cancer stem-like cells in human gastric cancer. Using immunohistochemistry, we found that CD44 protein levels were correlated with those of both Shh (r = 0.385, P < 0.001) and Gli1 (r = 0.219, P = 0.028).
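These are Spearman rank correlations between the per-patient IHC scores (Section 2.4). A minimal sketch with SciPy, using hypothetical score vectors rather than the study data:

```python
from scipy.stats import spearmanr

# Hypothetical per-patient IHC staining scores (0-12) for two markers
cd44 = [6, 8, 2, 0, 9, 12, 4, 6, 3, 10]
shh  = [4, 9, 1, 2, 8, 12, 6, 5, 2, 9]

rho, p = spearmanr(cd44, shh)
print(f"r = {rho:.3f}, P = {p:.3f}")
```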
### 3.4. Survival Impact of Biomarker Risk Score for Gastric Cancer
We defined positive staining of CD44, Shh, and Gli1 proteins as score 1 each, and the patients were divided into four groups according to their biomarker risk scores. Overall survival and recurrence-free survival differed among the four groups (Figures 3(a) and 3(b)); the 5-year overall survival rates for biomarker risk scores of 0, 1, 2, and 3 were 93.8%, 72.7%, 57.9%, and 11.4%, and the corresponding 5-year recurrence-free survival rates were 100.0%, 75.6%, 61.1%, and 27.3%, respectively.

Figure 3
Prognostic impact of biomarker risk score system. (a) Overall survival. (b) Recurrence-free survival.
The biomarker risk score also had prognostic impact for overall survival (χ² = 34.163; relative risk (RR), 2.766; 95% confidence interval (CI), 1.966–3.890; P < 0.001) and recurrence-free survival (χ² = 25.616; RR, 2.727; 95% CI, 1.849–4.022; P < 0.001). Moreover, when the biomarker risk score was entered into the multivariate Cox regression analysis in place of the individual CD44, Shh, and Gli1 expressions, the biomarker risk score (χ² = 11.744; RR, 1.999; 95% CI, 1.345–2.972; P = 0.001) and TNM stage were independent prognostic factors for overall survival, and the biomarker risk score (χ² = 7.183; RR, 1.848; 95% CI, 1.179–2.895; P = 0.007), TNM stage, and nodal classification were independent prognostic factors for recurrence-free survival in our study.
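The biomarker risk score itself is simple arithmetic: one point per IHC-positive marker, yielding four groups (scores 0–3). A minimal sketch of the scoring step, with hypothetical patient records:

```python
def biomarker_risk_score(cd44_pos: bool, shh_pos: bool, gli1_pos: bool) -> int:
    """One point per IHC-positive marker: 0 (lowest risk) to 3 (highest)."""
    return int(cd44_pos) + int(shh_pos) + int(gli1_pos)

# Hypothetical patients as (CD44+, Shh+, Gli1+) triples
patients = [(True, True, True), (True, False, False), (False, False, False)]
for rec in patients:
    print(rec, "-> risk score", biomarker_risk_score(*rec))  # 3, 1, 0
```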
## 4. Discussion
The CD44 gene, located on chromosome 11p12-13, has various isoforms consisting of at least 19 exons. The CD44 protein is a class I transmembrane glycoprotein and a major component of the extracellular matrix that regulates cell-cell and cell-tissue adhesion. Moreover, the CD44 protein has been identified as a biomarker of side population cells [11] or cancer stem-like cells [12] in the gastric cell lines MKN-45, MKN-74, NCI-N87, and BGC-823. Hence, CD44 may be involved in several malignant biological processes, such as tumour initiation, development, and metastasis. As one of the most important signalling pathways, Shh has been implicated in the regulation of gastric cancer cell proliferation, migration, invasion, stem cell maintenance, and lymphangiogenesis. CD44 is required for Shh signalling pathway activation in various types of cancer, including ovarian [13], pancreatic [14], and prostate cancers [15]. Most studies have confirmed an interaction between CD44 and the Shh pathway in vivo. In contrast, Nanashima et al. [16] found no significant correlation between the expression of Gli1 and CD44 in intrahepatic cholangiocarcinoma. There are very few studies in the literature evaluating the interaction between the Shh pathway and CD44 in gastric cancer cells. Song et al. [10] demonstrated that the Shh pathway was important for the maintenance of cancer stem-like abilities in human gastric cancer cells. Yu et al. [17] found that overexpression of Shh signalling pathway genes was accompanied by an increase in CD44-positive cells in the MKN45 gastric cancer cell line. A similar result has been reported for breast cancer cells [18]. However, to the best of our knowledge, the correlation of CD44, Shh, and Gli1 in gastric cancer and its clinicopathological significance have not previously been reported. This is the first report revealing a positive relationship between CD44 expression and the levels of two important members of the Hedgehog signalling pathway in vivo, suggesting that the interaction of CD44 and the Shh pathway may be involved in primary gastric cancer tumourigenesis, progression, and metastasis.

Most studies confirm that high CD44 [19], Shh [20], and Gli1 [21] expression is significantly associated with poorer clinicopathological parameters and worse overall survival in gastric cancer. It is worth noting, however, that most studies assessing CD44, Shh, and Gli1 protein levels in GC did not distinguish between patients who underwent radical resection and those who received palliative surgery, two groups with significant differences in clinicopathological features and prognosis. This study is the first to explore CD44, Shh, and Gli1 expression only in patients who underwent radical resection. Similar to studies that included both groups of patients, we found an association between high CD44, Shh, and Gli1 expression and clinicopathological characteristics indicative of increased malignant potential, such as gross type, tumour differentiation, tumour invasion, and lymph node metastasis.

The clinical usefulness of CD44 expression for predicting recurrence in GC is controversial. Hirata et al. [22] reported that expression of CD44 variant 9, an isoform of CD44, could predict recurrence in early gastric cancer. In contrast, Yong et al. found that the expression of CD44 was not associated with recurrence of gastric cancer [23]. The different proportions of patients receiving radical resection versus palliative surgery may have contributed to the different conclusions reached in these two studies. No previous study has clarified the association between CD44 overexpression and tumour relapse or long-term survival exclusively in gastric cancer patients who received radical resection. Moreover, this is the first study to demonstrate that patients with CD44-negative tumours have better overall survival and a lower recurrence rate than patients with CD44-positive tumours after radical surgery. Similarly, it is the first to show that overexpression of Shh and Gli1 proteins predicts worse survival outcome and early recurrence in gastric cancer.

To assess the aggressiveness conferred by CD44, Shh, and Gli1 in gastric cancer, we established a biomarker risk score system to evaluate their prognostic importance. The biomarker risk score system discriminated differences in overall survival and recurrence-free survival and showed the highest prognostic value in the multivariate Cox regression analysis. This may partially explain why CD44 and Shh signalling pathway signatures are useful biomarkers for aggressive tumour behaviour in gastric cancer.

In summary, the cancer stem cell biomarker CD44 and Shh signalling pathway signatures can be used as novel diagnostic and therapeutic tools. Further work is needed to elucidate the mechanisms of aberrant Shh and Gli1 expression and the overexpression of CSC markers in gastric cancer.
---
*Source: 1013045-2015-12-29.xml* | 1013045-2015-12-29_1013045-2015-12-29.md | 37,846 | CD44, Sonic Hedgehog, and Gli1 Expression Are Prognostic Biomarkers in Gastric Cancer Patients after Radical Resection | Chen Jian-Hui; Zhai Er-Tao; Chen Si-Le; Wu Hui; Wu Kai-Ming; Zhang Xin-Hua; Chen Chuang-Qi; Cai Shi-Rong; He Yu-Long | Gastroenterology Research and Practice
(2016) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2016/1013045 | 1013045-2015-12-29.xml | ---
## Abstract
Aim. CD44 and Sonic Hedgehog (Shh) signaling are important for gastric cancer (GC). However, the clinical impact, survival, and recurrence outcome of CD44, Shh, and Gli1 expressions in GC patients following radical resection have not been elucidated.Patients and Methods. CD44, Shh, and Gli1 protein levels were quantified by immunohistochemistry (IHC). The association between CD44, Shh, and Gli1 expression and clinicopathological features or prognosis of GC patients was determined. The biomarker risk score was calculated by the IHC staining score of CD44, Shh, and Gli1 protein.Results. The IHC positive staining of CD44, Shh, and Gli1 proteins was correlated with larger tumour size, worse gross type and histological type, and advanced TNM stage, which also predicted shorter overall survival (OS) and disease-free survival (DFS) after radical resection. Multivariate analysis indicated the Gli1 protein and Gli1, CD44 proteins were predictive biomarkers for OS and DFS, respectively. If biomarker risk score was taken into analysis, it was the independent prognostic factor for OS and DFS.Conclusions. CD44 and Shh signaling are important biomarkers for tumour aggressiveness, survival, and recurrence in GC.
---
## Body
## 1. Introduction
Due to an increased early detection rate and therapeutic advancements, the survival of gastric cancer (GC) patients has improved in the past 3 decades worldwide. However, GC remains the second leading cause of cancer death in China [1], mainly because of the disappointing early detection rate in China, early tumour recurrence, and high chemotherapy resistance. Hence, it is essential for gastroenterologists to identify effective biomarkers for evaluating the early detection of GC, which may also be targets for novel therapies for this deadly disease.Cancer stem-like cells (CSCs) are defined as rare cells in malignant tumours with the ability to self-renew and to differentiate into various heterogeneous cancer cell lineages [2]. Abnormal gene expression in CSCs might be responsible for the acquisition of various genetic and epigenetic events and may play a critical role in tumour initiation, maintenance, progression, lymphatic involvement, distant metastasis, and chemoradiotherapy resistance [3]. Therefore, CSCs are considered promising tumour-specific biomarkers with potential clinical application. CD44, widely accepted as a CSCs marker for gastric cancer in many studies [4–6], is involved in cell-cell adhesion, cell-matrix interactions, and tumour metastasis [4]. However, most studies exploring the role of CD44 protein in gastric cancer included patients that received either radical resection or palliative surgery, which introduced bias into the studies. Hence, it is necessary to reevaluate the relationship between CD44 expression and clinicopathological features and long-term survival of GC patients who received radical resection. The activation of the Sonic Hedgehog (Shh) pathway affects numerous human stem cell markers in prostate [7], breast [8], and pancreatic [9] cancer. Several studies have demonstrated that increased CD44 expression activates several signalling pathways related to cancer progression and metastasis, including Shh pathway [10]. Song et al. [10] demonstrated that the Shh pathway is essential for maintenance of human gastric cancer CSCsin vitro. However, the clinical impact and interaction between the Shh pathway and CD44 expression in gastric cancer patients are still uncertain. Here, we aimed to find out the correlations between CD44, Gli1, and Shh expression and clinicopathological features, long-term survival, and recurrence.
## 2. Methods
### 2.1. Ethic Statement
The study was approved by the Institutional Review Board of the 1st Affiliated Hospital of Sun Yat-sen University and informed consent was obtained according to institutional regulations.
### 2.2. Clinical Samples
A total of 101 primary gastric cancer tissues were obtained at the 1st Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, between January 2006 and June 2006. Patients who underwent radical gastrectomy were included, while patients who received neoadjuvant chemotherapy or chemoradiotherapy were excluded from the study. Clinicopathological parameters evaluated included age, gender, tumour location, tumour size, gross tumour type, tumour histological type, depth of invasion, lymph node involvement, distant metastasis, and TNM stage. Tumour gross types were classified as either infiltrating or noninfiltrating. Tumour histological types were classified as either well differentiated (well and moderately differentiated adenocarcinomas) or undifferentiated (poorly differentiated adenocarcinomas, signet ring cell carcinomas, and mucinous adenocarcinomas). Depth of tumour invasion, lymph node involvement, and distant metastasis were assessed according to the 7th edition of Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) guidelines. The potential radical resection gastric cancer patients received gastrectomy and D2 lymphadenectomy. The patients received postoperative chemotherapy using epirubicin, cisplatin, and 5-fluorouracil regimen as indicated by the concurrent UICC/AJCC guidelines. In addition, 20 human adjacent normal gastric tissues were obtained. Chronic atrophic gastritis, ulcer, and erosion were not detected microscopically in the adjacent gastric tissue samples.
### 2.3. Immunohistochemical Staining
Formalin fixed paraffin embedded human gastric cancer specimens were prepared according to the classical methods. The sections (5μm thickness) were treated with protein-blocking solution for 30 min at temperature before being incubated with primary antibodies against human CD44 (mouse monoclonal diluted 1 : 50), Shh (rabbit polyclonal diluted 1 : 100), and Gli1 (mouse polyclonal diluted 1 : 100) overnight at 4°C. All antibodies were obtained from Novus Biologicals (USA). Following incubation with the appropriate peroxidase-conjugated secondary antibody, the samples were treated with diaminobenzidine and counterstained with hematoxylin. Using bright-field microscopy, the percentage of positive cancer cells and the staining intensity was quantified independently by 2 pathologists. The mean percentage of positive tumour cells was quantified in at least 5 fields at 400x magnification and classified into one of the following 5 grades: 0 (<5% of cells had positive staining), 1 (5–25% of cells had positive staining), 2 (26–50% of cells had positive staining), 3 (51–75% of cells had positive staining), and 4 (>75% of cells had positive staining). The staining intensity of CD44, Shh, and Gli1 was scored as follows: 0 (no staining), 1 (light brown), 2 (brown), and 3 (dark brown). The percentage score and staining intensity score were multiplied to get the final staining score for each tumour specimen. The overall staining scoring system could be categorised into 2 groups: negative (0–4), positive (5–12). We defined the positive IHC staining of biomarker as biomarker overexpression.
### 2.4. Statistical Analysis
The biomarker risk score for gastric cancer in this study was the sum of the IHC score of CD44, Shh, and Gli1 proteins (positive: score 1, negative: score 0), and the patients were divided into four groups according to biomarker risk scores (groups 1–4: score 0–3). Continuous variables are presented as the mean ± SEM and categorical variables are presented as percentages (%). The two-tailed Chi-square test and Fisher’s exact test for categorical variables were performed to determine statistical significance of the associations between clinicopathological parameters and the level of CD44, Shh, and Gli1 expression. Overall survival and disease-free survival rates were calculated according to the Kaplan-Meier method and were compared by log-rank tests. Cox proportional hazard models were performed for both univariate and multivariate analysis to determine prognostic significance. Spearman’s rank order correlation was used to determine the correlations between the expressions of CD44, Shh, and Gli1. AP value of less than 0.05 was considered as statistically significant. SPSS 16.0 software (version 17.0, SPSS Inc., Chicago, IL) was used for all statistical analyses.
## 2.1. Ethic Statement
The study was approved by the Institutional Review Board of the 1st Affiliated Hospital of Sun Yat-sen University and informed consent was obtained according to institutional regulations.
## 2.2. Clinical Samples
A total of 101 primary gastric cancer tissues were obtained at the 1st Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, between January 2006 and June 2006. Patients who underwent radical gastrectomy were included, while patients who received neoadjuvant chemotherapy or chemoradiotherapy were excluded from the study. Clinicopathological parameters evaluated included age, gender, tumour location, tumour size, gross tumour type, tumour histological type, depth of invasion, lymph node involvement, distant metastasis, and TNM stage. Tumour gross types were classified as either infiltrating or noninfiltrating. Tumour histological types were classified as either well differentiated (well and moderately differentiated adenocarcinomas) or undifferentiated (poorly differentiated adenocarcinomas, signet ring cell carcinomas, and mucinous adenocarcinomas). Depth of tumour invasion, lymph node involvement, and distant metastasis were assessed according to the 7th edition of Union for International Cancer Control/American Joint Committee on Cancer (UICC/AJCC) guidelines. The potential radical resection gastric cancer patients received gastrectomy and D2 lymphadenectomy. The patients received postoperative chemotherapy using epirubicin, cisplatin, and 5-fluorouracil regimen as indicated by the concurrent UICC/AJCC guidelines. In addition, 20 human adjacent normal gastric tissues were obtained. Chronic atrophic gastritis, ulcer, and erosion were not detected microscopically in the adjacent gastric tissue samples.
## 2.3. Immunohistochemical Staining
Formalin fixed paraffin embedded human gastric cancer specimens were prepared according to the classical methods. The sections (5μm thickness) were treated with protein-blocking solution for 30 min at temperature before being incubated with primary antibodies against human CD44 (mouse monoclonal diluted 1 : 50), Shh (rabbit polyclonal diluted 1 : 100), and Gli1 (mouse polyclonal diluted 1 : 100) overnight at 4°C. All antibodies were obtained from Novus Biologicals (USA). Following incubation with the appropriate peroxidase-conjugated secondary antibody, the samples were treated with diaminobenzidine and counterstained with hematoxylin. Using bright-field microscopy, the percentage of positive cancer cells and the staining intensity was quantified independently by 2 pathologists. The mean percentage of positive tumour cells was quantified in at least 5 fields at 400x magnification and classified into one of the following 5 grades: 0 (<5% of cells had positive staining), 1 (5–25% of cells had positive staining), 2 (26–50% of cells had positive staining), 3 (51–75% of cells had positive staining), and 4 (>75% of cells had positive staining). The staining intensity of CD44, Shh, and Gli1 was scored as follows: 0 (no staining), 1 (light brown), 2 (brown), and 3 (dark brown). The percentage score and staining intensity score were multiplied to get the final staining score for each tumour specimen. The overall staining scoring system could be categorised into 2 groups: negative (0–4), positive (5–12). We defined the positive IHC staining of biomarker as biomarker overexpression.
## 2.4. Statistical Analysis
The biomarker risk score for gastric cancer in this study was the sum of the IHC score of CD44, Shh, and Gli1 proteins (positive: score 1, negative: score 0), and the patients were divided into four groups according to biomarker risk scores (groups 1–4: score 0–3). Continuous variables are presented as the mean ± SEM and categorical variables are presented as percentages (%). The two-tailed Chi-square test and Fisher’s exact test for categorical variables were performed to determine statistical significance of the associations between clinicopathological parameters and the level of CD44, Shh, and Gli1 expression. Overall survival and disease-free survival rates were calculated according to the Kaplan-Meier method and were compared by log-rank tests. Cox proportional hazard models were performed for both univariate and multivariate analysis to determine prognostic significance. Spearman’s rank order correlation was used to determine the correlations between the expressions of CD44, Shh, and Gli1. AP value of less than 0.05 was considered as statistically significant. SPSS 16.0 software (version 17.0, SPSS Inc., Chicago, IL) was used for all statistical analyses.
## 3. Results
### 3.1. Correlations between CD44, Shh, and Gli1 Expression and Clinicopathological Characteristics of Gastric Cancer
To investigate the role of the tumour stem cell biomarker CD44 and Shh signaling pathway in GC tumour, we evaluated the levels of CD44, Shh, and Gli1 protein in tumour tissues using immunohistochemistry (IHC) (Figure1) and the positive stainings of Gli1, Shh, and CD44 protein were mainly localized in the nucleus, cytoplasm, and cell membrane, respectively. We found that 57.8% (59/101), 71.3% (72/101), and 57.8% (59/101) GC tumour specimens stained positively for CD44, Shh, and Gli1 protein, respectively. To further investigate the effect of CD44, Shh, and Gli1 in gastric cancer progression, we analysed the correlations between the level of CD44, Shh, and Gli1 protein and clinicopathological characteristics of GC. There were no statistically significant correlations between CD44, Shh, and Gli1 expression levels and age, gender, or tumour location (Table 1). Overexpression of CD44, Shh, and Gli1 protein was significantly associated with larger tumour size, aggressive gross type, and less differentiated tumour histological type, all of which were clinicopathological features associated with a high metastatic potential. Tumours with high CD44, Shh, and Gli1 expression had more cases of advanced tumour invasion, an increased likelihood of lymph node metastasis, advanced TNM stage (Table 1).Table 1
Clinicopathological characteristics of CD44, Shh, and Gli1 in gastric cancer after radical resection.
Factors
Cases
CD44
Shh
GLI1
Positive (n
=
59)
Negative (n
=
42)
P value
Positive (n
=
72)
Negative (n
=
29)
P value
Positive (n
=
59)
Negative (n
=
42)
P value
Age
0.089
0.777
0.936
<60 years
51
34 (57.6%)
17 (40.5%)
37 (51.4%)
14 (48.3%)
31 (50.8%)
20 (50.0%)
≧60 years
50
25 (42.4%)
25 (59.5%)
35 (48.6%)
15 (51.7%)
30 (49.2%)
20 (50.0%)
Gender
0.460
0.929
0.852
Male
62
38 (64.4%)
24 (57.1%)
44 (61.1%)
18 (62.1%)
37 (60.7%)
25 (62.5%)
Female
39
21 (35.6%)
18 (42.9%)
28 (38.9%)
11 (37.9%)
24 (39.3%)
15 (37.5%)
Tumor location
0.684
0.891
0.440
Upper 1/3
21
13 (22.0%)
8 (19.0%)
15 (20.8%)
6 (20.7%)
13 (21.3%)
8 (20.0%)
Middle 1/3
25
13 (22.0%)
12 (28.6%)
19 (26.4%)
6 (20.7%)
12 (19.7%)
13 (32.5%)
Lower 1/3
50
29 (49.2%)
21 (50.0%)
35 (48.6%)
15 (51.7%)
32 (52.5%)
18 (45.0%)
Whole
5
4 (6.8%)
1 (2.4%)
3 (4.2%)
2 (6.9%)
4 (6.6%)
1 (2.5%)
Tumor size
0.038
<0.001
<0.001
<5 cm
34
15 (25.4%)
19 (45.2%)
16 (22.2%)
18 (62.1%)
11 (18.0%)
23 (57.5%)
≥5 cm
67
44 (74.6%)
23 (54.8%)
56 (77.8%)
11 (37.9%)
50 (82.0%)
17 (42.5%)
Histological type
0.004
0.002
0.002
Differentiated
30
11 (18.6%)
19 (45.2%)
15 (20.8%)
15 (51.7%)
11 (18.0%)
19 (47.5%)
Undifferentiated
71
48 (81.4%)
23 (54.8%)
57 (79.2%)
14 (48.3%)
50 (82.0%)
21 (52.5%)
Gross type
0.501
<0.001
<0.001
Noninfiltration
30
16 (27.1%)
14 (33.3%)
13 (18.1%)
17 (58.6%)
10 (16.4%)
20 (50.0%)
Infiltration
71
43 (72.9%)
28 (66.7%)
59 (81.9%)
12 (41.1%)
51 (83.6%)
20 (50.0%)
T stage (7th)
<0.001
<0.001
<0.001
I
10
2 (3.4%)
8 (19.0%)
3 (4.2%)
7 (24.1%)
2 (3.3%)
8 (20.0%)
II
21
7 (11.9%)
14 (33.3%)
7 (9.7%)
14 (48.3%)
3 (4.9%)
18 (45.0%)
III
20
10 (16.9%)
10 (23.8%)
13 (18.1%)
7 (24.1%)
10 (16.4%)
10 (25.0%)
IVa
30
23 (39.0%)
7 (16.7%)
29 (40.3%)
1 (3.4%)
26 (42.6%)
4 (10.0%)
IVb
20
17 (28.8%)
3 (7.1%)
20 (27.8%)
0 (0.0%)
20 (32.8%)
0 (0.0%)
N stage (7th)
0.005
0.010
<0.001
N0
39
15 (25.4%)
24 (57.1%)
21 (29.2%)
18 (62.1%)
14 (23.0%)
25 (62.5%)
N1
16
9 (15.3%)
7 (16.7%)
11 (15.3%)
5 (17.2%)
11 (18.0%)
5 (12.5%)
N2
24
17 (28.8%)
7 (16.7%)
21 (29.2%)
3 (10.3%)
16 (26.2%)
8 (20.0%)
N3
22
18 (30.5%)
4 (9.5%)
19 (26.4%)
3 (10.3%)
20 (32.8%)
2 (5.0%)
TNM stage (7th)
<0.001
<0.001
<0.001
I
22
4 (6.8%)
18 (42.9%)
5 (6.9%)
17 (58.6%)
2 (3.3%)
20 (50.0%)
II
21
10 (16.9%)
11 (26.2%)
15 (20.8%)
6 (20.7%)
9 (14.8%)
12 (30.0%)
III
33
45 (76.3%)
13 (31.0%)
52 (72.2%)
6 (20.7%)
50 (82.0%)
8 (20.0%)Figure 1
Immunohistochemical expressions of CD44, Shh, and Gli1 markers.
### 3.2. The Overexpression of CD44, Shh, and Gli1 Proteins Indicated Poor Clinical Outcome
Using Kaplan-Meier analysis and the log-rank test, we find that gastric cancer patients with CD44-positive staining had poorer overall survival. 73.8% of patients with CD44-negative tumours survived 5 years compared to only 27.1% of patients with CD44-positive tumours (Figure2(a)) (P
<
0
.
001). A similar result was observed when CD44 expression status and recurrence-free survival time were compared. The recurrence-free survival time of patients with CD44-positive tumours was lower than that of patients with CD44-negative tumours (39.0% versus 79.5%, resp., P
=
0.0
01) (Figure 2(b)). Similarly, cases with Shh and Gli1 positive staining had poorer overall survival (Shh: 33.3% versus 79.3%, P
<
0.001; Gli1: 21.3% versus 85.0%, P
<
0.001) and recurrence-free survival (Shh: 44.6% versus 84.9%, P
<
0.001; Gli1: 35.8% versus 86.5%, P
<
0.001) (Figures 2(c)–2(f)).Figure 2
Prognostic impact of CD44, Shh, and Gli1 markers. (a) CD44 and overall survival, (b) CD44 and recurrence-free survival, (c) Shh and overall survival, (d) Shh and recurrence-free survival, (e) Gli1 and overall survival, and (f) Gli1 and recurrence-free survival.
(a)
(b)
(c)
(d)
(e)
(f)In accordance with these results, univariate Cox regression analysis also revealed that CD44, Shh, and Gli1 status were associated with the prognosis of gastric cancer in our study (Table2). Rather than CD44 and Shh expression levels, only TNM staging and Gli1 expression level were independent prognostic factors for overall survival of patients with GC in this study (Table 2). Similar to the results of prognostic analysis for overall survival, CD44, Shh, and Gli1 status also affected the recurrence of gastric cancer in our study (Table 3). The multivariate analysis revealed that, other than TNM stage and nodal classification, Gli1 status was the independent factor for recurrence-free survival in our study (Table 3).Table 2
Univariate and multivariate analysis for overall survival in gastric cancer after radical resection.
Factors
Univariate regression analysis
Multivariate regression analysis
χ
2 value
OR
95% CI
P value
χ
2 value
OR
95% CI
P value
Age
0.001
0.995
0.589–1.679
0.984
Gender
0.076
0.927
0.540–1.592
0.783
Tumor location
0.052
0.966
0.713–1.307
0.820
Tumor size
10.301
2.957
1.525–5.733
0.001
Histological type
7.097
2.457
1.268–4.760
0.008
Gross type
5.811
2.252
1.164–4.357
0.016
T stage
11.943
3.370
1.692–6.713
0.001
N stage
5.334
2.473
1.147–5.333
0.021
TNM stage
21.978
3.070
1.921–4.906
<0.001
11.856
1.346
1.137–1.594
0.001
CD44 expression
16.049
3.589
1.921–6.706
<0.001
Shh expression
13.707
4.490
2.028–9.945
<0.001
Gli1 expression
28.800
8.927
4.013–19.858
<0.001
9.970
4.247
1.731–10.423
0.002Table 3
Univariate and multivariate analysis for disease-free survival in gastric cancer after radical resection.
Factors
Univariate regression analysis
Multivariate regression analysis
χ
2 value
OR
95% CI
P value
χ
2 value
OR
95% CI
P value
Age
0.461
—
—
0.497
Gender
0.323
—
—
0.570
Tumor location
0.706
—
—
0.401
Tumor size
8.632
3.187
1.471–6.904
0.003
Histological type
2.391
—
—
0.122
Gross type
4.816
2.372
1.097–5.130
0.028
T stage
20.271
1.912
1.442–2.535
<0.001
N stage
20.429
1.841
1.413–2.398
<0.001
4.368
1.334
1.018–1.747
0.037
TNM stage
27.076
2.663
1.841–3.851
<0.001
7.473
1.940
1.206–3.121
0.006
CD44 expression
11.981
3.545
1.731–7.258
0.001
Shh expression
10.853
4.836
1.893–12.352
0.001
Gli1 expression
21.233
7.806
3.257–18.707
<0.001
6.387
3.403
1.316–8.796
0.011
### 3.3. The Correlation of CD44 Expression with the Shh Signalling Pathway in Gastric Cancer
The Shh signalling pathway regulates tumour development via cell proliferation and is involved in the progression and metastasis of a wide variety of human cancers. Hence, abnormal activation of the Shh pathway could be essential for maintenance and regulation of cancer stem-like cells in human gastric cancer. Using immunohistochemistry, we found that CD44 protein levels were correlated with those of both Shh (r
=
0.385, P
<
0.001) and Gli1 (r
=
0.219, P
=
0.028).
### 3.4. Survival Impact of Biomarker Risk Score for Gastric Cancer
We defined the positive staining of CD44, Shh, and Gli1 proteins as score 1, and the patients were divided into four groups according to biomarker risk scores. There were prognostic differences of overall survival and recurrence-free survival among four groups (Figures3(a) and 3(b)), and the 5-year overall survival rates and recurrence-free survival rates of biomarker risk score of 0, 1, 2, and 3 were 93.8%, 72.7%, 57.9%, and 11.4% and 100.0%, 75.6%, 61.1%, and 27.3%, respectively.Figure 3
Prognostic impact of biomarker risk score system. (a) Overall survival. (b) Recurrence-free survival.
(a)
(b)The biomarker risk score also had prognostic impact for overall survival (χ
2, 34.163; relative risk (RR), 2.766; 95% confidence interval (CI), 1.966–3.890; P
<
0.001) and recurrence-free survival (χ
2, 25.616; RR, 2.727; 95% CI, 1.849–4.022; P
<
0.001). Moreover, if biomarker risk score was taken into multivariate Cox regression analysis, rather than CD44, Shh, and Gli1 expression, biomarker risk score (χ
2, 11.744; RR, 1.999; 95% CI, 1.345–2.972; P
=
0.001), and TNM stage were independent prognostic factors for overall survival, and biomarker risk score (χ
2, 7.183; RR, 1.848; 95% CI, 1.179–2.895; P
=
0.00
7), TNM stage, and nodal classification were independent prognostic factors for recurrence-free survival in our study.
## 3.1. Correlations between CD44, Shh, and Gli1 Expression and Clinicopathological Characteristics of Gastric Cancer
To investigate the role of the tumour stem cell biomarker CD44 and Shh signaling pathway in GC tumour, we evaluated the levels of CD44, Shh, and Gli1 protein in tumour tissues using immunohistochemistry (IHC) (Figure1) and the positive stainings of Gli1, Shh, and CD44 protein were mainly localized in the nucleus, cytoplasm, and cell membrane, respectively. We found that 57.8% (59/101), 71.3% (72/101), and 57.8% (59/101) GC tumour specimens stained positively for CD44, Shh, and Gli1 protein, respectively. To further investigate the effect of CD44, Shh, and Gli1 in gastric cancer progression, we analysed the correlations between the level of CD44, Shh, and Gli1 protein and clinicopathological characteristics of GC. There were no statistically significant correlations between CD44, Shh, and Gli1 expression levels and age, gender, or tumour location (Table 1). Overexpression of CD44, Shh, and Gli1 protein was significantly associated with larger tumour size, aggressive gross type, and less differentiated tumour histological type, all of which were clinicopathological features associated with a high metastatic potential. Tumours with high CD44, Shh, and Gli1 expression had more cases of advanced tumour invasion, an increased likelihood of lymph node metastasis, advanced TNM stage (Table 1).Table 1
Clinicopathological characteristics of CD44, Shh, and Gli1 in gastric cancer after radical resection.
Factors
Cases
CD44
Shh
GLI1
Positive (n
=
59)
Negative (n
=
42)
P value
Positive (n
=
72)
Negative (n
=
29)
P value
Positive (n
=
59)
Negative (n
=
42)
P value
Age
0.089
0.777
0.936
<60 years
51
34 (57.6%)
17 (40.5%)
37 (51.4%)
14 (48.3%)
31 (50.8%)
20 (50.0%)
≧60 years
50
25 (42.4%)
25 (59.5%)
35 (48.6%)
15 (51.7%)
30 (49.2%)
20 (50.0%)
Gender
0.460
0.929
0.852
Male
62
38 (64.4%)
24 (57.1%)
44 (61.1%)
18 (62.1%)
37 (60.7%)
25 (62.5%)
Female
39
21 (35.6%)
18 (42.9%)
28 (38.9%)
11 (37.9%)
24 (39.3%)
15 (37.5%)
Tumor location
0.684
0.891
0.440
Upper 1/3
21
13 (22.0%)
8 (19.0%)
15 (20.8%)
6 (20.7%)
13 (21.3%)
8 (20.0%)
Middle 1/3
25
13 (22.0%)
12 (28.6%)
19 (26.4%)
6 (20.7%)
12 (19.7%)
13 (32.5%)
Lower 1/3
50
29 (49.2%)
21 (50.0%)
35 (48.6%)
15 (51.7%)
32 (52.5%)
18 (45.0%)
Whole
5
4 (6.8%)
1 (2.4%)
3 (4.2%)
2 (6.9%)
4 (6.6%)
1 (2.5%)
Tumor size
0.038
<0.001
<0.001
<5 cm
34
15 (25.4%)
19 (45.2%)
16 (22.2%)
18 (62.1%)
11 (18.0%)
23 (57.5%)
≥5 cm
67
44 (74.6%)
23 (54.8%)
56 (77.8%)
11 (37.9%)
50 (82.0%)
17 (42.5%)
Histological type
0.004
0.002
0.002
Differentiated
30
11 (18.6%)
19 (45.2%)
15 (20.8%)
15 (51.7%)
11 (18.0%)
19 (47.5%)
Undifferentiated
71
48 (81.4%)
23 (54.8%)
57 (79.2%)
14 (48.3%)
50 (82.0%)
21 (52.5%)
Gross type
0.501
<0.001
<0.001
Noninfiltration
30
16 (27.1%)
14 (33.3%)
13 (18.1%)
17 (58.6%)
10 (16.4%)
20 (50.0%)
Infiltration
71
43 (72.9%)
28 (66.7%)
59 (81.9%)
12 (41.1%)
51 (83.6%)
20 (50.0%)
T stage (7th)
<0.001
<0.001
<0.001
I
10
2 (3.4%)
8 (19.0%)
3 (4.2%)
7 (24.1%)
2 (3.3%)
8 (20.0%)
II
21
7 (11.9%)
14 (33.3%)
7 (9.7%)
14 (48.3%)
3 (4.9%)
18 (45.0%)
III
20
10 (16.9%)
10 (23.8%)
13 (18.1%)
7 (24.1%)
10 (16.4%)
10 (25.0%)
IVa
30
23 (39.0%)
7 (16.7%)
29 (40.3%)
1 (3.4%)
26 (42.6%)
4 (10.0%)
IVb
20
17 (28.8%)
3 (7.1%)
20 (27.8%)
0 (0.0%)
20 (32.8%)
0 (0.0%)
N stage (7th)
0.005
0.010
<0.001
N0
39
15 (25.4%)
24 (57.1%)
21 (29.2%)
18 (62.1%)
14 (23.0%)
25 (62.5%)
N1
16
9 (15.3%)
7 (16.7%)
11 (15.3%)
5 (17.2%)
11 (18.0%)
5 (12.5%)
N2
24
17 (28.8%)
7 (16.7%)
21 (29.2%)
3 (10.3%)
16 (26.2%)
8 (20.0%)
N3
22
18 (30.5%)
4 (9.5%)
19 (26.4%)
3 (10.3%)
20 (32.8%)
2 (5.0%)
TNM stage (7th)
<0.001
<0.001
<0.001
I
22
4 (6.8%)
18 (42.9%)
5 (6.9%)
17 (58.6%)
2 (3.3%)
20 (50.0%)
II
21
10 (16.9%)
11 (26.2%)
15 (20.8%)
6 (20.7%)
9 (14.8%)
12 (30.0%)
III
33
45 (76.3%)
13 (31.0%)
52 (72.2%)
6 (20.7%)
50 (82.0%)
8 (20.0%)Figure 1
Immunohistochemical expressions of CD44, Shh, and Gli1 markers.
## 3.2. The Overexpression of CD44, Shh, and Gli1 Proteins Indicated Poor Clinical Outcome
Using Kaplan-Meier analysis and the log-rank test, we find that gastric cancer patients with CD44-positive staining had poorer overall survival. 73.8% of patients with CD44-negative tumours survived 5 years compared to only 27.1% of patients with CD44-positive tumours (Figure2(a)) (P
<
0
.
001). A similar result was observed when CD44 expression status and recurrence-free survival time were compared. The recurrence-free survival time of patients with CD44-positive tumours was lower than that of patients with CD44-negative tumours (39.0% versus 79.5%, resp., P
=
0.0
01) (Figure 2(b)). Similarly, cases with Shh and Gli1 positive staining had poorer overall survival (Shh: 33.3% versus 79.3%, P
<
0.001; Gli1: 21.3% versus 85.0%, P
<
0.001) and recurrence-free survival (Shh: 44.6% versus 84.9%, P
<
0.001; Gli1: 35.8% versus 86.5%, P
<
0.001) (Figures 2(c)–2(f)).Figure 2
Prognostic impact of CD44, Shh, and Gli1 markers. (a) CD44 and overall survival, (b) CD44 and recurrence-free survival, (c) Shh and overall survival, (d) Shh and recurrence-free survival, (e) Gli1 and overall survival, and (f) Gli1 and recurrence-free survival.
(a)
(b)
(c)
(d)
(e)
(f)In accordance with these results, univariate Cox regression analysis also revealed that CD44, Shh, and Gli1 status were associated with the prognosis of gastric cancer in our study (Table2). Rather than CD44 and Shh expression levels, only TNM staging and Gli1 expression level were independent prognostic factors for overall survival of patients with GC in this study (Table 2). Similar to the results of prognostic analysis for overall survival, CD44, Shh, and Gli1 status also affected the recurrence of gastric cancer in our study (Table 3). The multivariate analysis revealed that, other than TNM stage and nodal classification, Gli1 status was the independent factor for recurrence-free survival in our study (Table 3).Table 2
Univariate and multivariate analysis for overall survival in gastric cancer after radical resection.
Factors
Univariate regression analysis
Multivariate regression analysis
χ
2 value
OR
95% CI
P value
χ
2 value
OR
95% CI
P value
Age
0.001
0.995
0.589–1.679
0.984
Gender
0.076
0.927
0.540–1.592
0.783
Tumor location
0.052
0.966
0.713–1.307
0.820
Tumor size
10.301
2.957
1.525–5.733
0.001
Histological type
7.097
2.457
1.268–4.760
0.008
Gross type
5.811
2.252
1.164–4.357
0.016
T stage
11.943
3.370
1.692–6.713
0.001
N stage
5.334
2.473
1.147–5.333
0.021
TNM stage
21.978
3.070
1.921–4.906
<0.001
11.856
1.346
1.137–1.594
0.001
CD44 expression
16.049
3.589
1.921–6.706
<0.001
Shh expression
13.707
4.490
2.028–9.945
<0.001
Gli1 expression
28.800
8.927
4.013–19.858
<0.001
9.970
4.247
1.731–10.423
0.002Table 3
Univariate and multivariate analysis for disease-free survival in gastric cancer after radical resection.
Factors
Univariate regression analysis
Multivariate regression analysis
χ
2 value
OR
95% CI
P value
χ
2 value
OR
95% CI
P value
Age
0.461
—
—
0.497
Gender
0.323
—
—
0.570
Tumor location
0.706
—
—
0.401
Tumor size
8.632
3.187
1.471–6.904
0.003
Histological type
2.391
—
—
0.122
Gross type
4.816
2.372
1.097–5.130
0.028
T stage
20.271
1.912
1.442–2.535
<0.001
N stage
20.429
1.841
1.413–2.398
<0.001
4.368
1.334
1.018–1.747
0.037
TNM stage
27.076
2.663
1.841–3.851
<0.001
7.473
1.940
1.206–3.121
0.006
CD44 expression
11.981
3.545
1.731–7.258
0.001
Shh expression
10.853
4.836
1.893–12.352
0.001
Gli1 expression
21.233
7.806
3.257–18.707
<0.001
6.387
3.403
1.316–8.796
0.011
## 3.3. The Correlation of CD44 Expression with the Shh Signalling Pathway in Gastric Cancer
The Shh signalling pathway regulates tumour development via cell proliferation and is involved in the progression and metastasis of a wide variety of human cancers. Hence, abnormal activation of the Shh pathway could be essential for maintenance and regulation of cancer stem-like cells in human gastric cancer. Using immunohistochemistry, we found that CD44 protein levels were correlated with those of both Shh (r
=
0.385, P
<
0.001) and Gli1 (r
=
0.219, P
=
0.028).
## 3.4. Survival Impact of Biomarker Risk Score for Gastric Cancer
We defined the positive staining of CD44, Shh, and Gli1 proteins as score 1, and the patients were divided into four groups according to biomarker risk scores. There were prognostic differences of overall survival and recurrence-free survival among four groups (Figures3(a) and 3(b)), and the 5-year overall survival rates and recurrence-free survival rates of biomarker risk score of 0, 1, 2, and 3 were 93.8%, 72.7%, 57.9%, and 11.4% and 100.0%, 75.6%, 61.1%, and 27.3%, respectively.Figure 3
Prognostic impact of biomarker risk score system. (a) Overall survival. (b) Recurrence-free survival.
(a)
(b)The biomarker risk score also had prognostic impact for overall survival (χ
2, 34.163; relative risk (RR), 2.766; 95% confidence interval (CI), 1.966–3.890; P
<
0.001) and recurrence-free survival (χ
2, 25.616; RR, 2.727; 95% CI, 1.849–4.022; P
<
0.001). Moreover, if biomarker risk score was taken into multivariate Cox regression analysis, rather than CD44, Shh, and Gli1 expression, biomarker risk score (χ
2, 11.744; RR, 1.999; 95% CI, 1.345–2.972; P
=
0.001), and TNM stage were independent prognostic factors for overall survival, and biomarker risk score (χ
2, 7.183; RR, 1.848; 95% CI, 1.179–2.895; P
=
0.00
7), TNM stage, and nodal classification were independent prognostic factors for recurrence-free survival in our study.
## 4. Discussion
The CD44 gene, located on chromosome 11p12-13, has various isoforms consisting of at least 19 exons. The CD44 protein is a class I transmembrane glycoprotein and is a major component of the extracellular matrix that regulates the function of cell-cell and cell-tissue adhesion. Moreover, the CD44 protein has been identified as a biomarker of side population cells [11] or cancer stem-like cells [12] in the gastric cell lines MKN-45, MKN-74, NCI-N87, and BGC-823. Hence, CD44 may be involved in several malignant biological processes, such as tumour initiation, development, and metastasis. As one of the most important signalling pathways, Shh has been implicated in the regulation of gastric cancer cell proliferation, migration, invasion, stem cell maintenance, and lymphangiogenesis. CD44 is required for Shh signalling pathway activation in various types of cancer, including ovarian [13], pancreatic [14], and prostate cancers [15]. Most studies have confirmed an interaction between CD44 and the Shh pathwayin vivo. In contrast, Nanashima et al. [16] found no significant correlation between the expressions of Gli1 and CD44 in intrahepatic cholangiocarcinoma. There are very few studies in the literature evaluating the interaction between the Shh pathway and CD44 in gastric cancer cells. Song et al. [10] demonstrated that the Shh pathway was important for maintenance of cancer stem-like abilities in human gastric cancer cells. Yu et al. [17] found that overexpression of Shh signalling pathway genes was accompanied by an increase in CD44-positive cells in the MKN45 gastric cancer cell line. A similar result has been reported for breast cancer cells [18]. However, to the best of our knowledge, the correlation of CD44, Shh, and Gli1 in gastric cancer and their clinicopathological significance have not been reported in the literature. This is the first report revealing a positive relationship between CD44 expression and the levels of 2 important members of Hedgehog signalling pathwayin vivo, suggesting that the interaction of CD44 and the Shh pathway may be involved in primary gastric cancer tumourigenesis, progression, and metastasis.Most studies confirm that high CD44 [19], Shh [20], and Gli1 [21] expression is significantly associated with poorer clinicopathological parameters and worse overall survival in gastric cancer. It was worthy of note that most studies did not distinguish between patients who underwent radical resection and those receiving palliative surgery, which have significant differences in clinicopathological features and prognosis, when assessing the association of CD44, Shh, and Gli1 protein levels in GC. This study is the first to explore CD44, Shh, and Gli1 expression only in patients who underwent radical resection. Similar to studies that included both patients who underwent radical resection and palliative surgery, we also found an association between high CD44, Shh, and Gli1 expression and clinicopathological characteristics indicative of increased malignant potential, such as gross type, tumour differentiation, tumour invasion, and lymph node metastasis.The clinical usefulness of CD44 expression to predict recurrence in GC is controversial. Hirata et al. [22] reported that expression of CD44 variant 9, an isoform of CD44, could predict recurrence in early gastric cancer. In contrast, Yong et al. found that the expression of CD44 was not associated with recurrence of gastric cancer [23]. 
The different proportion of patients receiving radical resection versus palliative surgery may have contributed to the different conclusions reached in these two studies. No previous study has clarified the association between CD44 overexpression and tumour relapse or long-term survival solely in gastric cancer patients who received radical resection. Moreover, this is the first study to demonstrate that patients with CD44-negative tumours have better overall survival and a lower recurrence rate after radical surgery than patients with CD44-positive tumours. Similarly, it is the first to show that overexpression of the Shh and Gli1 proteins predicts worse survival outcomes and early recurrence in gastric cancer.

To assess the aggressiveness conferred by CD44, Shh, and Gli1 in gastric cancer, we established a biomarker risk score system to evaluate their prognostic importance. The biomarker risk score system discriminated differences in both overall survival and recurrence-free survival and showed the highest prognostic value in the multivariate Cox regression analysis. This may partially explain why CD44 and Shh signalling pathway signatures are useful biomarkers of aggressive tumour behaviour in gastric cancer.

In summary, the cancer stem cell biomarker CD44 and Shh signalling pathway signatures can be used as novel diagnostic and therapeutic tools. It is necessary to further elucidate the mechanisms underlying aberrant Shh and Gli1 expression and the overexpression of CSC markers in gastric cancer.
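As a minimal, hypothetical sketch of how such a biomarker risk score might be computed, the snippet below counts positive markers (CD44, Shh, Gli1) per patient and compares survival across the resulting score groups. The scoring scheme, the toy data, and the grouping are assumptions for illustration only and are not taken from this study.

```python
# Hypothetical biomarker risk score: one point per positive marker (0-3).
# The study's exact scoring scheme may differ; this only illustrates the idea.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

# Toy cohort: 1 = positive immunohistochemical staining, 0 = negative.
df = pd.DataFrame({
    "CD44": [1, 0, 1, 1, 0, 0],
    "Shh":  [1, 0, 1, 0, 0, 1],
    "Gli1": [1, 0, 0, 1, 0, 1],
    "os_months": [14, 60, 22, 30, 58, 25],  # overall survival time
    "death":     [1, 0, 1, 1, 0, 1],        # 1 = event observed
})

# Risk score = number of positive markers per patient.
df["risk_score"] = df[["CD44", "Shh", "Gli1"]].sum(axis=1)

# Kaplan-Meier estimate per risk-score group.
kmf = KaplanMeierFitter()
for score, grp in df.groupby("risk_score"):
    kmf.fit(grp["os_months"], grp["death"], label=f"score={score}")
    print(score, kmf.median_survival_time_)

# Log-rank test across the score groups.
res = multivariate_logrank_test(df["os_months"], df["risk_score"], df["death"])
print(res.p_value)
```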
---
*Source: 1013045-2015-12-29.xml* | 2016 |
# Controllable Synthesis and Photocatalytic Activity of Nano-BiOBr Photocatalyst
**Authors:** Xiaoyang Wang; Fuchun Zhang; Yanning Yang; Yu Zhang; Lili Liu; Wenli Lei
**Journal:** Journal of Nanomaterials
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1013075
---
## Abstract
Nano-BiOBr photocatalysts were successfully prepared by hydrothermal synthesis using an ethylene glycol solution. The nano-BiOBr photocatalysts were characterized by X-ray diffractometry (XRD), scanning electron microscopy (SEM), photoluminescence (PL) spectroscopy, and UV-vis diffuse reflectance spectroscopy (UV-Vis DRS), and their catalytic ability toward the photodegradation of rhodamine B (RhB) was also explored. The results showed that the crystallinity of the nano-BiOBr photocatalyst decreased with increasing precursor concentration, while it increased with the amount of added deionized water. The morphology of the nano-BiOBr photocatalyst changed from microspheres to cubes and then to a mixture of microspheres and flakes with increasing concentration, and from microspheres to flakes with the addition of deionized water. The results indicated that the concentration and solvent have an essential influence on the bandgap energy of the nano-BiOBr photocatalyst, which showed excellent activity toward the photodegradation of RhB. The degradation yield decreased with increasing concentration and increased with the addition of deionized water. The PL intensity of the photocatalyst increased with increasing concentration and weakened with the addition of deionized water.
---
## Body
## 1. Introduction
In recent years, global water pollution has become a more and more severe issue with the rapid development of the economy, attracting widespread attention because of the close relationship between water resources and people's daily work and life [1, 2]. Water pollution has many causes, one of them being the textile industry, whose dye-laden wastewaters are challenging to treat due to their poor biodegradability [3–5]. Semiconductor bismuth oxyhalide (BiOX, X = Cl, Br, I) photocatalysts have attracted extensive attention from researchers because of their unique structure and excellent photocatalytic properties [6, 7]. BiOBr was the target material of the present investigation because of its moderate bandgap, open layered structure, high oxidation ability, indirect transition mode, high visible light response, and excellent stability [8, 9]. There are many methods to prepare BiOBr, such as high-temperature solid-state [10], hydrothermal [11], solvothermal [12], water- (alcohol-) based [13], ultrasound-assisted [14], and electrospinning methods [15]. Among them, hydro- and solvothermal methods are the most commonly used synthesis pathways. The structure, morphology, crystallinity, and phase formation of the photocatalysts can be effectively controlled through such synthesis because the water- (solvent-) based thermal reaction offers a slow product formation rate, easily controlled reaction conditions, and a stable reaction environment [16]. For example, nano-BiOBr microspheres were synthesized previously by the solvothermal method using ethylene glycol (EG) as a solvent [17]. Nano-BiOX microspheres were also obtained using other solvothermal methods with the same solvent, EG [18]. BiOBr/SrFe12O19 nanosheets were synthesized by the solvothermal method using deionized water (DI) as a solvent [19]. AgBr/BiOBr nano-heterostructure-decorated polyacrylonitrile nanofibers were synthesized by an electrospinning technique and solvothermal treatment in the presence of an EG solution as the reductant [20].

Therefore, the present paper is aimed at obtaining nano-BiOBr photocatalysts using Bi(NO3)3·5H2O and CTAB as raw materials, with EG and DI as solvents, under different concentration and solvent conditions. The influences of the different solvents and precursor concentrations on the structure, morphology, optical properties, and photocatalytic activities were also investigated systematically.
## 2. Experiment Section
### 2.1. Synthesis of Nano-BiOBr Photocatalyst
In the first step, 2 mmol of Bi(NO3)3·5H2O was added to 80 ml of EG, and the solution was ultrasonicated until the salt was completely dissolved (solution A). Afterward, 2 mmol of CTAB was introduced into solution A and stirred magnetically until it was completely dissolved (solution B). Next, solution B was transferred into a high-temperature reactor (filling degree 80%) and, after a constant-temperature reaction at 180°C for 10 hours in an incubator, the solution was naturally cooled to room temperature and the precipitate was separated. Finally, the precipitate was washed with DI and alcohol, and the nano-BiOBr photocatalyst was obtained after drying at 80°C for 12 hours. Table 1 lists the abbreviations and synthesis parameters of the nano-BiOBr photocatalysts obtained under the different synthesis conditions; the samples are referred to as (a)–(d) in the remainder of the manuscript.
Table 1
The specific process parameters of the samples.

| Sample | V1 : V2 | CBi : CBr | T (°C) |
| --- | --- | --- | --- |
| (a) | 8 : 0 | 1 : 1 | 180 |
| (b) | 8 : 0 | 1 : 3 | 180 |
| (c) | 8 : 1 | 1 : 3 | 180 |
| (d) | 8 : 0 | 1 : 5 | 180 |

V1: EG volume; V2: DI volume; CBi: the source of Bi3+; CBr: the source of Br−.
### 2.2. Characterization of Nano-BiOBr Photocatalyst
The crystalline phases were determined with a Bruker D8 Advance X-ray diffractometer (XRD) using Cu Kα radiation (λ = 0.15418 nm) in θ-2θ Bragg geometry. The morphologies of the as-prepared samples were investigated with a field emission scanning electron microscope (FESEM, Nova NanoSEM 450, FEI). The UV-vis absorption spectra of the samples were measured with a Cary 5000 spectrophotometer (Agilent, USA). The photoluminescence (PL) spectra were recorded under 280 nm excitation from a He-Cd laser.
### 2.3. Determination of the Photocatalytic Activity of the Nano-BiOBr Photocatalyst
The photocatalytic reactions were carried out in a CEL-LAB500E4 multisite photochemical reaction system. The catalytic activity of the nano-BiOBr photocatalysts was evaluated by degrading RhB under a visible light source. In a typical photocatalytic experiment, 0.05 g of the nano-BiOBr photocatalyst was dispersed into 50 ml of RhB solution (10 mg/l) and magnetically stirred in the dark for 30 min to reach adsorption-desorption equilibrium between the RhB and the nano-BiOBr photocatalyst. Then, the light source was turned on, and a 4 ml sample of the suspension was taken from the reaction cell every 15 minutes and centrifuged. Finally, the absorbance of the supernatant at the maximum absorption wavelength was analysed with an ultraviolet-visible spectrophotometer (UV-1901). The degradation efficiencies were calculated from the degradation rate (1 − A/A0), where A0 is the absorbance of the target compound at its maximum absorption wavelength before illumination and A is the absorbance after illumination for a given time.
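As a minimal illustration of the calculation just described, the sketch below computes the degradation efficiency 1 − A/A0 from a series of absorbance readings; the absorbance values are invented for illustration, not taken from the experiments.

```python
# Degradation efficiency from absorbance readings at the RhB absorption
# maximum. A0 is the absorbance after the dark adsorption-desorption
# equilibrium, before illumination; values here are illustrative only.
a0 = 1.85                                               # absorbance before illumination
samples = [(15, 1.12), (30, 0.61), (45, 0.28), (60, 0.07)]  # (minutes, A)

for minutes, a in samples:
    efficiency = 1 - a / a0                             # degradation rate, Sec. 2.3
    print(f"t = {minutes:3d} min: degradation = {efficiency:.1%}")
```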
## 3. Results and Discussion
### 3.1. XRD Analysis
The XRD patterns of the nano-BiOBr photocatalysts prepared at different concentrations and in different solvents are shown in Figure 1. The prepared nano-BiOBr photocatalysts are consistent with the standard diffraction pattern of tetragonal BiOBr (PDF#85-0862) (Figure 1(a)). No other diffraction peaks were detected, indicating that the nano-BiOBr photocatalysts prepared at different concentrations and in different solvents are pure tetragonal nanoparticles. The intensity of the diffraction peaks weakens with increasing concentration, indicating that the concentration is a key factor affecting the crystallinity of the nano-BiOBr photocatalyst; 2θ shifts to higher angles, and the interplanar spacing d decreases according to the Bragg equation (2d sin θ = nλ). The intensity of the diffraction peaks increases with the addition of DI, indicating that DI is beneficial to the crystallinity of the nano-BiOBr photocatalyst; multiple factors may be involved here, and the main reason remains to be studied further.

Figure 1
XRD patterns of samples (a) before and (b) after the reaction.

In order to study the stability of the nano-BiOBr photocatalyst, the XRD patterns after the photocatalytic reaction were also examined (Figure 1(b)). Compared with the patterns before photodegradation, there are no noticeable changes in the crystal phases of the samples.
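For context, Bragg's law links the observed shift of a diffraction peak to a change in lattice spacing. The short sketch below converts a 2θ peak position to the spacing d for a first-order reflection with the Cu Kα wavelength from Section 2.2; the peak position used is invented for illustration, not a value reported in the paper.

```python
import math

# Bragg's law: n*lambda = 2*d*sin(theta). A right-shift of the peak
# (larger 2-theta) therefore means a smaller interplanar spacing d.
wavelength_nm = 0.15418          # Cu K-alpha, as used in Section 2.2
two_theta_deg = 32.2             # illustrative peak position, not from the paper

theta = math.radians(two_theta_deg / 2)
d_nm = wavelength_nm / (2 * math.sin(theta))   # first order, n = 1
print(f"d = {d_nm:.4f} nm")
```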
### 3.2. SEM Analysis
Figure 2 shows the SEM images of the nano-BiOBr photocatalysts prepared at different concentrations and in different solvents. As shown in Figure 2(a), BiOBr microspheres with a diameter of ~2 μm were obtained; the microspheres are self-assembled from irregular nanosheets with a thickness of ~10 nm. In Figure 2(b), BiOBr cubes of ~2 μm are obtained, likewise self-assembled from irregular nanosheets in a fixed manner. Compared with photocatalyst (a), photocatalyst (b) is assembled from nanosheets more densely, and the nanosheets are thicker (~20 nm). As shown in Figure 2(c), irregular sheetlike nano-BiOBr was obtained; compared with photocatalyst (b), photocatalyst (c) consists of nanosheets with a thickness of ~15 nm. In Figure 2(d), a mixture of BiOBr microspheres and sheetlike BiOBr is observed; the microspheres are tightly assembled from irregular nanosheets with a thickness of ~15 nm. Compared with photocatalyst (b), photocatalyst (d) has more unassembled nanosheets, and the nanosheets are thinner and assembled such that cubes become microspheres. The results show that the morphology of the nano-BiOBr photocatalyst changes from microspheres to cubes and then to a mixture of microspheres and flakes with increasing precursor concentration. Moreover, the morphology changes from microspheres to flakes with the addition of DI.

Figure 2
SEM images of (a)–(d) photocatalysts.
### 3.3. UV-Vis DRS Analysis
Figure 3 shows the UV-vis diffuse reflectance spectra and corresponding bandgap energies of the nano-BiOBr photocatalysts. The absorption edges of photocatalysts (a)–(d) lie at 439, 450, 446, and 453 nm, respectively, so photocatalyst (d) has the widest absorption range and photocatalyst (a) the narrowest: more visible light is absorbed with increasing concentration, while the absorbed visible light is reduced with the addition of DI. The position of the absorption edge is closely related to the forbidden band of the semiconductor photocatalyst. The bandgap of the BiOBr photocatalyst is calculated from Equation (1) [21]:

$$\alpha h\nu = A\,(h\nu - E_g)^{n/2}, \tag{1}$$

where α, h, ν, A, and Eg represent the intrinsic absorption coefficient, the Planck constant, the frequency of light, a proportionality constant, and the bandgap of the semiconductor, respectively; n = 2 for a direct-bandgap semiconductor and n = 4 for an indirect-bandgap one. Here n = 4 because BiOBr is an indirect-bandgap semiconductor. Plotting (αhν)^{1/2} against hν, as shown in Figure 3(b), and extrapolating the tangent to the linear middle section of the curve to its intercept with the abscissa gives the bandgap of the BiOBr photocatalyst. The bandgaps of photocatalysts (a)–(d) obtained by this tangent fitting are 2.77 eV, 2.57 eV, 2.67 eV, and 2.53 eV, respectively. The bandgap decreases continuously with increasing concentration and increases with the addition of DI. It can be concluded that the concentration and the solvent have an important influence on the bandgap energy of the BiOBr photocatalyst.

Figure 3
UV-vis diffuse reflectance spectra and corresponding bandgap widths of (a)–(d) photocatalysts.
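The tangent-fitting step can be reproduced numerically. The sketch below fits the linear region of an (αhν)^{1/2} versus hν curve and extrapolates it to the abscissa; the data points are invented for illustration, and in practice α would be derived from the measured diffuse reflectance (for example via the Kubelka-Munk function), which this sketch does not cover.

```python
import numpy as np

# Tauc analysis for an indirect-gap semiconductor (n = 4):
# (alpha*h*nu)^(1/2) is linear in h*nu near the absorption edge, and the
# extrapolated intercept with the abscissa gives the bandgap Eg.
hv = np.array([2.6, 2.7, 2.8, 2.9, 3.0])       # photon energy, eV (toy data)
tauc_y = np.array([0.0, 0.9, 1.8, 2.7, 3.6])   # (alpha*h*nu)^(1/2), arb. units

# Fit the rising linear portion of the curve.
slope, intercept = np.polyfit(hv[1:], tauc_y[1:], 1)
eg = -intercept / slope                        # x-intercept = bandgap
print(f"Eg ≈ {eg:.2f} eV")
```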
### 3.4. Photodegradation of RhB Using Nano-BiOBr Semiconductors
#### 3.4.1. UV-Vis Absorption Spectral Analysis
Figure 4 shows the UV-vis absorption spectral changes of the RhB aqueous solution versus irradiation time in the presence of photocatalysts (a)–(d). With increasing irradiation time, the absorption maximum declined and the band shifted to shorter wavelengths, which indicates N-demethylation and destruction of the conjugated structure during RhB photodegradation [22]. The blue-shifted band implies that the main photocatalytic degradation path of RhB is N-demethylation. The major peaks decrease gradually during visible light irradiation, indicating a step-by-step degradation of RhB. The RhB absorption spectrum for photocatalyst (a) was approximately a straight line after 60 min of illumination, indicating that its photocatalytic reaction with RhB was largely complete. The absorption peak of RhB in the visible region had completely disappeared for photocatalysts (a), (c), and (d) after 80 min, with a decolourisation efficiency of 100%, whereas the RhB solution photodegraded with photocatalyst (b) retained a weak absorption peak at the end of the process. These findings accord with the results of the previous sections: the degradation efficiency decreases with increasing concentration and increases with the addition of DI.

Figure 4
UV-vis absorption spectral changes of RhB aqueous solution as a function of irradiation time in the (a)–(d) photocatalysts.
#### 3.4.2. Degradation Efficiency for RhB of BiOBr Photocatalysts
Figure 5 shows the degradation performance for RhB of the photocatalysts prepared at different concentrations and in different solvents. The degradation performance declines to varying degrees with increasing concentration, indicating that the concentration can change the degradation performance of the photocatalyst, and it improves with the addition of DI. The degradation efficiencies of photocatalysts (a)–(d) toward RhB were 96.3%, 68.8%, 88.1%, and 81.3%, respectively, after 60 min of light irradiation. Photocatalyst (a) thus showed the highest photocatalytic activity for RhB.

Figure 5
Degradation performance of (a)–(d) photocatalysts for RhB.
### 3.5. PL Analysis
Nano-BiOBr photocatalysts prepared at different concentrations and in different solvents were characterized by photoluminescence (PL) spectroscopy to further verify the conclusions above. The excitation wavelength was 280 nm, and the emission range was 400 nm to 600 nm, as shown in Figure 6. PL spectra reveal the movement, transfer, and recombination rate of photogenerated electron-hole pairs: the lower the intensity of the emission peaks, the higher the separation efficiency of electrons and holes in the semiconductor and the higher the photocatalytic activity [23]. The PL intensity increases with increasing concentration, while the photocatalytic activity decreases; conversely, the PL intensity weakens with the addition of DI and the photocatalytic activity increases. All of the photocatalysts have emission peaks at 437 nm, 449 nm, and 466 nm, and the order of luminous intensity is (b) > (d) > (c) > (a), which means that the recombination rate of electrons and holes of the four materials decreases in that order and the photodegradation performance is correspondingly enhanced, in accordance with the photocatalytic experiment results.

Figure 6
PL spectra of (a)–(d) photocatalysts.
## 4. Conclusion
Nano-BiOBr photocatalysts were successfully prepared by the hydrothermal method. The influences of different concentrations and different solvents on the structure, morphology, optical properties, and photocatalytic properties were investigated systematically. The results show that the crystallinity of the nano-BiOBr photocatalyst decreases with increasing concentration and increases with the amount of DI. The morphology changes from microspheres to cubes and then to a mixture of microspheres and flakes with increasing concentration, and from microspheres to flakes with the addition of DI. It was also shown that the concentration and solvent have an essential influence on the bandgap energy of the nano-BiOBr photocatalyst. The degradation performance of the photocatalyst declines with increasing concentration and improves with the addition of DI, and sample (a) showed the highest photocatalytic activity toward RhB. The PL intensity of the photocatalyst increased with increasing concentration and weakened with the addition of DI. The high photocatalytic activity of these photocatalysts toward contaminated aqueous solutions makes this work a platform for developing flexible photocatalysts for practical applications in water purification.
---
*Source: 1013075-2020-02-24.xml* | 2020 |
## Abstract
Nano-BiOBr photocatalysts were successfully prepared by hydrothermal synthesis using the ethylene glycol solution. The nano-BiOBr photocatalysts were characterized and investigated by X-ray diffractometry (XRD), scanning electron microscopy (SEM), photoluminescence (PL), and UV-vis diffuse reflectance spectroscopy (UV-Vis DRS), and the catalytic ability toward photodegradation of rhodamine B (RhB) was also explored. The results showed that the crystallinity of the nano-BiOBr photocatalyst decreased with the increase of the concentration, while it increased with the amount of the applied deionized water. The morphology of the nano-BiOBr photocatalyst changed from microspheres to cubes and then to a mixture of microspheres and flakes with the increasing of the concentration and from microspheres to flakes with the addition of the deionized water. The results indicated that the concentration and solvents have an essential influence on the bandgap energy values of the nano-BiOBr photocatalyst, and photocatalyst showed an excellent photocatalyst activity toward photodegradation of RhB. The degradation yields of photocatalyst decreased with the increase of the concentration and increased with the addition of the deionized water. PL intensity of photocatalyst increased with the increase of the concentration and weakened with the addition of the deionized water.
---
## Body
## 1. Introduction
In recent years, the phenomenon of global water pollution has become a more and more severe issue with the rapid development of the economy, which has attracted widespread attention because of the close relationship between water resources and people’s daily work and life [1, 2]. Many ways can cause water pollution, one of them being the textile industry and wastewaters with organic dye, which are challenging due to their poor biodegradability [3–5]. Semiconductor bismuth halide (BiOX, X = Cl, Br, I)-based photocatalysts have attracted extensive attention from researchers because of their unique structure and excellent photocatalytic properties [6, 7]. BiOBr was the target material of the presented investigations because of its moderate bandgap, open layered structure, high oxidation ability, indirect transition mode, high visible light response ability, and excellent stability [8, 9]. There are many methods to prepare BiOBr, such as high temperature-based solid-state [10], hydrothermal [11], solvothermal [12], water- (alcohol-) based [13], ultrasound-assisted [14], and electrospinning method [15]. Among them, hydro- and solvothermal methods are the most commonly used synthesis pathways. The structure, morphology, crystallinity, and phase formation of the photocatalysts can be effectively obtained through controllable synthesis because of the slow product formation rate, simple and easy ways to control reaction conditions, and stable reaction environment during the water- (solvent-) based thermal reaction [16]. For example, nano-BiOBr microspheres were synthesized previously by the solvothermal method using ethylene glycol (EG) as a solvent [17]. On the other hand, nano-BiOX microspheres were obtained using other solvothermal methods and the same solvent EG [18]. BiOBr/SrFe12O19 nanosheets were synthesized by the solvothermal method using deionized water (DI) as a solvent [19]. AgBr/BiOBr nano-heterostructure-decorated polyacrylonitrile nanofibers were synthesized by electrospinning technique and solvothermal treatment in the presence of an EG solution as the reductant [20].Therefore, the present paper is aimed at using Bi(NO3)·5H2O and CTAB as raw materials, with EG and DI as a solvent, under the condition of different concentrations and different solvents to obtain nano-BiOBr photocatalyst. The influences of different solvents and concentrations of precursors on the structure, morphology, optical properties, and photocatalytic activities were also investigated systematically.
## 2. Experiment Section
### 2.1. Synthesis of Nano-BiOBr Photocatalyst
In the first step, 2 mmol of Bi(NO3)·5H2O is added to 80 ml of EG, and the solution was ultrasonicated until it was completely dissolved (obtaining solution A). Afterward, 2 mmol of CTAB was introduced into solution A, stirred magnetically until it was completely dissolved (solution B). Next, solution B was introduced into a high-temperature reactor (the filling degree is 80%), and after constant temperature reaction at 180°C for 10 hours in an incubator, the solution was naturally cooled to room temperature, and the precipitate was separated. Finally, the precipitate was washed with DI and alcohol, and then, the nano-BiOBr photocatalyst was finally obtained after the drying procedure at 80°C for 12 hours. Table 1 shows the abbreviation and synthesis parameters of nano-BiOBr photocatalyst obtained with different synthesis conditions, abbreviating them with (a)–(d) in the latter stages of the manuscript.Table 1
The specific process parameters of samples.
Sample
V1:V2
CBi:CBr
T (°C)
Notes
(a)
8 : 0
1 : 1
180
V1: EG volume
(b)
8 : 0
1 : 3
180
V2: DI volume
(c)
8 : 1
1 : 3
180
CBi: the source of Bi+3
(d)
8 : 0
1 : 5
180
CBr: the source of Br-
### 2.2. Characterization of Nano-BiOBr Photocatalyst
The crystalline phases were determined by a Bruker D8 Advance X-ray diffractometer (XRD) using a Cu Kα (λ=0.15418nm) radiation in the θ~2θ Bragg. The morphologies of the as-prepared samples were observed and investigated by a field emission scanning electron microscope (FESEM, Nova NanoSEM 450, FEI). The UV-vis absorption spectra of the samples were measured with a Cary 5000 (Agilent, USA). The photoluminescence (PL) spectra were observed with a He-Cd laser 280 nm.
### 2.3. Determination of the Photocatalytic Activity of the Nano-BiOBr Photocatalyst
The photocatalytic reactions were carried out in a CEL-LAB500E4 multisite photochemical reaction system. The catalytic activity of the target degradation product was evaluated by using nano-BiOBr photocatalyst under the visible light source. In a typical photocatalytic experiment, 0.05 g of the nano-BiOBr photocatalyst was dispersed into 50 ml of RhB (10 mg/l) solution and magnetically stirred in the dark for 30 min to reach the adsorption-desorption equilibrium between RhB and the nano-BiOBr photocatalyst. Then, the light source was turned on, and a sample of 4 ml of the suspension was continually taken from the reaction cell at every 15 minutes and centrifuged. Finally, the absorbance of the supernatant at the maximum absorption wavelength was analysed through an ultraviolet-visible spectrophotometer (UV-1901). The degradation efficiencies were calculated according to the expression of degradation rate (1−A/A0), where A0 is the absorbance of the target degradation at its maximum absorption wavelength before illumination, and A is the absorbance value after illumination for a certain time.
## 2.1. Synthesis of Nano-BiOBr Photocatalyst
In the first step, 2 mmol of Bi(NO3)·5H2O is added to 80 ml of EG, and the solution was ultrasonicated until it was completely dissolved (obtaining solution A). Afterward, 2 mmol of CTAB was introduced into solution A, stirred magnetically until it was completely dissolved (solution B). Next, solution B was introduced into a high-temperature reactor (the filling degree is 80%), and after constant temperature reaction at 180°C for 10 hours in an incubator, the solution was naturally cooled to room temperature, and the precipitate was separated. Finally, the precipitate was washed with DI and alcohol, and then, the nano-BiOBr photocatalyst was finally obtained after the drying procedure at 80°C for 12 hours. Table 1 shows the abbreviation and synthesis parameters of nano-BiOBr photocatalyst obtained with different synthesis conditions, abbreviating them with (a)–(d) in the latter stages of the manuscript.Table 1
The specific process parameters of samples.
Sample
V1:V2
CBi:CBr
T (°C)
Notes
(a)
8 : 0
1 : 1
180
V1: EG volume
(b)
8 : 0
1 : 3
180
V2: DI volume
(c)
8 : 1
1 : 3
180
CBi: the source of Bi+3
(d)
8 : 0
1 : 5
180
CBr: the source of Br-
## 2.2. Characterization of Nano-BiOBr Photocatalyst
The crystalline phases were determined by a Bruker D8 Advance X-ray diffractometer (XRD) using a Cu Kα (λ=0.15418nm) radiation in the θ~2θ Bragg. The morphologies of the as-prepared samples were observed and investigated by a field emission scanning electron microscope (FESEM, Nova NanoSEM 450, FEI). The UV-vis absorption spectra of the samples were measured with a Cary 5000 (Agilent, USA). The photoluminescence (PL) spectra were observed with a He-Cd laser 280 nm.
## 2.3. Determination of the Photocatalytic Activity of the Nano-BiOBr Photocatalyst
The photocatalytic reactions were carried out in a CEL-LAB500E4 multisite photochemical reaction system. The catalytic activity of the target degradation product was evaluated by using nano-BiOBr photocatalyst under the visible light source. In a typical photocatalytic experiment, 0.05 g of the nano-BiOBr photocatalyst was dispersed into 50 ml of RhB (10 mg/l) solution and magnetically stirred in the dark for 30 min to reach the adsorption-desorption equilibrium between RhB and the nano-BiOBr photocatalyst. Then, the light source was turned on, and a sample of 4 ml of the suspension was continually taken from the reaction cell at every 15 minutes and centrifuged. Finally, the absorbance of the supernatant at the maximum absorption wavelength was analysed through an ultraviolet-visible spectrophotometer (UV-1901). The degradation efficiencies were calculated according to the expression of degradation rate (1−A/A0), where A0 is the absorbance of the target degradation at its maximum absorption wavelength before illumination, and A is the absorbance value after illumination for a certain time.
## 3. Results and Discussion
### 3.1. XRD Analysis
The XRD patterns of nano-BiOBr photocatalyst at different concentrations and different solvents are shown in Figure1. It is clear that the prepared nano-BiOBr photocatalysts were consistent with the standard diffraction pattern of tetragonal BiOBr (PDF#85-0862) (Figure 1(a)). No other specific diffraction peaks were detected, indicating that the nano-BiOBr photocatalysts prepared in different concentrations and different solvents are pure tetragonal nanoparticles. The intensity of the diffraction peak is weakened with the increase of the concentration, indicating that the concentrations are the key factor affecting the crystallinity of nano-BiOBr photocatalyst, 2-theta was right-shifted, and the crystal space d decreased according to the Prague equation (dsinθ=nλ). The intensity of the diffraction peak increases with the increase of DI, indicating that the increase of DI throughout all the experiments is beneficial to increase the crystallinity of nano-BiOBr photocatalyst; the influence factors were more, and the main reason remains to be further studied.Figure 1
XRD patterns of samples (a) before and (b) after the reaction.
(a)
(b)In order to study the stability of the nano-BiOBr photocatalyst, the XRD of the nano-BiOBr photocatalyst after the photocatalytic reaction was investigated (Figure1(b)). Compared with the results before the photodegradation, there are no noticeable changes in the crystal phases of the samples.
### 3.2. SEM Analysis
Figure2 shows the SEM images of nano-BiOBr photocatalysts prepared using different concentrations and different solvents. As shown in Figure 2(a), BiOBr microspheres with a diameter of ~2 μm were obtained, and the BiOBr microspheres are self-assembled from irregular nanosheets with a thickness of ~10 nm. From Figure 2(b), BiOBr cubes of ~2 μm are obtained, BiOBr being also self-assembled from irregular nanosheets in a fixed manner. Compared with (a) photocatalyst, (b) photocatalyst is self-assembled from nanosheets more densely, and the thickness of the nanosheets is higher (~20 nm). As shown in Figure 2(c), irregular sheetlike nano-BiOBr was obtained; compared with (b) photocatalyst, (c) photocatalyst is a nanosheet with a thickness of ~15 nm. From Figure 2(d), it can be observed that a mixture of BiOBr microspheres and sheetlike BiOBr is obtained. BiOBr microspheres are tightly assembled from irregular nanosheets with a thickness of ~15 nm. Compared with (b) photocatalysts, (d) photocatalysts have more unassembled nanosheets, and the thickness of the nanosheets is smaller and assembled in a way that cubes become microspheres. The results showed that the morphology of the nano-BiOBr photocatalyst changed from microspheres to cubes and then to a mixture of microsphere and flakes with the increasing of the concentration of precursors. Moreover, the morphology of nano-BiOBr photocatalyst changed from microspheres to flakes with the addition of DI.Figure 2
SEM images of (a)–(d) photocatalysts.
(a)
(b)
(c)
(d)
### 3.3. UV-Vis DRS Analysis
Figure3 shows the UV-vis diffuse reflectance spectra and corresponding bandgap energies of the nano-BiOBr photocatalyst. It can be observed that the absorption edges of (a)–(d) photocatalysts can be found at 439, 450, 446, and 453 nm, respectively, indicating that the absorption wavelength range of (d) photocatalyst is the largest, and the absorption wavelength range of (a) photocatalyst is the smallest, meaning that, more visible light can be absorbed with the increasing of the concentration; the absorbed visible light is reduced with the addition of DI. The position of the absorption edge is closely related to the forbidden bands of the semiconductor photocatalyst. The forbidden bandwidth of the BiOBr photocatalyst is calculated by Equation (1) [21].
(1)αhv=Ahv−Egn/2,where α, h, ν, A, and Eg represent the intrinsic absorption coefficient, the Planck constant, the frequency of light, the proportion constant of photocatalyst, and the bandgap width of semiconductor, respectively. n=2 for direct bandgap semiconductor, and n=4 for indirect bandgap semiconductor. n=4 because BiOBr photocatalyst is an indirect bandgap semiconductor. Using the formula on the hν~αhν1/2 curve, as shown in Figure 3(b), it can be observed that the tangent in the middle section of the curve and the intercept between tangent and abscissa is the bandgap of BiOBr photocatalyst. The bandgap of (a)–(d) photocatalyst measured by plotting and tangent fitting was 2.77 eV, 2.57 eV, 2.67 eV, and 2.53 eV, respectively. It can be observed that the bandgap decreases continuously with the increase of the concentration and the bandgap increases with the addition of DI. It can be concluded that the concentration of substances and solvents has an important influence on the bandgap energy values of BiOBr photocatalyst.Figure 3
UV-vis diffuse reflectance spectra and a corresponding bandgap width of (A)–(D) photocatalysts.
(a)
(b)
### 3.4. Photodegradation of RhB Using Nano-BiOBr Semiconductors
#### 3.4.1. UV-Vis Absorption Spectral Analysis
Figure4 shows the UV-vis absorption spectral changes of RhB aqueous solution vs. irradiation time in the presence of the (a)–(d) photocatalysts. With the increase of the irradiation time, the absorption maximum of the spectra declined, and the band shifted to a smaller wavelength, which indicates N-demethylation and the destruction of the conjugated structure in the RhB photodegradation process [22]. The blue-shifted band implies that the main photocatalytic degradation path of the RhB is N-demethylation. The major peaks are reduced gradually during visible light irradiation, indicating a step-by-step degradation of RhB. The RhB UV-vis absorption spectra of the (a) photocatalyst were approximately a straight line after 60 min of illumination, indicating that the photocatalytic reaction of the (a) photocatalyst to RhB was mainly completed. The absorption peak of RhB in the visible light region of (a), (c), and (d) photocatalysts has completely disappeared after 80 min, and the decolourisation efficiency reaches 100%. The RhB solution photodegraded using (b) photocatalysts had a weak absorption peak at the end of the process. The findings are in accordance with the results obtained from previous sections, as the degradation efficiency of photocatalyst decreases with the increase of the concentration. On the other hand, the degradation efficiency of photocatalyst increased with the addition of DI.Figure 4
UV-vis absorption spectral changes of RhB aqueous solution as a function of irradiation time in the (a)–(d) photocatalysts.
(a)
(b)
(c)
(d)
#### 3.4.2. Degradation Efficiency for RhB of BiOBr Photocatalysts
Figure5 shows the degradation performance for RhB by photocatalysts prepared in different concentrations and different solvents. The results showed that the degradation performance of photocatalyst declines in varying degrees with the increase of the concentration, which indicates that the concentration can change the degradation performance of the photocatalyst. The degradation performance of photocatalyst improves with the addition of DI. The degradation performance of (a)–(d) photocatalysts to RhB was 96.3%, 68.8%, 88.1%, and 81.3%, respectively, after 60 min of light irradiation. It can be summarised that (a) photocatalyst showed the highest photocatalytic activity for RhB.Figure 5
Degradation performance of (a)–(d) photocatalysts for RhB.
### 3.5. PL Analysis
Nano-BiOBr photocatalyst using different concentrations and different solvents were characterized by photoluminescence (PL) spectroscopy to verify further the conclusions mentioned above. The excitation wavelength was 280 nm, and the emission range was 400 nm to 600 nm, as shown in Figure6. The movement, transfer, and recombination rate of photogenerated electron-hole pairs were revealed by PL spectra. The lower the intensity of emission peaks in PL spectra, the higher the separation efficiency of electrons and holes in semiconductors and the higher the photocatalytic activity of photocatalysts that were observable [23]. It can also be seen that the PL intensity increases with the increase of the concentration, while the photocatalytic activity decreases. As can also be observed, the PL intensity is weakened with the addition of DI and the photocatalytic activity increased. It can be seen from the figure that all of the photocatalysts have emission peaks at 437 nm, 449 nm, and 466 nm and the order of luminous intensity is (b) > (d) > (c) > (a), which means that the recombination rate of electrons and holes of the four materials decreases gradually and the photodegradation performance is gradually enhanced, which is in accordance with the previous photocatalytic experiment results.Figure 6
PL spectral of (a)–(d) photocatalysts.
## 3.1. XRD Analysis
The XRD patterns of nano-BiOBr photocatalyst at different concentrations and different solvents are shown in Figure1. It is clear that the prepared nano-BiOBr photocatalysts were consistent with the standard diffraction pattern of tetragonal BiOBr (PDF#85-0862) (Figure 1(a)). No other specific diffraction peaks were detected, indicating that the nano-BiOBr photocatalysts prepared in different concentrations and different solvents are pure tetragonal nanoparticles. The intensity of the diffraction peak is weakened with the increase of the concentration, indicating that the concentrations are the key factor affecting the crystallinity of nano-BiOBr photocatalyst, 2-theta was right-shifted, and the crystal space d decreased according to the Prague equation (dsinθ=nλ). The intensity of the diffraction peak increases with the increase of DI, indicating that the increase of DI throughout all the experiments is beneficial to increase the crystallinity of nano-BiOBr photocatalyst; the influence factors were more, and the main reason remains to be further studied.Figure 1
XRD patterns of samples (a) before and (b) after the reaction.
(a)
(b)In order to study the stability of the nano-BiOBr photocatalyst, the XRD of the nano-BiOBr photocatalyst after the photocatalytic reaction was investigated (Figure1(b)). Compared with the results before the photodegradation, there are no noticeable changes in the crystal phases of the samples.
## 3.2. SEM Analysis
Figure2 shows the SEM images of nano-BiOBr photocatalysts prepared using different concentrations and different solvents. As shown in Figure 2(a), BiOBr microspheres with a diameter of ~2 μm were obtained, and the BiOBr microspheres are self-assembled from irregular nanosheets with a thickness of ~10 nm. From Figure 2(b), BiOBr cubes of ~2 μm are obtained, BiOBr being also self-assembled from irregular nanosheets in a fixed manner. Compared with (a) photocatalyst, (b) photocatalyst is self-assembled from nanosheets more densely, and the thickness of the nanosheets is higher (~20 nm). As shown in Figure 2(c), irregular sheetlike nano-BiOBr was obtained; compared with (b) photocatalyst, (c) photocatalyst is a nanosheet with a thickness of ~15 nm. From Figure 2(d), it can be observed that a mixture of BiOBr microspheres and sheetlike BiOBr is obtained. BiOBr microspheres are tightly assembled from irregular nanosheets with a thickness of ~15 nm. Compared with (b) photocatalysts, (d) photocatalysts have more unassembled nanosheets, and the thickness of the nanosheets is smaller and assembled in a way that cubes become microspheres. The results showed that the morphology of the nano-BiOBr photocatalyst changed from microspheres to cubes and then to a mixture of microsphere and flakes with the increasing of the concentration of precursors. Moreover, the morphology of nano-BiOBr photocatalyst changed from microspheres to flakes with the addition of DI.Figure 2
SEM images of (a)–(d) photocatalysts.
(a)
(b)
(c)
(d)
## 3.3. UV-Vis DRS Analysis
Figure3 shows the UV-vis diffuse reflectance spectra and corresponding bandgap energies of the nano-BiOBr photocatalyst. It can be observed that the absorption edges of (a)–(d) photocatalysts can be found at 439, 450, 446, and 453 nm, respectively, indicating that the absorption wavelength range of (d) photocatalyst is the largest, and the absorption wavelength range of (a) photocatalyst is the smallest, meaning that, more visible light can be absorbed with the increasing of the concentration; the absorbed visible light is reduced with the addition of DI. The position of the absorption edge is closely related to the forbidden bands of the semiconductor photocatalyst. The forbidden bandwidth of the BiOBr photocatalyst is calculated by Equation (1) [21].
(1)αhv=Ahv−Egn/2,where α, h, ν, A, and Eg represent the intrinsic absorption coefficient, the Planck constant, the frequency of light, the proportion constant of photocatalyst, and the bandgap width of semiconductor, respectively. n=2 for direct bandgap semiconductor, and n=4 for indirect bandgap semiconductor. n=4 because BiOBr photocatalyst is an indirect bandgap semiconductor. Using the formula on the hν~αhν1/2 curve, as shown in Figure 3(b), it can be observed that the tangent in the middle section of the curve and the intercept between tangent and abscissa is the bandgap of BiOBr photocatalyst. The bandgap of (a)–(d) photocatalyst measured by plotting and tangent fitting was 2.77 eV, 2.57 eV, 2.67 eV, and 2.53 eV, respectively. It can be observed that the bandgap decreases continuously with the increase of the concentration and the bandgap increases with the addition of DI. It can be concluded that the concentration of substances and solvents has an important influence on the bandgap energy values of BiOBr photocatalyst.Figure 3
UV-vis diffuse reflectance spectra and corresponding bandgap widths of (a)–(d) photocatalysts.
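As a reproducible illustration of the tangent-fit procedure described above, the following minimal Python sketch builds the indirect-gap Tauc quantity $(\alpha h\nu)^{1/2}$ and extrapolates its linear region to the energy axis. The absorbance data and the window chosen as the "linear region" are hypothetical assumptions of this sketch, not the paper's measurements; in practice the linear window is chosen by inspecting the curve.

```python
import numpy as np

# Hypothetical diffuse-reflectance-derived absorption data near the edge
# (wavelength in nm, absorption coefficient alpha in arbitrary units).
wavelength_nm = np.array([380, 395, 410, 420, 430, 440, 450, 460])
alpha = np.array([1.90, 1.60, 1.25, 0.95, 0.62, 0.30, 0.08, 0.01])

h_nu = 1239.84 / wavelength_nm      # photon energy in eV (hc ~ 1239.84 eV*nm)
tauc = (alpha * h_nu) ** 0.5        # (alpha*h*nu)^(1/2) for an indirect gap (n = 4)

# Fit the (assumed) linear middle section of the Tauc curve and extrapolate
# to tauc = 0; the intercept with the abscissa is the bandgap Eg.
linear = (tauc > 0.3 * tauc.max()) & (tauc < 0.9 * tauc.max())
slope, intercept = np.polyfit(h_nu[linear], tauc[linear], 1)
Eg = -intercept / slope             # eV, where the tangent crosses the energy axis

print(f"Estimated indirect bandgap: {Eg:.2f} eV")
```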
## 3.4. Photodegradation of RhB Using Nano-BiOBr Semiconductors
### 3.4.1. UV-Vis Absorption Spectral Analysis
Figure 4 shows the UV-vis absorption spectral changes of the RhB aqueous solution versus irradiation time in the presence of photocatalysts (a)–(d). With increasing irradiation time, the absorption maximum declined and the band shifted to shorter wavelengths, which indicates N-demethylation and destruction of the conjugated structure during RhB photodegradation [22]. The blue-shifted band implies that the main photocatalytic degradation path of RhB is N-demethylation. The major peaks decreased gradually under visible light irradiation, indicating a step-by-step degradation of RhB. The RhB absorption spectrum for photocatalyst (a) was approximately a straight line after 60 min of illumination, indicating that the photocatalytic reaction of photocatalyst (a) with RhB was essentially complete. The absorption peak of RhB in the visible region had completely disappeared for photocatalysts (a), (c), and (d) after 80 min, and the decolourisation efficiency reached 100%; the RhB solution photodegraded using photocatalyst (b) retained a weak absorption peak at the end of the process. These findings agree with the results of the previous sections: the degradation efficiency of the photocatalyst decreases with increasing concentration and increases with the addition of DI.

Figure 4
UV-vis absorption spectral changes of RhB aqueous solution as a function of irradiation time in the presence of photocatalysts (a)–(d).
### 3.4.2. Degradation Efficiency for RhB of BiOBr Photocatalysts
Figure 5 shows the degradation performance for RhB of the photocatalysts prepared at different concentrations and in different solvents. The degradation performance declines to varying degrees with increasing concentration, which indicates that the concentration changes the degradation performance of the photocatalyst, and it improves with the addition of DI. The degradation efficiencies of photocatalysts (a)–(d) for RhB were 96.3%, 68.8%, 88.1%, and 81.3%, respectively, after 60 min of light irradiation; a short calculation sketch is given after Figure 5. In summary, photocatalyst (a) showed the highest photocatalytic activity for RhB.

Figure 5
Degradation performance of (a)–(d) photocatalysts for RhB.
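For completeness, the efficiencies quoted above follow from the usual definition $\eta = (C_0 - C_t)/C_0 \times 100\%$, with concentration taken proportional to the absorbance maximum (Beer-Lambert law). The sketch below applies this definition; the absorbance values are hypothetical, back-calculated so that they reproduce the reported percentages.

```python
# Degradation efficiency from absorbance at the RhB maximum (~554 nm),
# assuming absorbance is proportional to concentration (Beer-Lambert law).
A0 = 1.32                                                    # hypothetical initial absorbance
A_60min = {"a": 0.049, "b": 0.412, "c": 0.157, "d": 0.247}   # hypothetical values

for sample, A_t in A_60min.items():
    efficiency = (A0 - A_t) / A0 * 100.0   # (C0 - Ct)/C0 in percent
    print(f"photocatalyst ({sample}): {efficiency:.1f} % after 60 min")
```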
## 3.5. PL Analysis
The nano-BiOBr photocatalysts prepared at different concentrations and in different solvents were characterized by photoluminescence (PL) spectroscopy to further verify the conclusions above. The excitation wavelength was 280 nm, and the emission range was 400 nm to 600 nm, as shown in Figure 6. PL spectra reveal the movement, transfer, and recombination rate of photogenerated electron-hole pairs: the lower the intensity of the emission peaks, the higher the separation efficiency of electrons and holes in the semiconductor and the higher the photocatalytic activity of the photocatalyst [23]. The PL intensity increases with increasing concentration while the photocatalytic activity decreases, and the PL intensity is weakened by the addition of DI while the photocatalytic activity increases. All of the photocatalysts have emission peaks at 437 nm, 449 nm, and 466 nm, and the order of luminous intensity is (b) > (d) > (c) > (a), which means that the recombination rate of electrons and holes decreases in that order and the photodegradation performance is correspondingly enhanced, in accordance with the photocatalytic experiment results above.

Figure 6
PL spectra of (a)–(d) photocatalysts.
## 4. Conclusion
Nano-BiOBr photocatalysts were successfully prepared by the hydrothermal method, and the influences of different concentrations and different solvents on their structure, morphology, optical properties, and photocatalytic properties were investigated systematically. The results show that the crystallinity of the nano-BiOBr photocatalyst decreases with increasing concentration and increases with the addition of DI. The morphology changes from microspheres to cubes and then to a mixture of microspheres and flakes with increasing concentration, and from microspheres to flakes with the addition of DI. It was also shown that the concentration and the solvents have an essential influence on the bandgap energy of the nano-BiOBr photocatalyst. The degradation performance of the photocatalyst declines with increasing concentration and improves with the addition of DI; semiconductor (a) showed the highest photocatalytic activity toward RhB. The PL intensity of the photocatalyst increased with increasing concentration and was weakened by the addition of DI. The high photocatalytic activity of these photocatalysts toward contaminated aqueous solutions makes this research a new platform for developing flexible photocatalysts for practical applications in water purification.
---
*Source: 1013075-2020-02-24.xml* | 2020 |
# Experimental Study on SSRC under Eccentric Compression
**Authors:** Qingfu Li; Tianjing Zhang; Yaqian Yan; Qunhua Xiang
**Journal:** Advances in Civil Engineering
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1013188
---
## Abstract
The use of stainless steel bars can improve the durability and sustainability of building materials. Through static performance tests, this research analyzes the failure pattern and bearing performance of eccentrically loaded stainless steel reinforced concrete (SSRC) columns. The influence of the longitudinal reinforcement ratio and of the eccentricity on the mechanical performance of the specimens was studied. Different constitutive models of stainless steel bars were used to calculate the ultimate sectional bearing capacity of the columns under eccentric compression. Based on the experimental results, a method to modify the expression of the design specification is proposed, and the results were compared with the test results. The results show that the damage patterns and failure modes of SSRC columns are essentially the same as those of traditional reinforced concrete columns. The bearing capacity of SSRC columns rises with an increase in the longitudinal reinforcement ratio, and the ductility of the specimens is enhanced. The ultimate load of the specimens decreases with rising eccentricity, while the deflection increases gradually. The strain distribution of the mid-span section of the SSRC columns conforms to the plane section assumption. The bearing capacity of the specimens can be analyzed with the calculation method of the specification, and some parameters in the specification formulas are modified to adapt them to the design and calculation of SSRC columns.
---
## Body
## 1. Introduction
Reinforcement corrosion is a common durability problem of concrete structures and leads to the unsustainability of construction projects. Investigation results show that more than 80% of concrete structures exhibit some degree of durability problems or more serious steel corrosion just 5–10 years after construction [1]. How to improve the durability and sustainability of reinforced concrete structures under poor environmental conditions has become a pressing problem. At present, the main measures taken to prevent the rapid corrosion of steel bars and improve the durability of reinforced concrete structures include [2–5] coating the surface of ordinary steel bars with antirust materials, using epoxy-coated steel bars and hot-dip galvanized steel bars, controlling the quality of concrete, and using stainless steel bars. In order to fundamentally solve the series of problems caused by steel corrosion, the use of stainless steel bars is the first choice. Stainless steel reinforcement is an alloy steel containing a significant amount of chromium, which protects the steel against rust: in weakly corrosive media, it generates a thin, smooth, colorless, and transparent oxide film on its surface that protects it against environmental agents such as air and water. Existing research has focused on the corrosion resistance of stainless steel; however, there has been relatively little research on the mechanical performance of SSRC members. The eccentric compression member is the most fundamental and important mechanical component in a building's structure [6]; as such, research on its mechanical performance has theoretical significance and high application value.

Currently, in China, there is a range of research on and applications for stainless steel bars and their composite materials. It mainly focuses on two aspects: one is to study their corrosion law in simulated concrete pore fluid, and the other is to study their corrosion law in mortar or concrete, usually by monitoring the corrosion rate in comparison with carbon steel bars in the same situation. In 2006, Guoxue et al. preliminarily discussed the mechanical properties of SSRC beams, slabs, and columns [7–11]. In 2007, Yongsheng et al. studied and analyzed the influence of the use of stainless steel bars on the deflection, crack width, and ultimate bearing capacity of concrete beams and verified that SSRC beams can meet the requirements of the maximum deflection and maximum crack width limits specified in the design of ordinary reinforced concrete structures [12]. In 2011, Huanxin et al. identified the difference between the constitutive models of stainless steel bars and ordinary steel bars [13]. In 2013, Jiawei et al. discussed the influence of using stainless steel bars on the bearing capacity of concrete beams through fatigue tests and studied the mechanical performance of SSRC beams, including the stress-strain relationship between steel bars and concrete [10]. In 2013, Huitao conducted an experimental study on the mechanical properties of stainless steel bars, the bond properties between stainless steel bars and concrete, and the flexural properties of SSRC beams. The results showed that the strength, elongation, and cold bending property of stainless steel bars are higher than those of ordinary hot-rolled bars, and the bonding performance between stainless steel bars and concrete is excellent.
Under the same reinforcement condition, the bearing capacity of an SSRC beam is higher than that of an ordinary hot-rolled reinforced concrete beam, the failure law is consistent, and the plane section assumption remains applicable [14]. In 2014, Chen Long et al. used potentiodynamic scanning and electron microscopy to study the critical chloride ion concentration of stainless steel bars and ordinary steel bars in simulated concrete pore fluid and concluded that the critical chloride ion concentration of the stainless steel bar is more than 20 times that of the ordinary steel bar, so its corrosion resistance is far better [15]. In 2016, Chengchang et al. proposed a preliminary calculation method for the bonding force between stainless steel bars and concrete [16], concluding that the carrying capacity of concrete beams with stainless steel bars was relatively higher. In 2018, Hailong et al. concluded that the critical corrosion-resistance value of stainless steel bars in the simulated liquid of freshly mixed concrete was more than 75 times that of carbon steel [17]. In 2018, Yi et al. studied the corrosion law of stainless steel bars and ordinary steel bars used in concrete structures. The results showed that the corrosion rate of carbon steel bars connected with stainless steel bars is significantly accelerated and the galvanic corrosion is strengthened with increasing corrosion time; when the two kinds of steel bars are overlapped, the galvanic corrosion is more severe [18]. In 2019, Chunyi et al. presented a formula for the bonding force between stainless steel bars and concrete [19].

Overseas, research on the application of stainless steel bars has a much longer history; as early as 1937, the Progreso Pier Bridge in Mexico used stainless steel bars in place of ordinary steel bars to improve the durability of the structure [20]. Foreign scholars' research on stainless steel bars mainly focuses on corrosion resistance. For example, in 1985, Zoob et al. carried out corrosion resistance tests on 304 stainless steel bars. The research showed that the allowable chloride content of stainless steel bars buried in concrete is 7–10 times that of ordinary steel bars in the same environment and that there is no obvious corrosion of the stainless steel bars. In 1988, Flint et al. buried one end of 316 stainless steel bars and carbon steel bars in concrete and exposed the other end to sea water. The experimental results showed that local erosion of the stainless steel bars occurred only in the section immersed in sea water, the stainless steel bars wrapped in concrete remained intact, and the overall strength was not affected; the corrosion of the carbon steel bars, however, was significant, and their strength was seriously reduced [21, 22]. In 1995, McDonald and others conducted the same experimental study on SSRC members, and the results showed that the corrosion resistance of SSRC members was better than that of ordinary reinforced concrete members [23]. In 1998, Bertolini et al. conducted corrosion tests on SSRC members in high-pH, high-chloride environments and found that the corrosion resistance of stainless steel reinforcement is extremely good: it can remain passive even in a high-pH, high-chloride environment [24]. In 2002, Abreu et al.
used sodium chloride solution to simulate concrete pore fluid and used the EIS and ZRA methods to study the galvanic corrosion behavior of stainless steel bars and carbon steel bars. The results showed that if a carbon steel bar and a stainless steel bar are connected in simulated pore fluid, the probability of galvanic corrosion is small [25]. Castro (2003) conducted tests on the mechanical and corrosion properties of austenitic stainless steel rebar under both hot-rolled and cold-rolled conditions [26]. Blanco et al. experimentally compared the corrosion behavior of two traditional austenitic steels and a dual-phase stainless steel with that of a low-nickel austenitic steel [27]. In 2010, Milan Kouřil et al. studied the corrosion resistance of stainless steel bars by electrochemical testing, and the results showed that the critical chloride concentration of stainless steel bars in a concrete structure depends not only on the chemical composition of the stainless steel and the pH of the concrete pore fluid but also on the surface state of the steel: stainless steel bars with a smooth surface have good corrosion resistance, while those with a rough surface have weak resistance to chloride ions [28]. In 2013, Hansson et al. used the linear polarization method to compare the corrosion performance of stainless steel bars and carbon steel bars in a concrete environment with the same chloride concentration. The results showed that corrosion of the ordinary carbon steel bars occurred within two weeks, while the stainless steel bars began to rust only after 139 weeks; the use of stainless steel bars thus has a significant effect on improving the service life of concrete structures [29]. Gastaldi and Bertolini tested the chloride-induced corrosion resistance of low-nickel duplex stainless steel threaded bars in different temperature ranges against traditional austenitic stainless steel threaded bars [30]. In recent years, foreign studies on the mechanical properties of stainless steel have gradually increased. In 2015, Medina et al. carried out a series of experimental studies on the structural properties of three kinds of stainless steel bars (austenitic AISI304, duplex AISI2304, and the new duplex AISI2001) at the reinforcement, section, and structural member levels, respectively [31]. In 2016, Gardner et al. conducted 164 heating tests on four kinds of stainless steel bars (1.4307[304Ln], 1.4311[304Ln], 1.4162[LDX2101], and 1.4362[2304]) under high-temperature conditions [32]. In 2019, Yibu et al. conducted an experimental study on the strength of 59 stainless steel welded sections fabricated by traditional welding and laser welding [33]. In 2019, Bemfica et al. studied the axial-torsional fatigue and cyclic deformation behavior of 304L stainless steel bars at room temperature [34].

Stainless steel bars have high strength, a high flexural strength ratio, high fatigue life, high impact toughness, and other desirable properties. The main problem with the current application of stainless steel reinforcement is its high price, which increases the initial cost of construction. Generally, however, stainless steel reinforcement need only be used in the key parts of an actual project, so its share of the whole project cost is very small, and other characteristics of stainless steel reinforcement make its life cycle cost much lower than that of carbon steel [35].
Compared with ordinary steel bars, first, the application of stainless steel bars can reduce the thickness of concrete and the number of steel bars. Second, the transportation, processing, and installation of stainless steel bars have no special requirements, and the construction cost is 25% lower than that of epoxy-coated carbon steel bars. Third, stainless steel reinforced concrete buildings require little or no maintenance, which reduces maintenance and inspection costs and thereby the social cost of maintenance disrupting operations. The Federal Highway Administration (FHWA) conducted a cost analysis of three bridges built in Illinois with different rustproofing methods. The results showed that the initial cost of the bridge with stainless steel reinforcement increased by 16%, but it took about six times as long to crack, resulting in a significant reduction in maintenance costs [36]. Val et al. put forward a time-varying probability model to predict the expected cost of repair and replacement, and the model was then used to calculate the life cycle costs of reinforced concrete structures under different exposure conditions in the marine environment. The results showed that although the price of stainless steel was six to nine times that of carbon steel, the use of stainless steel bars was justified on a life cycle cost basis [37]. Frank N. Smith analyzed the serious corrosion problems during the service of the Öland Bridge in Sweden and pointed out that if the bridge had been constructed with stainless steel bars, the construction cost would have increased by only 8%, but a 100-year life could have been achieved with very little maintenance [38]. Cope et al. evaluated the superiority of stainless steel over conventional steel in terms of both long-term usage costs and user costs with data from one Midwestern state in the United States, using Monte Carlo simulation methods for most of the analyzed scenarios, and showed that, under uncertainty, using stainless steel as the bridge reinforcement material is more cost-effective than conventional steel [39]. Younis et al. conducted a life cycle cost analysis of high-rise buildings based on a 100-year study period and showed that the proposed combination using stainless steel bars had a life cycle cost (LCC) approximately 50% lower than the conventional combination (i.e., concrete containing fresh water, natural aggregate, and black steel) [40].

Although there has been much progress in the study of stainless steel bars, most of it has concentrated on the performance of the stainless steel material itself, such as its basic mechanical properties, weldability, and corrosion resistance. There has been relatively little research on the mechanical properties of concrete structural members containing stainless steel bars. When stainless steel bars are used in concrete compression members, whether the stainless steel bars and the concrete can fully develop their strengths needs to be verified through rigorous testing. In view of this, this paper takes SSRC columns as the research object; through testing, theoretical analysis, numerical simulation, and other technical methods, the mechanical performance of SSRC columns is studied.
## 2. Experimental Design
### 2.1. Specimen
In order to conduct the study, eight SSRC eccentric compression columns were designed and constructed. The cross-section of each specimen was 250 mm × 150 mm, with a height of 1000 mm. Symmetrical reinforcement was adopted, and the concrete cover thickness was 25 mm. Grade 2304 stainless steel bars were used as the longitudinal reinforcement. The specific dimensions and reinforcement layout of the specimens are shown in Figure 1.

Figure 1
Schematic of specimen dimensions and reinforcement (mm).
At the same time, six 150 mm × 150 mm × 150 mm standard concrete companion specimens were poured for each eccentric compression specimen. These were used for testing the concrete strength of the specimens.
### 2.2. Experimental Methods
The design parameters of the specimens are listed in Table 1. The test was carried out on a WHY-5000 kN hydraulic pressure machine. In order to measure the strain on the concrete, six strain gauges were pasted equidistantly on the mid-span section of each specimen. Three strain gauges were pasted uniformly on the surface of each longitudinal stainless steel bar to measure the strain on the longitudinal bars. Five displacement meters were placed equidistantly along the mid-span section of the bottom surface of the specimen to measure its deflection. Figures 2 and 3 show the loading diagram and the strain gauge distribution, respectively. Crack development, concrete strain, reinforcement strain, lateral displacement, and other test phenomena were observed and recorded during the test.

Table 1
Specimen design parameters.
| Specimen number | Concrete strength grade | Sectional dimension (mm²) | Initial eccentricity e0 (mm) | Longitudinal bar diameter d (mm) | Hoop reinforcement (mm) |
| --- | --- | --- | --- | --- | --- |
| A-D1 | C45 | 250 × 150 | 200 | 12 | 8@125 |
| A-D4 | C45 | 250 × 150 | 50 | 12 | 8@125 |
| B-D1 | C45 | 250 × 150 | 200 | 16 | 8@125 |
| B-D2 | C45 | 250 × 150 | 150 | 16 | 8@125 |
| B-D3 | C45 | 250 × 150 | 100 | 16 | 8@125 |
| B-D4 | C45 | 250 × 150 | 50 | 16 | 8@125 |
| C-D1 | C45 | 250 × 150 | 200 | 25 | 8@125 |
| C-D4 | C45 | 250 × 150 | 50 | 25 | 8@125 |

Note: Letters A, B, and C represent longitudinal stainless steel bars with diameters of 12 mm, 16 mm, and 25 mm, respectively. D1, D2, D3, and D4 represent initial eccentricities of 200 mm, 150 mm, 100 mm, and 50 mm, respectively, when the specimen is pressed.

Figure 2
Experimental loading diagram.

Figure 3
Distribution of the strain gauge (mm).
### 2.3. Mechanical Properties of Concrete and Reinforcement
In this experiment, 2304 stainless steel bars were used. Stainless steel specimens with diameters of 12 mm, 16 mm, and 25 mm were selected for tensile testing. The test results and the mechanical indexes of the stainless steel bars are shown in Figures 4 and 5 and Table 2.

Figure 4
Load-displacement curve of stainless steel bars.

Figure 5
Stress-strain curve of stainless steel bars.

Table 2
Mechanical indexes of stainless steels.
| Diameter d (mm) | Tensile strength (MPa) | Yield strength (MPa) | Elastic modulus (10⁵ MPa) | Elongation (%) |
| --- | --- | --- | --- | --- |
| 12 | 885.3 | 677.5 | 1.56 | 33.00 |
| 16 | 795.3 | 565.8 | 1.56 | 36.39 |
| 25 | 768.7 | 572.7 | 1.40 | 34.13 |

The design strength of the concrete specimens is C45, and the measured strength of the concrete test blocks is shown in Table 3. The axial compressive strength of concrete is calculated using formula (1), according to the Code for Design of Concrete Structures (GB50010-2010). Considering that all specimens were manufactured and tested under laboratory conditions, the reduction factor of 0.88 can be ignored, so the actual strength of the concrete is calculated using formula (2); a short calculation sketch is given after Table 3:

(1) $f_{ck} = 0.88 \times 0.76 f_{cu,k}$,

(2) $f_{ck} = 0.76 f_{cu,k}$.

Table 3
Mechanical properties of concrete.
| Specimen number | fcu (MPa) | fc (MPa) | Specimen number | fcu (MPa) | fc (MPa) |
| --- | --- | --- | --- | --- | --- |
| A-D1 | 43.47 | 33.04 | B-D3 | 42.44 | 32.25 |
| A-D4 | 42.64 | 31.65 | B-D4 | 46.00 | 34.96 |
| B-D1 | 45.20 | 34.20 | C-D1 | 45.33 | 34.45 |
| B-D2 | 44.93 | 34.15 | C-D4 | 45.60 | 34.66 |
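As a quick check, here is a minimal sketch applying formula (2) to the measured cube strengths. It reproduces six of the eight fc entries of Table 3 to within rounding; A-D4 and B-D1 deviate slightly, suggesting rounding or transcription differences in the source table.

```python
# Axial compressive strength from measured cube strength, formula (2):
# f_c = 0.76 * f_cu (the 0.88 factor is dropped for laboratory conditions).
f_cu = {"A-D1": 43.47, "A-D4": 42.64, "B-D1": 45.20, "B-D2": 44.93,
        "B-D3": 42.44, "B-D4": 46.00, "C-D1": 45.33, "C-D4": 45.60}

for specimen, cube in f_cu.items():
    print(f"{specimen}: f_cu = {cube:.2f} MPa -> f_c = {0.76 * cube:.2f} MPa")
```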
## 3. Results and Discussion
### 3.1. Experiment Phenomena and Failure Modes
The experiments can be divided into two conditions: large eccentric compression, for which the initial eccentricities are 150 mm and 200 mm, and small eccentric compression, which includes the initial eccentricities of 50 mm and 100 mm. Specimens with D1/D2 in the specimen number are those with large eccentric compression, and the failure modes of specimens with large eccentric compression are essentially the same. When the load increased to 15–25% of the peak load, 2–4 horizontal cracks formed in the tensile zone of the SSRC columns. At the point of these cracks in the tensile zone, the strain on the stainless steel bars and the mid-span deflection of the SSRC columns greatly increased. As the load continued to increase, the cracks were more or less equally spaced, forming several major cracks. With further increase in the load, the width of the cracks increased, gradually extending to the compression zone, and the height of the compression zone of the SSRC columns decreased. When the load reached 80–90% of the peak load, the stainless steel bars in the tensile zone were close to yield: the strain on the stainless steel bars rapidly increased, the height of the compression zone of the specimen decreased, the compressive strain on the concrete in the compression zone of the concrete columns and the stainless steel bars increased, and longitudinal cracks appeared. As the load continued to increase, the bearing capacity of the specimens decreased abruptly, the strain on the concrete in the compression zone reached the ultimate compressive strain state, and the specimens were crushed. On the destruction of the specimens, the concrete in the compression area was crushed and, while the stainless steel bars in the compression area did not yield, the stainless steel bars in the tensile area did. The lateral deflection of the specimens was large, and there were characteristics of ductile failure [41].

Specimens with D3/D4 in the specimen number are the specimens with small eccentric compression, and the failure modes of all specimens with small eccentric compression are essentially comparable. When the load increased to 25–40% of the peak load, several small cracks appeared in the tensile zone, and the width of the cracks and their extension to the compression zone were not obvious. No main cracks formed, and the strain at the edge of the compression zone of the specimens increased rapidly. With an increase in the load, the strain on the stainless steel bars and concrete in the compression area of the specimens increased significantly, and the cracks in the tensile area extended and developed slowly. When approaching failure conditions, longitudinal cracks appeared in the concrete in the compression zone. The destruction was sudden, without any visible symptoms, and the crushing area was large. When the specimens were destroyed, the reinforcement on the side closest to the loading point yielded, while the reinforcement on the side furthest from the loading point did not. The lateral deflection of the specimens was small, and the specimens showed certain characteristics of brittleness [41]. The failure mode of the specimens is shown in Figure 6.

Figure 6
Failure mode of the specimen. (a) Large eccentric compression failure. (b) Small eccentric compression failure.
### 3.2. Load-Deflection Curves
The load-deflection curves of each specimen are shown in Figure 7. It can be seen in the graphs that the load-deflection curve of SSRC eccentric compression columns is more or less divided into three stages. First is the linear elastic stage: the bearing capacity of the SSRC column is still small, the specimen has not cracked, and the deflection is also small. At this point, the relationship of load-deflection is approximately linear. The second stage is the nonlinear ascending stage: with the increase in load, both the amount and width of the cracks in the tensile zone of the specimen increase, and the cracks further extend to the compression zone. The plastic characteristics of the specimen become increasingly obvious. At this stage, there is a nonlinear relationship between load and deflection and, with an increase in load, the relationship becomes increasingly obvious and the slope becomes smaller, indicating a continuous decline in specimen stiffness. The third stage is the descending stage: after the peak load is exceeded, the deflection of the specimen increases and the load decreases [42]. The descending section of the large eccentric compression SSRC columns is relatively gentle, and the ductility of the specimens is good. In order to protect the instrument, the displacement gauge for some specimens was removed at 80% of the peak load, so the descending section is not presented in those graphs.

Figure 7
Load-deflection curve of the specimens.
### 3.3. Influencing Factor Analysis
In order to study the stress of the stainless steel bars in the concrete eccentric compression columns, the load-reinforcement strain curve of each specimen was drawn, as shown in Figure 8. As can be seen in Figure 8, the load-strain curve of the specimens with large eccentricity failure (A-D1, B-D1, B-D2, and C-D1) can be roughly divided into the following three stages: the elastic stage, where the strain and the load of the tensile stainless steel bars show an approximately linear relationship; the nonlinear stage, where the height of the compression zone decreases gradually with increasing load, the stress is redistributed, and the load-strain curve of the stainless steel bars deviates from the original linear relationship and begins to show nonlinearity; and, finally, the approximately level development stage, where the strain on the stainless steel bars increases and the bars approach yield. The stainless steel bars in the tensile zone were thus fully utilized, while those in the compression zone did not yield. In the specimens with small eccentric failure (A-D4, B-D3, B-D4, and C-D4), the load-strain curve of the compressive steel bars is essentially the same as that of the tensile stainless steel bars under large eccentric compression. The stainless steel bars in the compression zone ultimately yielded, meaning that they can be fully utilized, whereas the strain on the stainless steel bars in the tensile area is small; these bars did not yield and were not fully utilized. This is consistent with the small-eccentricity behavior of ordinary concrete columns. When the initial eccentricity is 50 mm, the stainless steel bars furthest away from the loading point also turn out to be under compression [42]. This is because the initial eccentricity is so small that the steel bars on the nominally tensile side of the specimen come under compression, as shown in graphs (f) and (h) of Figure 8.

Figure 8
Load-longitudinal reinforcement strain curve of the specimen.
The load-deflection curves for different longitudinal reinforcement ratios at the same eccentricity are displayed in Figure 9. It can be seen from the graphs that the longitudinal reinforcement ratio has a significant influence on the bearing capacity and deflection of SSRC columns. When the initial eccentricity is 200 mm, that is, under large eccentric compression, for the same load the smaller the reinforcement ratio, the greater the deflection and the flatter the load-deflection curve. When the initial eccentricity is 50 mm, in the case of small eccentric compression, the change in the longitudinal reinforcement ratio has little effect on the deflection of the specimen.

Figure 9
Influence of longitudinal reinforcement ratio on the load-deflection curve.
Eccentricity has a significant effect on the load and deflection of concrete columns. Under otherwise identical conditions, the deflection corresponding to the ultimate load of the SSRC column rises with increasing eccentricity, the load-deflection curves become gentler, and the bearing capacity of the specimens decreases. The comparison of the influence of eccentricity on the load-deflection curve of SSRC columns is shown in Figure 10 [43].

Figure 10
Comparison of the influence of eccentricity on the load-deflection curve.
### 3.4. Main Results
The main results of this research are shown in Table 4. It can be seen from the table that, under large eccentric compression, the stainless steel bars on the tensile side yielded, while the stainless steel bars on the compression side mostly did not. This phenomenon differs from that of ordinary carbon-steel reinforced concrete specimens. Under small eccentric compression, the steel on the compression side mostly yielded, while the stainless steel bars on the side furthest from the loading point did not, which is consistent with ordinary carbon-steel reinforced concrete specimens [44–48]. Stainless steel bars can play a beneficial role in concrete columns, but their behavior shows some disparities from that of ordinary reinforced concrete columns. As such, the calculation of SSRC columns cannot simply replicate the standard provisions for ordinary reinforced concrete columns. The relative eccentricities in Table 4 can be checked with the short sketch that follows the table.

Table 4
Main test data.
| Specimen number | e0/h0 | Nu (kN) | εs′ (10⁻⁶) | εs (10⁻⁶) | Failure pattern |
| --- | --- | --- | --- | --- | --- |
| A-D1 | 0.913 | 242.2 | −1195.05 | 6152.23 | Large eccentricity |
| A-D4 | 0.228 | 807.2 | −2422.90 | 384.45 | Small eccentricity |
| B-D1 | 0.922 | 302.1 | −1251.09 | 4183.15 | Large eccentricity |
| B-D2 | 0.691 | 440.5 | −1850.98 | 2983.35 | Large eccentricity |
| B-D3 | 0.461 | 625.4 | −2657.56 | 1707.61 | Small eccentricity |
| B-D4 | 0.230 | 952.0 | −2775.89 | 279.73 | Small eccentricity |
| C-D1 | 0.941 | 353.5 | −1166.36 | 1724.76 | Large eccentricity |
| C-D4 | 0.235 | 1032.0 | −1877.59 | −16.99 | Small eccentricity |

Note: Nu represents the maximum bearing load of the specimen, and εs′ and εs represent the strains of the compression and tension stainless steel bars, respectively, when the peak load is reached. Negative values represent compression, and positive values represent tension.
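The e0/h0 column of Table 4 can be reproduced from the specimen geometry if the effective depth is taken as h0 = h − (cover + d/2); treating the 25 mm cover this way (and ignoring the stirrup) is an assumption of this sketch, but it matches all eight tabulated ratios.

```python
# Relative eccentricity e0/h0, assuming h0 = h - (cover + d/2).
h, cover = 250.0, 25.0      # section height and concrete cover (mm)
specimens = {               # name: (e0 in mm, longitudinal bar diameter d in mm)
    "A-D1": (200, 12), "A-D4": (50, 12),
    "B-D1": (200, 16), "B-D2": (150, 16), "B-D3": (100, 16), "B-D4": (50, 16),
    "C-D1": (200, 25), "C-D4": (50, 25),
}

for name, (e0, d) in specimens.items():
    h0 = h - (cover + d / 2.0)   # effective depth (mm)
    print(f"{name}: e0/h0 = {e0 / h0:.3f}")
```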
## 4. Carrying Capacity Calculation
### 4.1. Verification of Plane Section Assumption
Figure 11 shows the distribution of strain in the mid-span section along the height under various loads. It can be seen that the strain distribution along the height of the column in the mid-span section of the specimen is uniform and mostly linear, which meets the requirements of the plane section assumption [49].

Figure 11
Strain-load relationship diagram of concrete.
### 4.2. Calculation of Ultimate Bearing Capacity
The mechanical properties of the stainless steel bars are shown in Table 5.

Table 5
Mechanical properties of stainless steel bars.
| ε0.2 | σ0.2 (MPa) | εu | σu (MPa) | Es (GPa) |
| --- | --- | --- | --- | --- |
| 0.0061 | 605 | 0.276 | 797 | 151 |

The constitutive model of the stainless steel adopts the Rasmussen model and the steel constitutive model of China's current standard [50]. The mathematical expression of the Rasmussen model is

(3) $$\varepsilon = \begin{cases} \dfrac{\sigma_s}{E_0} + \varepsilon_{py}\left(\dfrac{\sigma_s}{\sigma_{0.2}}\right)^{n}, & \varepsilon \le \varepsilon_{0.2}, \\[2mm] \dfrac{\sigma_s - \sigma_{0.2}}{E_{0.2}} + \varepsilon_u\left(\dfrac{\sigma_s - \sigma_{0.2}}{\sigma_u - \sigma_{0.2}}\right)^{m}, & \varepsilon_{0.2} < \varepsilon \le \varepsilon_u, \end{cases}$$

where $E_0$ is the initial elastic modulus; $\varepsilon_{py}$ is the residual strain, $\varepsilon_{py} = 0.002$; $n = \ln 20 / \ln(\sigma_{0.2}/\sigma_{0.01})$, where $\sigma_{0.01}$ is the stress corresponding to the 0.01% residual strain of the steel bar; $E_{0.2}$ is the slope of the tangent at the nominal yield point; $m = 1 + 3.5\sigma_{0.2}/\sigma_u$; and $\varepsilon_u = 1 - \sigma_{0.2}/\sigma_u$.

The mathematical expression of the steel bar constitutive model in China's current code is

(4) $$\sigma = \begin{cases} E_0\varepsilon, & \varepsilon \le \varepsilon_{0.2}, \\ \sigma_{0.2} + k\left(\varepsilon - \varepsilon_{0.2}\right), & \varepsilon_{0.2} < \varepsilon \le \varepsilon_u, \\ 0, & \varepsilon > \varepsilon_u, \end{cases}$$

where $\sigma$ is the steel bar stress, $E_0$ is the initial elastic modulus of the steel, $\varepsilon$ is the steel bar strain, $\sigma_{0.2}$ is the nominal yield stress of the stainless steel bar, $\varepsilon_{0.2}$ is the strain corresponding to the nominal yield stress, $\varepsilon_u$ is the peak strain corresponding to the ultimate strength of the steel, and $k$ is the slope of the hardening section, $k = (\sigma_u - \sigma_{0.2})/(\varepsilon_u - \varepsilon_{0.2})$.

According to the force balance of the damaged section of the SSRC eccentric compression member and the moment balance about the centroid of the tension steel bars, the calculation formulas for the bearing capacity of the rectangular section of the SSRC eccentric compression columns are obtained [43]:

(5) $N = f_c b x + \sigma_s' A_s' - \sigma_s A_s$, $\quad Ne = f_c b x\left(h_0 - \dfrac{x}{2}\right) + \sigma_s' A_s'\left(h_0 - a_s'\right)$.

The theoretical ultimate bearing capacities of the SSRC eccentric compression specimens under the two constitutive models are given in Table 6; a sketch of both stress-strain models follows the table.

Table 6
Calculated and measured ultimate bearing capacity Nu of the specimens.

| Specimen number | Rasmussen model Nu (kN) | Standard model Nu (kN) | Test value Nu (kN) | Failure pattern |
| --- | --- | --- | --- | --- |
| A-D1 | 249.51 | 263.15 | 242.2 | Large eccentricity |
| A-D4 | 856.08 | 850.84 | 807.2 | Small eccentricity |
| B-D1 | 337.02 | 284.23 | 302.1 | Large eccentricity |
| B-D2 | 438.28 | 415.78 | 440.5 | Large eccentricity |
| B-D3 | 745.62 | 730.30 | 625.4 | Small eccentricity |
| B-D4 | 979.74 | 964.62 | 952.0 | Small eccentricity |
| C-D1 | 640.03 | 633.03 | 653.5 | Large eccentricity |
| C-D4 | 1093.04 | 1084.65 | 1032.0 | Small eccentricity |
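The following minimal sketch implements both constitutive models with the Table 5 parameters. The quantities σ0.01 and E0.2 are not tabulated in the paper, so the values used here are illustrative assumptions; formula (4) is implemented exactly as printed.

```python
import math

# Stress-strain models of the stainless steel bars, formulas (3) and (4).
# Parameters from Table 5; s001 and E02 are NOT reported in the paper
# and are illustrative assumptions of this sketch.
E0, s02, su = 151_000.0, 605.0, 797.0   # MPa (Es, sigma_0.2, sigma_u)
e02, eu = 0.0061, 0.276                 # epsilon_0.2, epsilon_u
s001 = 520.0                            # assumed stress at 0.01% residual strain (MPa)
E02 = 25_000.0                          # assumed tangent modulus at nominal yield (MPa)

n = math.log(20.0) / math.log(s02 / s001)   # Rasmussen exponent n
m = 1.0 + 3.5 * s02 / su                    # second-branch exponent m

def strain_rasmussen(sigma):
    """Strain from stress, formula (3) (Rasmussen model)."""
    if sigma <= s02:
        return sigma / E0 + 0.002 * (sigma / s02) ** n
    return (sigma - s02) / E02 + eu * ((sigma - s02) / (su - s02)) ** m

k = (su - s02) / (eu - e02)   # hardening slope k of formula (4)

def stress_code(eps):
    """Stress from strain, formula (4) (bilinear code model), as printed."""
    if eps <= e02:
        return E0 * eps
    if eps <= eu:
        return s02 + k * (eps - e02)
    return 0.0

print(f"Rasmussen strain at sigma_0.2: {strain_rasmussen(s02):.4f}")
print(f"Code-model stress at eps = 0.01: {stress_code(0.01):.1f} MPa")
```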
### 4.3. Comparative Analyses
In this experiment, the bearing capacities of eight SSRC eccentric compression columns were measured and the experimental values were compared with the calculated theoretical values; the comparison results are shown in Table 6. Through comparative analysis, it can be seen that the measured ultimate bearing capacities of the eccentrically loaded SSRC specimens are close to the theoretical results obtained with both stainless steel constitutive models. For convenience of calculation and verification, the steel constitutive model in China's current code is adopted for the bearing capacity calculation of SSRC eccentric compression members. The code model is then revised so that the calculation results agree more closely with the tests and provide a safety reserve. Formulas (6) and (7) are used to revise the standard model, and the revised results are shown in Table 7. It can be seen in Table 7 that the error between the theoretical values of the modified model and the test values is small; within a reasonable range, formulas (6) and (7) can therefore be used to modify the standard model.

Table 7
Comparison of the calculated and test values of the load-bearing capacity for the revised standard model.
| Specimen number | Revised specification model Nu (kN) | Test value Nu (kN) | Ratio of calculated to test value |
| --- | --- | --- | --- |
| A-D1 | 263.15 | 242.2 | 1.0865 |
| A-D4 | 805.97 | 807.2 | 0.9985 |
| B-D1 | 284.23 | 302.1 | 0.9408 |
| B-D2 | 415.78 | 440.5 | 0.9439 |
| B-D3 | 684.01 | 625.4 | 1.0937 |
| B-D4 | 915.27 | 952.0 | 0.9614 |
| C-D1 | 633.03 | 653.5 | 1.7958 |
| C-D4 | 1025.77 | 1032.0 | 0.9940 |

For small eccentric compression members, the concrete strength used in the bearing capacity calculation is given by formula (6); substituting formula (6) into formula (5) yields the theoretical bearing capacities of the small eccentric compression specimens listed in Table 7:

(6) $f_c = 0.76 f_{cu} - \alpha$.

For large eccentric compression members, the following formulas are used to obtain the theoretical bearing capacities of the SSRC columns under large eccentric compression; the calculation results are also shown in Table 7:

(7) $N = f_c b x + \beta\sigma_s' A_s' - \sigma_s A_s$, $\quad Ne = f_c b x\left(h_0 - \dfrac{x}{2}\right) + \sigma_s' A_s'\left(h_0 - a_s'\right)$.

Here, $\alpha = 1.5$ and $\beta = 0.8$. A numerical sketch of the sectional equilibrium follows.
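To make the sectional equilibrium concrete, the sketch below solves formula (5) numerically for the compression-zone height x of a large eccentric compression case, assuming both steel layers act at the nominal yield stress σ0.2 (a textbook simplification; the tests above show the compression bars did not always yield). All input values are illustrative, not the paper's design data.

```python
# Numerical solution of the sectional equilibrium, formula (5), for a large
# eccentric compression member. Both steel layers are assumed to act at the
# nominal yield stress sigma_0.2. All inputs are illustrative.
fc = 34.0                                     # concrete compressive strength (MPa)
b, h, h0, a_sp = 150.0, 250.0, 217.0, 33.0    # width, height, eff. depth, a_s' (mm)
As = Asp = 402.0                              # tension / compression steel area (mm^2)
sig_y = 605.0                                 # nominal yield stress sigma_0.2 (MPa)
e0 = 200.0                                    # initial eccentricity (mm)
e = e0 + h / 2.0 - (h - h0)                   # eccentricity to the tension steel (mm)

def residual(x):
    """Moment equation of formula (5) minus N*e from the force equation."""
    N = fc * b * x + sig_y * Asp - sig_y * As
    M = fc * b * x * (h0 - x / 2.0) + sig_y * Asp * (h0 - a_sp)
    return M - N * e

lo, hi = 1.0, h0            # bracket for the compression-zone height x (mm)
for _ in range(60):         # plain bisection; residual is monotone in x here
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)
N = fc * b * x + sig_y * Asp - sig_y * As
print(f"x = {x:.1f} mm, ultimate load N = {N / 1000.0:.1f} kN")
```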
## 5. Conclusions
In this paper, eight SSRC eccentric compression columns were fabricated and tested. Based on the test results, the mechanical behavior of the SSRC columns was analyzed, a calculation formula for the bearing capacity of SSRC eccentric compression members was proposed, and the factors affecting the bearing capacity were examined. The main conclusions are as follows:(1)
The failure mode of SSRC eccentric compression columns in the ultimate state is the same as that of ordinary reinforced concrete columns. In large eccentric compression failure, the concrete in the compression zone was crushed; the stainless steel bars in the compression zone did not yield, whereas those in the tensile zone did. The lateral deflection of the specimens was relatively large, displaying the characteristics of ductile failure. In small eccentric compression failure, the stainless steel bars near the loading point yielded, while those furthest from the loading point did not. The lateral deflection of the specimens was relatively small, indicating brittle failure.(2)
The load-deflection curve of SSRC eccentric compression columns can be broadly divided into three stages: a linear elastic stage, a nonlinear ascending stage, and a descending stage. Compared with small eccentricity, the descending branch of the large eccentricity compression members is relatively gentle and the ductility of the specimens is superior. Under large eccentric compression, at the same load level, the smaller the reinforcement ratio, the greater the deflection. Under small eccentric compression, the longitudinal reinforcement ratio has little effect on the deflection of the specimens. When other conditions are comparable, the deflection corresponding to the ultimate load of the SSRC column rises with increasing eccentricity and the load-deflection curve becomes gentler.(3)
The strain distribution along the height of the mid-span section of the SSRC columns under small eccentric compression is consistent with the plane section assumption, so the section can be analyzed theoretically on that basis.(4)
The Rasmussen model and the model in the current Chinese specification are the two stainless steel reinforcement constitutive models most commonly used at present. The specification model is simple and easy to apply, which makes it convenient to modify. The comparison between the revised results and the test results indicates that the correction formulas can be used to calculate the bearing capacity of SSRC eccentric compression columns, and the calculated results are representative of the test values.

Although this paper has investigated the mechanical properties of SSRC eccentric compression members in some depth, because of the complexity of SSRC structures and the scatter inherent in concrete material tests, several problems remain to be solved: (1) exploring the size effect in SSRC structures to determine whether current theories apply to large-volume SSRC construction; (2) identifying the types of concrete structure in which the strength of stainless steel bars can be exploited to the greatest extent; and (3) carrying out experimental research on the flexural, shear, and fatigue behavior of SSRC structures. For the application and development of stainless steel bars in practical engineering, more comprehensive research and comparison are needed.
---

# Experimental Study on SSRC under Eccentric Compression

**Authors:** Qingfu Li; Tianjing Zhang; Yaqian Yan; Qunhua Xiang

**Journal:** Advances in Civil Engineering
(2021)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2021/1013188

---
## Abstract
The use of stainless steel bars can improve the durability and sustainability of building structures. Through static performance tests, this research analyzes the failure pattern and bearing performance of stainless steel reinforced concrete (SSRC) columns under eccentric compression. The influence of the longitudinal reinforcement ratio and the eccentricity on the mechanical performance of the specimens was studied. Different constitutive models of stainless steel bars were used to calculate the ultimate bearing capacity of the column section under eccentric compression. Based on the experimental results, a method to modify the expression in the design specification is proposed, and its predictions were compared with the test results. The results showed that the damage patterns and failure modes of SSRC columns are essentially the same as those of traditional reinforced concrete columns. The bearing capacity of SSRC columns rises with an increase in the longitudinal reinforcement ratio, and the ductility of the specimens is enhanced. The ultimate load of the specimens decreases with rising eccentricity while the deflection gradually increases. The strain distribution of the mid-span section of the SSRC columns conforms to the plane section assumption. The bearing capacity of the specimens can be analyzed by referring to the calculation method of the specification, with some parameters in the specification formula modified to suit the design and calculation of SSRC columns.
---
## Body
## 1. Introduction
Reinforcement corrosion is a common durability problem in concrete structures and undermines the sustainability of construction projects. Investigation results show that more than 80% of concrete structures develop durability problems, or more serious steel corrosion, just 5–10 years after construction [1]. How to improve the durability and sustainability of reinforced concrete structures under poor environmental conditions has therefore become a pressing problem. At present, the main measures taken to prevent the rapid corrosion of steel bars and improve the durability of reinforced concrete structures include [2–5] coating the surface of ordinary steel bars with antirust materials, using epoxy-coated or hot-dip galvanized steel bars, controlling the quality of concrete, and using stainless steel bars. To fundamentally solve the problems caused by steel corrosion, the use of stainless steel bars is the first choice. Stainless steel reinforcement is an alloy steel containing a significant amount of chromium, which protects the steel against rust: in weakly corrosive media it forms a thin, smooth, colorless, and transparent oxide film on the surface that shields the bar from environmental agents such as air and water. Existing research has focused on the corrosion resistance of stainless steel; however, there has been relatively little research on the mechanical performance of SSRC members. The eccentric compression member is the most fundamental and important mechanical component in a building's structure [6]; as such, research on its mechanical performance has theoretical significance and high application value.

Currently, in China, there is a range of research on and applications for stainless steel bars and their composite materials. It mainly focuses on two aspects: one is the corrosion behavior of stainless steel bars in simulated concrete pore fluid, and the other is their corrosion behavior in mortar or concrete, usually assessed against carbon steel bars under the same conditions by monitoring the corrosion rate. In 2006, Guoxue et al. preliminarily discussed the mechanical properties of SSRC beams, slabs, and columns [7–11]. In 2007, Yongsheng et al. analyzed the influence of stainless steel bars on the deflection, crack width, and ultimate bearing capacity of concrete beams and verified that SSRC beams can meet the maximum deflection and maximum crack width limits specified in the design of ordinary reinforced concrete structures [12]. In 2011, Huanxin et al. identified the differences between the constitutive models of stainless steel bars and ordinary steel bars [13]. In 2013, Jiawei et al. discussed the influence of using stainless steel bars on the bearing capacity of concrete beams through fatigue tests and studied the mechanical performance of SSRC beams, including the stress-strain relationship between the steel bars and the concrete [10]. In 2013, Huitao conducted an experimental study on the mechanical properties of stainless steel bars, the bond properties between stainless steel bars and concrete, and the flexural properties of SSRC beams. The results showed that the strength, elongation, and cold bending properties of stainless steel bars are higher than those of ordinary hot-rolled bars, and that the bonding performance between stainless steel bars and concrete is excellent.
Under the same reinforcement conditions, the bearing capacity of SSRC beams is higher than that of ordinary hot-rolled reinforced concrete beams, the failure law is consistent, and the plane section assumption is still applicable [14]. In 2014, Chen Long et al. used potentiodynamic scanning and electron microscope tests to study the critical chloride ion concentration of stainless steel bars and ordinary steel bars in simulated concrete pore fluid and concluded that the critical chloride ion concentration of the stainless steel bar is more than 20 times that of the ordinary steel bar and that its corrosion resistance is far better [15]. In 2016, Chengchang et al. proposed a preliminary calculation method for the bonding force between stainless steel bars and concrete [16], concluding that the carrying capacity of concrete beams with stainless steel bars was relatively greater. In 2018, Hailong et al. concluded that the critical corrosion-resistance value of stainless steel bars in the simulated liquid of freshly mixed concrete was more than 75 times that of carbon steel [17]. In 2018, Yi et al. studied the corrosion behavior of stainless steel bars and ordinary steel bars used in concrete structures. The results showed that the corrosion rate of carbon steel bars connected with stainless steel bars is significantly accelerated and that the galvanic corrosion strengthens as the corrosion time increases; when the two kinds of steel bars are overlapped, the galvanic corrosion is more severe [18]. In 2019, Chunyi et al. presented a bonding force formula between stainless steel bars and concrete [19].

Overseas, the application of stainless steel bars has a much longer history: as early as 1937, the Progreso Pier in Mexico used stainless steel bars in place of ordinary steel bars to improve the durability of the structure [20]. Foreign scholars' research on stainless steel bars has likewise focused mainly on corrosion resistance. For example, in 1985, Zoob et al. carried out corrosion resistance tests on 304 stainless steel bars. The research showed that the allowable chloride content for stainless steel bars buried in concrete is 7–10 times that of ordinary steel bars in the same environment, with no obvious corrosion of the stainless steel bars. In 1988, Flint et al. buried one end of 316 stainless steel bars and carbon steel bars in concrete and exposed the other end to sea water. The experimental results showed that local erosion of the stainless steel bars occurred only in the section immersed in sea water; the stainless steel bars wrapped in concrete remained intact, and the overall strength was not affected. However, the corrosion of the carbon steel bars was significant and their strength was seriously reduced [21, 22]. In 1995, McDonald and others conducted the same experimental study on SSRC members, and the results showed that the corrosion resistance of SSRC members was better than that of ordinary reinforced concrete members [23]. In 1998, Bertolini et al. conducted corrosion tests on SSRC members in high pH, high chloride environments and found that the corrosion resistance of stainless steel reinforcement is excellent: it remains passive even under such conditions [24]. In 2002, Abreu et al.
used sodium chloride solution to simulate concrete pore fluid and applied the EIS and ZRA methods to study the galvanic corrosion behavior of stainless steel bars and carbon steel bars. The results showed that if a carbon steel bar and a stainless steel bar are connected in simulated pore fluid, the probability of galvanic corrosion is small [25]. Castro (2003) conducted tests on the mechanical and corrosion properties of austenitic stainless steel rebar under both hot-rolled and cold-rolled conditions [26]. Blanco et al. experimentally compared the corrosion behavior of two traditional austenitic steels and a dual-phase stainless steel with that of a low nickel austenitic steel [27]. In 2010, Milan Kouřil et al. studied the corrosion resistance of stainless steel bars by electrochemical testing, and the results showed that the critical chloride concentration for stainless steel bars in concrete structures depends not only on the chemical composition of the stainless steel and the pH level of the concrete pore fluid but also on the surface state of the steel: stainless steel bars with a smooth surface have good corrosion resistance, while those with a rough surface have weak resistance to chloride ions [28]. In 2013, Hansson et al. used the linear polarization method to compare the corrosion performance of stainless steel bars and carbon steel bars in a concrete environment with the same chloride concentration. The results showed that corrosion of the ordinary carbon steel bars occurred within two weeks, while the stainless steel bars began to rust only after 139 weeks, so the use of stainless steel bars has a significant effect on improving the service life of concrete structures [29]. Gastaldi and Bertolini tested the chloride-induced corrosion resistance of low nickel duplex stainless steel threaded bars and traditional austenitic stainless steel threaded bars over different temperature ranges [30]. In recent years, foreign studies on the mechanical properties of stainless steel have gradually increased. In 2015, Mdina et al. carried out a series of experimental studies on the structural properties of three kinds of stainless steel bars (austenitic AISI304, duplex AISI2304, and the new duplex AISI2001) at the reinforcement, section, and structural member levels, respectively [31]. In 2016, Gardner et al. conducted 164 heating tests on four kinds of stainless steel bars (1.4307[304Ln], 1.4311[304Ln], 1.4162[LDX2101], and 1.4362[2304]) under high-temperature conditions [32]. In 2019, Yibu et al. conducted an experimental study on the strength of 59 stainless steel welded sections produced by traditional welding and laser welding [33]. In 2019, Bemfica et al. studied the axial-torsional fatigue and cyclic deformation behavior of 304L stainless steel bars at room temperature [34].

Stainless steel bars have high strength, a high tensile-to-yield strength ratio, long fatigue life, high impact toughness, and other favorable properties. The main obstacle to their application at present is the high price, which raises the initial cost of construction. In general, however, stainless steel reinforcement needs to be used only in the key parts of a project, so its share of the total project cost is very small, and other characteristics of stainless steel reinforcement make its life cycle cost much lower than that of carbon steel [35].
Compared with ordinary steel bars, first, the application of stainless steel bars can reduce the thickness of the concrete cover and the number of steel bars. Second, the transportation, processing, and installation of stainless steel bars involve no special requirements, and the construction cost is 25% lower than that of epoxy-coated carbon steel bars. Third, stainless steel reinforced concrete buildings require little or no maintenance, which reduces maintenance and inspection costs and thereby the social cost of maintenance disrupting operations. The Federal Highway Administration (FHWA) conducted a cost analysis of three bridges built in Illinois with different rustproofing methods. The results showed that the initial cost of the bridge with stainless steel reinforcement increased by 16%, but it took about six times as long to crack, resulting in a significant reduction in maintenance costs [36]. Val et al. put forward a time-varying probabilistic model to predict the expected cost of repair and replacement and then used the model to compare the life cycle costs of reinforced concrete structures under different marine exposure conditions. The results showed that although the price of stainless steel was six to nine times that of carbon steel, the use of stainless steel bars was justified on a life cycle cost basis [37]. Frank N. Smith analyzed the serious corrosion problems during the service of the Oland Bridge in Sweden and pointed out that if the bridge had been constructed with stainless steel bars, the construction cost would have increased by only 8%, but a 100-year life could have been achieved with very little maintenance [38]. Cope et al. evaluated the superiority of stainless steel over conventional steel in terms of both long-term usage costs and user costs with data from one Midwestern state in the United States, using Monte Carlo simulation for most of the analyzed scenarios, and showed that, under uncertainty, using stainless steel as the bridge reinforcement material is more cost-effective than conventional steel [39]. Younis et al. conducted a life cycle cost analysis of high-rise buildings over a 100-year study period and showed that the proposed combination using stainless steel bars had a life cycle cost (LCC) approximately 50% lower than the conventional combination (i.e., concrete containing fresh water, natural aggregate, and black steel) [40].

Although there has been much progress in the study of stainless steel bars, most of it has concentrated on the performance of the stainless steel material itself, such as its basic mechanical performance, weldability, and corrosion resistance. There has been relatively little research on the mechanical properties of concrete structural members containing stainless steel bars. When stainless steel bars are used in concrete compression members, whether the stainless steel bars and the concrete can fully develop their strengths needs to be verified through rigorous testing. In view of this, this paper takes SSRC columns as the research object. Through testing, theoretical analysis, numerical simulation, and other technical methods, the mechanical performance of SSRC columns is studied.
## 2. Experimental Design
### 2.1. Specimen
In order to conduct the study, eight SSRC eccentric compression columns were designed and constructed. The cross section of each specimen was 250 mm × 150 mm, with a height of 1000 mm. Symmetrical reinforcement was adopted, and the concrete cover thickness was 25 mm. Grade 2304 stainless steel bars were used in the longitudinal direction of the specimens. The specific dimensions and reinforcement layout are shown in Figure 1.

Figure 1
Schematic of specimen dimensions and reinforcement (mm).
(a)(b)(c)

At the same time, six 150 mm × 150 mm × 150 mm standard concrete companion cubes were poured for each eccentric compression specimen. These were used for testing the concrete strength of the specimens.
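As a quick sanity check on the layouts above, the short sketch below computes single-bar areas and the resulting longitudinal reinforcement ratios. The assumption of two bars per face follows the symmetric arrangement, but the exact bar count comes from Figure 1, which is not reproduced here, so the printed ratios are illustrative only.

```python
# Illustrative geometric check; two bars per face is an assumption.
import math

B, H = 150.0, 250.0                 # section dimensions (mm)


def bar_area(d_mm):
    """Cross-sectional area of a single bar (mm^2)."""
    return math.pi * d_mm ** 2 / 4.0


for d in (12, 16, 25):              # longitudinal diameters of series A, B, C
    As_face = 2 * bar_area(d)       # assumed: two bars per face
    rho = 2 * As_face / (B * H)     # total longitudinal reinforcement ratio
    print(f"d = {d:2d} mm: As per face = {As_face:6.1f} mm^2, rho = {rho:.2%}")
```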
### 2.2. Experimental Methods
The design parameters of the specimens are listed in Table 1. The test was carried out on a WHY-5000 kN hydraulic pressure machine. In order to measure the strain on the concrete, six strain gauges were pasted at equal spacing on the mid-height section of each specimen. Three strain gauges were uniformly pasted on the surface of each longitudinal stainless steel bar to measure the strain on the longitudinal steel bars. Five displacement meters were placed at equal spacing along the bottom surface at the mid-section of the specimen to measure its deflection. Figures 2 and 3 show the loading diagram and strain gauge distribution, respectively. Crack development, concrete strain, reinforcement strain, lateral displacement, and other test phenomena were observed and recorded during the test.

Table 1
Specimen design parameters.
| Specimen number | Concrete strength grade | Section dimensions (mm) | Initial eccentricity e0 (mm) | Longitudinal bar diameter d (mm) | Stirrups |
|---|---|---|---|---|---|
| A-D1 | C45 | 250 × 150 | 200 | 12 | 8@125 |
| A-D4 | C45 | 250 × 150 | 50 | 12 | 8@125 |
| B-D1 | C45 | 250 × 150 | 200 | 16 | 8@125 |
| B-D2 | C45 | 250 × 150 | 150 | 16 | 8@125 |
| B-D3 | C45 | 250 × 150 | 100 | 16 | 8@125 |
| B-D4 | C45 | 250 × 150 | 50 | 16 | 8@125 |
| C-D1 | C45 | 250 × 150 | 200 | 25 | 8@125 |
| C-D4 | C45 | 250 × 150 | 50 | 25 | 8@125 |

Note: letters A, B, and C represent longitudinal stainless steel bars with diameters of 12 mm, 16 mm, and 25 mm, respectively. D1, D2, D3, and D4 represent initial eccentricities of 200 mm, 150 mm, 100 mm, and 50 mm, respectively, when the specimen is loaded.

Figure 2
Experimental loading diagram.

Figure 3
Distribution of the strain gauge (mm).
### 2.3. Mechanical Properties of Concrete and Reinforcement
In this experiment, 2304 stainless steel bars were used. Stainless steel specimens with diameters of 12 mm, 16 mm, and 25 mm were selected for tensile testing. The test results and the mechanical indexes of the stainless steel bars are shown in Figures 4 and 5 and Table 2.

Figure 4
Load-displacement curve of stainless steel bars.

Figure 5
Stress-strain curve of stainless steel bars.

Table 2
Mechanical indexes of stainless steels.
| Diameter d (mm) | Tensile strength (MPa) | Yield strength (MPa) | Elastic modulus (10^5 MPa) | Elongation (%) |
|---|---|---|---|---|
| 12 | 885.3 | 677.5 | 1.56 | 33.00 |
| 16 | 795.3 | 565.8 | 1.56 | 36.39 |
| 25 | 768.7 | 572.7 | 1.40 | 34.13 |

The design strength of the concrete is C45, and the measured strength of the concrete test cubes is shown in Table 3. The axial compressive strength of concrete is calculated using formula (1), according to the Code for Design of Concrete Structures (GB50010-2010). Considering that all the specimens were manufactured and tested under laboratory conditions, the reduction factor of 0.88 can be ignored, so the actual strength of the concrete can be calculated using formula (2). The calculated mechanical properties of the concrete are shown in Table 3:

$$f_{ck} = 0.88 \times 0.76 f_{cu,k}, \tag{1}$$

$$f_c = 0.76 f_{cu}. \tag{2}$$

Table 3
Mechanical properties of concrete.
| Specimen number | fcu (MPa) | fc (MPa) |
|---|---|---|
| A-D1 | 43.47 | 33.04 |
| A-D4 | 42.64 | 31.65 |
| B-D1 | 45.20 | 34.20 |
| B-D2 | 44.93 | 34.15 |
| B-D3 | 42.44 | 32.25 |
| B-D4 | 46.00 | 34.96 |
| C-D1 | 45.33 | 34.45 |
| C-D4 | 45.60 | 34.66 |
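The conversion behind Table 3 is simple enough to verify directly; the minimal sketch below implements formulas (1) and (2) under the laboratory assumption stated above, namely that the 0.88 factor is dropped.

```python
# Sketch of the concrete strength conversion (formulas (1) and (2)).
def f_ck(fcu_k):
    """Formula (1): characteristic axial compressive strength (MPa)."""
    return 0.88 * 0.76 * fcu_k


def f_c(fcu):
    """Formula (2): laboratory value used for Table 3 (MPa)."""
    return 0.76 * fcu


# Cube strength of the A-D1 companion specimens from Table 3:
print(round(f_c(43.47), 2))  # 33.04 MPa, matching the tabulated fc
```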
## 3. Results and Discussion
### 3.1. Experiment Phenomena and Failure Modes
The experiments can be divided into two conditions: large eccentric compression, for which the initial eccentricities are 150 mm and 200 mm, and small eccentric compression, which includes the initial eccentricities of 50 mm and 100 mm. Specimens with D1/D2 in the specimen number are those with large eccentric compression, and the failure modes of specimens with large eccentric compression are essentially the same. When the load increased to 15–25% of the peak load, 2–4 horizontal cracks formed in the tensile zone of the SSRC columns. At the point of these cracks in the tensile zone, the strain on the stainless steel bars and the mid-span deflection of the SSRC columns greatly increased. As the load continued to increase, the cracks were more or less equally spaced, forming several major cracks. With further increase in the load, the width of the cracks increased, gradually extending to the compression zone, and the height of the compression zone of the SSRC columns decreased. When the load reached 80–90% of the peak load, the stainless steel bars in the tensile zone were close to yield: the strain on the stainless steel bars rapidly increased, the height of the compression zone of the specimen decreased, the compressive strain on the concrete in the compression zone of the concrete columns and the stainless steel bars increased, and longitudinal cracks appeared. As the load continued to increase, the bearing capacity of the specimens decreased abruptly, the strain on the concrete in the compression zone reached the ultimate compressive strain state, and the specimens were crushed. On the destruction of the specimens, the concrete in the compression area was crushed and, while the stainless steel bars in the compression area did not yield, the stainless steel bars in the tensile area did. The lateral deflection of the specimens was large, and there were characteristics of ductile failure [41].Specimens with D3/D4 in the specimen number are the specimens with small eccentric compression, and the failure modes of all specimens with small eccentric compression are essentially comparable. When the load increased to 25–40% of the peak load, several small cracks appeared in the tensile zone and the width of the cracks and their extension to the compression zone were not obvious. No main cracks formed, and the strain at the edge of the compression zone of the specimens increased rapidly. With an increase in the load, the strain on the stainless steel bars and concrete in the compression area of the specimens increased significantly and the cracks in the tensile area extended and developed slowly. When approaching failure conditions, longitudinal cracks appeared in the concrete in the compression zone. The destruction was sudden, without any visible symptoms, and the crushing area was large. When the specimens were destroyed, the reinforcement on the side closest to the loading point yielded, while the reinforcement on the side furthest from the loading point did not. The lateral deflection of the specimens was small, and the specimens showed certain characteristics of brittleness [41]. The failure mode of the specimens is shown in Figure 6.Figure 6
Failure mode of the specimen. (a) Large eccentric compression failure. (b) Small eccentric compression failure.
(a)(b)
### 3.2. Load-Deflection Curves
The load-deflection curves of the specimens are shown in Figure 7. It can be seen in the graphs that the load-deflection curve of SSRC eccentric compression columns can be broadly divided into three stages. First is the linear elastic stage: the load on the SSRC column is still small, the specimen has not cracked, and the deflection is also small; at this point, the load-deflection relationship is approximately linear. The second stage is the nonlinear ascending stage: with the increase in load, both the number and width of the cracks in the tensile zone of the specimen increase and the cracks extend further into the compression zone. The plastic characteristics of the specimen become increasingly obvious. At this stage, the relationship between load and deflection is nonlinear and, as the load increases, the nonlinearity becomes increasingly pronounced and the slope decreases, indicating a continuous decline in specimen stiffness. The third stage is the descending stage: after the peak load is exceeded, the deflection of the specimen increases while the load decreases [42]. The descending branch of the large eccentric compression SSRC columns is relatively gentle, and the ductility of the specimens is good. In order to protect the instrument, the displacement gauge of some specimens was removed at 80% of the peak load, so the descending branch is not presented in those graphs.

Figure 7
Load-deflection curve of the specimens.
(a)(b)(c)(d)(e)(f)(g)(h)
### 3.3. Influencing Factor Analysis
In order to study the stress in the stainless steel bars of concrete eccentric compression columns, the load-reinforcement strain curve of each specimen was drawn, as shown in Figure 8. As can be seen in Figure 8, the load-strain curve of the specimens with large eccentricity failure (A-D1, B-D1, B-D2, and C-D1) can be broadly divided into three stages: the elastic stage, where the strain in the tensile stainless steel bars is approximately linear in the load; the nonlinear stage, where the height of the compression zone decreases gradually with increasing load, the stress is redistributed, and the load-strain curve of the stainless steel bars deviates from the original linear relationship and begins to show nonlinearity; and, finally, an approximately level stage, where the strain in the tension bars increases until they are close to yield, so the tension steel is fully utilized, while the stainless steel bars in the compression zone do not yield. In the specimens with small eccentricity failure (A-D4, B-D3, B-D4, and C-D4), the load-strain curve of the compression steel bars is essentially the same as that of the tension bars under large eccentricity. The stainless steel bars in the compression zone ultimately yielded, meaning that they can be fully utilized, whereas the strain in the bars in the tensile area is small; these bars did not yield and were not fully utilized. This is consistent with the behavior of ordinary concrete columns under small eccentricity. When the initial eccentricity is 50 mm, the stainless steel bars furthest from the loading point are also under compression [42], because the eccentricity is so small that even the nominally tensile side of the section remains compressed, as shown in graphs (f) and (h) in Figure 8.

Figure 8
Load-longitudinal reinforcement strain curve of the specimen.
(a)(b)(c)(d)(e)(f)(g)(h)

The load-deflection curves for different longitudinal reinforcement ratios at the same eccentricity are displayed in Figure 9. It can be seen from the graphs that the longitudinal reinforcement ratio has a significant influence on the bearing capacity and deflection of SSRC columns. When the initial eccentricity is 200 mm, meaning the specimen is under large eccentricity compression, at the same load the smaller the reinforcement ratio, the greater the deflection and the flatter the load-deflection curve. When the initial eccentricity is 50 mm, that is, under small eccentricity compression, a change in the longitudinal reinforcement ratio has little effect on the deflection of the specimen.

Figure 9
Influence of longitudinal reinforcement ratio on the load-deflection curve.
(a)(b)

Eccentricity has a significant effect on the load and deflection of concrete columns. Under otherwise identical conditions, the deflection corresponding to the ultimate load of the SSRC column rises with increasing eccentricity, the load-deflection curves become gentler, and the bearing capacity of the specimens decreases. The influence of eccentricity on the load-deflection curve of SSRC columns is compared in Figure 10 [43].

Figure 10
Comparison of the influence of eccentricity on the load-deflection curve.
(a)(b)(c)
### 3.4. Main Results
The main results of this research are shown in Table 4. It can be seen from the table that, under large eccentric compression, the stainless steel bars on the tensile side yielded, while those on the compression side mostly did not. This phenomenon differs from that of ordinary carbon-reinforced concrete specimens. Under small eccentric compression, the steel on the compression side mostly yielded, while the stainless steel bars on the side furthest from the loading point did not, which is consistent with ordinary carbon-reinforced concrete specimens [44–48]. Stainless steel bars can play a beneficial role in concrete columns, but there are some differences from ordinary reinforced concrete columns. As such, the calculation of SSRC columns cannot simply copy the standard method for ordinary reinforced concrete columns.

Table 4
Main test data.
| Specimen number | e0/h0 | Nu (kN) | εs′ (10⁻⁶) | εs (10⁻⁶) | Failure pattern |
|---|---|---|---|---|---|
| A-D1 | 0.913 | 242.2 | −1195.05 | 6152.23 | Large eccentricity |
| A-D4 | 0.228 | 807.2 | −2422.90 | 384.45 | Small eccentricity |
| B-D1 | 0.922 | 302.1 | −1251.09 | 4183.15 | Large eccentricity |
| B-D2 | 0.691 | 440.5 | −1850.98 | 2983.35 | Large eccentricity |
| B-D3 | 0.461 | 625.4 | −2657.56 | 1707.61 | Small eccentricity |
| B-D4 | 0.230 | 952.0 | −2775.89 | 279.73 | Small eccentricity |
| C-D1 | 0.941 | 653.5 | −1166.36 | 1724.76 | Large eccentricity |
| C-D4 | 0.235 | 1032.0 | −1877.59 | −16.99 | Small eccentricity |

Note: Nu represents the maximum load borne by the specimen; εs′ and εs represent the strains in the compression and tension stainless steel bars, respectively, when the ultimate load is reached. Negative values denote compression and positive values denote tension.
## 4. Carrying Capacity Calculation
### 4.1. Verification of Plane Section Assumption
Figure 11 shows the distribution of strain in the mid-span section along the section height under various loads. It can be seen that the strain distribution along the height of the mid-span section of the specimens is essentially linear, which satisfies the plane section assumption [49].

Figure 11
Strain-load relationship diagram of concrete.
(a)(b)(c)(d)
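A minimal numerical sketch of this check is given below: fit a straight line to the strain profile over the section height and inspect the fit quality. The gauge heights and strain readings are hypothetical values for illustration, not data from the tests.

```python
# Quantifying the linearity of a strain profile (plane section check).
import numpy as np

y = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])            # gauge height (mm), assumed
eps = np.array([-1800.0, -1150.0, -430.0, 240.0, 910.0, 1590.0])  # microstrain, hypothetical

slope, intercept = np.polyfit(y, eps, 1)   # least-squares line eps = slope*y + intercept
resid = eps - (slope * y + intercept)
r = np.corrcoef(y, eps)[0, 1]
print(f"slope = {slope:.2f} microstrain/mm, r = {r:.4f}, "
      f"max |residual| = {np.abs(resid).max():.0f} microstrain")
# r close to 1 with small residuals supports a linear (plane section) profile.
```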
### 4.2. Calculation of Ultimate Bearing Capacity
The mechanical properties of the stainless steel bars are shown in Table 5.

Table 5
Mechanical properties of stainless steel bars.
| ε0.2 | σ0.2 | εu | σu | Es |
|---|---|---|---|---|
| 0.0061 | 605 MPa | 0.276 | 797 MPa | 151 GPa |

The constitutive model of the stainless steel adopts the Rasmussen model and the steel constitutive model in China's current standards [50]. The mathematical expression of the Rasmussen model is

$$\varepsilon = \begin{cases} \dfrac{\sigma_s}{E_0} + \varepsilon_{py}\left(\dfrac{\sigma_s}{\sigma_{0.2}}\right)^n, & \varepsilon \le \varepsilon_{0.2}, \\[2mm] \dfrac{\sigma_s - \sigma_{0.2}}{E_{0.2}} + \varepsilon_u\left(\dfrac{\sigma_s - \sigma_{0.2}}{\sigma_u - \sigma_{0.2}}\right)^m + \varepsilon_{0.2}, & \varepsilon_{0.2} < \varepsilon \le \varepsilon_u, \end{cases} \tag{3}$$

where $E_0$ is the initial elastic modulus; $\varepsilon_{py}$ is the residual strain, with $\varepsilon_{py} = 0.002$; $n = \ln 20 / \ln(\sigma_{0.2}/\sigma_{0.01})$, where $\sigma_{0.01}$ is the stress corresponding to a 0.01% residual strain of the steel bar; $E_{0.2}$ is the slope of the tangent at the nominal yield point; $m = 1 + 3.5\,\sigma_{0.2}/\sigma_u$; and $\varepsilon_u = 1 - \sigma_{0.2}/\sigma_u$.

The mathematical expression of the steel bar constitutive model in China's current code is

$$\sigma = \begin{cases} E_0\varepsilon, & \varepsilon \le \varepsilon_{0.2}, \\ \sigma_{0.2} + k(\varepsilon - \varepsilon_{0.2}), & \varepsilon_{0.2} < \varepsilon \le \varepsilon_u, \\ 0, & \varepsilon > \varepsilon_u, \end{cases} \tag{4}$$

where $\sigma$ is the steel bar stress, $E_0$ is the initial elastic modulus of the steel, $\varepsilon$ is the steel bar strain, $\sigma_{0.2}$ is the nominal yield stress of the stainless steel bar, $\varepsilon_{0.2}$ is the strain corresponding to the nominal yield stress, $\varepsilon_u$ is the peak strain corresponding to the ultimate strength of the steel, and $k = (\sigma_u - \sigma_{0.2})/(\varepsilon_u - \varepsilon_{0.2})$ is the slope of the hardening branch.

From the force equilibrium of the failure section of the stainless steel reinforced eccentric compression members and moment equilibrium about the centroid of the tension steel bars, the bearing capacity formula for the rectangular section of SSRC eccentric compression columns can be obtained [43]:

$$N = f_c b x + \sigma_s' A_s' - \sigma_s A_s, \qquad Ne = f_c b x\left(h_0 - \frac{x}{2}\right) + \sigma_s' A_s'(h_0 - a_s'). \tag{5}$$

The theoretical ultimate bearing capacities of the SSRC eccentric compression specimens under the two constitutive models are given in Table 6.

Table 6
Comparison of the calculated and test values of ultimate bearing capacity under the two constitutive models.
| Specimen number | Rasmussen model Nu (kN) | Standard model Nu (kN) | Test value Nu (kN) | Failure pattern |
|---|---|---|---|---|
| A-D1 | 249.51 | 263.15 | 242.2 | Large eccentricity |
| A-D4 | 856.08 | 850.84 | 807.2 | Small eccentricity |
| B-D1 | 337.02 | 284.23 | 302.1 | Large eccentricity |
| B-D2 | 438.28 | 415.78 | 440.5 | Large eccentricity |
| B-D3 | 745.62 | 730.30 | 625.4 | Small eccentricity |
| B-D4 | 979.74 | 964.62 | 952.0 | Small eccentricity |
| C-D1 | 640.03 | 633.03 | 653.5 | Large eccentricity |
| C-D4 | 1093.04 | 1084.65 | 1032.0 | Small eccentricity |
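For readers who want to experiment with the two models, the hedged sketch below implements formulas (3) and (4) with the Table 5 constants. The 0.01% proof stress σ0.01 is not reported in the paper, so the value used to compute the exponent n is an assumption; E0.2 is taken as the tangent of the first branch at σ0.2.

```python
# Sketch of the two stainless steel constitutive models, formulas (3) and (4).
import math

E0, SIG02, SIGU, EPS02 = 151_000.0, 605.0, 797.0, 0.0061  # from Table 5 (MPa, -)
SIG001 = 480.0                                            # MPa, assumed value

N_EXP = math.log(20.0) / math.log(SIG02 / SIG001)  # n = ln20 / ln(s0.2/s0.01)
M_EXP = 1.0 + 3.5 * SIG02 / SIGU                   # m = 1 + 3.5*s0.2/su
EPSU = 1.0 - SIG02 / SIGU                          # eps_u per the model (0.24,
                                                   # slightly below the measured 0.276)
E02 = E0 / (1.0 + 0.002 * N_EXP * E0 / SIG02)      # tangent of branch 1 at s0.2


def rasmussen_strain(sig):
    """Formula (3): total strain at stress sig (MPa), Rasmussen model."""
    if sig <= SIG02:
        return sig / E0 + 0.002 * (sig / SIG02) ** N_EXP
    return ((sig - SIG02) / E02
            + EPSU * ((sig - SIG02) / (SIGU - SIG02)) ** M_EXP
            + EPS02)


K = (SIGU - SIG02) / (EPSU - EPS02)                # hardening slope in (4)


def code_stress(eps):
    """Formula (4): bilinear model of the Chinese code, stress from strain."""
    if eps <= EPS02:
        return E0 * eps
    if eps <= EPSU:
        return SIG02 + K * (eps - EPS02)
    return 0.0


print(f"strain at nominal yield: {rasmussen_strain(SIG02):.4f}")
print(f"stress at eps = 0.003:   {code_stress(0.003):.0f} MPa")
```

Note that inverting the Rasmussen expression (stress from strain) requires a numerical root find, which is one reason the simpler bilinear code model is attractive for design calculations.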
### 4.3. Comparative Analyses
In this experiment, bearing capacity tests were carried out on eight SSRC eccentric compression columns and the experimental values were compared with the theoretical values calculated under the two constitutive models; the comparison is shown in Table 6. The measured ultimate bearing capacities agree reasonably well with the predictions of both stainless steel constitutive models. Given its simplicity, the steel constitutive model in China's current code is adopted for the calculation. However, to make the calculated load-bearing capacity of SSRC eccentric compression members more accurate while retaining a safety reserve, the code model is revised using formulas (6) and (7); the revised results are shown in Table 7. As Table 7 indicates, the error between the theoretical and test values of bearing capacity under the modified model is small and within a reasonable range, so formulas (6) and (7) can be used to modify the standard model.

Table 7
Comparison of the calculated and test values of load-bearing capacity under the revised standard model.

| Specimen number | Revised specification model Nu (kN) | Test value Nu (kN) | Calculated-to-test ratio |
|---|---|---|---|
| A-D1 | 263.15 | 242.2 | 1.0865 |
| A-D4 | 805.97 | 807.2 | 0.9985 |
| B-D1 | 284.23 | 302.1 | 0.9408 |
| B-D2 | 415.78 | 440.5 | 0.9439 |
| B-D3 | 684.01 | 625.4 | 1.0937 |
| B-D4 | 915.27 | 952.0 | 0.9614 |
| C-D1 | 633.03 | 653.5 | 0.9687 |
| C-D4 | 1025.77 | 1032.0 | 0.9940 |

For small eccentric compression members, the concrete strength used in the bearing capacity calculation is revised as in formula (6); substituting (6) into formula (5) gives the theoretical bearing capacities of the small eccentric compression specimens listed in Table 7:

$$f_c = 0.76 f_{cu} - \alpha. \tag{6}$$

For large eccentric compression members, the bearing capacity is calculated from formula (7), which yields the theoretical values for the SSRC columns under large eccentric compression listed in Table 7:

$$N = f_c b x + \beta \sigma_s' A_s' - \sigma_s A_s, \qquad Ne = f_c b x\left(h_0 - \frac{x}{2}\right) + \sigma_s' A_s'(h_0 - a_s'). \tag{7}$$

Here, $\alpha = 1.5$ and $\beta = 0.8$.
## 4.1. Verification of Plane Section Assumption
Figure11 shows the distribution of strain in the mid-span section along the height under various loads. It can be seen that the strain distribution along the height of the column in the mid-span section of the specimen is uniform and mostly linear, which meets the requirements of plane section assumption [49].Figure 11
Strain-load relationship diagram of concrete.
(a)(b)(c)(d)
## 4.2. Calculation of Ultimate Bearing Capacity
The mechanical properties of stainless steel bars are shown in Table5.Table 5
Mechanical properties of stainless steel bars.
ε0.2σ0.2εuσuEs0.0061605 MPa0.276797 MPa151 GPaThe constitutive model of stainless steel adopts the Rasmussen model and the constitutive model of steel according to China's current standards [50].The mathematical expression of the Rasmussen model is(3)ε=σsE0+εpyσsσ0.2n,ε≤ε0.2,σs−σ0.2E0.2+εuσm−σ0.2σu−σ0.2m,ε0.2<ε≤εu,where E0 is the initial elastic modulus; εpy is the residual strain, and εpy = 0.002; n=ln20/lnσ0.2/σ0.01, and σ0.01 is the point corresponding to the 0.01% residual strain of the steel bar; E0.2 is the slope of the tangent line at the nominal yield point; m=1+3.5σ0.2/σu; and εu=1−σ0.2/σu.The mathematical expression of the steel bar constitutive model in China's current code is(4)σ=E0ε,ε≤ε0.2,σ0.2+kε−ε0.2,ε0.2<ε≤εu,0,ε>εu,where σ is the steel bar stress, E0 is the initial elastic modulus of the steel, ε is the steel bar strain, σ0.2 is the nominal yield stress of the stainless steel bar, ε0.2 is the strain corresponding to the nominal yield stress of the steel bar, εu is the peak strain corresponding to the ultimate strength of the steel, and k is the slope of the hardened section of the steels, where k=σu−σ0.2/εu−ε0.2.According to the force balance of the damaged section in the stainless steel reinforcement eccentric compression members and the momentary balance of the center point in the tension steel bars, the calculation formula for the bearing capacity of the rectangular section of the SSRC eccentric compression columns can be obtained [43]:(5)N=fcbx+σs′As′−σsAs,Ne=fcbxh0−x2+σs′As′h0−as′.The theoretical value of the ultimate bearing capacity of stainless steel bar concrete eccentric pressure specimens under two constitutive models can also be obtained, as demonstrated in Table6.Table 6
Comparison of the theoretical and test values of ultimate bearing capacity under the two constitutive models.
| Specimen number | Rasmussen model ($N_u$/kN) | Standard model ($N_u$/kN) | Test value ($N_u$/kN) | Failure pattern |
|---|---|---|---|---|
| A-D1 | 249.51 | 263.15 | 242.2 | Large eccentricity |
| A-D4 | 856.08 | 850.84 | 807.2 | Small eccentricity |
| B-D1 | 337.02 | 284.23 | 302.1 | Large eccentricity |
| B-D2 | 438.28 | 415.78 | 440.5 | Large eccentricity |
| B-D3 | 745.62 | 730.30 | 625.4 | Small eccentricity |
| B-D4 | 979.74 | 964.62 | 952.0 | Small eccentricity |
| C-D1 | 640.03 | 633.03 | 653.5 | Large eccentricity |
| C-D4 | 1093.04 | 1084.65 | 1032.0 | Small eccentricity |
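To illustrate how the two constitutive models enter the Table 6 calculations, the sketch below evaluates both stress-strain laws in Python. It is a minimal sketch, not the authors' code: the material constants are read from Table 5 as parsed above, the stress $\sigma_{0.01}$ at 0.01% residual strain is not reported in the paper and is assumed here, and the closed form used for $E_{0.2}$ is the one commonly quoted alongside the Rasmussen model.

```python
import numpy as np

# Material constants as parsed from Table 5 (treated as approximate).
E0 = 151e3        # initial elastic modulus, MPa (151 GPa)
sigma_02 = 605.0  # nominal yield stress sigma_0.2, MPa
sigma_u = 797.0   # ultimate strength, MPa
eps_02 = 0.0061   # strain at the nominal yield stress
eps_u = 0.276     # peak strain at the ultimate strength

def stress_code_model(eps):
    """Code model of equation (4): elastic up to the nominal yield point,
    then linear hardening up to eps_u, zero beyond the peak strain."""
    k = (sigma_u - sigma_02) / (eps_u - eps_02)  # hardening slope
    if eps <= eps_02:
        return E0 * eps
    if eps <= eps_u:
        return sigma_02 + k * (eps - eps_02)
    return 0.0

def strain_rasmussen(sigma, sigma_001=0.65 * sigma_02):
    """Rasmussen model of equation (3): strain as a function of stress.
    sigma_001 is an assumed value; the paper does not report it."""
    n = np.log(20.0) / np.log(sigma_02 / sigma_001)
    if sigma <= sigma_02:
        return sigma / E0 + 0.002 * (sigma / sigma_02) ** n
    # commonly used closed form for the tangent modulus at the yield point
    E02 = E0 / (1.0 + 0.002 * n * E0 / sigma_02)
    m = 1.0 + 3.5 * sigma_02 / sigma_u
    eu = 1.0 - sigma_02 / sigma_u
    return ((sigma - sigma_02) / E02
            + eu * ((sigma - sigma_02) / (sigma_u - sigma_02)) ** m)

print(stress_code_model(0.004))  # elastic branch: ~604 MPa
print(strain_rasmussen(700.0))   # strain on the hardening branch
```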
## 4.3. Comparative Analyses
In this experiment, eight eccentric compression tests on stainless steel bar reinforced concrete columns were carried out, and the experimental ultimate bearing capacities were compared with the theoretical values; the comparison is given in Table 6. It shows that the measured ultimate bearing capacities of the eccentrically loaded specimens are close to the theoretical values computed with either stainless steel bar constitutive model. For convenience of calculation and verification, the steel constitutive model in China's current code is adopted for the bearing capacity calculation of stainless steel bar eccentric compression members. The code model is then revised so that the calculated results agree more closely with the tests while providing a safety reserve. Formulas (6) and (7) are used to revise the standard model, and the revised results are shown in Table 7. It can be seen in Table 7 that the error between the theoretical values of the modified model and the test values of the eccentric compression specimens' bearing capacity is small and within a reasonable range, so formulas (6) and (7) can be used to modify the standard model.Table 7
Comparison of the calculated and test values of bearing capacity for the revised standard model.
| Specimen number | Revised specification model ($N_u$/kN) | Test value ($N_u$/kN) | Ratio of calculated to test value |
|---|---|---|---|
| A-D1 | 263.15 | 242.2 | 1.0865 |
| A-D4 | 805.97 | 807.2 | 0.9985 |
| B-D1 | 284.23 | 302.1 | 0.9408 |
| B-D2 | 415.78 | 440.5 | 0.9439 |
| B-D3 | 684.01 | 625.4 | 1.0937 |
| B-D4 | 915.27 | 952.0 | 0.9614 |
| C-D1 | 633.03 | 653.5 | 1.7958 |
| C-D4 | 1025.77 | 1032.0 | 0.9940 |

For small eccentric compression members, the concrete strength used in the bearing capacity calculation is revised as in (6); substituting (6) into (5) gives the theoretical bearing capacity of the small eccentric compression specimens listed in Table 7.

(6)
$$f_c=0.76f_{cu}-\alpha.$$

For large eccentric compression members, the following formulas are used to obtain the theoretical bearing capacity of SSRC columns under large eccentric compression; the results are likewise given in Table 7.

(7)
$$N=f_c b x+\beta\sigma_s' A_s'-\sigma_s A_s,\qquad Ne=f_c b x\left(h_0-\frac{x}{2}\right)+\sigma_s' A_s'\left(h_0-a_s'\right).$$

Here, $\alpha=1.5$ and $\beta=0.8$.
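As a worked illustration of the revised model, the sketch below solves formulas (6) and (7) for a large eccentric compression member by bisection on the neutral-axis depth x. It is a minimal sketch under stated assumptions: all section dimensions, material values, steel stresses, and the eccentricity e (taken here to the tension steel centroid) are hypothetical example inputs rather than the paper's specimen data, and the bisection solver is our own convenience, not part of the paper.

```python
ALPHA, BETA = 1.5, 0.8  # revision coefficients alpha and beta from the paper

def revised_capacity_large_ecc(b, h0, as_prime, fcu, As, As_prime,
                               sigma_s, sigma_s_prime, e):
    """Solve formulas (6)-(7) for the neutral-axis depth x, then return N.
    A bisection on x is used because N appears in both equilibrium equations."""
    fc = 0.76 * fcu - ALPHA  # revised concrete strength, formula (6)

    def residual(x):
        # axial force equilibrium, first part of formula (7)
        N = fc * b * x + BETA * sigma_s_prime * As_prime - sigma_s * As
        # moment equilibrium about the tension steel, second part of formula (7)
        M = fc * b * x * (h0 - x / 2.0) + sigma_s_prime * As_prime * (h0 - as_prime)
        return N * e - M

    lo, hi = 1e-6, h0  # the neutral axis must lie within the effective depth
    for _ in range(100):  # plain bisection
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    x = 0.5 * (lo + hi)
    return fc * b * x + BETA * sigma_s_prime * As_prime - sigma_s * As

# Hypothetical example: 250 x 400 mm section (h0 = 360 mm), fcu = 40 MPa,
# symmetric reinforcement of 628 mm^2 per layer at the nominal yield stress.
N_u = revised_capacity_large_ecc(b=250.0, h0=360.0, as_prime=40.0, fcu=40.0,
                                 As=628.0, As_prime=628.0,
                                 sigma_s=605.0, sigma_s_prime=605.0, e=450.0)
print(f"estimated ultimate capacity N_u ≈ {N_u / 1e3:.0f} kN")
```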
## 5. Conclusions
In this paper, eight stainless steel bar reinforced concrete compression members were fabricated and tested. Based on the test results, the mechanical properties of the SSRC columns are analyzed, a calculation formula for the bearing capacity of SSRC eccentric compression members is proposed, and the factors affecting the bearing capacity of the SSRC column are examined. The main conclusions are as follows:(1)
The failure mode of an SSRC eccentric compression column in the ultimate state is the same as that of an ordinary reinforced concrete column. When a large eccentric compression member fails, the concrete in the compression zone is crushed; the stainless steel bars in the compression zone do not yield, whereas those in the tension zone do. The lateral deflection of the specimen is relatively large, and the specimen displays the characteristics of ductile failure. When a small eccentric compression member fails, the stainless steel bars near the loading point yield, while those farthest from the loading point do not. The lateral deflection of the specimen is relatively small, which indicates a brittle failure.(2)
The load-deflection curve of the stainless steel bar eccentric compression column can be broadly divided into three stages: a linear elastic stage, a nonlinear ascending stage, and a descending stage. Compared with small eccentricity, the descending branch of the large eccentricity compression member is relatively gentle and the ductility of the specimen is better. Under large eccentric compression, at the same load, the smaller the reinforcement ratio, the greater the deflection. Under small eccentric compression, the reinforcement ratio of the longitudinal stainless steel bars has little effect on the deflection of the specimens. When other conditions are comparable, the deflection corresponding to the ultimate load of the SSRC column rises with increasing eccentricity and the load-deflection curve becomes gentler.(3)
The strain distribution over the mid-span section of the SSRC column under small eccentric compression is consistent with the plane section assumption along the section height, so the column can be analyzed theoretically according to the plane section assumption.(4)
The Rasmussen model and the model in China's current specification are the two most commonly used constitutive models for stainless steel reinforcement. The specification model has a simple formula and is easy to calculate, which makes it convenient to modify. Comparison of the revised calculations with the test results indicates that the correction formulas can be used in the bearing capacity calculation of stainless steel reinforced concrete eccentric compression columns, and the calculated results are representative of the test values.Although this paper has investigated the mechanical properties of SSRC eccentric compression members in depth, because of the complexity of SSRC structures and the scatter inherent in concrete material tests, several problems remain to be solved: (1) exploring the size effect of the concrete structure on SSRC, to establish whether the current theories can be applied to large-volume SSRC construction; (2) identifying the type of concrete structure in which the strength of stainless steel bars can be exploited to the greatest extent; and (3) carrying out experimental research on the flexural, shear, and fatigue resistance of stainless steel reinforced concrete structures. For the application and development of stainless steel bars in practical engineering, more comprehensive research and comparison are needed.
---
*Source: 1013188-2021-12-01.xml* | 2021 |
# A Novel Technique for Speech Recognition and Visualization Based Mobile Application to Support Two-Way Communication between Deaf-Mute and Normal Peoples
**Authors:** Kanwal Yousaf; Zahid Mehmood; Tanzila Saba; Amjad Rehman; Muhammad Rashid; Muhammad Altaf; Zhang Shuguang
**Journal:** Wireless Communications and Mobile Computing
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1013234
---
## Abstract
Mobile technology is growing very fast and is incredible, yet there has been little technological development and improvement for Deaf-mute people. Existing mobile applications use sign language as the only option for communication with them. Before our article, no application (app) that uses the disrupted speech of Deaf-mutes for the purpose of social connectivity existed in the mobile market. The proposed application, named vocalizer to mute (V2M), uses automatic speech recognition (ASR) methodology to recognize the speech of a Deaf-mute person and convert it into a form of speech recognizable by a normal person. In this work, mel frequency cepstral coefficient (MFCC) based features are extracted for each training and testing sample of Deaf-mute speech. The hidden Markov model toolkit (HTK) is used for the process of speech recognition. The application is also integrated with a 3D avatar for providing visualization support: the avatar is responsible for performing the sign language on behalf of a person with no awareness of Deaf-mute culture. The prototype application was piloted in a social welfare institute for Deaf-mute children. The participants were 15 children aged between 7 and 13 years. The experimental results show that the accuracy of the proposed application is 97.9%. The quantitative and qualitative analysis of the results also revealed that the face-to-face socialization of Deaf-mutes is improved by the intervention of mobile technology. The participants also suggested that the proposed mobile application can act as a voice for them and that they can socialize with friends and family by using this app.
---
## Body
## 1. Introduction
Historically the term deaf-mute referred to a person who was either deaf and used sign language as a source of communication or was both deaf and unable to speak. This term continues to be used to refer to a person who is deaf but has some degree of speaking ability [1]. In the deaf community, the word deaf is spelled in two separate ways. The small "d" deaf represents a person's level of hearing through audiology, without association with the other members of the deaf community, whereas the capital "D" Deaf indicates culturally Deaf people who use sign language for communication [2].According to the world federation of the deaf (WFD), over 5% of the world's population (≈360 million people) has disabling hearing loss, including 328 million adults and 32 million children [3]. The degree of hearing loss is categorized into mild, moderate, severe, or profound levels [4]. Hearing loss has a direct impact on a person's speech and language development. People with severe or profound hearing loss have higher voice handicap index (VHI) scores than those who suffer from mild hearing loss [5]. A person with mild hearing loss has fewer problems in speech development, as he/she might not be able to hear certain sounds but speech clarity is not affected that much. A person with severe or profound hearing loss can have severe problems in speech development and usually relies on sign language as a source of communication.Deaf people face many irritations and frustrations that limit their ability to do everyday tasks. Research indicated [6] that Deaf people, especially Deaf children, have high rates of behavioral and emotional issues in relation to different methods of communication. Most people with such disabilities become introverted and resist social connectivity and face-to-face socialization. The inability to speak with family and friends can cause low self-esteem and may result in the social isolation of the Deaf person. Not only do they lack social interactions, but communication is also a major barrier to Deaf-mute healthcare [7]. In such conditions, it becomes difficult for a caretaker to interact with the deaf person.Different medical treatments are available for the deaf community to get rid of deafness, but these treatments are expensive [8]. A 2017 report of the world health organization (WHO) [9] states that there are different types of costs associated with hearing loss, which are as follows: (1) direct costs: the costs associated with hearing loss incurred by healthcare systems, as well as education support for such children; (2) indirect costs: the loss of productivity, usually referring to the cost of an individual being unable to contribute to the economy; and (3) intangible costs: the stigma experienced by families affected by hearing loss. This report concludes that unaddressed hearing loss poses substantial costs to the healthcare system and to the economy as a whole.Many communication channels are available through which Deaf-mute people can deliver their messages, e.g., notes, helper pages, sign language, books with letters, lip reading, and gestures. Despite these channels, many problems are encountered by Deaf-mutes and normal people during communication. The problem is not confined to the Deaf-mute person who is unable to hear or speak; another problem is the lack of awareness of Deaf culture among normal people.
The majority of hearing people have either no or little knowledge or experience of sign language [10]. There are also more than 300 sign languages, and it is hard for a normal person to understand and become used to these languages [11]. The above-mentioned problems can be solved by involving assistive technology, as it can be used as an interpreter for converting sign languages into text or speech for better communication between the Deaf community and hearing individuals [12]. Other technologies, such as speech technologies, can assist in different ways to help people with hearing loss by improving their autonomy [13]. A common example of speech technology is speech recognition, also termed automatic speech recognition (ASR). It is the process of converting a speech signal into a sequence of words with the help of an algorithm [14]. The ASR process comprises three steps: (1) feature extraction, (2) acoustic model generation, and (3) recognition [15, 16]. For feature extraction, MFCC is the most commonly used technique [17, 18]. The success of MFCC makes it the standard choice in state-of-the-art speech recognizers such as HTK [19].The main purpose of this research paper is to use mobile-based assistive technology to provide a simple and cost-effective solution for Deaf-mutes with little or complete speech development. The proposed system uses an HTK-based speech recognizer to identify the speech of Deaf-mutes and provides a communication platform for them. The next two sections explain the related work and the proposed methodology of our system. Section 4 states the experimental setup and results of the proposed system.
## 2. Related Work
The Deaf community is not a monolithic group; it has a diversity of groups which are as follows [20, 21]:(1)
Hard-of-hearing people: they are neither fully deaf nor fully hearing, also known as culturally marginal people [22]. They can obtain some useful linguistic information from speech.(2)
Culturally deaf people: they might belong to deaf families and use sign language as the primary source of communication. Their voice (speech clarity) may be disrupted.(3)
Congenital or prelingual deaf people: they are deaf from birth or became deaf before they learned to talk, and they are not affiliated with Deaf culture. They may or may not use sign language based communication.(4)
Orally educated or postlingual deaf people: they were deafened in childhood but have developed speaking skills.(5)
Late-deafened adults: they have had the opportunity to adjust their communication techniques as their hearing loss progressed.Each group of the Deaf community has a different degree of hearing loss and uses a different source of communication. Table 1 details the Deaf community groups along with their degree of hearing loss and their source of communication with others.Table 1
Mapping of Deaf community groups with a degree of hearing loss and communication source [3, 20, 21].
| Deaf community group | Degree of hearing loss | Communication source |
|---|---|---|
| Hard-of-hearing people | Mild to severe | Speech/sign language |
| Culturally Deaf people | Profound | Sign language |
| Congenital or prelingual deaf people | Profound | Sign language |
| Orally educated or postlingual deaf people | Severe to profound | Speech/sign language |
| Late-deafened adults | Moderate to profound | Speech/sign language |

Hearing loss or deafness has a direct impact on communication, educational achievement, and social interaction [23]. Lack of knowledge about Deaf culture is documented in society as well as in healthcare environments [24]. Kuenburg et al. also indicated that there are significant challenges in communication between healthcare professionals and Deaf people [25]. Improvement in healthcare access among Deaf people is possible by providing sign language supported visual communication and by implementing communication technologies for healthcare professionals. Some of the implemented technology-based approaches for facilitating Deaf-mutes with easy-to-use services are as follows.
### 2.1. Sensor-Based Technology Approach
Sensor-based assistance can be used to solve the social problems of Deaf-mutes by bridging the communication gap. Sharma et al. used wearable sensor gloves for detecting the hand gestures of sign language [26]. In this system, flex sensors were used to record the sign language and to sense the environment. The hand gesture of a person activates the glove, and the flex sensors on the glove convert those gestures into electrical signals. The signals are then matched against the database, converted into the corresponding speech, and displayed on an LCD. A cost-effective sensor-based communication device [27] was also suggested for Deaf-mute people to communicate with a doctor. This experiment used a 32-bit microcontroller, an LCD to display the input/output, and a processing unit. The LCD displays different hand sign language pictures to the user. The user selects the relevant pictures to describe the illness symptoms. These pictures are then converted into patterns and paired with words to make sentences. Vijayalakshmi and Aarthi used flex sensors on a glove for gesture recognition [28]. The system was developed to recognize the words of American Sign Language (ASL). The text output obtained from the sensor-based system is converted into speech by using the popular speech synthesis technique of the hidden Markov model (HMM). An HMM-based text-to-speech synthesizer (HTS) was attached to the system for converting the text obtained from the hand gestures into speech. The HTS system involved a training phase for the extraction of spectral and excitation parameters from the collected speech data, modeled by context-dependent HMMs. The synthesis phase of the HTS system constructed the HMM sequence by concatenating context-dependent HMMs. Similarly, Arif et al. used five flex sensors on a glove to translate ASL gestures of Deaf-mutes into visual and audio output on an LCD [29].
### 2.2. Vision-Based Technology Approach
Many vision-based technology interventions are used to recognize the sign languages of Deaf people. For example, Soltani et al. developed a gesture-based game for Deaf-mutes using Microsoft Kinect, which recognizes the gesture command and converts it into text so that they can enjoy an interactive environment [7]. The voice for the mute (VOM) system was developed to take input in the form of fingerspelling and convert it into the corresponding speech [30]. The images of fingerspelling signs are retrieved from the camera. After noise removal and image processing, the fingerspelling signs are matched against the trained dataset. The processed signs are linked to the appropriate text, which is then converted into the required speech. Nagori and Malode [31] proposed a communication platform that extracts images from video and converts these images into the corresponding speech. Sood and Mishra [32] presented a system that takes images of sign language as input and produces speech as output. The features used in vision-based approaches for speech processing are also used in different object recognition based applications [33–39].
### 2.3. Smartphone-Based Technology Approach
Smartphone technology plays a vital role in helping people with impairments to interact socially and to overcome their communication barriers. The smartphone-based approach is more portable and effective compared with sensor- or vision-based technology. Many new smartphones are furnished with advanced sensors, fast processors, and high-resolution cameras [40]. A real-time emergency assistant, “iHelp” [41], was proposed for Deaf-mute people so that they can report any kind of emergency situation. The current location of the user is accessed through the built-in GPS system of the smartphone. The information about the emergency situation is sent to the management through SMS and then passed on to the closest suitable rescue units, so the user can be rescued through the use of iHelp. MonoVoix [42] is an Android application that acts as a sign language interpreter. It captures the signs from a mobile phone camera and then converts them into the corresponding speech. Ear Hear [43] is an Android application for Deaf-mute people that uses sign language to communicate with normal people. Both speech-to-sign and sign-to-speech technologies are used: for a hearing person to interact with a Deaf-mute, the speech signal is taken as input and a corresponding sign language video is played, through which the mute person can easily understand. Bragg et al. [44] proposed a sound detector app that detects red-alert sounds and alerts the deaf-mute person by vibrating and showing a popup notification.
## 3. Proposed Methodology
Nowadays many technology devices, such as smartphones, prefer speech interfaces over visual ones. The research in [49] highlighted that off-the-shelf speech recognition systems cannot be used to detect the speech of deaf or hearing-loss people, as these systems have a high word error rate on such speech. That research recommended using human-based computation to recognize deaf speech and using text-to-speech functionality for speech generation. In this regard, we proposed and developed an Android-based application named vocalizer to mute (V2M). The proposed application acts as an interpreter and encourages two-way communication between a Deaf-mute and a normal person. We refer to a normal person as one who has no hearing or vocal impairment or disability. The main features of the proposed application are listed below.
### 3.1. Normal to Deaf-Mute Person Communication
This module takes the text or spoken message of a normal person as input and outputs a 3D avatar that performs sign language for the Deaf-mute person. ASL-based animations of the avatar are stored in a central database of the application, and each animation file is given 2–5 tags. The steps of normal to Deaf-mute person communication are as follows:(1)
The application takes the text/speech of the normal person as input.(2)
The application converts the speech message of the normal person into text by using the Google Cloud Speech Application Program Interface (API), as this API detects normal speech better than Deaf persons' speech.(3)
The application matches the text against the tags associated with the animation files and displays the avatar performing the corresponding sign for the Deaf-mute person.
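The tag lookup in step (3) amounts to a keyword match between the recognized text and the tags stored with each animation file. A minimal sketch of this idea follows; the animation file names and tags are invented for illustration and are not the app's actual data.

```python
from typing import Optional

# Hypothetical tag database: each ASL animation file carries 2-5 tags,
# mirroring the scheme described above. Entries here are invented examples.
ANIMATION_TAGS = {
    "greet.anim": ["hello", "hi", "good morning"],
    "thanks.anim": ["thank you", "thanks"],
    "luck.anim": ["good luck"],
}

def find_animation(recognized_text: str) -> Optional[str]:
    """Return the first animation whose tag occurs in the recognized text."""
    text = recognized_text.lower()
    for animation, tags in ANIMATION_TAGS.items():
        if any(tag in text for tag in tags):
            return animation
    return None  # no matching sign; the app could fall back to plain text

print(find_animation("Hello, how are you?"))  # -> greet.anim
```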
### 3.2. Deaf-Mute to Normal Person Communication
Not everyone has knowledge of sign language, so the proposed application uses the disrupted speech of a Deaf-mute person. This disrupted form of speech is converted into a recognizable speech format by using a speech recognition system. HMM-based speech recognition is a growing technology, as evidenced by rapidly increasing commercial deployment, and its performance has already reached a level that can support viable applications [50]. For this purpose, HTK [51] is used for developing the speech recognition system, as this toolkit is primarily designed for building HMM-based speech recognizers.
#### 3.2.1. Speech Recognition System Using HTK
The ASR system is implemented using HTK version 3.4.1. The speech recognition process in HTK follows four steps to obtain the recognized speech of the Deaf-mute: training corpus preparation, feature extraction, acoustic model generation, and recognition, as illustrated in Figure 1.Figure 1
Speech recognition process using MFCC and HTK.(a) Training Corpus Preparation. The training corpus consists of recordings of speech samples obtained from Deaf-mutes in .wav format. The corpus contains spoken English alphabets (A–Z), English digits (0 to 9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, and thank you. The utterances of one participant are separated from those of the others because of the variance in speech clarity among Deaf-mute people. The training utterances of each participant are labeled in a simple text file (.lab). This file is used in the acoustic model generation phase of the system.(b) Acoustic Analysis. The purpose of the acoustic analysis is to convert the speech sample (.wav) into a format suitable for the recognition process. The proposed application uses the MFCC approach for acoustic analysis. MFCC is a standard feature extraction technique in speech recognition [52]. The main advantages of using MFCC are (1) low complexity and (2) good performance with high recognition accuracy [53]. The overall working of MFCC is illustrated in Figure 2 [19].Figure 2
Block diagram of MFCC feature extraction technique.The features of each step of MFCC are listed below.(1) Pre-Emphasis. The first step of MFCC feature extraction is to pass the speech signal through a filter. The pre-emphasis filter is a first-order high-pass filter responsible for boosting the higher frequencies of the speech signal:

(1)
$$x'(n)=x(n)-\alpha\,x(n-1),\qquad 0.9\le\alpha\le 1.0,$$

where $\alpha$ is the pre-emphasis coefficient, $x(n)$ is the input speech signal, and $x'(n)$ is the output speech signal with the high-pass filter applied to the input. Pre-emphasis is important because the high-frequency components of speech have small amplitude with respect to the low-frequency components [54]. Silent intervals are also removed in this step by using a logarithmic technique for separating and segmenting speech from noisy background environments [55].(2) Framing. Framing splits the pre-emphasized speech signal into short segments. The voice signal is represented by frames of N samples, with an interframe distance (frameshift) of M samples (M < N). In the proposed application, the frame size is N = 256 samples and the frameshift is M = 100 samples, which at the sampling rate used correspond to

(2)
$$\text{FrameSize}=25.6\ \text{ms},\qquad \text{FrameShift}=10\ \text{ms}.$$

(3) Windowing. The speech signal is a nonstationary signal, but it is stationary over very short periods of time. A window function is used to analyze the speech signal and extract the stationary portion of the signal. There are two common types of windowing: (i) the rectangular window and (ii) the Hamming window. The rectangular window cuts the signal off abruptly, so the proposed application uses the Hamming window, which shrinks the values towards zero at the boundaries of the speech frame. The Hamming window $w(n)$ is

(3)
$$w(n)=\begin{cases}0.54-0.46\cos\left(\dfrac{2\pi n}{N-1}\right), & 0\le n\le N-1,\\[1mm] 0, & \text{otherwise},\end{cases}$$

and the windowed frame at time t is calculated by

(4)
$$y_t(n)=w(n)\,s(n).$$

(4) Discrete Fourier Transform (DFT). The most efficient approach for computing the Discrete Fourier Transform is the Fast Fourier Transform algorithm, as it reduces the computational complexity from Θ(n²) to Θ(n log n). It converts the N discrete speech samples from the time domain to the frequency domain:

(5)
$$X_t(K)=\sum_{n=1}^{N}s(n)\,w(n)\,e^{-j2\pi Kn/N}=\sum_{n=1}^{N}y_t(n)\,e^{-j2\pi Kn/N},\qquad 1\le K\le k,$$

where $X_t(K)$ is the Fourier transform of $y_t(n)$ and k is the length of the DFT.(5) Mel-Filter Bank Processing. Human ears act as band-pass filters; i.e., they focus only on certain frequency bands and have less sensitivity at higher frequencies (roughly >1000 Hz). The mel, a unit of pitch, is defined so that pairs of sounds which are perceptually equidistant in pitch are separated by an equal number of mels [56]; it is calculated as

(6)
$$Y_t(m)=\text{mel}(f)=2595\log_{10}\left(1+\frac{f}{700}\right).$$

(6) Log. This step takes the logarithm of each of the mel-spectrum values. The human ear is less sensitive to slight differences in amplitude at high amplitudes than at low amplitudes, and the logarithm makes the frequency estimates less sensitive to slight variations in the input.(7) Discrete Cosine Transform (DCT). The log mel-spectrum is converted from the frequency domain back to the time domain using the DCT. The result of this conversion is the set of mel frequency cepstrum coefficients (MFCC) [57], calculated by

(7)
$$Y_t'(j)=\sum_{m=1}^{M}\log\left(Y_t(m)\right)\cos\left(j\left(m-0.5\right)\frac{\pi}{M}\right),\qquad j=1,\dots,J.$$

In the proposed methodology, J = 12, because a 12-dimensional feature vector is sufficient to represent the voice features of a frame [17]. The extraction of the cepstrum via the DCT results in 12 cepstral coefficients for each frame. These sets of coefficients are called acoustic vectors (.mfcc). Acoustic vector (.mfcc) files are produced for both the training and testing speech samples; HTK-HCopy is run to convert an input speech sample into acoustic vectors. The configuration parameters used for MFCC feature extraction of the speech samples are listed in Table 2.Table 2
Details of a configuration file (config.txt).
| Description | Parameter |
|---|---|
| Input source file format ($x(n)$) | SOURCEFORMAT = WAV |
| Output of speech sample | TARGETKIND = MFCC_0 |
| Pre-emphasis coefficient ($\alpha$) | PREEMCOEF = 0.97 |
| Frameshift (M) | TARGETRATE = 100000 |
| Window size | WINDOWSIZE = 250000 |
| Using Hamming window ($w(n)$) | USEHAMMING = T |
| No. of filter bank channels | NUMCHANS = 26 |
| No. of cepstral coefficients | NUMCEPS = 12 |
| Save the output file compressed | SAVECOMPRESSED = T |

(c) Acoustic Model Generation. This step provides a reference acoustic model against which comparisons are made to recognize the testing utterances. A prototype is used for the initialization of the first HMM and is generated for each word of the Deaf-mute dictionary. The HMM topology comprises 6 active states (observation functions) and two nonemitting states (the initial and final states, with no observation function), and this topology is used for all the HMMs. Single Gaussian observation functions with diagonal covariance matrices are used as observation functions; they are described by a mean vector and a variance vector in a text description file known as the prototype. This predefined prototype file, along with the acoustic vectors (.mfcc) of the training data and the associated labels (.lab), is used by the HTK tool HInit for the initialization of each HMM.(d) Recognition Phase. HTK provides a Viterbi word recognizer called HVite, which is used to transcribe a sequence of acoustic vectors into a sequence of words. HVite applies the Viterbi algorithm to the acoustic vectors of the MFCC model. The testing speech samples are prepared in the same way as the training corpus: in the testing phase, the speech sample is converted into a series of acoustic vectors (.mfcc) using the HTK-HCopy tool. These input acoustic vectors, along with the HMM list, the Deaf-mute pronunciation dictionary, and the language model (text labels), are taken as input by HVite to generate the recognized words.
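To make the pipeline of equations (1)-(7) concrete, the following is a minimal from-scratch sketch of MFCC extraction in Python/NumPy, not the HTK implementation used by the app. The 10 kHz sampling rate (inferred from 256 samples spanning 25.6 ms) and the triangular filter bank construction are assumptions; the parameter values otherwise mirror Table 2 where possible.

```python
import numpy as np

def mfcc_frames(signal, fs=10000, frame_len=256, frame_shift=100,
                n_filters=26, n_ceps=12, alpha=0.97):
    """Sketch of the MFCC pipeline of equations (1)-(7): pre-emphasis,
    framing, Hamming windowing, FFT magnitude spectrum, mel filter bank,
    log, and DCT. Returns one 12-dimensional acoustic vector per frame."""
    # (1) pre-emphasis: x'(n) = x(n) - alpha * x(n-1)
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])

    # (2) framing into overlapping frames of frame_len samples
    n_frames = 1 + (len(emphasized) - frame_len) // frame_shift
    idx = (np.arange(frame_len)[None, :]
           + frame_shift * np.arange(n_frames)[:, None])
    frames = emphasized[idx]

    # (3) Hamming window: w(n) = 0.54 - 0.46 cos(2 pi n / (N - 1))
    frames = frames * np.hamming(frame_len)

    # (4) magnitude spectrum via the FFT (real input -> rfft)
    spectrum = np.abs(np.fft.rfft(frames, n=frame_len))

    # (5) triangular mel filter bank, equally spaced on the mel scale
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bin_pts = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, frame_len // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bin_pts[i - 1], bin_pts[i], bin_pts[i + 1]
        for b in range(left, center):
            fbank[i - 1, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):
            fbank[i - 1, b] = (right - b) / max(right - center, 1)
    filter_energies = spectrum @ fbank.T

    # (6) log of the filter bank energies (small floor avoids log(0))
    log_energies = np.log(filter_energies + 1e-10)

    # (7) DCT, keeping the first n_ceps coefficients as in equation (7)
    m = np.arange(1, n_filters + 1)
    j = np.arange(1, n_ceps + 1)
    dct_basis = np.cos(np.outer(j, (m - 0.5)) * np.pi / n_filters)
    return log_energies @ dct_basis.T  # (n_frames, n_ceps) acoustic vectors

# usage on one second of synthetic audio
x = np.random.randn(10000)
print(mfcc_frames(x).shape)  # -> (98, 12)
```

In the actual system this conversion is performed by HTK-HCopy driven by the config.txt of Table 2.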
### 3.3. Messaging Service for Deaf-Mute and Normal Person
The application also provides a messaging feature to both Deaf-mute and normal people. A person can choose between an American Sign Language keyboard and an English keyboard for composing messages. The complete flowchart of V2M is illustrated in Figure 3.Figure 3
Flowchart of V2M application.
## 4. Experimental Results and Discussions
### 4.1. Experimental Setup
The proposed application V2M required a camera, a mobile phone for the installation of the V2M app, a laptop (acting as a server), and an instructor to guide the Deaf-mute students. The complete scenario is shown in Figure 4.Figure 4
Experimental setup: a participant performing the speech sample registration task.A total of 15 students from Al-Mudassir Special Education Complex Baharwal, Pakistan, participated in this experiment; the students were between 7 and 13 years of age and had received some speech training in school. The instructor guided all students in using the mobile application. The experiment consisted of two phases.
#### 4.1.1. Speech Testing Phase
In this phase, the instructor selected the “register voice” option from the app menu and entered a word/sentence or question (label) in the text field of the “register sample” dialog box, for which the training speech samples of the participants were taken (see Figure 5(b)). First, the instructor used sign language to ask the participants to speak a word/sentence or an answer. The system took 2 to 4 voice samples of each word/sentence. Whenever a participant registered his/her voice, the system acknowledged it with visual feedback (as in Figure 5(c)). For testing, the researcher asked questions via the V2M app, which displayed an avatar performing the sign language so that the Deaf-mute participant could understand the questions (see Figure 5(d)). In response, the participant selected the microphone icon (as shown in Figure 5(e)) to speak his/her answer. The app processed and compared the recorded speech sample with the registered samples. After the comparison, it returned the text and spoke out the participant's answer (see Figure 5(f)).Figure 5
The working of V2M. (a) Avatar greets deaf-mute person. (b) Instructor registers text sample to ask participant for speaking it. (c) Participant recorded his/her speech samples. (d) Avatar asks a question to the Deaf-mute person. (e) Participant recorded his/her answer and app is processing the speech signal. (f) V2M displays and speaks the answer after matching the speech signal. (g) Sign language-based message service.
#### 4.1.2. Message Activity Phase
The participants needed minimal support from the instructor in this phase. They easily composed and sent messages by selecting the sign language keyboard (see Figure 5(g)).
### 4.2. Qualitative Feedback
The researchers designed a questionnaire survey to evaluate the effectiveness of the application for Deaf-mutes. The survey comprised 12 questions, and it was kept short for two reasons: first, so as not to overwhelm the Deaf-mute students with longer interviews, and second, because these students had no experience of using any Deaf-mute oriented application. The qualitative feedback is summarized into the following categories (paraphrased from the feedback forms).Familiarity with Existing Mobile Apps. No participant had heard of or used any mobile application dedicated to Deaf-mutes.Ease of Use and Enjoyment. All participants enjoyed using the app. They liked the idea of using an avatar for performing sign language. Out of 15 students, 12 performed the given tasks quite easily; the remaining 3 had not used or interacted with mobile devices before and initially found the app difficult, but it became easier for them after the app's functions were demonstrated 2-3 times in front of them. Overall they found the app user-friendly and interactive.Application Interface. Participants liked the interface of the app. They learned the steps of the app quite fast, and they also liked the idea of the avatar performing a greeting gesture on the home screen.Source of Communication. All participants were using sign language as their primary source of communication. They recommended the intervention of the mobile application as a source of communication for them and acknowledged that the mobile app can be used to convey the message of a Deaf-mute to a normal person.
### 4.3. Results and Comparative Analysis
The application training and testing corpora are obtained from the speech samples of Deaf-mutes. The training corpus comprises spoken English alphabets (A–Z), English digits (0 to 9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, and thank you. All participants uttered each alphabet, digit, and statement 2–4 times, giving 2440 training utterances in total. The HTK speech recognizer, with HMM at its backend, was used for training and recognition. For testing, each participant was asked 10 questions to answer, giving a total of 390 testing utterances. The application recorded the answer (speech sample), processed it, and displayed the (text/speech) result for the normal person to understand. The accuracy of the simulation results of the proposed application is calculated using precision and recall. For the V2M app, precision is the fraction of correctly identified speech signals out of the total number of detected speech samples, whereas recall is the percentage of relevant results. Precision, recall, and accuracy are calculated using the following formulas:

(8)
$$\text{precision}=\frac{tp}{tp+fp},\qquad \text{recall}=\frac{tp}{tp+fn},\qquad \text{accuracy}=\frac{tp+tn}{tp+tn+fp+fn}.$$

A true positive (tp) is a word that was uttered by the person and detected by the system; a false positive (fp) is a word not uttered by the person but detected by the system; a false negative (fn) is a word uttered by the person but not detected by the system; a true negative (tn) is everything else.The experimental results of the proposed methodology in terms of precision, recall, and accuracy are illustrated in Table 3.Table 3
Precision and recall of proposed application with speech samples (N = 2, 3, and 4).
| Testing statement | Precision (N=2) | Recall (N=2) | Accuracy (N=2) | Precision (N=3) | Recall (N=3) | Accuracy (N=3) | Precision (N=4) | Recall (N=4) | Accuracy (N=4) |
|---|---|---|---|---|---|---|---|---|---|
| S:1 | 37.5% | 30% | 20% | 91.6% | 78.5% | 73.3% | 100% | 93.3% | 93.3% |
| S:2 | 62.5% | 41.6% | 33.3% | 85.7% | 92.4% | 80% | 100% | 100% | 100% |
| S:3 | 66.6% | 30.7% | 26.67% | 100% | 80% | 80% | 100% | 93.3% | 93.3% |
| S:4 | 60% | 54.54% | 40% | 100% | 86.6% | 86.67% | 92.8% | 100% | 100% |
| S:5 | 80% | 61% | 53.3% | 92.8% | 86.6% | 86.67% | 100% | 100% | 100% |
| S:6 | 57.1% | 33.3% | 26.67% | 100% | 73.3% | 73.3% | 100% | 93.3% | 93.3% |
| S:7 | 53.8% | 77.7% | 46.67% | 84.6% | 84.6% | 73.3% | 100% | 100% | 100% |
| S:8 | 45.45% | 55.5% | 33.3% | 100% | 80% | 80% | 100% | 100% | 100% |
| S:9 | 30% | 37.5% | 20% | 100% | 86.7% | 86.67% | 100% | 100% | 100% |
| S:10 | 75% | 46.1% | 40% | 76.9% | 83.3% | 66.67% | 93.3% | 100% | 100% |
| Average | 56.79% | 46.79% | 46.67% | 93.16% | 83.19% | 78.67% | 98.61% | 97.9% | 97.9% |

It is observed from Table 3 that the number of registered speech samples has a direct impact on the precision and recall of the application. The overall average precision is 56.79% and the recall is 46.79% when the registered sample count for each statement is 2 (N = 2) per participant, whereas the average precision is 93.16% and the recall is 83.19% for a registered sample count of 3 (N = 3). The average accuracy in terms of precision and recall exceeds 97% when the registered sample count for each statement is 4 (N = 4) per participant. The F1-score for the best precision and recall is calculated as:

$$F_1(N{=}4) = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} = \frac{2 \times 0.9861 \times 0.979}{0.9861 + 0.979} = 0.98 \tag{9}$$

Hence, it is deduced that the precision of the application decreases when only a limited number of speech samples (N ≤ 2) of the Deaf-mute is registered. The application performs best when the number of speech samples for each statement is greater than 2 (2 < N ≤ 4). The speech recognition methodology of the proposed application is compared with other speech recognition systems in Table 4.
Table 4
Comparison of proposed methodology with state-of-the-art ASR systems.
| ASR system | Methodology | Accuracy |
|---|---|---|
| Proposed methodology (N = 4) | MFCC + HTK (8-state HMM) | 97.9% |
| MSIAC (Liu et al., 2017) [15] | MFCC + GMM | Experiment 1: 89.50%; Experiment 2: 85.92% |
| TAMEEM V1.0 (Abushariah, 2017) [45] | MFCC + Sphinx 3 | 92.36% |
| Speaker identification system (Leu and Lin, 2017) [46] | MFCC + GMM | Experiment 1: 94.08%; Experiment 2: 84.88% |
| Telugu speech signals (Mannepalli et al., 2016) [47] | MFCC + GMM | 92% |
| Amazigh language (Elouahabi et al., 2016) [48] | MFCC + HTK (6-state HMM) | 80% |
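As a cross-check of the figures above, the following minimal Python sketch computes precision, recall, and accuracy as in (8) and the F1-score as in (9) from raw counts; it is an illustration of the formulas, not the authors' evaluation script. The N = 4 averages from Table 3 reproduce the reported F1-score.

```python
def precision(tp, fp):
    # Fraction of detected words that were actually uttered, as in (8).
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of uttered words that the system detected, as in (8).
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    # Fraction of all decisions (detections and rejections) that were correct.
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(p, r):
    # Harmonic mean of precision and recall, as in (9).
    return 2 * p * r / (p + r)

# Reproduce the F1-score for N = 4 from the Table 3 averages.
p, r = 0.9861, 0.979
print(round(f1_score(p, r), 2))  # -> 0.98
```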
## 5. Conclusion
Deaf people face many irritations and frustrations that limit their ability to do everyday tasks, and Deaf children have high rates of behavioral and emotional issues in relation to different methods of communication. The main inspiration behind the proposed application is to remove the communication barrier for Deaf-mutes, especially children. The app takes the speech or text input of a normal person and translates it into sign language via a 3D avatar, and it provides a speech recognition system for the distorted speech of Deaf-mutes. The speech recognition system uses the MFCC feature extraction technique to extract acoustic vectors from speech samples; the HTK toolkit then converts these acoustic vectors into recognizable words or sentences using a pronunciation dictionary and a language model. The application is able to recognize Deaf-mute speech samples of the English alphabet (A–Z), English digits (0–9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, thank you. It also provides a message service for both Deaf-mutes and normal people: Deaf-mutes can compose messages with a customized sign language keyboard, and the app can convert a received sign language message to text for a normal person. The proposed application was tested with 15 children aged between 7 and 13 years and achieved an accuracy of 97.9%. The qualitative feedback from the children also highlighted that it is easy for Deaf-mutes to adopt mobile technology and that the mobile app can be used to convey their messages to a normal person.
---
# A Novel Technique for Speech Recognition and Visualization Based Mobile Application to Support Two-Way Communication between Deaf-Mute and Normal Peoples

**Authors:** Kanwal Yousaf; Zahid Mehmood; Tanzila Saba; Amjad Rehman; Muhammad Rashid; Muhammad Altaf; Zhang Shuguang

**Journal:** Wireless Communications and Mobile Computing

(2018)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2018/1013234

---
## Abstract
Mobile technology is growing incredibly fast, yet there has been little technological development and improvement for Deaf-mute people. Existing mobile applications use sign language as the only option for communicating with them. Before this article, no application (app) that uses the disrupted speech of Deaf-mutes for the purpose of social connectivity existed in the mobile market. The proposed application, named vocalizer to mute (V2M), uses automatic speech recognition (ASR) methodology to recognize the speech of a Deaf-mute and convert it into a form recognizable by a normal person. In this work, mel frequency cepstral coefficient (MFCC) based features are extracted for each training and testing sample of Deaf-mute speech, and the hidden Markov model toolkit (HTK) is used for the process of speech recognition. The application is also integrated with a 3D avatar that provides visualization support: the avatar performs sign language on behalf of a person with no awareness of Deaf-mute culture. The prototype application was piloted in a social welfare institute for Deaf-mute children. The participants were 15 children aged between 7 and 13 years. The experimental results show that the accuracy of the proposed application is 97.9%. The quantitative and qualitative analysis of the results also revealed that face-to-face socialization of Deaf-mutes is improved by the intervention of mobile technology. The participants also suggested that the proposed mobile application can act as a voice for them and that they can socialize with friends and family by using it.
---
## Body
## 1. Introduction
Historically, the term deaf-mute referred to a person who was either deaf and used sign language as a source of communication or both deaf and unable to speak. The term continues to be used to refer to a person who is deaf but has some degree of speaking ability [1]. In the Deaf community, the word deaf is spelled in two separate ways: the small “d” deaf represents a person's level of hearing as measured by audiology, without association with the other members of the deaf community, whereas the capital “D” Deaf indicates culturally Deaf people who use sign language for communication [2].

According to the World Federation of the Deaf (WFD), over 5% of the world's population (≈360 million people) has disabling hearing loss, including 328 million adults and 32 million children [3]. The degree of hearing loss is categorized into mild, moderate, severe, or profound levels [4]. Hearing loss has a direct impact on a person's speech and language development. People with severe or profound hearing loss have higher voice handicap index (VHI) scores than those who suffer from mild hearing loss [5]. A person with mild hearing loss has fewer problems in speech development: he/she might not be able to hear certain sounds, but speech clarity is not affected that much. A person with severe or profound hearing loss can have severe problems in speech development and usually relies on sign language as a source of communication.

Deaf people face many irritations and frustrations that limit their ability to do everyday tasks. Research indicates [6] that Deaf people, especially Deaf children, have high rates of behavioral and emotional issues in relation to different methods of communication. Most people with such disabilities become introverts and resist social connectivity and face-to-face socialization. The inability to speak with family and friends can cause low self-esteem and may result in social isolation of the Deaf person. Not only do they lack social interactions, but communication is also a major barrier to Deaf-mute healthcare [7]. In such conditions, it becomes difficult for a caretaker to interact with the deaf person.

Different medical treatments are available for the deaf community to address deafness, but these treatments are expensive [8]. A 2017 report of the World Health Organization (WHO) [9] states that there are different types of costs associated with hearing loss: (1) direct costs, which include the costs incurred by healthcare systems as well as educational support for such children; (2) indirect costs, which cover the loss of productivity and usually refer to the cost of an individual being unable to contribute to the economy; and (3) intangible costs, which refer to the stigma experienced by families experiencing hearing loss. The report concludes that unaddressed hearing loss poses substantial costs to the healthcare system and to the economy as a whole.

Many communication channels are available through which Deaf-mute people can deliver their messages, e.g., notes, helper pages, sign language, books with letters, lip reading, and gestures. Despite these channels, Deaf-mutes and normal people encounter many problems during communication. The problem is not confined to the Deaf-mute person who is unable to hear or speak; another problem is the lack of awareness of Deaf culture among normal people.
The majority of hearing people have little or no knowledge or experience of sign language [10]. There are also more than 300 sign languages, and it is hard for a normal person to understand and become accustomed to them [11]. The above-mentioned problems can be addressed by involving assistive technology, which can serve as an interpreter converting sign languages into text or speech for better communication between the Deaf community and hearing individuals [12]. Other technologies, such as speech technologies, can assist people with hearing loss in different ways by improving their autonomy [13]. A common example of speech technology is speech recognition, also termed automatic speech recognition (ASR): the process of converting a speech signal into a sequence of words with the help of an algorithm [14]. The ASR process comprises three steps: (1) feature extraction, (2) acoustic model generation, and (3) the recognition phase [15, 16] (sketched schematically at the end of this section). For feature extraction, MFCC is the most commonly used technique [17, 18]. The success of MFCC makes it the standard choice in state-of-the-art speech recognizers such as HTK [19].

The main purpose of this research paper is to use mobile-based assistive technology to provide a simple and cost-effective solution for Deaf-mutes with partial or complete speech development. The proposed system uses an HTK-based speech recognizer to identify the speech of Deaf-mutes and provide a communication platform for them. The next two sections explain the related work and the proposed methodology of our system; Section 4 presents the experimental setup and results of the proposed system.
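As a schematic illustration of the three ASR steps enumerated above, the recognition flow can be pictured as a simple function pipeline. The sketch below is a toy illustration only; the stage implementations are placeholders, not the MFCC/HTK system described in Section 3.

```python
# Schematic three-stage ASR pipeline; every function body here is a toy
# stand-in for the real stage (MFCC extraction, HMM scoring, decoding).
def extract_features(audio):
    # Stage 1: feature extraction (stand-in for MFCC vectors per frame).
    return [sum(frame) for frame in audio]

def score_with_acoustic_model(features, model):
    # Stages 2-3: pick the word whose (toy) model best matches the features.
    return max(model, key=lambda w: -abs(model[w] - sum(features)))

def recognize(audio, model):
    return score_with_acoustic_model(extract_features(audio), model)

model = {"hello": 10.0, "thanks": 25.0}            # toy "acoustic model"
print(recognize([[1.0, 2.0], [3.0, 4.0]], model))  # -> "hello"
```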
## 2. Related Work
The Deaf community is not a monolithic group; it comprises a diversity of groups, which are as follows [20, 21]:

(1) Hard-of-hearing people: they are neither fully deaf nor fully hearing, also known as culturally marginal people [22]. They can obtain some useful linguistic information from speech.

(2) Culturally deaf people: they might belong to deaf families and use sign language as the primary source of communication. Their voice (speech clarity) may be disrupted.

(3) Congenital or prelingual deaf people: they are deaf by birth or became deaf before they learned to talk and are not affiliated with Deaf culture. They might or might not use sign language based communication.

(4) Orally educated or postlingual deaf people: they were deafened in their childhood but developed speaking skills.

(5) Late-deafened adults: they have had the opportunity to adjust their communication techniques as their hearing loss progressed.

Each group of the Deaf community has a different degree of hearing loss and uses a different source of communication. Table 1 details the Deaf community groups with their degree of hearing loss and source of communication with others.
Table 1
Mapping of Deaf community groups with a degree of hearing loss and communication source [3, 20, 21].
| Deaf community group | Degree of hearing loss | Communication source |
|---|---|---|
| Hard-of-hearing people | Mild to severe | Speech/sign language |
| Culturally deaf people | Profound | Sign language |
| Congenital or prelingual deaf people | Profound | Sign language |
| Orally educated or postlingual deaf people | Severe to profound | Speech/sign language |
| Late-deafened adults | Moderate to profound | Speech/sign language |

Hearing loss or deafness has a direct impact on communication, educational achievement, and social interaction [23]. Lack of knowledge about Deaf culture is documented in society as well as in the healthcare environment [24]. Kuenburg et al. also indicated that there are significant challenges in communication between healthcare professionals and Deaf people [25]. Improvement in healthcare access among Deaf people is possible by providing sign language supported visual communication and implementing communication technologies for healthcare professionals. Some of the implemented technology-based approaches for facilitating Deaf-mutes with easy-to-use services are as follows.
### 2.1. Sensor-Based Technology Approach
Sensor-based assistance can be used to solve the social problems of Deaf-mutes by bridging the communication gap. Sharma et al. used wearable sensor gloves for detecting the hand gestures of sign language [26]. In this system, flex sensors were used to record the sign language and to sense the environment: the hand gesture of a person activates the glove, and the flex sensors on the glove convert those gestures into electrical signals, which are then matched against a database, converted into the corresponding speech, and displayed on an LCD. A cost-effective sensor-based communication device [27] was also suggested for Deaf-mute people to communicate with a doctor. This experiment used a 32-bit microcontroller, an LCD to display the input/output, and a processing unit. The LCD displays different hand sign language based pictures to the user; the user selects the relevant pictures to describe the illness symptoms, and these pictures are then converted into patterns and paired with words to make sentences. Vijayalakshmi and Aarthi used flex sensors on a glove for gesture recognition [28]. The system was developed to recognize the words of American Sign Language (ASL). The text output obtained from the sensor-based system is converted into speech using the popular hidden Markov model (HMM) speech synthesis technique. An HMM-based text-to-speech synthesizer (HTS) was attached to the system for converting the text obtained from the hand gestures into speech. The HTS training phase extracts spectral and excitation parameters from the collected speech data and models them with context-dependent HMMs; the synthesis phase constructs an HMM sequence by concatenating context-dependent HMMs. Similarly, Arif et al. used five flex sensors on a glove to translate ASL gestures of Deaf-mutes into visual and audio output on an LCD [29].
### 2.2. Vision-Based Technology Approach
Many vision-based technology interventions are used to recognize the sign languages of Deaf people. For example, Soltani et al. developed a gesture-based game for Deaf-mutes using Microsoft Kinect, which recognizes gesture commands and converts them into text so that players can enjoy the interactive environment [7]. The voice for the mute (VOM) system was developed to take input in the form of fingerspelling and convert it into corresponding speech [30]. The images of fingerspelling signs are retrieved from a camera; after noise removal and image processing, the signs are matched against the trained dataset, linked to the appropriate text, and the text is then converted into the required speech. Nagori and Malode [31] proposed a communication platform that extracts images from video and converts them into corresponding speech. Sood and Mishra [32] presented a system that takes images of sign language as input and produces speech as output. The features used in vision-based approaches for speech processing are also used in various object recognition based applications [33–39].
### 2.3. Smartphone-Based Technology Approach
Smartphone technology plays a vital role in helping people with impairments to interact socially and overcome their communication barriers. The smartphone-based approach is more portable and effective than sensor- or vision-based technology, and many new smartphones are furnished with advanced sensors, fast processors, and high-resolution cameras [40]. A real-time emergency assistant, “iHelp” [41], was proposed for Deaf-mute people to report any kind of emergency situation. The current location of the user is accessed through the smartphone's built-in GPS; the information about the emergency situation is sent to the management through SMS and then passed on to the closest suitable rescue units, so the user can be rescued through iHelp. MonoVoix [42] is an Android application that acts as a sign language interpreter: it captures signs with the mobile phone camera and converts them into corresponding speech. Ear Hear [43] is an Android application for Deaf-mute people that uses sign language to communicate with normal people through speech-to-sign and sign-to-speech technology. For a hearing person interacting with a Deaf-mute, the text-to-speech (TTS) technology takes the speech signal as input and plays a corresponding sign language video, through which the mute person can easily understand. Bragg et al. [44] proposed a sound detector app that detects red-alert sounds and alerts the deaf-mute person by vibrating and showing a popup notification.
## 3. Proposed Methodology
Nowadays many technology devices, such as smartphone-enabled devices, prefer speech interfaces over visual ones. Research [49] highlighted that off-the-shelf speech recognition systems cannot be used to detect the speech of deaf or hearing-loss people, as these systems exhibit a high word error rate on such speech; it recommended using human-based computation to recognize deaf speech and text-to-speech functionality for speech generation. In this regard, we proposed and developed an Android-based application named vocalizer to mute (V2M). The proposed application acts as an interpreter and enables two-way communication between a Deaf-mute and a normal person. We refer to a normal person as one who has no hearing or vocal impairment or disability. The main features of the proposed application are listed below.
### 3.1. Normal to Deaf-Mute Person Communication
This module takes the text or spoken message of a normal person as input and outputs a 3D avatar that performs sign language for the Deaf-mute person. ASL-based animations of the avatar are stored in a central database of the application, and each animation file is given 2–5 tags. The steps of normal to Deaf-mute person communication are as follows (a small matching sketch follows this list):

(1) The application takes the text/speech of the normal person as input.

(2) The application converts the normal person's speech message into text using the Google Cloud Speech Application Program Interface (API), as this API detects normal speech better than Deaf persons' speech.

(3) The application matches the text against the tags associated with the animation files and displays the avatar performing the corresponding sign for the Deaf-mute.
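The tag lookup in step (3) can be pictured as a simple keyword match over the animation database. The sketch below is a hypothetical illustration; the tag sets, animation file names, and matching rule are assumptions for illustration, not the app's actual code.

```python
# Hypothetical tag-to-animation lookup for step (3); tag sets and animation
# file names are illustrative, not taken from the actual V2M database.
ANIMATION_TAGS = {
    "greet.anim": {"hello", "hi", "good morning"},
    "thanks.anim": {"thank you", "thanks"},
    "luck.anim": {"good luck"},
}

def find_animation(recognized_text):
    """Return the first animation whose tag occurs in the recognized text."""
    text = recognized_text.lower()
    for animation, tags in ANIMATION_TAGS.items():
        if any(tag in text for tag in tags):
            return animation
    return None  # no match: the app could fall back to fingerspelling

print(find_animation("Hello, how are you?"))  # -> greet.anim
```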
### 3.2. Deaf-Mute to Normal Person Communication
Not everyone has knowledge of sign language, so the proposed application uses the disrupted speech of the Deaf-mute person and converts it into a recognizable speech format using a speech recognition system. HMM-based speech recognition is a growing technology, as evidenced by rapidly increasing commercial deployment, and its performance has already reached a level that can support viable applications [50]. For this purpose, HTK [51] is used to develop the speech recognition system, as this toolkit is designed primarily for building HMM-based speech recognizers.
#### 3.2.1. Speech Recognition System Using HTK
The ASR system is implemented using HTK version 3.4.1. The speech recognition process in HTK follows four steps to obtain the recognized speech of the Deaf-mute: training corpus preparation, feature extraction, acoustic model generation, and recognition, as illustrated in Figure 1.
Figure 1
Speech recognition process using MFCC and HTK.

(a) Training Corpus Preparation. The training corpus consists of recordings of speech samples obtained from Deaf-mutes in .wav format. The corpus contains the spoken English alphabet (A–Z), English digits (0–9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, thank you. The utterances of each participant are kept separate from the others' because of the variance in speech clarity among Deaf-mute people. The training utterances of each participant are labeled in a simple text file (.lab); this file is used in the acoustic model generation phase of the system.

(b) Acoustic Analysis. The purpose of the acoustic analysis is to convert the speech sample (.wav) into a format suitable for the recognition process. The proposed application uses the MFCC approach for acoustic analysis. MFCC is the standard feature extraction technique in speech recognition [52]; its main advantages are (1) low complexity and (2) better performance with high recognition accuracy [53]. The overall working of MFCC is illustrated in Figure 2 [19].
Figure 2
Block diagram of the MFCC feature extraction technique.

The steps of MFCC feature extraction are as follows.

(1) Pre-Emphasis. The first step passes the speech signal through a filter. The pre-emphasis filter is a first-order high-pass filter, responsible for boosting the higher frequencies of the speech signal:

$$x'(n) = x(n) - \alpha\, x(n-1), \quad 0.9 \le \alpha \le 1.0, \tag{1}$$

where $\alpha$ is the pre-emphasis coefficient, $x(n)$ is the input speech signal, and $x'(n)$ is the output signal with the high-pass filter applied. Pre-emphasis is important because the high-frequency components of speech have small amplitude relative to the low-frequency components [54]. Silent intervals are also removed in this step, using a logarithmic technique for separating and segmenting speech from noisy background environments [55].

(2) Framing. Framing splits the pre-emphasized speech signal into short segments. The voice signal is represented by frames of N samples, with an interframe distance (frame shift) of M samples (M < N). In the proposed application, the frame size is N = 256 samples and the frame shift is M = 100 samples; with these values the frame length corresponds to 25.6 ms and the frame shift to 10 ms (implying a 10 kHz sampling rate):

$$\text{frame size} = 25.6\,\text{ms}, \qquad \text{frame shift} = 10\,\text{ms}. \tag{2}$$

(3) Windowing. The speech signal is nonstationary, but it is stationary over a very short period of time. A window function is used to analyze the speech signal and extract its stationary portion. There are two common types of window: (i) the rectangular window and (ii) the Hamming window. The rectangular window cuts the signal abruptly, so the proposed application uses the Hamming window, which shrinks the values toward zero at the boundaries of the speech signal:

$$w(n) = \begin{cases} 0.54 - 0.46 \cos\left(\dfrac{2\pi n}{N-1}\right), & 0 \le n \le N-1, \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$

The windowed signal at time n is

$$y_t(n) = w(n) \cdot s(n). \tag{4}$$

(4) Discrete Fourier Transform (DFT). The most efficient approach to computing the DFT is the Fast Fourier Transform algorithm, which reduces the computational complexity from Θ(n²) to Θ(n log n). It converts the N discrete speech samples from the time domain to the frequency domain:

$$X_t(K) = \sum_{n=1}^{N} s(n)\, w(n)\, e^{-j 2\pi K n / N} = \sum_{n=1}^{N} y_t(n)\, e^{-j 2\pi K n / N}, \quad 1 \le K \le k, \tag{5}$$

where $X_t(K)$ is the Fourier transform of $y_t(n)$ and k is the length of the DFT.

(5) Mel-Filter Bank Processing. Human ears act as band-pass filters; i.e., they focus only on certain frequency bands and have less sensitivity at higher frequencies (roughly above 1000 Hz). The pitch unit (mel) is defined so that perceptually equidistant pairs of sounds are separated by an equal number of mels [56]; the mapping from frequency to mel is

$$\text{mel}(f) = 2595 \log_{10}\left(1 + \frac{f}{700}\right). \tag{6}$$

(6) Log. This step takes the logarithm of each of the mel-spectrum values. Since the human ear is less sensitive to slight amplitude differences at high amplitudes than at low amplitudes, the logarithm makes the frequency estimates less sensitive to slight variations in the input.

(7) Discrete Cosine Transform (DCT). The log mel-spectrum is converted from the frequency domain back to the time domain using the DCT. The result of this conversion is the mel frequency cepstral coefficients (MFCC) [57]:

$$Y'_t(j) = \sum_{m=1}^{M} \log\big(Y_t(m)\big) \cos\left(\frac{j (m - 0.5)\pi}{M}\right), \quad j = 1, \ldots, J. \tag{7}$$

In the proposed methodology J = 12, because a 12-dimensional feature parameter is sufficient to represent the voice features of a frame [17]. The extraction of the cepstrum via the DCT thus yields 12 cepstral coefficients per frame. These sets of coefficients are called acoustic vectors (.mfcc) and are produced for both the training and testing speech samples. The HTK tool HCopy performs this conversion of an input speech sample into acoustic vectors.
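A compact NumPy rendering of steps (1)-(7) is given below as a sketch. The 10 kHz sampling rate is an assumption inferred from N = 256 samples ≈ 25.6 ms, and the triangular filter bank construction is one standard variant rather than HTK's exact implementation.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_frame_features(signal, fs=10000, N=256, M=100, alpha=0.97,
                        n_filters=26, n_ceps=12):
    """Compute MFCC vectors for a 1-D speech signal, following steps (1)-(7)."""
    # (1) Pre-emphasis: first-order high-pass filter, eq. (1).
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])

    # (2) Framing: N-sample frames every M samples.
    n_frames = 1 + max(0, (len(emphasized) - N) // M)
    frames = np.stack([emphasized[i * M : i * M + N] for i in range(n_frames)])

    # (3) Windowing with a Hamming window, eqs. (3)-(4).
    frames *= np.hamming(N)

    # (4) DFT via FFT; keep the power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, N)) ** 2 / N

    # (5) Mel filter bank: triangular filters equally spaced on the mel
    #     scale of eq. (6).
    mel_max = 2595 * np.log10(1 + (fs / 2) / 700)
    mel_pts = np.linspace(0, mel_max, n_filters + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    bins = np.floor((N + 1) * hz_pts / fs).astype(int)
    fbank = np.zeros((n_filters, N // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    mel_energies = np.maximum(power @ fbank.T, 1e-10)

    # (6) Log of the mel-spectrum values.
    log_mel = np.log(mel_energies)

    # (7) DCT back to the cepstral domain, eq. (7); keep 12 coefficients
    #     (HTK's MFCC_0 target kind additionally appends the c0 energy term).
    return dct(log_mel, type=2, axis=1, norm="ortho")[:, 1 : n_ceps + 1]

# Example: features for one second of synthetic audio at the assumed 10 kHz.
rng = np.random.default_rng(0)
feats = mfcc_frame_features(rng.standard_normal(10000))
print(feats.shape)  # -> (98, 12): 98 frames, 12 cepstral coefficients each
```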
The configuration parameters used for the MFCC feature extraction of the speech samples are listed in Table 2.

Table 2
Details of the configuration file (config.txt).
| Description | Parameter |
|---|---|
| Input source file format (x(n)) | SOURCEFORMAT = WAV |
| Output of speech sample | TARGETKIND = MFCC_0 |
| Pre-emphasis coefficient (α) | PREEMCOEF = 0.97 |
| Frame shift (M), in 100 ns units (10 ms) | TARGETRATE = 100000 |
| Window size, in 100 ns units (25 ms) | WINDOWSIZE = 250000 |
| Use Hamming window (w(n)) | USEHAMMING = T |
| No. of filter bank channels (f) | NUMCHANS = 26 |
| No. of cepstral coefficients | NUMCEPS = 12 |
| Save the output file compressed | SAVECOMPRESSED = T |

(c) Acoustic Model Generation. This phase provides the reference acoustic model against which comparisons are made to recognize the testing utterances. A prototype is used for the initialization of the first HMM; it is generated for each word of the Deaf-mute dictionary. The HMM topology comprises 6 active states (observation functions) and two nonemitting states (the initial and the final state, with no observation function), and this topology is used for all the HMMs. Single Gaussian observation functions with diagonal covariance matrices are used, described by a mean vector and a variance vector in a text description file known as the prototype. This predefined prototype file, along with the acoustic vectors (.mfcc) of the training data and the associated labels (.lab), is used by the HTK tool HInit to initialize each HMM.

(d) Recognition Phase. HTK provides a Viterbi word recognizer called HVite, which transcribes a sequence of acoustic vectors into a sequence of words using the Viterbi algorithm over the MFCC-based models. The testing speech samples are prepared in the same way as the training corpus: in the testing phase, each speech sample is converted into a series of acoustic vectors (.mfcc) using the HTK tool HCopy. These input acoustic vectors, along with the HMM list, the Deaf-mute pronunciation dictionary, and the language model (text labels), are taken as input by HVite to generate the recognized words (an illustrative command sequence is sketched below).
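The corresponding HTK workflow can be sketched as the following command sequence, here wrapped in Python. HCopy, HInit, and HVite are the HTK 3.4.1 tools named above and are assumed to be on the PATH; every file name (config.txt, the .scp scripts, proto, wdnet, dict, hmmlist) and the single-word HInit call are illustrative assumptions, since the paper does not reproduce the actual scripts (in practice HInit would be run once per dictionary word).

```python
# A minimal sketch of the HTK workflow described in (b)-(d); all file names
# are illustrative assumptions, not the authors' actual scripts.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# (b) Acoustic analysis: convert .wav files to .mfcc acoustic vectors.
#     train_codes.scp lists "input.wav output.mfcc" pairs, one per line.
run(["HCopy", "-C", "config.txt", "-S", "train_codes.scp"])

# (c) Acoustic model generation: initialize one HMM per dictionary word
#     from the prototype, the training vectors, and the .lab labels.
run(["HInit", "-C", "config.txt", "-S", "train.scp",
     "-M", "models/hmm0", "-L", "labels", "-l", "hello", "-o", "hello",
     "proto"])

# (d) Recognition: Viterbi decoding of the test vectors with HVite, using
#     the word network (wdnet), pronunciation dictionary, and HMM list.
run(["HVite", "-C", "config.txt", "-H", "models/hmmdefs",
     "-S", "test.scp", "-i", "recout.mlf", "-w", "wdnet",
     "dict", "hmmlist"])
```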
### 3.3. Messaging Service for Deaf-Mute and Normal Person
The application also provides a messaging feature to both Deaf-mute and normal people. A person can choose between an American Sign Language keyboard and an English keyboard for composing messages. The complete flowchart of V2M is illustrated in Figure 3.
Figure 3
Flowchart of V2M application.
## 3.1. Normal to Deaf-Mute Person Communication
This module takes text or spoken message of a normal person as an input and outputs a 3D avatar that performs sign language for a Deaf-mute person. ASL based animations of an avatar are stored in a central database of application. Each animation file is given 2–5 tags. The steps of normal to Deaf-mute person communication are as follows:(1)
The application takes text/speech of normal person as an input.(2)
The application converts the speech message of a normal person into text by using the Google Cloud Speech Application Program Interface (API) as this API detects normal speech better compared to Deaf persons’ speech.(3)
The application matches the text to any of the tags associated with an animation file and displays the avatar performing corresponding sign for Deaf-mute.
## 3.2. Deaf-Mute to Normal Person Communication
Not everyone has knowledge of sign language so the proposed application uses disrupted speech of a Deaf-mute person. This disrupted form of speech is converted into recognizable speech format by using speech recognition system. HMM-based speech recognition is a growing technology as evidenced by the rapidly increasing commercial deployment. The performance of HMM-based speech recognition has already reached a level that can support viable applications [50]. For this purpose, HTK [51] is used for developing speech recognition system as this toolkit is primarily designed for building HMM-based speech recognition systems.
### 3.2.1. Speech Recognition System Using HTK
ASR system is implemented by using HTK version 3.4.1. The speech recognition process in HTK follows four steps to obtain the recognized speech of Deaf-mute. The steps are training corpus preparation, feature extraction, acoustic model generation, and recognition as illustrated in Figure1.Figure 1
Speech recognition process using MFCC and HTK.(a) Training Corpus Preparation. The training corpus consists of recordings of speech samples obtained from Deaf-mute in .wav format. The corpus contains spoken English alphabets (A–Z), English digits (0 to 9), and 15 common sentences used in daily routine life, i.e., good morning, hello, good luck, thank you, etc. The utterance of one participant is separated from the others due to the variance in speech clarity among Deaf-mute people. The training utterances of each participant are labeled to simple text file (.lab). This file is used in acoustic model generation phase of the system.(b) Acoustic Analysis. The purpose of the acoustic analysis is to convert the speech sample (.wav) into a format which is suitable for the recognition process. The proposed application used MFCC approach for acoustic analysis. MFCC is the feature extraction technique in speech recognition [52]. Main advantages of using MFCC are (1) low complexity and (2) better performance with high accuracy in recognition [53]. The overall working of MFCC is illustrated in Figure 2 [19].Figure 2
Block diagram of MFCC feature extraction technique.The features of each step of MFCC are listed below.(1) Pre-Emphasis. The first step of MFCC feature extraction is done by passing the speech signal through a filter. The pre-emphasis filter is the first-order high-pass filter. It is responsible for boosting the higher frequencies of a speech signal.(1)x′n=xn-αxn-10.9≤α≤1.0,where α represents the pre-emphasis coefficient, xn is the input speech signal, and x′n is the output speech signal with a high-pass filter applied to the input. Pre-emphasis is important because the components of speech with high frequency have small amplitude w.r.t components of speech with low frequency [54]. The silent intervals are also removed in this step by using the logarithmic technique for separating and segmenting speech from noisy background environments [55].(2) Framing. Framing process is used to split the pre-emphasized speech signal into short segments. The voice signal is represented by N frame samples and the interframe distance or frameshift is M (M<N). In the proposed application, the frame sample size (N)=256 and frameshift (M)=100. The frame size and frameshift (in milliseconds) are calculated as(2)FrameSizems=fn=1N∗M=25.6ms,Frame_Shift=10ms.(3) Windowing. The speech signal is a nonstationary signal but it is stationary for a very short period of time. The window function is used to analyze the speech signal and extract the stationary portion of a signal. There are two types of windowing:(i)
Rectangular window,(ii)
Hamming window.Rectangular window cuts the signal abruptly so the proposed application used Hamming window. Hamming window shrinks the values towards zero at the boundaries of the speech signal. The value of Hamming window (w(n)) is calculated as(3)wn=0.54-0.46∗cos2πnN-10≤n≤N-10otherwise.The windowing at time n is calculated by(4)ytn=wn∗sn.(4) Discrete Fourier Transform (DFT). The most efficient approach for computing Discrete Fourier Transform is to use Fast Fourier Transform algorithm as it reduces the computation complexity from Θ(n2) to Θ(nlogn). It converts the N discrete samples of speech from the time domain to the frequency domain as calculated by(5)XtK=∑n=1Nsn∗wn∗e-j2πk/Nn1≤K≤k=∑n=1Nytn∗e-j2πk/Nn,where XtK is the Fourier transform of ytn and k is the length of the DFT.(5) Mel-Filter Bank Processing. Human ears act as band-pass filters; i.e., they focus on only certain frequency bands and have less sensitivity at higher frequencies (roughly >1000 Hz). A unit of pitch (mel) is defined for separating the perceptually equidistant pair of sounds in pitch into an equal number of mels [56] and it is calculated as(6)Ytm=melf=2595∗log101+f700.(6) Log. This step takes the logarithm of each of the mel-spectrum values. As human ear has less sensitivity to the slight difference in amplitude at higher amplitudes as compared to lower amplitudes. Logarithm function makes the frequency estimates less sensitive to the slight difference in input.(7) Discrete Cosine Transform (DCT). It converts the frequency domain (log mel-spectrum) back to the time domain by using DCT. The result of the conversion is known as mel frequency cepstrum coefficient (MFCC) [57]. We calculated the mel frequency cepstrum by(7)Yt′j=∑m=1MlogYtmcosjm-0.5πM,k=1,…,J.In the proposed methodology, the value of J = 12 because a 12‐dimensional feature parameter is sufficient to represent the voice feature of a frame [17]. The extraction of cepstrum via DCT results in 12 cepstral coefficients for each frame. These set of coefficients are called acoustic vectors (.mfcc). The acoustic vector (.mfcc) files are used for both the training and testing speech samples. The HTK-HCopy runs for conversion of input speech sample into acoustic vectors. The configuration parameters, used for MFCC feature extraction of the speech sample, are listed in Table 2.Table 2
Details of a configuration file (config.txt).
Description Parameters Input Source File Format (xn) SOURCEFORMAT = WAV Output of Speech Sample TARGETKIND = MFCC_0 Pre-emphasis Coefficient (α) PREEMCOEF = 0.97 Frameshift (M) TARGETRATE = 100000 Window Size WINDOWSIZE = 250000 Using Hamming Window (wn) USEHAMMING = T No. of Filter Bank Channels (f) NUMCHANS = 26 No. of the Cepstral Coefficients NUMCEPS = 12 Save the Output File Compressed SAVECOMPRESSED = T(c) Acoustic Model Generation. It provides a reference acoustic model with which the comparisons are made to recognize the testing utterances. A prototype is used for the initialization of first HMM. This prototype is generated for each word of the Deaf-mute dictionary. The HMM topology comprises 6 active states (observation functions) and two nonemitting states (the initial and the last state with no observation function) which are used for all the HMMs. Single Gaussian observation functions with diagonal matrices are used as observation functions and are described by a mean vector and variance vector in a text description file known as prototype. This predefined prototype file along with acoustic vectors (.mfcc) of training data and associated labels (.lab) is used by the HTK tool HInit for initialization of HMM.(d) Recognition Phase. HTK provides a Viterbi word recognizer called HVite, and it is used to transcript the sequence of acoustic vectors into a sequence of words. HVite uses the Viterbi algorithm in finding the acoustic vectors as per MFCC model. The testing speech samples are also prepared in the same way of preparing the training corpus. In the testing phase, the speech sample is converted into series of acoustic vectors (.mfcc) using the HTK-HCopy tool. These input acoustic vectors along with HMM list, Deaf-mute pronunciation dictionary, and language model (text labels) are taken as an input by HVite to generate the recognized words.
## 3.2.1. Speech Recognition System Using HTK
ASR system is implemented by using HTK version 3.4.1. The speech recognition process in HTK follows four steps to obtain the recognized speech of Deaf-mute. The steps are training corpus preparation, feature extraction, acoustic model generation, and recognition as illustrated in Figure1.Figure 1
Speech recognition process using MFCC and HTK.(a) Training Corpus Preparation. The training corpus consists of recordings of speech samples obtained from Deaf-mute in .wav format. The corpus contains spoken English alphabets (A–Z), English digits (0 to 9), and 15 common sentences used in daily routine life, i.e., good morning, hello, good luck, thank you, etc. The utterance of one participant is separated from the others due to the variance in speech clarity among Deaf-mute people. The training utterances of each participant are labeled to simple text file (.lab). This file is used in acoustic model generation phase of the system.(b) Acoustic Analysis. The purpose of the acoustic analysis is to convert the speech sample (.wav) into a format which is suitable for the recognition process. The proposed application used MFCC approach for acoustic analysis. MFCC is the feature extraction technique in speech recognition [52]. Main advantages of using MFCC are (1) low complexity and (2) better performance with high accuracy in recognition [53]. The overall working of MFCC is illustrated in Figure 2 [19].Figure 2
Block diagram of MFCC feature extraction technique.The features of each step of MFCC are listed below.(1) Pre-Emphasis. The first step of MFCC feature extraction is done by passing the speech signal through a filter. The pre-emphasis filter is the first-order high-pass filter. It is responsible for boosting the higher frequencies of a speech signal.(1)x′n=xn-αxn-10.9≤α≤1.0,where α represents the pre-emphasis coefficient, xn is the input speech signal, and x′n is the output speech signal with a high-pass filter applied to the input. Pre-emphasis is important because the components of speech with high frequency have small amplitude w.r.t components of speech with low frequency [54]. The silent intervals are also removed in this step by using the logarithmic technique for separating and segmenting speech from noisy background environments [55].(2) Framing. Framing process is used to split the pre-emphasized speech signal into short segments. The voice signal is represented by N frame samples and the interframe distance or frameshift is M (M<N). In the proposed application, the frame sample size (N)=256 and frameshift (M)=100. The frame size and frameshift (in milliseconds) are calculated as(2)FrameSizems=fn=1N∗M=25.6ms,Frame_Shift=10ms.(3) Windowing. The speech signal is a nonstationary signal but it is stationary for a very short period of time. The window function is used to analyze the speech signal and extract the stationary portion of a signal. There are two types of windowing:(i)
Rectangular window,(ii)
Hamming window.Rectangular window cuts the signal abruptly so the proposed application used Hamming window. Hamming window shrinks the values towards zero at the boundaries of the speech signal. The value of Hamming window (w(n)) is calculated as(3)wn=0.54-0.46∗cos2πnN-10≤n≤N-10otherwise.The windowing at time n is calculated by(4)ytn=wn∗sn.(4) Discrete Fourier Transform (DFT). The most efficient approach for computing Discrete Fourier Transform is to use Fast Fourier Transform algorithm as it reduces the computation complexity from Θ(n2) to Θ(nlogn). It converts the N discrete samples of speech from the time domain to the frequency domain as calculated by(5)XtK=∑n=1Nsn∗wn∗e-j2πk/Nn1≤K≤k=∑n=1Nytn∗e-j2πk/Nn,where XtK is the Fourier transform of ytn and k is the length of the DFT.(5) Mel-Filter Bank Processing. Human ears act as band-pass filters; i.e., they focus on only certain frequency bands and have less sensitivity at higher frequencies (roughly >1000 Hz). A unit of pitch (mel) is defined for separating the perceptually equidistant pair of sounds in pitch into an equal number of mels [56] and it is calculated as(6)Ytm=melf=2595∗log101+f700.(6) Log. This step takes the logarithm of each of the mel-spectrum values. As human ear has less sensitivity to the slight difference in amplitude at higher amplitudes as compared to lower amplitudes. Logarithm function makes the frequency estimates less sensitive to the slight difference in input.(7) Discrete Cosine Transform (DCT). It converts the frequency domain (log mel-spectrum) back to the time domain by using DCT. The result of the conversion is known as mel frequency cepstrum coefficient (MFCC) [57]. We calculated the mel frequency cepstrum by(7)Yt′j=∑m=1MlogYtmcosjm-0.5πM,k=1,…,J.In the proposed methodology, the value of J = 12 because a 12‐dimensional feature parameter is sufficient to represent the voice feature of a frame [17]. The extraction of cepstrum via DCT results in 12 cepstral coefficients for each frame. These set of coefficients are called acoustic vectors (.mfcc). The acoustic vector (.mfcc) files are used for both the training and testing speech samples. The HTK-HCopy runs for conversion of input speech sample into acoustic vectors. The configuration parameters, used for MFCC feature extraction of the speech sample, are listed in Table 2.Table 2
Details of a configuration file (config.txt).
| Description | Parameter |
| --- | --- |
| Input source file format (x(n)) | SOURCEFORMAT = WAV |
| Output of speech sample | TARGETKIND = MFCC_0 |
| Pre-emphasis coefficient (α) | PREEMCOEF = 0.97 |
| Frameshift (M) | TARGETRATE = 100000 |
| Window size | WINDOWSIZE = 250000 |
| Use Hamming window (w(n)) | USEHAMMING = T |
| No. of filter bank channels (f) | NUMCHANS = 26 |
| No. of cepstral coefficients | NUMCEPS = 12 |
| Save the output file compressed | SAVECOMPRESSED = T |

(c) Acoustic Model Generation. This step provides the reference acoustic models against which comparisons are made to recognize the testing utterances. A prototype is used for the initialization of the first HMM, and such a prototype is generated for each word of the Deaf-mute dictionary. The HMM topology used for all the HMMs comprises 6 active (emitting) states and two nonemitting states, namely the initial and the final state, which carry no observation function. Single Gaussian observation functions with diagonal covariance matrices are used, each described by a mean vector and a variance vector in a text description file known as the prototype. This predefined prototype file, along with the acoustic vectors (.mfcc) of the training data and the associated labels (.lab), is used by the HTK tool HInit for the initialization of the HMMs.

(d) Recognition Phase. HTK provides a Viterbi word recognizer called HVite, which is used to transcribe a sequence of acoustic vectors into a sequence of words. HVite applies the Viterbi algorithm to match the acoustic vectors against the MFCC-based models. The testing speech samples are prepared in the same way as the training corpus: in the testing phase, each speech sample is converted into a series of acoustic vectors (.mfcc) using the HTK HCopy tool. These input acoustic vectors, along with the HMM list, the Deaf-mute pronunciation dictionary, and the language model (text labels), are taken as input by HVite to generate the recognized words.
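To make steps (1)–(7) concrete, a minimal NumPy sketch of this front end is given below. It is an illustrative reimplementation, not the HTK HCopy code the application actually runs: the 10 kHz sampling rate (implied by the 25.6 ms/10 ms frame arithmetic), the triangular shape of the mel filters, and the function name `mfcc` are assumptions of the sketch.

```python
import numpy as np

def mfcc(signal, fs=10000, alpha=0.97, N=256, M=100, n_chans=26, n_ceps=12):
    """Illustrative MFCC front end mirroring steps (1)-(7)."""
    signal = np.asarray(signal, dtype=float)
    # (1) Pre-emphasis: x'(n) = x(n) - alpha * x(n-1)
    emph = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # (2) Framing: N-sample frames shifted by M samples (25.6 ms / 10 ms at 10 kHz)
    starts = np.arange(0, len(emph) - N + 1, M)
    frames = np.stack([emph[s:s + N] for s in starts])
    # (3) Hamming window applied to every frame, giving y_t(n) = w(n) s(n)
    frames *= np.hamming(N)
    # (4) Magnitude spectrum via FFT, the Theta(n log n) route to the DFT
    spec = np.abs(np.fft.rfft(frames, N))            # (num_frames, N/2 + 1)
    # (5) Triangular filters spaced uniformly on the mel scale of eq. (6)
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = imel(np.linspace(mel(0.0), mel(fs / 2.0), n_chans + 2))
    bins = np.floor((N + 1) * edges / fs).astype(int)
    fbank = np.zeros((n_chans, N // 2 + 1))
    for i in range(n_chans):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    # (6) Log of the mel-spectrum values Y_t(m)
    logmel = np.log(np.maximum(spec @ fbank.T, 1e-10))
    # (7) DCT of eq. (7); keep J = 12 cepstral coefficients per frame
    m = np.arange(1, n_chans + 1)
    j = np.arange(1, n_ceps + 1)
    dct = np.cos(np.outer(j, m - 0.5) * np.pi / n_chans)
    return logmel @ dct.T                            # (num_frames, n_ceps)
```

In the application itself this conversion is performed by running HTK's HCopy with the config.txt parameters of Table 2.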
## 3.3. Messaging Service for Deaf-Mute and Normal Person
The application also provides a messaging feature for both Deaf-mute and normal people. A person can choose between an American Sign Language keyboard and an English keyboard for composing and sending messages. The complete flowchart of “V2M” is illustrated in Figure 3.

Figure 3
Flowchart of V2M application.
## 4. Experimental Results and Discussions
### 4.1. Experimental Setup
The proposed application V2M required a camera, a mobile phone with the V2M app installed, a laptop acting as a server, and an instructor to guide the Deaf-mute students. The complete scenario is shown in Figure 4.

Figure 4
Experimental setup: a participant performing the speech-sample registration task.

A total of 15 students from Al-Mudassir Special Education Complex Baharwal, Pakistan, participated in this experiment. The participating students were between the ages of 7 and 13 and had received some speech training in school. The instructor guided all students in using the mobile application. The experiment consisted of two phases.
#### 4.1.1. Speech Testing Phase
In this phase, the instructor selected the “register voice” option from the app menu and entered a word/sentence or question (label) in the text field of the “register sample” dialog box, for which the training speech samples of the participants were then recorded (see Figure 5(b)). The instructor first used sign language to ask the participants to speak a word/sentence or an answer, and the system took 2 to 4 voice samples of each word/sentence. Whenever a participant registered his/her voice, the system acknowledged it with visual feedback (as in Figure 5(c)). For testing, the researcher asked questions via the V2M app, which displayed an avatar performing the questions in sign language so that the Deaf-mute participant could understand them (see Figure 5(d)). In response, the participant selected the microphone icon (as shown in Figure 5(e)) to speak his/her answer. The app processed the recorded speech sample and compared it with the registered samples; after the comparison, it returned the participant's answer as text and also spoke it aloud (see Figure 5(f)).

Figure 5
The working of V2M. (a) The avatar greets the Deaf-mute person. (b) The instructor registers a text sample and asks the participant to speak it. (c) The participant records his/her speech samples. (d) The avatar asks the Deaf-mute person a question. (e) The participant records his/her answer while the app processes the speech signal. (f) V2M displays and speaks the answer after matching the speech signal. (g) Sign language-based message service.
#### 4.1.2. Message Activity Phase
The participants needed minimal support from the instructor in this phase. They easily composed and sent messages using the sign language keyboard (see Figure 5(g)).
### 4.2. Qualitative Feedback
The researchers designed a questionnaire survey to evaluate the effectiveness of the Deaf-mute application. The survey comprised 12 questions for the participants to answer; this short length was chosen so as not to overwhelm the Deaf-mute students with longer interviews, especially since the students had no prior experience with any application designed for Deaf-mutes. The qualitative feedback is summarized into the following categories (paraphrased from the feedback forms).

Familiarity with Existing Mobile Apps. No participant had heard of or used any mobile application dedicated to Deaf-mutes.

Ease of Use and Enjoyment. All participants enjoyed using the app and liked the idea of an avatar performing sign language. Out of 15 students, 12 performed the given tasks quite easily; the remaining 3 had never used or interacted with mobile devices before. These students initially found the app difficult, but it became easier for them after the app functions were demonstrated 2-3 times in front of them. Overall, they found the app user-friendly and interactive.

Application Interface. Participants liked the interface of the app. They learned its steps quite quickly, and they also liked the avatar performing a greeting gesture on the home screen.

Source of Communication. All participants used sign language as their primary means of communication. They recommended the mobile application as an additional means of communication and acknowledged that it can be used to convey a Deaf-mute person's message to a normal person.
### 4.3. Results and Comparative Analysis
The training and testing corpora of the application were obtained from the speech samples of the Deaf-mute participants. The training corpus comprises the English alphabet (A–Z), the English digits (0 to 9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, and thank you. All participants uttered each alphabet letter, digit, and sentence 2–4 times, giving 2440 training utterances in total. The HTK speech recognizer, with HMMs at its backend, was used for training and recognition. For testing, each participant was asked 10 questions to answer, giving a total of 390 testing utterances. The application recorded each answer (speech sample), processed it, and displayed the result (as text/speech) for the normal person to understand. The accuracy of the proposed application is evaluated using precision and recall: for the V2M app, precision is the fraction of detected speech signals that were correctly identified, whereas recall is the fraction of uttered words that the system detected. Precision, recall, and accuracy are calculated as

$$\text{precision} = \frac{tp}{tp + fp}, \qquad \text{recall} = \frac{tp}{tp + fn}, \qquad \text{accuracy} = \frac{tp + tn}{tp + fp + fn + tn}. \tag{8}$$

True positive (tp) refers to words uttered by the person and detected by the system. False positive (fp) refers to words not uttered by the person but detected by the system. False negative (fn) refers to words uttered by the person but not detected by the system. True negative (tn) refers to everything else. The experimental results of the proposed methodology in terms of precision, recall, and accuracy are presented in Table 3.

Table 3
Precision and recall of proposed application with speech samples (N = 2, 3, and 4).
| Testing statement | Precision (N=2) | Recall (N=2) | Accuracy (N=2) | Precision (N=3) | Recall (N=3) | Accuracy (N=3) | Precision (N=4) | Recall (N=4) | Accuracy (N=4) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S:1 | 37.5% | 30% | 20% | 91.6% | 78.5% | 73.3% | 100% | 93.3% | 93.3% |
| S:2 | 62.5% | 41.6% | 33.3% | 85.7% | 92.4% | 80% | 100% | 100% | 100% |
| S:3 | 66.6% | 30.7% | 26.67% | 100% | 80% | 80% | 100% | 93.3% | 93.3% |
| S:4 | 60% | 54.54% | 40% | 100% | 86.6% | 86.67% | 92.8% | 100% | 100% |
| S:5 | 80% | 61% | 53.3% | 92.8% | 86.6% | 86.67% | 100% | 100% | 100% |
| S:6 | 57.1% | 33.3% | 26.67% | 100% | 73.3% | 73.3% | 100% | 93.3% | 93.3% |
| S:7 | 53.8% | 77.7% | 46.67% | 84.6% | 84.6% | 73.3% | 100% | 100% | 100% |
| S:8 | 45.45% | 55.5% | 33.3% | 100% | 80% | 80% | 100% | 100% | 100% |
| S:9 | 30% | 37.5% | 20% | 100% | 86.7% | 86.67% | 100% | 100% | 100% |
| S:10 | 75% | 46.1% | 40% | 76.9% | 83.3% | 66.67% | 93.3% | 100% | 100% |
| Average | 56.79% | 46.79% | 46.67% | 93.16% | 83.19% | 78.67% | 98.61% | 97.9% | 97.9% |

It is observed from Table 3 that the number of registered speech samples has a direct impact on the precision and recall of the application. The overall average precision is 56.79% and the recall 46.79% when the registered sample count for every statement is 2 (N = 2) per participant, whereas the average precision is 93.16% and the recall 83.19% for a registered sample count of 3 (N = 3). The average precision and recall both exceed 97% when the registered sample count for every statement is 4 (N = 4) per participant. The F1-score at the best precision and recall is

$$F_1(N{=}4) = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} = \frac{2 \times 0.9861 \times 0.979}{0.9861 + 0.979} \approx 0.98. \tag{9}$$

Hence it is deduced that the precision of the application degrades when only a limited number of speech samples (N ≤ 2) is taken from the Deaf-mute, and that the application performs well when the number of speech samples per statement is greater than 2 (2 < N ≤ 4). The speech recognition methodology of the proposed application is compared with other speech recognition systems in Table 4.

Table 4
Comparison of proposed methodology with state-of-the-art ASR systems.
| ASR system | Methodology | Accuracy |
| --- | --- | --- |
| Proposed methodology (N = 4) | MFCC + HTK (8-state HMM) | 97.9% |
| MSIAC (Liu et al., 2017) [15] | MFCC + GMM | Experiment 1: 89.50%; Experiment 2: 85.92% |
| TAMEEM V1.0 (Abushariah, 2017) [45] | MFCC + Sphinx 3 | 92.36% |
| Speaker identification system (Leu and Lin, 2017) [46] | MFCC + GMM | Experiment 1: 94.08%; Experiment 2: 84.88% |
| Telugu speech signals (Mannepalli et al., 2016) [47] | MFCC + GMM | 92% |
| Amazigh language (Elouahabi et al., 2016) [48] | MFCC + HTK (6-state HMM) | 80% |
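As a quick check of equations (8) and (9), the following sketch (a hypothetical helper, not part of the V2M code) computes the four metrics from raw counts and reproduces the reported F1-score from the N = 4 averages:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, accuracy (eq. (8)) and F1 (eq. (9)) from counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

# Reproducing eq. (9) from the reported N = 4 averages:
p, r = 0.9861, 0.979
print(round(2 * p * r / (p + r), 2))  # prints 0.98
```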
## 5. Conclusion
Deaf people face many irritations and frustrations that limit their ability to carry out everyday tasks, and Deaf children show high rates of behavioral and emotional issues related to the different methods of communication. The main motivation behind the proposed application is to remove the communication barrier for Deaf-mutes, especially children. The app takes the speech or text input of a normal person and translates it into sign language via a 3D avatar, and it provides a speech recognition system for the distorted speech of Deaf-mutes. The speech recognition system uses the MFCC feature extraction technique to extract acoustic vectors from speech samples, and the HTK toolkit converts these acoustic vectors into recognizable words or sentences using a pronunciation dictionary and a language model. The application is able to recognize Deaf-mute speech samples of the English alphabet (A–Z), the English digits (0 to 9), and 15 common sentences used in daily routine life, e.g., good morning, hello, good luck, and thank you. It also provides a message service for both Deaf-mutes and normal people: Deaf-mutes can use a customized sign language keyboard for composing messages, and the app can convert a received sign language message to text for a normal person. The proposed application was tested on 15 children aged between 7 and 13 years, and its accuracy is 97.9%. The qualitative feedback from the children also highlighted that it is easy for Deaf-mutes to adapt to mobile technology and that a mobile app can be used to convey their message to a normal person.
---
*Source: 1013234-2018-05-24.xml* | 2018 |
# Application of a Remotely Controlled Artificial Intelligence Analgesic Pump Device in Painless Treatment of Children
**Authors:** Fengyang Zhang; Shihuan Wu; Meimin Qu; Li Zhou
**Journal:** Contrast Media & Molecular Imaging
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013241
---
## Abstract
In order to effectively improve the application of analgesic pump devices in the treatment of children, a method based on remotely controlled artificial intelligence is proposed. 100 children with dental pulpitis who were treated in a hospital from December 2018 to December 2020 were selected as the research subjects; they were randomly divided into a control group and an observation group by an equidistant sampling method, with 50 cases in each group. Children in the control group were given articaine and adrenaline anesthesia, while the observation group was treated with articaine and adrenaline combined with a computer-controlled anesthesia system; the degree of anesthesia pain and the satisfaction of the two groups of children were observed and compared. The results showed that the pain score during anesthesia and the intraoperative pain score in the observation group were significantly lower than those in the control group, and the differences were statistically significant (P<0.05). The total satisfaction of patients in the observation group (96.6%) was significantly higher than that in the control group (84.7%), and the difference was statistically significant (P<0.05). There were no serious complications in either group. The application of the computer-controlled anesthesia system combined with articaine and adrenaline in the painless treatment of children’s dental pulp proved to have better effects and higher treatment compliance, and it is worthy of clinical promotion.
---
## Body
## 1. Introduction
An analgesic pump is a device that continuously pumps fluids and keeps drugs at a steady level in the blood, so that fewer drugs can be used for better pain relief. Patients are often allowed to press a button themselves to add an extra dose to the ongoing infusion, making treatment more individualized and consistent with the large variation in pain sensation. Because the analgesic pump is mainly used for postoperative acute analgesia and maternal labor analgesia, and potent opioid analgesics are often added to analgesic pumps, improper dosing can cause life-threatening complications such as respiratory depression; in most domestic hospitals, anesthesiologists are therefore responsible for the preparation and management of analgesic pumps throughout the hospital [1]. When treating children with dental diseases, a preliminary diagnosis, related examinations, and a confirmed diagnosis are required before the treatment plan is defined and implemented. However, owing to various factors, children with dental diseases often do not cooperate with clinical diagnosis and treatment procedures, which seriously hampers their care. Therefore, when performing clinical diagnosis and treatment of pediatric dental patients, treatment of teeth with acute inflammation and pulpitis should be carried out in a way that avoids pain and builds the child's confidence in diagnosis and treatment; surface anesthesia and injection anesthesia are adopted to ensure that the clinical diagnosis and treatment can be completed successfully [2]. The data in this article show that the difference in the total effective rate between children with dental disease in the experimental group and the reference group was statistically significant (P<0.05), as was the difference in the total satisfaction rate of their parents (P<0.05). These results clarify the advantages of painless operation in the treatment of children's dental diseases. An analgesic pump control method for intelligent infusion is also introduced. A sedation depth detection module and/or a respiratory frequency detection module and/or a blood oxygen detection module, together with a motor drive module, are connected to a processor that acquires and judges the monitoring signals; by evaluating abnormal signals from the detection modules in time, the processor automatically stops the pump motor or slows the infusion speed in order to reduce the risk of excessive infusion. To a certain extent this reduces the risk of malignant events caused by excessive analgesia with analgesic pumps; however, professional anesthesiologists are still lacking to monitor and manage abnormal patient conditions, so safety still requires attention [3].

In addition, such a system takes no "action" for patients with insufficient analgesia, so truly individualized analgesia cannot be achieved. In reality, because pump patients are numerous and scattered throughout the hospital and have different analgesic needs at different times after surgery, and because of limited anesthesiology staffing and cost factors, it is impossible to assign a staff member in each surgical department to individually control and manage each patient's analgesic pump. Continuous pumping at a fixed rate is likely to cause severe pain in some patients due to insufficient analgesia, while other patients develop adverse reactions such as nausea and vomiting, lethargy, urinary retention, numbness of the lower limbs, and itching of the skin due to excessive analgesia, which compromises both the analgesic effect and comfort [4]. In addition, the electronically regulated analgesic pumps currently used in clinical practice are often blocked, and operation is suspended by mechanical alarms such as insufficient power. The current clinical workflow is that the ward nurse calls the anesthesiologist, who arrives in the ward, diagnoses and handles the cause of the mechanical alarm, and then restarts the pump. From the patient reporting the alarm to the nurse, to the anesthesiologist arriving in the ward, several hours may pass, even though the alarm sometimes only requires a restart to resolve. Wang Y. believes that it is precisely because the anesthesiologist and the patient/family cannot make timely contact and obtain reliable remote guidance that both the time patients endure pain and the labor cost of managing analgesic pumps increase [5]. Gao et al. report that the clinical treatment of dental pulpitis mainly adopts pulp opening and pulp extraction; for pediatric patients, however, controlling pain during dental treatment and improving the children's compliance is the key to ensuring the success rate of oral treatment. Compared with traditional lidocaine local anesthesia, oral anesthesia with articaine and epinephrine is more effective, has a relatively low incidence of adverse reactions, and is simple to perform, so it has positive significance in improving children's compliance with treatment [6]. Yang and Bang note that, with the continuous progress of medical technology, computer-controlled local anesthesia systems have been applied to the painless treatment of children's teeth and pulp with good results, ensuring painless local anesthesia injection. Performing local anesthesia with a computer-assisted system helps control the injection pressure and strength, reduces injection pain, makes the treatment more comfortable for patients, and improves treatment compliance. This is a new anesthesia method of recent years that favors the formation of an anesthesia channel during the injection process; the accuracy of the injection can be ensured, so that a better anesthetic effect is achieved [7].

The analysis and research data for this study were derived from 100 children with dental diseases who attended the hospital and participated in treatment: 100 children with dental pulpitis treated in a hospital from December 2018 to December 2020 were selected as the research subjects and randomly divided into a control group and an observation group by equidistant sampling, with 50 cases in each group. Children in the control group were given articaine and adrenaline anesthesia, while the observation group was treated with articaine and adrenaline combined with a computer-controlled anesthesia system, and the degree of anesthesia pain and the satisfaction of the two groups were observed and compared. The results showed that the pain score during anesthesia and the intraoperative pain score in the observation group were significantly lower than those in the control group (P<0.05). The total satisfaction of patients in the observation group (96.6%) was significantly higher than that in the control group (84.7%) (P<0.05). There were no serious complications in either group. The aim of this study is to evaluate the effect of painless manipulation in the treatment of children with dental diseases. The computer-controlled anesthesia system combined with articaine and epinephrine basically achieves a painless effect in the treatment of children's dental pulp pain and is superior to traditional manual injection of articaine; it can better improve children's compliance with dental pulp and oral treatment and has a better anesthetic effect.
## 2. Materials and Methods
### 2.1. Composition of an Improved PCA System for Programmed Intermittent Drug Delivery
The improvement of the device consists of adding a single-chip microcomputer to a traditional patient-controlled electronic analgesic pump, setting a timer on the microcontroller, and connecting it to the input and display devices of the original electronic analgesic pump. Through the single-chip microcomputer, the timer is connected to the stepper motor controlled by the PCA pulse, achieving programmed intermittent control of the stepper motor. This design has obtained a “Utility Model Patent Certificate” issued by the State Intellectual Property Office [8].
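For illustration, the control flow described above can be sketched as follows. This is a hypothetical rendering of a programmed-intermittent-bolus timer running in parallel with the PCA button, not the authors' firmware; the dose volumes, the intervals, and the stand-in functions `drive_stepper` and `pca_pressed` are invented for the example.

```python
import time

BOLUS_ML = 5.0        # hypothetical programmed bolus volume
INTERVAL_S = 3600     # hypothetical dosing interval set on the timer
PCA_DOSE_ML = 2.0     # hypothetical patient-controlled dose
LOCKOUT_S = 900       # hypothetical PCA lockout period

def run_pump(drive_stepper, pca_pressed, horizon_s=8 * 3600):
    """drive_stepper(ml) and pca_pressed() stand in for the stepper-motor
    driver and the PCA button of the real device."""
    start = time.monotonic()
    next_bolus = start                  # the timer fires the first bolus at once
    last_pca = start - LOCKOUT_S        # allow an immediate PCA demand
    while time.monotonic() - start < horizon_s:
        now = time.monotonic()
        if now >= next_bolus:           # programmed intermittent bolus
            drive_stepper(BOLUS_ML)
            next_bolus += INTERVAL_S
        if pca_pressed() and now - last_pca >= LOCKOUT_S:
            drive_stepper(PCA_DOSE_ML)  # manual single administration
            last_pca = now
        time.sleep(0.1)                 # polling tick
```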
### 2.2. Pressure Detection of the Infusion System
This experiment uses an Edward pressure sensor connected to the analgesic pump. The entire improved analgesic pump system is connected to the pressure sensor, which is used to confirm the pressure at the output end during infusion; the peak pressures of continuous infusion and of intermittent infusion were recorded. The output is connected to a graduated beaker to measure the flow at the outflow end. The peak pressure and flow rate were each measured 3 times [9].
### 2.3. Comparison of Peak Pressure between Continuous Dosing and Intermittent Dosing
There was no statistically significant difference in the average values of the peak pressure and flow measurements between the two groups (P>0.05) (Table 1).

Table 1
Peak pressure and flow test results of the two groups of analgesic infusion.

| Method | Set value | Peak pressure | Flow |
| --- | --- | --- | --- |
| Intermittent dosing | 10 | 26.7 ± 0.4 | 10 |
| Continuous dosing | 10 | 23.8 ± 0.5 | 10 |
| Intermittent dosing | 5 | 27.2 ± 0.4 | 5 |
| Continuous dosing | 5 | 24.2 ± 0.6 | 5 |
## 3. Simulation Experiment
### 3.1. General Information
100 children with dental pulpitis who were treated in a hospital from December 2018 to December 2020 were selected as the research subjects and divided into two groups of 50 cases each, according to whether painless operations were used. In Group A, there were 26 males and 24 females, aged 3–12 years, with an average age of (5.04 ± 0.49) years. In Group B, there were 28 males and 22 females, aged 4–11 years, with an average age of (5.02 ± 0.45) years. The general data of the two groups of children with dental diseases were comparable, with no statistically significant difference (P>0.05).
### 3.2. Method
Children with dental disease in the reference group did not receive painless operations: after topical anesthesia with tetracaine, the related diagnosis and treatment procedures were carried out routinely. Children in the observation group were treated with painless operation: tetracaine was used for topical anesthesia, Bilan anesthetic (an articaine formulation) was then injected at the topically anesthetized site, and the painless operation was performed. During the injection, close attention must be paid to the child's pain to ensure that the children can cooperate with the diagnosis and treatment procedures [10].
### 3.3. Observation Indicators
(1) The total effective rate of the 100 children with dental disease was compared and analyzed. Effective: the child can keep quiet and cooperate with the related operations, and the clinical diagnosis and treatment can be completed successfully. Ineffective: the child cries and shows fear of the related operations, making it difficult to complete the clinical diagnosis and treatment. (2) The total satisfaction rate of the parents of the 100 children with dental diseases was compared and analyzed using a satisfaction survey form, with responses divided into satisfied and dissatisfied: satisfied, a score of 60 points and above; dissatisfied, a score below 60 points [11].
### 3.4. Statistical Methods
SPSS 21.0 statistical software was used to process the data. Count data were expressed as number of cases (n) and percentage (%), the χ² test was adopted, and P<0.05 was considered statistically significant.
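For illustration, the χ² test on the satisfaction counts reported later in Table 2 can be reproduced with SciPy. This is a sketch using the counts as reported; the paper's own SPSS computation gives X² = 8.5755 and P = 0.034, so small discrepancies may stem from how the counts were tabulated.

```python
from scipy.stats import chi2_contingency

# Satisfied / dissatisfied counts as reported in Table 2
table = [[36, 13],   # Group A
         [47, 3]]    # Group B
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.4f}, P = {p:.4f}")  # significant at P < 0.05
```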
## 4. Results and Analysis
### 4.1. Comparison of the Total Effective Rate of the Two Groups
After comparison and analysis of the total effective rate of the two groups of children with dental diseases, the difference was statistically significant (P<0.05); see Figure 1.

Figure 1
Comparison of the total effective rate of the two groups of children with dental disease.
### 4.2. Comparison of the Total Satisfaction Rate of the Two Groups
There was a statistically significant difference in the total satisfaction rate between the parents of the two groups of dental children (P<0.05); see Table 2.

Table 2
Comparison of the total satisfaction rate of the two groups of children with dental diseases.

| Grouping | Dissatisfied | Satisfied | Overall satisfaction rate (%) |
| --- | --- | --- | --- |
| Group A | 13 | 36 | 72 |
| Group B | 3 | 47 | 94 |
| X² | | | 8.5755 |
| P | | | 0.034 |
### 4.3. Comparison of the Pain Degree of Children before and after Treatment
As shown in Table 3, the pain scores of the two groups before treatment were comparable, with no statistically significant difference (P>0.05); after treatment, the pain degree of Group A was higher than that of Group B, and the difference was statistically significant (P<0.05) [12].

Table 3
Comparison of the degree of pain in children after treatment.

| Grouping | Number of cases | Before therapy | After treatment |
| --- | --- | --- | --- |
| Group A | 50 | 7.02 ± 1.49 | 5.87 ± 1.16 |
| Group B | 50 | 6.97 ± 1.54 | 3.31 ± 0.24 |
| X² | | 40.805 | 37.170 |
| P | | 0.001 | 0.001 |
### 4.4. Analysis
The microcomputer-based intermittent electronic analgesia pump uses a timer to provide electronically controlled intermittent infusion and manual single administration at the same time. The improved design of the system is relatively simple: a timer is added at the connection of the original pump's automatic control button to allow selection of the infusion mode. The timer can be set to two time periods as choices of dosing interval, which meets the needs of clinical application and greatly simplifies intermittent drug administration in clinical practice. A meta-analysis of multiple clinical trials suggests that programmed intermittent administration can improve patient satisfaction during nerve block and labor analgesia and reduce total drug consumption. The reason programmed intermittent injection is more successful than continuous injection is that, when a single dose is given, the drug injected into the epidural or fascial space spreads more uniformly [13].

The PCA analgesic pump works by discontinuously squeezing a silica gel infusion tube, with liquid intake and release controlled by two vertical rods moving synchronously; a meaningful drive pressure is generated only when the vertical pump, driven by the motor, squeezes the infusion tube in turn. Our study examined the pressure of the modified programmed intermittent injection pump, since the improved analgesic effect had been assumed to stem from an increased injection pressure. In fact, however, we found that the peak pressure of the improved pump was essentially similar to that of the continuous pump. We can therefore conclude that it is not the peak infusion pressure that influences the infusion outcome, but the infusion pattern and the drug metabolism model. Previously, intermittent analgesia was mostly achieved with two analgesic pumps connected to a three-way catheter, one for intermittent injection and the other for PCA; we instead connected the timer at the PCA key to realize a parallel path without affecting the PCA key function [14, 15].

At this stage, owing to poor habits in diet, lifestyle, hygiene, and tooth use, dental patients are becoming more numerous, and the incidence of dental disease in children has risen steadily in recent years. After onset, children present symptoms such as bleeding, inflamed, and swollen gums; without timely intervention, teeth become loose and are lost, affecting chewing function and appearance. Gum inflammation can also release inflammatory molecules that travel with the blood circulation to various parts of the body, harming the child's health and development. Children should therefore be given timely and effective treatment to restore chewing function and improve the aesthetics of their teeth, which benefits the quality of their prognosis. Clinically, early induction and later symptomatic treatment are often used in children with dental diseases to correct tooth orientation, relieve pain and inflammation, and restore tooth growth rate and chewing function.

However, in actual practice, because children are young and have incomplete knowledge of the disease, they fear treatment, which leads to poor cooperation [16]. A variety of abnormal behaviors, such as crying, resistance, and anxiety, also readily occur during treatment, seriously affecting normal clinical operations and easily leading to risk accidents. To avoid these phenomena and improve the treatment effect, a large number of clinical studies have been carried out; their results indicate that painless treatment of children with dental diseases can not only reduce pain during treatment but also increase the children's cooperation, ensuring that treatment can be completed smoothly [17]. The results of this study show that, with painless treatment of the Group B children, clinical efficacy and family satisfaction were higher than in Group A and the pain score was lower, with statistically significant differences (P<0.05). These results further confirm the relevant medical research and indicate that painless operation has a positive effect and high application value for children with dental diseases. It not only reduces the children's pain but also has a sedative effect; combined with psychological counseling and comfort from doctors and nurses, it can significantly improve treatment compliance and reduce the treatment risk caused by psychological stress during treatment, markedly improving the safety and effectiveness of treatment. Painless operation also eases the worry of the children's family members and helps them maintain a good attitude, thereby reducing nurse-patient disputes and helping to establish a harmonious doctor-patient relationship [18]. During the treatment of pulpitis, operations such as pulp opening and pulp extraction aggravate the patient's pain, making it unbearable and increasing fear; according to relevant data, about 57% of oral medicine patients suffer from fear during treatment. Effective painless techniques must therefore be adopted to ensure that treatment proceeds smoothly. At present, the most common anesthetics in the Department of Endodontics include articaine, lidocaine, and procaine, and the application of computer-controlled anesthesia systems has become more extensive; such systems can reduce the pain of anesthetic injection and have high clinical application value [19].

The computer-controlled anesthesia system (STA) has been applied in oral clinics in recent years. It is characterized by safety: it ensures a slow and uniform injection with a flow rate below the patient's pain threshold, allows better control of the injection strength and the injection site, reduces injection pain, and gives the patient a genuinely comfortable injection. For children with dental diseases in particular, a basically painless effect is achieved from needle insertion to drug delivery.

As can be seen from Table 3, from the beginning of injection to the end of drug delivery during anesthesia, the pain response of the children given articaine and adrenaline through the computer-controlled anesthesia system was lower than that of the control group (P<0.05). In the pulp extraction stage, likewise, the children given articaine and adrenaline through the computer-controlled anesthesia system had a significantly lower pain response than the control group (P<0.05) [20].
## 5. Conclusion
When used to treat dental pulp pain in children, the computer-controlled anesthesia system combined with articaine and adrenaline is essentially painless and superior to the traditional manual injection of articaine. It better improves children's compliance with pulp and oral treatment and provides a better anesthetic effect, while avoiding complications such as nerve injury and hematoma, reducing the children's pain, and lowering the incidence of dental fear. The computer-controlled anesthesia system is therefore worthy of promotion.
---
*Source: 1013241-2022-04-06.xml* | 1013241-2022-04-06_1013241-2022-04-06.md | 36,753 | Application of a Remotely Controlled Artificial Intelligence Analgesic Pump Device in Painless Treatment of Children | Fengyang Zhang; Shihuan Wu; Meimin Qu; Li Zhou | Contrast Media & Molecular Imaging
(2022) | Physical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1013241 | 1013241-2022-04-06.xml | ---
## Abstract
To improve the application of analgesic pump devices in the treatment of children, a method based on remotely controlled artificial intelligence is proposed. One hundred children with dental pulpitis treated in a hospital between December 2018 and December 2020 were selected as subjects and randomly divided into a control group and an observation group by equidistant sampling, with 50 cases in each group. The control group received articaine and adrenaline anesthesia; the observation group received articaine and adrenaline combined with a computer-controlled anesthesia system. The anesthesia pain and satisfaction of the two groups were observed and compared. The pain scores during anesthesia and surgery in the observation group were significantly lower than in the control group (P<0.05). Total satisfaction in the observation group (96.6%) was significantly higher than in the control group (84.7%), and the difference was statistically significant (P<0.05). No serious complications occurred in either group. The computer-controlled anesthesia system combined with articaine and adrenaline proved more effective in the painless treatment of children's dental pulp, with higher treatment compliance, and is worthy of clinical promotion.
---
## Body
## 1. Introduction
An analgesic pump is a device that infuses fluid continuously and keeps the drug at a steady blood level, so that less drug achieves better pain relief. Patients are usually allowed to press a button to add a bolus to the ongoing infusion, making treatment more individualized and consistent with the large variation in pain perception. Because analgesic pumps are mainly used for acute postoperative analgesia and labor analgesia, and because potent opioid analgesics are often loaded into them, an improperly set dose can cause life-threatening complications such as respiratory depression; in most domestic hospitals, anesthesiologists are therefore responsible for preparing and managing analgesic pumps throughout the hospital [1]. When treating children with dental diseases, a preliminary diagnosis, the relevant examinations, and a definitive diagnosis are required before the treatment plan is determined and implemented. However, affected children often fail to cooperate with clinical diagnosis and treatment, which seriously hinders their care. Therefore, when carrying out clinical diagnosis and treatment of dental patients, treatment should first be limited to the teeth involved in acute inflammation and pulpitis, pain should be avoided to build the child's confidence, and topical anesthesia followed by injection anesthesia should be adopted so that diagnosis and treatment can be completed successfully [2]. The data in this article show statistically significant differences between the experimental group and the reference group both in the total effective rate of children with dental disease and in the total satisfaction rate of their parents (P<0.05), which clarifies the advantages of painless operation in treating children's dental diseases. An analgesic pump control method for intelligent infusion has been introduced: a sedation depth detection module and/or a respiratory frequency detection module and/or a blood oxygen detection module, together with a motor drive module, are connected to a processor that acquires and judges the monitoring signals; by evaluating abnormal feedback from the detection modules in time, the processor automatically stops the pump or slows the infusion, reducing the risk of over-infusion. To a certain extent this reduces the risk of serious events caused by excessive analgesia; however, it still lacks professional anesthesiologists to monitor and manage patients' abnormal conditions, so safety still needs attention [3].
In addition, such a system takes no "action" for patients whose analgesia is insufficient, so truly individualized analgesia still cannot be achieved. In reality, because pump-carrying patients are numerous and scattered across the hospital and their analgesic needs differ across postoperative periods, and because anesthesiology departments are understaffed and cost-constrained, it is impossible to assign a staff member to each surgical ward for individualized control and management of analgesic pumps. Continuous pumping at a fixed rate is likely to leave some patients in severe pain from insufficient analgesia, while others suffer adverse reactions from excessive analgesia, such as nausea and vomiting, lethargy, urinary retention, numbness of the lower limbs, and itching of the skin, which genuinely compromises the analgesic effect and patient comfort [4]. Moreover, the electronically regulated analgesic pumps currently used in clinical practice frequently stop running because of mechanical alarms such as line blockage or low battery. The current clinical workflow is that the ward nurse telephones the anesthesiologist, who comes to the ward, diagnoses and handles the cause of the alarm, and restarts the pump; from the patient reporting the alarm to the anesthesiologist arriving, several hours may pass, even though handling the alarm sometimes requires nothing more than a restart. Wang Y. believes that precisely because the anesthesiologist and the patient or family cannot make timely contact and obtain reliable remote guidance, both the time patients spend enduring pain and the labor cost of managing analgesic pumps are increased [5]. Gao et al. note that the clinical treatment of dental pulpitis mainly adopts pulp opening and pulp extraction, but for pediatric patients, controlling pain during dental treatment and improving compliance are the keys to a successful oral treatment; compared with traditional lidocaine local anesthesia, oral anesthesia with articaine and epinephrine is more effective, its incidence of adverse reactions is relatively low, and the operation is simple, so it has positive significance for improving children's compliance with treatment [6]. Yang and Bang add that, with the continuing progress of medical technology, computer-controlled local anesthesia systems have been applied to the painless treatment of children's teeth and pulp with good results: they guarantee painless local anesthetic injection, help control injection pressure and force, reduce injection pain, make treatment more comfortable, and thus improve compliance. This relatively new anesthesia method favors the formation of an anesthesia channel during injection and ensures injection accuracy, so a better anesthetic effect can be achieved [7].
The data analyzed in this study come from 100 children with dental pulpitis treated in a hospital between December 2018 and December 2020, randomly divided by equidistant sampling into a control group and an observation group of 50 cases each. The control group received articaine and adrenaline anesthesia; the observation group received articaine and adrenaline combined with a computer-controlled anesthesia system; the anesthesia pain and satisfaction of the two groups were observed and compared. The pain scores during anesthesia and surgery in the observation group were significantly lower than in the control group (P<0.05), total satisfaction in the observation group (96.6%) was significantly higher than in the control group (84.7%) (P<0.05), and neither group had serious complications. The aim of this study is to evaluate the effect of painless manipulation in the treatment of children with dental diseases. The computer-controlled anesthesia system combined with articaine and epinephrine achieves an essentially painless treatment of children's dental pulp pain, is superior to the traditional manual injection of articaine, better improves children's compliance with pulp and oral treatment, and provides a better anesthetic effect.
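To make the intelligent infusion control idea summarized above concrete, the following minimal C sketch shows the kind of safety interlock described in [3]: the processor compares detection-module readings against thresholds and stops or slows the infusion motor. The thresholds, function names, and sample readings are illustrative assumptions, not the cited device's firmware.

```c
#include <stdio.h>

#define RESP_RATE_MIN 8   /* breaths/min, assumed alarm threshold */
#define SPO2_MIN      90  /* percent, assumed alarm threshold     */

typedef enum { MOTOR_RUN, MOTOR_SLOW, MOTOR_STOP } motor_cmd_t;

/* Decide the motor action from the latest detection-module readings. */
static motor_cmd_t evaluate_patient_state(int resp_rate, int spo2)
{
    if (resp_rate < RESP_RATE_MIN || spo2 < SPO2_MIN)
        return MOTOR_STOP;   /* possible overdose: halt the infusion  */
    if (spo2 < SPO2_MIN + 3)
        return MOTOR_SLOW;   /* borderline oxygenation: slow the rate */
    return MOTOR_RUN;        /* normal: continue the set infusion     */
}

int main(void)
{
    /* a low SpO2 reading forces a stop regardless of respiratory rate */
    printf("%d\n", evaluate_patient_state(12, 88));  /* -> MOTOR_STOP */
    return 0;
}
```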
## 2. Materials and Methods
### 2.1. Composition of an Improved PCA System for Programmed Intermittent Drug Delivery
The device is improved by adding a single-chip microcomputer to the traditional patient-controlled electronic analgesic pump: a timer is set on the microcontroller and connected to the input and display devices of the original pump. Through the single-chip microcomputer, the timer drives the PCA pulse-controlled stepper motor, achieving intermittent programmed control of the motor. This design has obtained a Utility Model Patent Certificate issued by the State Intellectual Property Office [8].
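As a rough illustration of this modification, the C sketch below shows how a once-per-minute timer tick can trigger the same bolus routine as the PCA key, giving programmed intermittent dosing in parallel with unchanged manual dosing. The interval values, function names, and omitted lockout handling are assumptions, not the patented firmware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define INTERVAL_A_MIN 30U  /* first selectable dosing interval (min)  */
#define INTERVAL_B_MIN 60U  /* second selectable dosing interval (min) */

/* Stand-in for the stepper-motor pulse train that delivers one bolus. */
static void stepper_deliver_bolus(void) { puts("bolus delivered"); }

static uint32_t minutes_since_last_dose;

/* Called once per minute, e.g. from the added timer's interrupt. */
static void timer_tick(bool use_interval_b, bool pca_button_pressed)
{
    uint32_t interval = use_interval_b ? INTERVAL_B_MIN : INTERVAL_A_MIN;

    if (++minutes_since_last_dose >= interval) {
        stepper_deliver_bolus();      /* programmed intermittent dose   */
        minutes_since_last_dose = 0;
    }
    if (pca_button_pressed)
        stepper_deliver_bolus();      /* manual PCA path, unchanged     */
    /* a real pump would enforce a lockout window between boluses here */
}

int main(void)
{
    for (int m = 0; m < 120; m++)
        timer_tick(false, false);     /* four programmed boluses in 2 h */
    return 0;
}
```

Because both paths simply invoke the same pulse train, the manual PCA function is preserved exactly as in the original pump.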
### 2.2. Pressure Detection of the Infusion System
In this experiment an Edwards pressure sensor was connected to the analgesic pump. The entire improved pump system was connected to the pressure sensor to confirm the pressure at the output end during infusion, and the peak pressures of continuous and intermittent infusion were recorded. The output was connected to a graduated beaker to measure the flow at the outflow end. Peak pressure and flow rate were each measured three times [9].
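A minimal sketch of the peak-pressure measurement loop follows, under the assumption that the sensor is sampled through an ADC over one infusion cycle; the synthetic trace stands in for real readings and is not the authors' test code.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for one pressure-sensor reading (raw ADC counts). */
static uint16_t adc_read_pressure(void)
{
    static const uint16_t trace[] = { 180, 240, 267, 255, 210 };
    static unsigned i;
    return trace[i++ % 5];
}

/* Sample the sensor across one infusion cycle and keep the maximum. */
static uint16_t record_peak_pressure(unsigned n_samples)
{
    uint16_t peak = 0;
    for (unsigned i = 0; i < n_samples; i++) {
        uint16_t p = adc_read_pressure();
        if (p > peak)
            peak = p;
    }
    return peak;
}

int main(void)
{
    printf("peak = %u\n", record_peak_pressure(5));  /* -> 267 */
    return 0;
}
```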
### 2.3. Comparison of Peak Pressure between Continuous Dosing and Intermittent Dosing
There was no statistically significant difference between the two groups in the mean peak pressure or flow measurements (P>0.05) (Table 1).

Table 1
Peak pressure and flow test results of the two analgesic infusion modes.

| Method | Set value | Peak pressure | Flow |
|---|---|---|---|
| Intermittent dosing | 10 | 26.7 ± 0.4 | 10 |
| Continuous dosing | 10 | 23.8 ± 0.5 | 10 |
| Intermittent dosing | 5 | 27.2 ± 0.4 | 5 |
| Continuous dosing | 5 | 24.2 ± 0.6 | 5 |
## 3. Simulation Experiment
### 3.1. General Information
One hundred children with dental pulpitis treated in a hospital from December 2018 to December 2020 were selected as subjects and divided into two groups of 50 cases each according to whether painless operations were used. Group A comprised 26 males and 24 females aged 3–12 years, with a mean age of (5.04 ± 0.49) years; group B comprised 28 males and 22 females aged 4–11 years, with a mean age of (5.02 ± 0.45) years. The general data of the two groups were comparable, and the difference was not statistically significant (P>0.05).
### 3.2. Method
Children in the reference group were treated without painless operations: after topical anesthesia with tetracaine, the relevant diagnostic and treatment procedures were carried out routinely. Children in the observation group were treated with painless operation: tetracaine was applied for topical anesthesia, Bilan anesthetic was then injected at the surface-anesthetized site, and the painless procedure was performed. During the injection, the child's pain must be watched closely to ensure that the child can cooperate with the diagnostic and treatment procedures [10].
### 3.3. Observation Indicators
(1) The total effective rate of the 100 children with dental disease was compared and analyzed. Effective: the child stays calm, cooperates with the relevant operations, and the clinical diagnosis and treatment can be completed successfully. Ineffective: the child cries and shows fear of the relevant operations, making the clinical diagnosis and treatment difficult to complete. (2) The total satisfaction rate of the 100 parents of the children was compared and analyzed using a satisfaction survey form, divided into satisfied (a score of 60 points or above) and dissatisfied (a score below 60 points) [11].
### 3.4. Statistical Methods
SPSS 21.0 statistical software was used to process the data. Count data were expressed as number of cases (n) and percentage (%), the χ² test was applied, and P<0.05 was considered statistically significant.
## 4. Results and Analysis
### 4.1. Comparison of the Total Effective Rate of the Two Groups
After comparison and analysis, the difference in the total effective rate between the two groups of children with dental diseases was statistically significant (P<0.05); see Figure 1.

Figure 1
Comparison of the total effective rate of the two groups of children with dental disease.
### 4.2. Comparison of the Total Satisfaction Rate of the Two Groups
There was a statistically significant difference in the total satisfaction rate between the parents of the two groups of children (P<0.05); see Table 2.

Table 2
Comparison of the total satisfaction rate of the two groups of children with dental diseases.

| Grouping | Dissatisfied | Satisfied | Overall satisfaction rate (%) |
|---|---|---|---|
| Group A | 13 | 36 | 72 |
| Group B | 3 | 47 | 94 |
| χ² | | | 8.5755 |
| P | | | 0.034 |
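For readers who wish to reproduce the test, a minimal C implementation of the 2×2 Pearson χ² named in Section 3.4 is sketched below. With the counts as printed it yields χ² ≈ 7.70 rather than the reported 8.5755, a gap consistent with a one-count difference in the source table; the sketch also omits any continuity correction SPSS may apply.

```c
#include <stdio.h>

/* Pearson chi-square for a 2x2 table: chi2 = N(ad-bc)^2 / row/col products. */
static double chi2_2x2(double a, double b, double c, double d)
{
    double n   = a + b + c + d;
    double num = n * (a * d - b * c) * (a * d - b * c);
    double den = (a + b) * (c + d) * (a + c) * (b + d);
    return num / den;
}

int main(void)
{
    /* Group A: 13 dissatisfied, 36 satisfied; Group B: 3, 47 (Table 2) */
    printf("chi2 = %.3f\n", chi2_2x2(13, 36, 3, 47));  /* -> 7.699 */
    return 0;
}
```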
### 4.3. Comparison of the Pain Degree of Children before and after Treatment
As shown in Table 3, the pain scores of the two groups before treatment were comparable, and the difference was not statistically significant (P>0.05); after treatment, the pain score of group A was higher than that of group B, and the difference was statistically significant (P<0.05) [12].

Table 3
Comparison of the degree of pain in children before and after treatment.

| Grouping | Number of cases | Before therapy | After treatment |
|---|---|---|---|
| Group A | 50 | 7.02 ± 1.49 | 5.87 ± 1.16 |
| Group B | 50 | 6.97 ± 1.54 | 3.31 ± 0.24 |
| χ² | | 40.805 | 37.170 |
| P | | 0.001 | 0.001 |
### 4.4. Analysis
The microcomputer-controlled intermittent electronic analgesia pump uses a timer to provide electronically controlled intermittent infusion while preserving manual single-dose administration. The modification is relatively simple: a timer is added at the connection of the original analgesic pump's automatic control button, allowing the infusion mode to be selected. The timer offers two settable periods as dosing intervals, which meets the needs of clinical application and greatly simplifies intermittent administration in practice. A meta-analysis of multiple clinical trials suggests that programmed intermittent administration improves patient satisfaction during nerve block and labor analgesia and reduces total drug consumption. Programmed intermittent injection outperforms continuous injection because a single bolus injected into the epidural or fascial space spreads more uniformly [13]. The PCA analgesic pump works by intermittently squeezing a silicone infusion tube; fluid intake and release are controlled by two vertical rods moving synchronously, and a meaningful driving pressure is generated only when the rods, driven by the motor, squeeze the infusion tube in turn. Our study measured the pressure of the modified programmed intermittent injection pump, on the assumption that the improved analgesic effect came from a higher injection pressure. In fact, we found that the peak pressure of the modified pump was essentially similar to that of the continuous pump. We therefore conclude that it is not the peak infusion pressure that determines the outcome but the infusion pattern and the drug's pharmacokinetics. Previously, intermittent analgesia was usually achieved with two pumps connected through a three-way catheter, one for intermittent injection and the other for PCA. By connecting the timer at the PCA key, we realized a parallel path without affecting the PCA key's function [14, 15]. At present, because of poor habits in diet, lifestyle, hygiene, and tooth use, dental patients are becoming more numerous, and the incidence of dental disease in children has risen steadily in recent years. After onset, children develop symptoms such as bleeding, inflamed, and swollen gums; without timely intervention, teeth loosen and are lost, impairing chewing function and appearance. Gum inflammation can also release inflammatory mediators into the circulation, affecting the whole body and hindering healthy development. Timely and effective treatment is therefore needed to restore chewing function and improve dental aesthetics, which benefits the prognosis. Clinically, early induction followed by symptomatic treatment is commonly used to correct tooth alignment, relieve pain and inflammation, and restore tooth growth and chewing function.
However, in actual practice, young children with little understanding of their disease fear treatment, which leads to poor cooperation [16]. Crying, resistance, anxiety, and other abnormal behaviors during treatment seriously disrupt normal clinical procedures and can easily lead to risk incidents. To avoid these problems and improve outcomes, many clinical studies have been carried out; they conclude that painless treatment of children's dental disease not only reduces pain during the procedure but also increases the child's cooperation, ensuring that treatment is completed smoothly [17]. In this study, the children in group B who received painless treatment showed higher clinical efficacy and family-care satisfaction than group A and a lower pain score, with statistically significant differences (P<0.05). These results corroborate earlier medical research and indicate that painless operation has a positive effect and high application value in children with dental disease. It reduces pain and has a calming effect, and, combined with psychological counseling and comfort from doctors and nurses, it markedly improves treatment compliance and reduces the risks caused by psychological stress during treatment, significantly improving safety and effectiveness. Painless operation also eases the worries of family members and helps them keep a positive attitude, reducing nurse-patient disputes and fostering a harmonious doctor-patient relationship [18]. During the treatment of pulpitis, procedures such as pulp opening and pulp extraction aggravate pain, making it unbearable and heightening fear; according to relevant data, about 57% of dental patients experience fear during treatment. Effective painless techniques are therefore essential to ensure that treatment proceeds smoothly. The anesthetics most commonly used in endodontics include articaine, lidocaine, and procaine, and computer-controlled anesthesia systems are now widely applied; they reduce the pain of anesthetic injection and have high clinical value [19]. The computer-controlled anesthesia system (STA) has been applied in dental clinics in recent years. It is safe, ensures a slow and uniform injection with a flow rate below the patient's pain threshold, and gives better control of injection force and site, reducing injection pain and making the injection genuinely comfortable. For children with dental disease in particular, the process from needle insertion to drug delivery is essentially painless.
As Table 3 shows, from the start of injection to the end of drug delivery during anesthesia, the pain responses of the children given articaine with adrenaline through the computer-controlled anesthesia system were lower than those of the control group (P<0.05). During pulp extraction, likewise, the children anesthetized with articaine and adrenaline via the computer-controlled system showed significantly lower pain responses than the control group (P<0.05) [20].
## 5. Conclusion
When used to treat dental pulp pain in children, the computer-controlled anesthesia system combined with articaine and adrenaline is essentially painless and superior to the traditional manual injection of articaine. It better improves children's compliance with pulp and oral treatment and provides a better anesthetic effect, while avoiding complications such as nerve injury and hematoma, reducing the children's pain, and lowering the incidence of dental fear. The computer-controlled anesthesia system is therefore worthy of promotion.
---
*Source: 1013241-2022-04-06.xml* | 2022 |
# Residential Environment Pollution Monitoring System Based on Cloud Computing and Internet of Things
**Authors:** Jing Mi; Xinghua Sun; Shihui Zhang; Naidi Liu
**Journal:** International Journal of Analytical Chemistry
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013300
---
## Abstract
In order to solve the problems of traditional environmental monitoring systems, namely single monitoring factors, weak comprehensive analysis capability, and poor real-time performance, a residential environment pollution monitoring system based on cloud computing and the Internet of Things is proposed. The system consists of two parts: an environmental monitoring terminal and an environmental pollution monitoring and management platform. Data are sent in real time to the management platform through a Wi-Fi module. The platform is composed of an environmental pollution monitoring server, a web server, and mobile terminals. The results are as follows: the data measured by the system are close to those measured by reference instruments, and the overall error is small; the measurement error is about 6% for harmful gases, about 6.5% for PM2.5, and about 1% for noise; the average sensor data update time is 0.762 s; the average alarm response time is 2 s; and the average data transfer time is 2 s. Practice has proved that the environmental pollution monitoring and alarm system operates stably and realizes real-time collection and transmission of data such as noise, PM2.5, harmful gas concentration, illumination, GPS, and video images, providing a reliable guarantee for timely environmental pollution control.
---
## Body
## 1. Introduction
To address air pollution in the indoor living environment, the indicators that cause it can be collected; these mainly include inhalable particulate matter, formaldehyde, carbon dioxide, and benzene-series compounds. These air indicators can be monitored through an indoor living environment monitoring system, which warns when they exceed the standard. More important still is to collect carbon monoxide, methane and other combustible gases, and smoke in the indoor air to prevent accidents such as gas poisoning and household fires: the indicators affecting residents' safety are monitored in real time, and early warnings and alarms are issued promptly when limits are exceeded, so as to prevent accidents and protect the lives and property of indoor residents. Establishing an indoor living environment comfort evaluation model requires the environmental elements that affect occupant comfort and the parameters of each element. Since the reform and opening up, China's economic strength has grown greatly, producing many outstanding achievements, and people's living standards have changed dramatically [1]. During this period China's GDP growth rate reached 9.8%, the fastest economic growth in the world over the same period, and thirty consecutive years of rapid growth showed people around the world "China's economic miracle" [2]. While promoting development, economic growth with high energy consumption has also put great pressure on China's ecological environment, even affecting the daily life of urban residents [3]. Noise is another common factor affecting residents' health: with accelerating urbanization, rising production, living, and transportation levels and the noise generated by urban construction negatively affect people's daily life and health [4, 5]. According to 44 urban environment monitoring networks, more than two-thirds of China's urban population faces noise pollution [6]. China's output of industrial waste is increasing rapidly, which inevitably puts enormous pressure on the environment, and according to the statistics of relevant departments the accumulation of household garbage is also very serious. It can therefore be concluded that severe urban environmental pollution in China has begun to affect people's daily life and, in some places, has seriously harmed their health [7, 8]. In view of the current state of urban environmental pollution, its impact on physical and mental health, China's determination to solve the problem, and the immaturity of environmental monitoring technology, it is very necessary to establish a residential environment pollution monitoring and alarm system. Saito et al. defined environment monitoring as using biochemical methods to analyze the proportions and harm of various environmental factors and, from the results, correctly evaluating environmental quality and its trends [9].
Alobaidi and Valyrakis believed that, since the emergence of environment monitoring technology, countries all over the world have invested vigorously in developing environment pollution monitoring technology and products, so the technology has developed rapidly [10]. Aguilar-Arevalo et al. believed that environment monitoring technology in various countries started early and matured steadily through advanced scientific and technological means; its development comprised three important stages: special monitoring of environment pollution accidents, monitoring of pollution sources, and monitoring of environment quality [11]. Saoutieff et al. believed that, in the 1950s, after several serious pollution accidents caused by toxic chemical leaks, the developed countries began to analyze collected environment samples by chemical means to determine the composition and content of the pollution and assist subsequent treatment [12]. Li et al. believed that, by the end of the 1960s, people gradually realized that not only chemical substances but also biological and physical factors could damage the environment, so physical and biological methods were included in environment detection; coupled with government attention to environment pollution legislation and pollutant discharge control, environment pollution monitoring developed greatly in this period [13]. Leonardo et al. believed that, around 1975, as people's understanding of environment protection deepened, the developed countries began to pay attention to monitoring the overall quality of the environment rather than single pollution sources, making the scope of environment pollution monitoring more comprehensive [14]. Moon et al. believed that, by around 1980, the developed countries began to rely on advanced science and technology to establish their own intelligent environment monitoring systems, comprehensively using geographic information systems, remote sensing, and positioning technology to monitor changes in the natural environment continuously, so as to achieve wide-range monitoring and improve data collection and processing; prediction of future environment quality was realized and the development of intelligent monitoring technology was promoted [15]. Lu et al. believed that, from the 1980s to the early 21st century, the integration of multiple technologies gradually became the mainstream scheme for the environment monitoring systems of various countries [16]. At present, the developed countries combine "3S" technology with a series of emerging technologies such as big data and artificial intelligence, so as to address environment pollution problems intelligently, accurately, and comprehensively, providing strong technical support for environment assessment, environment prediction, and decision-making [17]. On the basis of this research, the intelligent community environment pollution monitoring and alarm system was designed and realized.
In view of the environment pollution problems in current urban communities and the present development of environment pollution monitoring technology, the system was built with Internet of Things technology, data fusion technology, embedded development technology, Wi-Fi communication technology, and application development technology, on the basis of thorough study of the system framework design, embedded software and hardware design, Web development and design, and server function design.
## 2. Research Methods
### 2.1. System Function Design
According to the analysis of the system's functional and performance requirements, the function modules can be divided into the environment pollution monitoring terminal and the environment pollution monitoring management platform [18, 19]. The environment monitoring terminal takes the STM32F103 and S3C6410 processors as its core processing units. Using C# software development, Socket communication, and database programming, the system's local area network server was established. When abnormal data appear, an alarm is raised on the server side, and the current GPS data are recorded and sent to the environment management monitoring center and to the mobile phone terminals of community staff [20]. The Web server is the overall monitoring center of the environment monitoring system, mainly responsible for displaying historical data, viewing user records, raising alarms in emergencies, and distributing tasks in abnormal situations [21]. When system data are abnormal, the system converts the received GPS information into location information and sends the anomaly and location to the inspection mobile phone terminals over the network; these terminals are the most important part of the control and treatment of environment pollution in the whole system and the key to handling pollution in time [22]. The community manager's phone terminal is mainly responsible for receiving and displaying the data of each monitoring point in the community in real time and giving alarm prompts in case of abnormality, while the patrol phone terminal is mainly responsible for receiving the tasks distributed by the environment management center and uploading the processing results to the Web server [23]. The specific system function design is shown in Figure 1.

Figure 1
System function design diagram.
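The monitoring platform itself was built with C# socket programming; purely to illustrate the server role described above, the C sketch below accepts a terminal connection, reads packed samples, and raises an alarm when a threshold is crossed. The port, packet layout, and threshold are assumptions, and both ends are assumed to share the struct layout.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

struct sample { float pm25, noise_db, gas_ppm; };  /* assumed packet layout */

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9000);          /* assumed listen port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);

    int cli = accept(srv, NULL, NULL);           /* one terminal, for brevity */
    struct sample s;
    while (read(cli, &s, sizeof s) == (ssize_t)sizeof s) {
        if (s.pm25 > 75.0f)                      /* assumed alarm threshold */
            printf("ALARM: PM2.5 = %.1f, dispatching task\n", s.pm25);
    }
    close(cli);
    close(srv);
    return 0;
}
```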
### 2.2. Hardware Design of Environment Parameter Collection and Transmission Module
The STM32 processor is the core of the system's environment parameter collection module, integrating around it a clock circuit, power circuit, A/D (analog/digital) conversion module, serial communication module, and I2C bus module, which together constitute the core components of data collection. In addition, multiple sensors for smoke, light intensity, and harmful gas concentration and an ESP8266 Wi-Fi module are attached. The specific hardware structure is shown in Figure 2.

Figure 2
Hardware structure of environment parameter collection and transmission module.
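A minimal sketch of the collect-and-transmit path in Figure 2: one analog channel is read through an assumed ADC wrapper and forwarded to the ESP8266 over a UART using its standard AT+CIPSEND command. The wrappers and payload format are illustrative, not the system's firmware.

```c
#include <stdint.h>
#include <stdio.h>

/* Host-side stand-ins for the MCU wrappers (assumed, not a vendor API). */
static uint16_t adc_read_channel(int ch) { (void)ch; return 1234; }
static void uart_send_line(const char *s) { printf("UART> %s\n", s); }

/* Read the harmful-gas channel and push the value through the ESP8266. */
static void send_gas_reading(void)
{
    uint16_t raw = adc_read_channel(0);
    char payload[32];
    int n = snprintf(payload, sizeof payload, "GAS:%u\r\n", raw);

    char cmd[32];
    snprintf(cmd, sizeof cmd, "AT+CIPSEND=%d", n);  /* ESP8266 TCP send */
    uart_send_line(cmd);
    uart_send_line(payload);
}

int main(void) { send_gas_reading(); return 0; }
```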
### 2.3. Hardware Design of Videos and Images Collection and Transmission Module
An ARM11 processor is the core of the videos-and-images collection hardware, integrating around it a clock circuit, power supply circuit, NAND flash module, DDR module, serial communication module, and USB interface module, which together constitute the core components of video image collection. In addition, a GPS sensor, camera module, and USB Wi-Fi module are attached to realize the collection and transmission of location information and video images. The specific hardware structure is shown in Figure 3.

Figure 3
Hardware structure diagram of video image collection and transmission module.
### 2.4. Software Design of Environment Pollution Parameter Collection Module
For environment pollution parameter collection, data collection nodes are formed by the multichannel sensors attached to the STM32, and multiple nodes make up the sensor network. The collection software must therefore first initialize the module's hardware, including the STM32, serial port, Wi-Fi module, and sensors, after which data collection proceeds normally. When reading environment pollution parameters, sensor outputs are either analog or digital: digital outputs can be read directly through a serial or I/O port, while analog outputs are first converted to digital signals by the A/D converter and then read. The data collection software design flow of the system is shown in Figure 4.

Figure 4
Software design flow chart of environment parameter collection function.
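The branching step of Figure 4, in which digital outputs are read directly while analog outputs pass through the A/D converter first, reduces to a few lines of C; the wrappers below are assumed stand-ins rather than a vendor API.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    bool is_digital;  /* sensor output type            */
    int  channel;     /* I/O pin or ADC channel number */
} sensor_t;

/* Host-side stand-ins for the two read paths in Figure 4. */
static uint16_t gpio_read(int pin)       { (void)pin; return 1; }
static uint16_t adc_convert(int channel) { (void)channel; return 742; }

/* Digital outputs are read directly; analog outputs pass through A/D. */
static uint16_t read_sensor(const sensor_t *s)
{
    return s->is_digital ? gpio_read(s->channel) : adc_convert(s->channel);
}

int main(void)
{
    sensor_t smoke = { .is_digital = true,  .channel = 3 };
    sensor_t gas   = { .is_digital = false, .channel = 0 };
    printf("smoke=%u gas=%u\n", read_sensor(&smoke), read_sensor(&gas));
    return 0;
}
```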
### 2.5. Software Design of the Video and Image Collection Module
In the software design of the video and image collection module, the customized UVC driver must be written first. When collecting video and image data, the hardware, including the ARM11, serial port, USB, and camera module, is initialized first; the corresponding device files are then opened, the video image reading parameters are set, and the required video image cache is allocated. The BOA server is then started and the required driver is run. The specific software design process is shown in Figure 5.

Figure 5
Software design flow chart of the videos and images collection function.
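On a Linux/UVC stack of the kind described, the initialization steps in Figure 5 map onto the standard V4L2 calls sketched below: open the device file, set the frame format, and request mapped buffers. The device path, resolution, and buffer count are assumptions.

```c
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int open_camera(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* assumed device path */
    if (fd < 0)
        return -1;

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof fmt);
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 320;          /* assumed frame size */
    fmt.fmt.pix.height      = 240;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    ioctl(fd, VIDIOC_S_FMT, &fmt);          /* set reading parameters */

    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof req);
    req.count  = 4;                         /* allocate the image cache */
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    return fd;
}

int main(void)
{
    int fd = open_camera();
    if (fd >= 0) { puts("camera ready"); close(fd); }
    return 0;
}
```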
### 2.6. Design of Environment Pollution Monitoring Server
The design of this system's environment pollution monitoring server mainly comprised demand analysis and system design. The server's functions should include data reception, real-time display, and comprehensive analysis; remote video monitoring and image processing are also essential. The environment pollution monitoring server should therefore provide the following functions.
#### 2.6.1. Real-Time Reception, Display, and Analysis of Pollution Data
The environment pollution data collection terminal periodically sends packaged data to the environment pollution monitoring server through Wi-Fi. The server automatically receives the data and displays them in real time in the corresponding environment parameter display bar; at the same time, it analyzes, stores, and charts the received information. The specific environment pollution situation of the community can thus be monitored directly in real time, from both data and graphics, and the historical records in the database can provide data support for predicting the future environment situation.
#### 2.6.2. Real-Time Reception, Analysis, Storage, Display of GPS Data, and Display of Maps
The system collects the location of the current monitoring area in real time through the GPS sensor; the reading is processed by the ARM11 processor, and the data frame is sent to the environment pollution monitoring server over the wireless network using the TCP/IP protocol. The server receives the data in real time, parses them, displays the longitude and latitude in the corresponding data column, and stores them in the local database in real time. The network server should also provide map display, search, and viewing functions.
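A minimal sketch of the GPS parsing step follows, assuming the terminal emits standard NMEA $GPGGA sentences; a production parser would also validate the checksum and convert from NMEA ddmm.mmmm format to decimal degrees.

```c
#include <stdio.h>
#include <string.h>

/* Extract latitude/longitude from a $GPGGA sentence (checksum skipped). */
static int parse_gpgga(const char *line, double *lat, double *lon)
{
    char ns, ew;
    if (strncmp(line, "$GPGGA,", 7) != 0)
        return -1;
    /* $GPGGA,time,lat,N,lon,E,fix,...  -- skip the time field */
    if (sscanf(line, "$GPGGA,%*[^,],%lf,%c,%lf,%c", lat, &ns, lon, &ew) != 4)
        return -1;
    if (ns == 'S') *lat = -*lat;   /* sign by hemisphere */
    if (ew == 'W') *lon = -*lon;
    return 0;                      /* values still in ddmm.mmmm form */
}

int main(void)
{
    double lat, lon;
    if (parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,,",
                    &lat, &lon) == 0)
        printf("lat=%.3f lon=%.3f\n", lat, lon);  /* 4807.038 1131.000 */
    return 0;
}
```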
#### 2.6.3. Video Surveillance, Photo Capture, and Image Processing
The system obtains real-time video through the camera; simple processing and compression are performed by the ARM11 processor, and the video and images are sent to the environment pollution monitoring server over the HTTP protocol with the help of the BOA server. When abnormal data are collected, the collection terminal can identify and process the captured images and send them to the server, which stores them and makes a comprehensive analysis of the processing results and data to reach a conclusion.
## 3. Results Analysis
### 3.1. System Performance Test
The system performance was tested with respect to measurement error, real-time performance, and stability:

(1) System error test. Because actual conditions influence the environment parameters collected by the system, two representative kinds of weather (sunny and rainy) were selected, and temperature, humidity, and light intensity data were collected at 8:00, 11:00, 14:00, 17:00, and 20:00, respectively [24]. Given the complex relationship between harmful gas concentration, PM 2.5, and meteorological factors, and in order to simplify the analysis, the harmful gas concentration, PM 2.5 concentration, and noise intensity were collected at the same time points under sunny conditions. The results are shown in Table 1. The system and the suction hole of the reference instrument were placed at the same measuring point, and an aerosol whose concentration gradually changed from high to low was measured at the same time. The atmospheric particulate concentration was obtained from the instrument reading with

C = R × K, (1)

where C is the atmospheric particulate concentration, R is the instrument-measured value, and K is the mass concentration conversion coefficient. As the comparison in Table 1 shows, the values measured by the system were close to those measured by the instrument, and the overall error was small; it could be reduced further by selecting sensors with better performance [25]. A short sketch of these computations is given after Table 1.

(2) Real-time performance test. The data updating time, alarm response time, and data transmission time of each sensor were timed repeatedly, and the average times were taken as the test results, as shown in Figure 6.

Figure 6
Test results for the average sensor data updating, alarm response, and data transmission times.

Table 1
Comparison of the harmful gas concentration, PM 2.5 concentration, and noise intensity measured by the system with the values measured by the reference instrument.

| Harmful gas, system (ppm) | Harmful gas, instrument (ppm) | Error (%) | PM 2.5, system (μg/m³) | PM 2.5, instrument (μg/m³) | Error (%) | Noise, system (dB) | Noise, instrument (dB) | Error (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 51 | 48 | 6.35 | 33 | 31 | 6.46 | 46 | 47 | 2.13 |
| 48 | 46 | 4.35 | 33 | 30 | 10.0 | 52 | 52 | 0 |
| 30 | 29 | 3.45 | 31 | 30 | 3.33 | 50 | 50 | 0 |
| 50 | 46 | 8.70 | 33 | 31 | 6.46 | 48 | 49 | 2.04 |
| 49 | 45 | 8.89 | 35 | 33 | 6.06 | 47 | 48 | 2.08 |

The data measured by the system are close to the data measured by the instrument, and the overall error is small: about 6% for harmful gases, about 6.5% for PM 2.5, and about 1% for noise. The average sensor data update time is 0.762 s, and the average alarm response time and average data transfer time are both 2 s. These environmental parameters are collected by IoT sensing devices and uploaded to the data center over the network, where the indoor living environment comfort evaluation model can analyze this massive data by cloud computing and obtain an indoor living environment comfort evaluation, helping to improve occupants' comfort and safeguard their physical and mental health.
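As referenced above, here is a minimal sketch of the two computations behind Table 1: applying formula (1) to an instrument reading, and taking the relative error between system and instrument values. The conversion coefficient K below is a placeholder, as the paper does not report its value:

```c
/* Minimal sketch of the Table 1 computations. K is a placeholder value;
 * the paper does not report the actual conversion coefficient. */
#include <math.h>
#include <stdio.h>

/* Formula (1): atmospheric particulate concentration C from reading R. */
static double mass_concentration(double R, double K)
{
    return R * K;
}

/* Relative error (%) of the system value against the instrument value,
 * as tabulated in the Error (%) columns of Table 1. */
static double relative_error(double by_system, double by_instrument)
{
    return fabs(by_system - by_instrument) / by_instrument * 100.0;
}

int main(void)
{
    double K = 1.0; /* placeholder conversion coefficient */
    printf("C = %.1f ug/m3\n", mass_concentration(33.0, K));
    printf("error = %.2f %%\n", relative_error(48.0, 46.0)); /* ~4.35%, cf. Table 1 */
    return 0;
}
```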
## 4. Conclusion
In this research, environment pollution monitoring systems were investigated, and the defects of China's current environment pollution monitoring products were analyzed. Market experience from research on intelligent electronic surveillance products was also drawn upon. Combining this with the market demand for environment pollution monitoring systems, sensor network, wireless network communication, embedded development, digital image processing, data fusion, and many other technologies were adopted, and the design of an intelligent community environment pollution monitoring and alarm system was completed. The environment pollution problems in China were analyzed through specific data, and the key technologies used in the system framework were introduced. According to the overall scheme design, the environment monitoring terminal was designed: by analyzing the function and performance requirements of each submodule, the sensor module, GPS module, camera module, and wireless network communication module of the data collection unit were determined, collecting parameters including noise, illumination intensity, harmful gas concentration, and PM 2.5 concentration, and the hardware circuit of each module was designed. The software of the environment pollution monitoring and management platform was designed, mainly comprising the environment pollution monitoring server software, the Web server software, and the mobile phone terminal software. A joint test of the smart community environment pollution monitoring system was conducted, and the test results were analyzed. They showed that the system runs stably and achieves all of its functions, including real-time data collection, abnormal alarms, data uploading, and video monitoring of environment pollution problems.
---
*Source: 1013300-2022-08-17.xml*
# Medicinal Plants for the Treatment of Hypertrophic Scars
**Authors:** Qi Ye; Su-Juan Wang; Jian-Yu Chen; Khalid Rahman; Hai-Liang Xin; Hong Zhang
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101340
---
## Abstract
Hypertrophic scar is a complication of wound healing with a high recurrence rate which can lead to significant aesthetic and functional abnormalities. To date, no ideal treatment method has been established. Meanwhile, the underlying mechanism of hypertrophic scarring has not been clearly defined. Although a large amount of scientific research has been reported on the use of medicinal plants as a natural source of treatment for hypertrophic scarring, it is currently scattered across a wide range of publications. Therefore, a systematic summary and knowledge for future prospects are necessary to facilitate further medicinal plant research for their potential use as antihypertrophic scar agents. A bibliographic investigation was accomplished by focusing on medicinal plants which have been scientifically tested in vitro and/or in vivo and proved as potential agents for the treatment of hypertrophic scars. Although the chemical components and mechanisms of action of medicinal plants with antihypertrophic scarring potential have been investigated, many others remain unknown. More investigations and clinical trials are necessary to make use of these medicinal plants reasonably, and phytotherapy is a promising therapeutic approach against hypertrophic scars.
---
## Body
## 1. Introduction
Scar formation strongly depends on the presence of contraction during healing, and the nature of the scar is actually the uneven look of the healed tissue resulting from disfigured tissue deformation and overaligned collagen fibers [1]. Collagen in hypertrophic scars is found in a disorganized, whorl-like arrangement rather than in the normal parallel orientation. Hypertrophic scars are therefore indurate, elevated, and poorly extensible, and they are also characterized by hypervascularity, which gives them their erythematous appearance [2]. Hypertrophic scars (HS) can cause significant aesthetic and functional abnormalities, and to date no recognized treatment has been established. HS commonly occurs after surgical incision, thermal injury, and traumatic injuries to the dermis with a subsequent abnormal healing response [3]. Furthermore, it is often associated with contractures that can lead to considerably reduced functional performance in patients.

Preventing the development of hypertrophic scars remains an unsolved problem in scar treatment. For this reason, new and more successful treatments are needed to prevent excessive hypertrophic scarring. The reported preventions include topical medical application, cryotherapy, use of silicone gel sheets, injection of steroids, radiotherapy, and an early surgical procedure for wound closure [2]. In the last decade, there has been a renewed interest in the use of indigenous medicine worldwide, arising from the realization that orthodox medicine is not widespread. Although modern medicine may be available in some communities, herbal medicines have often maintained popularity for historical and cultural reasons, in addition to their cheaper costs [4]. Recent research has introduced the use of phytochemical compounds and extracts isolated from medicinal plants as a promising therapy in an attempt to resolve these problems.

Many treatment strategies are sought to prevent scar formation without compromising the wound healing process [5]. The effectiveness of currently used therapies against hypertrophic scars most probably arises from the growing number of medicinal plants reported for this purpose. In the modern system of medicine, about 25% of prescriptions contain active principle(s) derived from plants [4]. A significant correlation between medicinal plants and their use in the treatment of many types of scars has been shown in epidemiological data generated throughout the world. Published clinical trials have, as yet, largely focused on characterizing the pharmacokinetics and metabolism of medicinal plants. Despite experimental advances in medicinal plant research against scars, findings in humans are still limited. However, in recent years, diverse benefits of medicinal plants in the treatment of hypertrophic scars have been described [6–9].

In line with the latest findings responsible for the increased recognition of medicinal plants as potential therapeutic and/or preventative agents, the aim of the present review is to focus on recent experimental findings and clinical trials of medicinal plants and other preparations with similar actions that could account for beneficial effects on hypertrophic scars in patients. Natural products, such as plant extracts, either as pure compounds or as standardized extracts, provide unlimited opportunities for control of hypertrophic scarring owing to their chemical diversity [10].
Currently, a great deal of effort is being expended to find alternative sources of safe, effective, and acceptable natural medicinal plants for the treatment or prevention of hypertrophic scars; hence, all literature available was reviewed.
## 2. Suggested Mechanism of Hypertrophic Scarring
The molecular mechanism of hypertrophic scarring is associated with the unusual proliferation of fibroblasts and overproduction of collagen and extracellular matrix [70]. An array of intra- and extracellular mechanisms is essential in the prevention of scar formation. With the help of molecular biology, cell biology technology, hypertrophic scar animal models, and the setting-up of scar tissue engineering, the mechanism of hypertrophic scarring has been clearly defined (Figure 1). It is usually considered as migration and proliferation of different cell types such as keratinocytes, myofibroblasts [59], and mast cells [71]. Fibroblasts play an essential role in new tissue formation during wound repair [33], but their abnormal low death rate and high proliferation rate can cause scar tissue formulation [11]. Meanwhile, keratinocytes are indispensable in signal transduction between paracrine secretion and epithelium matrix. When cultured in the presence of keratinocytes, fibroblasts exhibit significant proliferation activity [72], showing the contribution of keratinocytes to fibroblasts proliferation. Myofibroblasts, which are different from fibroblasts and are related to the composition, organization, and mechanical properties of ECM [73], increase collagen synthesis and retard cell migration [71], thus resulting in excessive and rigid scarring. Fibroblasts are transformed into myofibroblasts by heterocellular gap junction intercellular communications between mast cells (RMC-1) and fibroblasts [71, 74]. In the process of wound healing, the combination of fibroblasts and myofibroblasts triggers excessive production of abnormal extracellular matrix protein [75], eliciting scarring [1, 75]. With the assistance of keratinocytes and mast cells, proliferative fibroblasts produce massive collagen which makes extracellular matrix accumulate below dermis, leading to scar formation. The complex forming process consists of three different phases, inflammation, proliferation, and maturation, which leads to hypertrophic scarring in the end [76]. The ratio of I to III collagens in healthy adults ranges from 3.5 to 6.1, while in patients with hypertrophic scars, it could be down to 2 and in keloid patients it can be as high as 19, which is related to the abnormal metabolism of collagens I and III in pathological scars, including more collagen synthesis and less collagen degradation.Figure 1
Although many targets of action by which scarring can be inhibited have been experimentally studied or postulated, few are well defined for the inhibition of hypertrophic scarring by plant-derived compounds. Figure 2 and Tables 5 and 6 summarize the suggested mechanisms and the corresponding medicinal plants.

Figure 2: The mechanisms by which extracts and compounds from medicinal plants display antihypertrophic scar activity.

The size of a scar is influenced by many factors, such as wound size, wound contraction, and healing time. Wound contraction makes an important contribution to scar formation; moreover, the larger the wound area, the more cells migrate, resulting in more prominent scarring [1]. Therefore, induction of fibroblast apoptosis and reduction of extracellular matrix and collagen I/III production may be pivotal measures against hypertrophic scarring.

Many test models are applied to investigate wound healing mechanisms and the inhibition of scar formation, including a 2D hybrid agent-based model [1], the pig surgical injury model, the fibroblast-populated collagen lattice (FPCL) model, rat laminectomy at the Lumbar-1 level [5], the incisional wound healing model [6], and the rabbit ear model [54]. These models provide a means of detecting and evaluating the mechanobiology of wound healing and scar formation [1].

However, the complex mechanism of hypertrophic scarring remains incompletely understood, which raises the question of how to control scar hyperplasia.
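The collagen ratio figures above lend themselves to a quick numerical illustration. The following minimal Python sketch bins a measured type I/III collagen ratio against the quoted ranges; the cutoffs between categories are assumptions made for the example, not validated diagnostic thresholds from the cited studies.

```python
# Illustrative sketch only: bins a measured collagen I/III ratio using the
# ranges quoted in the text (healthy adults 3.5-6.1, hypertrophic scars
# down to ~2, keloids up to ~19). Cutoffs are assumed, not diagnostic.
def classify_collagen_ratio(ratio_i_to_iii: float) -> str:
    if ratio_i_to_iii < 3.5:
        return "below healthy range; reported in hypertrophic scars (~2)"
    if ratio_i_to_iii <= 6.1:
        return "within the healthy adult range (3.5-6.1)"
    return "above healthy range; reported in keloids (up to ~19)"

for ratio in (2.0, 4.8, 19.0):
    print(f"type I/III ratio {ratio:>4}: {classify_collagen_ratio(ratio)}")
```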
## 3. Medicinal Plants against Hypertrophic Scarring
Many beneficial uses of medicinal plants are extensively documented in the traditional medicine systems of many cultures. To collect the data supporting this, we performed a systematic review using the PubMed, Elsevier, Springer, and Google Scholar databases, covering peer-reviewed articles published in the last 10 years. The search terms included scar, scarring, fibroblast, extract, and preparation. The phytochemicals from medicinal plants active against scar hyperplasia are presented in Tables 1 and 2, whilst the medicinal plant extracts are listed in Table 3; their activities and mechanisms against hypertrophic scarring are also described in the respective tables. Five preparations (Table 4) have been reported with respect to their antihypertrophic scarring effects and mechanisms, namely, liposome-encapsulated 10-HCPT, oxymatrine-phospholipid complex (OMT-PLC), solid lipid nanoparticle-enriched hydrogel (SLN-gel), Ginsenoside Rg3/poly(L-lactide) (G-Rg3/PLLA), and Centella asiatica extract capsule, which are composed of different medicinal plants and vehicles. Medicinal plants can be used for different therapeutic purposes or as precursors of useful drugs containing different types of phytochemicals.
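As an illustration of the kind of search just described, the minimal sketch below queries PubMed's public E-utilities endpoint for the review's search terms over a ten-year window. The term combination and date range are assumptions mirroring the description above, not the authors' actual query string, and the original search also covered Elsevier, Springer, and Google Scholar, which this single endpoint does not reach.

```python
# Minimal sketch of the literature query described above, issued against
# PubMed's public E-utilities esearch endpoint (NCBI). The query string
# is an assumption for illustration, not the review's exact search.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # Terms named in the text: scar/scarring, fibroblast, extract, preparation
    "term": "(scar OR scarring) AND (fibroblast OR extract OR preparation)",
    "datetype": "pdat",  # filter on publication date
    "mindate": "2005",   # "last 10 years" relative to this 2015 review
    "maxdate": "2015",
    "retmax": "20",      # number of PMIDs to return
    "retmode": "json",
}

with urllib.request.urlopen(BASE + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)["esearchresult"]

print(f"{result['count']} matching records; first PMIDs: {result['idlist']}")
```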
Table 1: Components from medicinal plants with antihypertrophic scar activity.

| Component | Botanical name | Family | Medicinal part | Observation | Dose | Effect | Mechanism of action | References |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Madecassoside | Centella asiatica | Umbelliferae | Whole plant | In vitro | 10–100 μM | Antiproliferation of HSFBs; promotion of apoptosis; diminishment of scar formation; facilitation of wound healing | Inhibition of HKF migration, F-actin filaments, and cytoskeletal protein; promotion of nuclear shrinkage and mitochondrial membrane depolarization; condensation of chromatin and fragmentation of nuclei; inhibitory phosphorylation of p38, PI3K, AKT, and cofilin; activation of caspase-3/caspase-9; facilitation of Bax mRNA expression and decrease of Bcl-2 and MMP-13 mRNA expression | [11, 12] |
| Genistein | Glycine max | Leguminosae | Fruit | In vitro | 25–100 μg/mL; 37–370 μM | Antiproliferation of HSFBs; suppression of mitosis; promotion of apoptosis | Inhibition of TPKs; increase of caspase-3; decrease of α-SMA and Bcl-2 protein; enhancement of Bax protein; inhibition of type I/III precollagen mRNA expression; downregulation of collagen I/III mRNA; reduction of PCNA expression; inhibitory phosphorylation of c-Raf, MEK1/2, ERK1/2, and p38; induction of apoptotic cell morphology; inhibition of fibroblast-to-myofibroblast transdifferentiation; decrease of G0/G1 phase and increase of G2/M phase; increase of C-JUN and decrease of FOS-B mRNA expression in skin keratinocytes; inhibition of C-JUN and C-FOS mRNA expression in human fibroblasts; in keloid fibroblasts, decrease of C-JUN and C-FOS mRNA expression at 37 μM but enhancement at 370 μM | [13–17] |
| Astragaloside IV | Astragalus membranaceus | Leguminosae | Root | In vitro | 12.5–200 μM | Antiproliferation of HSFBs | Decrease of collagen I/collagen III and TGF-β1 secretion | [18] |
| Tetrandrine | Stephania tetrandra | Menispermaceae | Root | In vitro; in vivo; local injection | 10–80 μM (in vitro); 1–10 mg/L (in vivo); 0.5–2 mg/L and 50 mg/mL, 20 μL (local injection) | Antiproliferation of HSFBs | Inhibition of TGF-β1 mRNA transcription; promotion of Smad7 and MMP-1 mRNA expression; inhibition of Smad2 mRNA expression; decrease of protein expression of collagen I/collagen III, Bcl-2, and MKP-1; reduction of total collagen volume and of S phase; increase of G0/G1 phase and prevention of G0/G1-to-G2 progression; inhibitory phosphorylation of MEK1/2 and ERK1/2 | [19–22] |
| Aloe-emodin | Rheum palmatum | Polygonaceae | Root, rhizome | In vitro | 20–80 mg/L | Antiproliferation of HSFBs | Increase of S phase | [23] |
| 5F | Pteris semipinnata | Pteridaceae | Whole plant | In vitro; in vivo; local injection | 20–80 μg/mL (in vitro); 10–40 mg/L and 40–120 mg/L (in vivo); 20–80 mg/L (local injection) | Antiproliferation of HPS; reduction of PS volume; antiproliferation of SSSF; promotion of HPS apoptosis; decrease of hypertrophic index | Blockage of fibroblasts from G1 to S phase; decreased protein expression of TGF-β1 and type I collagen; increase of caspase-3; reduction of total collagen and of fibroblast PCNA protein (cyclin); inhibition of type I/type III procollagen mRNA expression in SSSF; reduction of collagen fiber content | [24–26] |
| Oxymatrine | Sophora japonica | Leguminosae | Root | In vitro | 0.125–1.0 mg/mL; 2 μM | Antiproliferation of KFb and HFb; promotion of KFb apoptosis | Increase of S phase; inhibition of collagen I/collagen III mRNA expression; reduction of Smad3 and ERK1 protein expression; promotion of Smad7 protein expression; inhibition of p-Smad3 and of nuclear translocation of Smad3 | [27, 28] |
| Ginsenoside Rg3 (G-Rg3) | Panax ginseng | Araliaceae | Root, rhizome | In vivo, local injection | 3 mg/mL, 0.1 mL | Inhibition of HS; decrease of scar tissue fibrosis | Increase of protein expression of PCNA, Bax, caspase-3, and Cyt-c; decrease of Bcl-2 protein expression | [29, 30] |
| Osthole | Cnidium monnieri | Apiaceae | Fruit | In vitro | 5–50 μM | Antiproliferation of HSFBs and induction of apoptosis | Promotion of Bax mRNA expression and inhibition of Bcl-2 mRNA expression; decrease of TGF-β1 protein expression; facilitation of HSFB shrinkage, chromatin condensation, membrane blebbing, apoptotic body formation, and DNA laddering | [31] |
Table 2: Phytochemicals widely distributed in medicinal plants that display antihypertrophic scar activity.

| Phytochemical | Observation | Dose | Effect | Mechanism of action | References |
| --- | --- | --- | --- | --- | --- |
| 10-Hydroxycamptothecin (HCPT) | In vivo | 0.01–0.1 mg/mL | Decrease of the area of epidural scar tissue and of the number of fibroblasts; reduction of epidural adhesion; inhibition of RESF proliferation | Inhibition of topoisomerase I | [5] |
| Angelica naphtha | In vitro | 1–16 mg/L | Antiproliferation of HSFBs and induction of HSFB apoptosis | Inhibition of G0/G1 and G2/M phases; promotion of S phase; reduction of collagen protein in fibroblasts | [32] |
| Asiaticoside | In vivo; in vitro | 25–50 mg/mL, local injection (in vivo); 25–1000 μM and 300 μg/mL (in vitro) | Reduction of scar hyperplasia of HSRE; decrease of hypertrophic index; promotion of keratinocyte migration; antiproliferation of HSFBs | Inhibition of the mRNA expression of TGF-β1, RhoA, ROCK-I, and CTGF; facilitation of TGF-β3 mRNA expression; decrease of the expression of type I/III collagen and TIMP-1 proteins | [33–36] |
| Matrine | In vitro | 0.01–5.00 g/L | Antiproliferation and induction of apoptosis in HSFBs | Promotion of G2/M phase; inhibition of lactate dehydrogenase and Hyp; enhancement of the collagen I/III ratio | [37] |
| Quercetin | In vivo; in vitro | 0.05%–1%, w/o, local application (in vivo); 10–40 μM (in vitro) | Inhibition of scarring in hairless mice; antiproliferation of HSkF | Increase of the protein and mRNA expression of MMP-1; enhancement of the phosphorylation of JNK and ERK | [38] |
| Emodin | In vitro | 50–200 μg/mL | Antiproliferation of HSFBs | Inhibition of G0/G1 phase; increase of intracellular calcium; decrease of collagen synthesis | [39–41] |
| Resveratrol | In vitro; in vivo | 25–400 μM (in vitro); 150–400 μM, local injections (in vivo) | Antiproliferation of HSFBs; reduction of hypertrophic scar index | Inhibition of the mRNA expression of type I/type III procollagens | [42] |
| Tan IIA | In vitro | 20–80 μg/mL; 0.05–0.15 mg/mL | Antiproliferation of HSFBs; induction of HSFB apoptosis | Facilitation of nuclear shrinkage, condensation, and fragmentation; blockage of HSFBs from G1 to S phase; downregulation of MDA content and XOD activity; increase of T-SOD and GSH-Px activity; promotion of MMP-1 mRNA expression | [43–45] |
| Curcumin | In vitro; in vivo | 12.5–100 μM (in vitro); 0.5–2.0 mM, 0.1 mL/d, local injections (in vivo) | Antiproliferation of HSFBs | Inhibition of procollagen 1 mRNA expression; reduction of hypertrophic index and collagen fiber area density | [46] |
| Dihydroartemisinin | In vivo | 180 mg/kg, 10 mL, intragastric administration | Inhibition of HSRE scarring; antiproliferation of HSRE fibroblasts | Inhibition of collagen fibers and of hypertrophic index | [47] |
| Arteannuin | In vitro; in vivo | 0.103–0.206 mg/mL (in vitro); 60 mg/mL every 2 d, 20 μL local injection (in vivo) | Antiproliferation of HSFBs; decrease of HSRE scarring; antiproliferation of mastocytes | Congregation of nuclear chromatin; promotion of calcium concentration; increase of G0/G1 phase; reduction of collagen levels and of the hypertrophic index of HSRE | [48–51] |
| Panax notoginseng saponins (PNS) | In vitro | 400–800 μg/mL | Antiproliferation of HSFBs | Inhibition of G2/M and G0/G1 phases; increase of S phase; reduction of the protein expression of TGF-β1 and α-SMA; inhibition of intracellular free calcium concentration | [52, 53] |
| Oleanolic acid | In vivo | Topical application of 2.5, 5, and 10% for 28 consecutive days | Inhibition of hypertrophic scarring; induction of apoptosis; reduction of scar elevation index | Inhibition of the mRNA expression of TGF-β1, MMP-1, TIMP-1, and P311; increase of the mRNA expression of MMP-2, caspase-3, and caspase-9; reduction of the protein expression of TGF-β1 and collagen I/collagen III | [54] |
| Hirudin | In vitro | 1–50 μM | Promotion of apoptosis | Increase of G1 phase and inhibition of S phase; enhancement of the protein expression of MMP-2, MMP-9, and p27; reduction of the protein expression of cyclin E and TGF-β1; inhibition of the mRNA expression of type I/III procollagens | [55] |
| Xiamenmycin | In vivo; in vitro | 10 mg/kg/d, intraperitoneal injection for 10 days (in vivo); 5–30 μg/mL (in vitro) | Attenuation of hypertrophic scarring and suppression of local inflammation in a mechanical stretch-induced mouse model; inhibition of HSFB proliferation | Reduction of CD4+ lymphocyte and monocyte/macrophage retention in fibrotic foci; blockage of fibroblast adhesion to monocytes; inactivation of FAK, p38, and Rho GTPase signaling | [56] |
Table 3: Extracts from medicinal plants displaying antihypertrophic scarring activity.

| Extract | Botanical name | Family | Medicinal part | Observation | Dose and administration | Effect | Mechanism of action | References |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ethanolic extract | Calotropis gigantea | Asclepiadaceae | Root, bark | In vivo | 100–400 mg/kg, intragastric administration | Increase of wound contraction; decrease of scar area and of the time to epithelization | Increase of hydroxyproline and collagen synthesis | [6] |
| Ethanolic extract | Daucus carota | Apiaceae | Root | In vivo | 1, 2, and 4%, epidermal administration | Decrease of wound area, epithelization period, and scar width; increase of wound contraction | Increase of hydroxyproline content; antioxidant and antimicrobial activities | [7] |
| Methanolic extract | Pistia stratiotes | Araceae | Leaf | In vivo | 5 and 10%, epidermal administration | Decrease of wound area | Hydroxyl radical scavenging; increase of fibroblasts, blood vessels, and collagen fibers | [8] |
| Ethyl acetate extract | Gelidium amansii | Gelidiaceae | Whole plant | In vitro | 5–10 mg/mL | Antiproliferation of HSFBs | Decrease of the protein expression of collagen I/III and TGF-β1 | [9] |
| Ethanolic extract | Carthamus tinctorius | Asteraceae | Flower | In vitro | 2–8 μg/mL | Antiproliferation of HSFBs | Inhibition of collagen protein synthesis; promotion of fibroblast shrinkage | [57] |
| Aqueous extract | Oenothera paradoxa | Onagraceae | Seed | In vitro | 0.1–10 μg/mL | Protection of normal dermal fibroblasts | Decrease of LDH and ROS | [58] |
| Aqueous extract | Cigarette smoke | Unknown | Unknown | In vitro | 100% saturated solution | Antiproliferation of skin fibroblasts; promotion of cellular senescence | Inhibition of SOD and GSH-Px; promotion of ROS | [59] |
| Ethyl acetate extract | Rheum palmatum | Polygonaceae | Root, rhizome | In vitro | 25 μg/mL | Antiproliferation of HSFBs | Increase of G0/G1 phase | [60] |
| Methanol extract | Broussonetia kazinoki | Moraceae | Bark, root | In vitro | Unknown | Inhibition of hyperpigmentation | Reduction of tyrosinase enzyme synthesis | [61] |
| Ethanol extract | Scutellaria baicalensis Georgi | Lamiaceae | Root | In vivo | 10 mg/mL, epidermal administration | Inhibition of scarring | Reduction of the protein expression of TGF-β1 | [62] |
| Aqueous extract | Allium cepa | Liliaceae | Corm | In vivo; in vitro | 1–2.5%, v/v, local application (in vivo); 1–2.5%, v/v (in vitro) | Suppression of scarring in hairless mice; antiproliferation of fibroblasts | Upregulation of MMP-1 and type I collagen expression | [38] |
| Aqueous extract | Tamarindus indica | Fabaceae | Bark, leaf | In vivo | Unknown | Anti-inflammation | Elimination of dead cells and necrotic tissues | [63] |
| Ethanol extract | Aneilema keisak | Commelinaceae | Whole plant | In vitro | 40 μg/mL | Decrease of scarring | Inhibition of TGF-β1-dependent signalling by reducing Smad2 protein; reduction of various hKF pathological responses, including hyperplastic growth, collagen production, and migration, without DNA damage | [64] |
Table 4: Preparations from different medicinal plants with antihypertrophic scar activity.

| Active agent | Botanical name | Family | Medicinal part | Preparation | Vehicle | Delivery system | Observation | Effect | Mechanism of action | References |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hydroxycamptothecin (HCPT) | Camptotheca acuminata | Nyssaceae | Fruit, leaf | Liposome-encapsulated 10-HCPT | Liposome | Liposome encapsulation | In vivo, implant | Antiproliferation of fibroblasts and reduction of epidural adhesion | Decrease of epidural scar area and of fibroblast number in the epidural scar tissue | [65, 66] |
| Oxymatrine (OMT) | Sophora flavescens, Sophora alopecuroides, and Sophora subprostrata | Leguminosae | Unknown | Oxymatrine-phospholipid complex (OMT-PLC) | Phospholipid | Microemulsion | In vitro; in vivo, topical delivery | Antiproliferation of fibroblasts | Improvement of OMT skin permeability and increase of the retention ratio of OMT in skin | [67] |
| Astragaloside IV | Astragalus membranaceus | Leguminosae | Root | Solid lipid nanoparticle-enriched hydrogel (SLN-gel) | Lipid hydrogel | Solid lipid nanoparticles, hydrogel | In vitro; in vivo, topical delivery | Enhancement of keratinocyte migration and proliferation; increase of drug uptake in fibroblasts; promotion of wound healing and inhibition of scar formation | Caveolae endocytosis pathway; increase of wound closure rate and angiogenesis; improvement of regular collagen organization | [68] |
| Ginsenoside Rg3 (G-Rg3) | Red Panax ginseng | Araliaceae | Root, rhizome | Ginsenoside Rg3/poly(L-lactide) (G-Rg3/PLLA) | Electrospun poly(L-lactide) fiber | Electrospun fibrous scaffolds, nanofibers | In vitro; in vivo | Inhibition of fibroblast growth; antiproliferation of fibroblasts; prevention of scar formation | Improvement of dermis layer thickness, collagen fibers, and microvessels | [29] |
| Centella asiatica extract | Centella asiatica | Apiaceae | Whole plant | Centella asiatica extract capsule | Capsule | None | In vivo | Inhibition of tissue overgrowth; reduction of scar and keloid; anti-inflammation | Promotion of collagen I protein expression, collagen remodeling, and glycosaminoglycan synthesis; enhancement of collagen and acidic mucopolysaccharides | [69] |
Table 5: Summary of antiscarring mechanisms of medicinal plant components.

| Mechanism | Medicinal plant component |
| --- | --- |
| **MAPK pathway** | |
| Inhibition of p-p38 signaling | Madecassoside, Genistein, and Xiamenmycin |
| Inhibition of p-ERK1/2 signaling | Genistein, Tetrandrine, Cryptotanshinone, and Quercetin |
| Inhibition of p-JNK signaling | Quercetin |
| PI3K/AKT signaling | Madecassoside |
| **Mitochondrial-dependent pathway** | |
| Increase of Bax | Madecassoside, Genistein, Ginsenoside Rg3, and Osthole |
| Decrease of Bcl-2 | Madecassoside, Genistein, Tetrandrine, Ginsenoside Rg3, and Osthole |
| Increase of cytoplasmic Cyt-c | Ginsenoside Rg3 |
| **Cell cycle** | |
| Decrease of G0/G1 phase | Genistein, Angelica naphtha, Emodin, and Panax notoginseng saponins |
| Increase of G2/M phase | Genistein |
| Decrease of S phase | 10-Hydroxycamptothecin, Tetrandrine, Aloe-emodin, and Hirudin |
| Prevention of G0/G1-to-G2 progression | Tetrandrine |
| **RhoA/ROCK-I signaling pathway** | |
| Inhibitory secretion of RhoA, ROCK-I, and CTGF | Xiamenmycin |
| VEGF signaling pathway | Cryptotanshinone |
| FAK signaling pathway | Cryptotanshinone, Xiamenmycin |
| TGF-β/Smad signaling pathway | Oxymatrine |
| Downregulation of collagen I/III expression | Genistein, Astragaloside IV, Tetrandrine, Resveratrol, 5F, Curcumin, Oleanolic Acid, and Hirudin |
| Decrease of α-SMA | Genistein, Panax notoginseng saponins |
| **Activation of caspases** | |
| Activation of caspase-3 | Madecassoside, Genistein, 5F, Cryptotanshinone, Oleanolic Acid, and Ginsenoside Rg3 |
| Activation of caspase-9 | Madecassoside, Oleanolic Acid |
| Suppression of TPK activation | Kazinol F |
| Inhibition of topoisomerase I | 10-Hydroxycamptothecin |
| Decrease of TGF-β1 secretion | Tetrandrine, Panax notoginseng saponins, Osthole, and Hirudin |
| Inhibition of TGF-β1 transcription | Astragaloside IV, Oleanolic Acid |
| Downregulation of TIMP-1 expression | Oleanolic Acid |
| Reduction of LDH and increase of the collagen I/collagen III ratio | Matrine |
| Increase of T-SOD and GSH-Px activity | Tan IIA |
| **MMP** | |
| Enhancement of MMP-1 | Tetrandrine, Tan IIA, and Oleanolic Acid |
| Enhancement of MMP-2 and MMP-9 | Hirudin |
| Enhancement of MMP-13 | Madecassoside |
| Increase of intracellular calcium | Emodin, Arteannuin |
Table 6: Summary of antiscarring mechanisms of plant extracts.

| Mechanism | Medicinal plant extract |
| --- | --- |
| **Cell cycle** | |
| Increase of G0/G1 phase | Rhubarb |
| **Collagen** | |
| Downregulation of collagen I expression | Gelidium amansii |
| Downregulation of collagen III expression | Gelidium amansii, Scutellaria baicalensis Georgi |
| Enhancement of collagen synthesis | Calotropis gigantea |
| Inhibition of collagen synthesis | Carthamus tinctorius |
| Promotion of collagen I | Onion |
| **MMP** | |
| Enhancement of MMP-1 | Neonauclea reticulata, onion |
| Increase of MMP-3 and MMP-9 | Neonauclea reticulata |
| Elimination of hydroxyl radical | Pistia stratiotes |
| Decrease of LDH | Oenothera paradoxa |
| Decrease of ROS | Oenothera paradoxa, Neonauclea reticulata |
| Increase of ROS and reduction of SOD and GSH-Px | Cigarette smoke |

The use of herbal remedies has been steadily increasing worldwide in recent years, as has the search for new phytochemicals that could be developed into useful drugs for the treatment of hypertrophic scars and other scar diseases [4]. The antihypertrophic scar activity of medicinal plants results from the variety of components contained in these plants (Tables 1 and 2). Many plant extracts (Table 3) have antihypertrophic scar activity owing to their phytochemical constituents. However, more work is needed to purify and identify the active components and to elucidate the roles they play in scar inhibition when used alone or in combination. Moreover, many of them have not been tested for cytotoxicity to normal cells, which seriously hinders in vivo investigations. Admittedly, some active components have shown no demonstrable toxicity or side effects; for example, Genistein, which is easily obtained and commonly used for hypertrophic scar treatment, has strong pharmacological effects with no obvious toxicity or side effects [13].
## 4. New Preparations of Medicinal Plants
A large number of extracts and compounds from medicinal plants display antiscar activity. Nevertheless, because of the skin's natural barrier, drugs have difficulty passing through the stratum corneum, which lowers their permeability. The oral bioavailability of such drugs at permissible doses is also very low, owing to their hydrophilicity (low permeability), poor absorption, biotransformation, or the compactness of scar tissue. An appropriate formulation can evidently improve drug permeability, lipid solubility, skin penetration, retention ratio, and release time, and reduce cytotoxicity. Hydroxycamptothecin (HCPT) is thought to be one of the most effective components against scars, but its poor solubility and short half-life severely limit its clinical application [65]. Compared with free HCPT, liposome-encapsulated HCPT (L-HCPT) can reduce epidural fibrosis by preventing the proliferation of fibroblasts in the scar tissue, with a longer half-life and better solubility [65]. The application of a silicone derivative to herbal extracts can improve skin pliability and alleviate the concomitant symptoms of scars, including pain and itching [2]. However, it is extremely important to control the cytotoxicity of biomaterials for clinical applications. Microemulsion, a transparent dispersion system, is a good vehicle for drug delivery owing to its many advantages, such as thermodynamic stability (long shelf life), easy formation (zero interfacial tension), low viscosity, high surface area (high solubilization capacity), and small droplet size [67]. It has been shown that drug-free microemulsion is a promising preparation because of its negligible cytotoxicity [67]. Local or transdermal application of water-soluble pharmaceutical formulations may therefore be suitable for medicinal plant extracts and compounds.

Owing to the compactness of scar tissue, it is necessary to combine natural products or crude extracts with adjuvants in new dosage forms to increase their solubility, content, release time, uptake, and penetrability. These dosage forms include microemulsions [67], liposomes [66], solid lipid nanoparticles [68], and electrospun fibrous scaffolds [29]. Improving drug permeation may be a promising direction for future research based on the known medicinal plants.

In addition, some of these plant extracts or purified chemical components are prepared as traditional medicinal injections for deep antiscar treatment. For example, Carthamus tinctorius injection, whose primary component is hydroxysafflor yellow A, softens hypertrophic scar tissue and inhibits fibroblast proliferation by decreasing the type I/type III collagen ratio and the TGF-β1 level after local treatment [77]. The radix astragali injection likewise inhibits proliferation and reduces scar thickness and hardness by reducing Smad3 and TGF-β1 levels [78].
## 5. Current Treatment and Prospects for Future Therapies
Currently, occlusive dressings, compression therapy, intralesional steroids, cryosurgery, laser, radiation, surgical excision, and interferon therapy are curative for the majority of patients with hypertrophic scars [79]. Surgical therapy with excision of fibrotic tissue is a common approach to treating hypertrophic scars. However, significant disadvantages have been reported, such as postoperative recurrence of adhesions in as many as 45%–100% of cases [54], which seriously limits its wider application to scar prevention. Accordingly, physical therapies have been established, including occlusive dressings, pressure therapy, cryosurgery, radiation therapy, and laser therapy. Meanwhile, pharmacotherapy is also frequently applied, such as intralesional corticosteroid injection and topical treatment with interferon, bleomycin, 5-fluorouracil, verapamil, vitamin E, imiquimod, TGF-β3, or interleukin-10 [79, 80]. Pharmacotherapy mainly inhibits the inflammation, proliferation, and remodeling phases [7] or modifies ECM metabolism by interfering with pivotal molecules of MAPK, TGF-β, and PI3K signal transduction.

However, there is so far no ideal treatment for hypertrophic scars, and some chemical drugs also cause adverse effects. Many natural products from medicinal plants have good antiscar activity and show notable advantages owing to their fewer side effects. Therefore, in addition to the widespread use of surgical therapy, physical therapy, and pharmacotherapy, there is a great need to develop new natural drugs that are more efficient than, or synergize with, existing ones. Many purified natural products originating from medicinal plants are abundant in nature, such as Ginsenoside Rg3 [29], Oleanolic Acid [54], Resveratrol [42], Asiaticoside [34], and Genistein [13], and are popular as antiscar agents because they are easy to obtain and have fewer side effects. Hence, we have reviewed the major current herbs and preparations applied to the treatment of hypertrophic scars.

It remains a challenge to identify and evaluate a safe, wholesome, and effective natural product against scars. Even though a number of new products have been reported in pharmacological tests over the last decades, many others remain unknown or untested.
## 6. Discussion
In this review, we gathered publications on medicinal plants with antihypertrophic scar activity and addressed the question of whether treating scars with medicinal plants is effective in humans. Although in vivo and in vitro investigations play an important role in the preclinical evaluation of the safety and effectiveness of medicinal plants, they are no guarantee of ultimate success as human drugs. Clearly, animal data are not sufficient to confirm the safety and efficacy of medicinal plants in humans, owing to physiological differences between species. Furthermore, some conflicting clinical trials have been reported. For example, honey has been reported to be effective in rapidly clearing infection and promoting wound healing, indicating anti-infection activity [81]; however, it has also been reported that honey had no effect on wound or scar dimensions [82]. Therefore, the effectiveness of some such treatments needs to be clarified further.

On the other hand, only four publications among the retrieved papers reported negative results. Genistein inhibited the phosphorylation of c-Raf, MEK1/2, ERK1/2, and p38 proteins, but not of JNK [14]. Asiaticoside had no effect on the expression of Smad2, Smad3, and Smad4 [34], while madecassoside regulated keloid-derived fibroblast proliferation, migration, F-actin filaments, cytoskeletal actin, and the phosphorylation of cofilin via p38 MAPK and PI3K/AKT signaling, but not via ERK1/2 and caspase-8 signaling [12]. Quercetin promoted the phosphorylation of JNK and ERK, but not of p38, and increased the protein and mRNA expression of MMP-1, but not of type I collagen or TIMP-1 [38]. These findings indicate that the antiscar activity of medicinal plants needs to be scrutinized further.

Many traditional medicines used in folk practice are reported to have antiscar activity, but only a few, such as rhubarb [60] and tamarind [63], have been studied systematically in vitro and/or in vivo. Although numerous in vitro studies have substantiated the antiscar activity of plant extracts and phytochemicals, there is very little evidence in humans: the number of clinical trials, and of notable results from them, is limited. The numerous traditional formulations used effectively and extensively in clinics have not been investigated, and the majority of the plants traditionally used as antiscar agents (Tables 1, 2, and 3) have not been investigated in animals. Phytochemicals with in vitro antiscar activity may have no effect in vivo because the required doses are exceedingly high. Moreover, many of these phytochemicals have not been tested for cytotoxicity, acute toxicity, and/or long-term toxicity in normal cells and animals, which seriously limits in vivo investigations. Untoward reactions and cytotoxic effects have been reported for only two medicinal plants. The clinical efficacy and safety of medicinal plant extracts and compounds should therefore be investigated in parallel.

The skin's natural barrier can prevent a drug from passing through the stratum corneum or can decrease the amount permeating, rendering the drug ineffective or only weakly effective. Some adjuvants can significantly improve drug penetrability so that the desired therapeutic effects are achieved. For example, hydroxycamptothecin (HCPT), considered one of the most effective agents against scars, prevents fibroblast proliferation and reduces epidural adhesion, but its poor solubility and short half-life severely limit its clinical application [65]. Some new dosage forms evidently overcome these limitations, such as microemulsions [67], liposomes [66], solid lipid nanoparticles [68], and electrospun fibrous scaffolds [29]. The development of new dosage forms is therefore necessary to improve drug effects.

Although enormous progress has been achieved over recent years, the impact of medicinal plants on individual types of scars needs to be explored in more detail. Polymechanistic phytochemicals such as Genistein, which tackle scar treatment from multiple angles simultaneously, may have an advantage over single-target therapeutics. Genistein can act on many target points, including suppression of PDGF-promoted TPK activation, decrease of type I/III precollagen and PCNA expression, reduction of c-Raf, MEK1/2, ERK1/2, and p38 protein phosphorylation, and inhibition of RTK-Ras-MAPK (ERK/p38) signaling [13]. Further insights into the molecular mechanisms of phytochemicals will facilitate the development of new drugs for the prevention and treatment of human scars.
## 7. Conclusion
In conclusion, the scarring process is complicated. An appropriate therapy for the prevention and treatment of scars should offer simple and easy delivery, effectiveness comparable to current therapies, minimal interaction with concomitant treatments, and a lack of significant side effects [83]. Many extracts and compounds from medicinal plants can inhibit scarring. The main mechanisms are suppression of proliferation and/or induction of apoptosis in scar fibroblasts through the regulation of several pathways, such as MAPK, PI3K/AKT, RhoA/ROCK-I, VEGF, FAK, and TGF-β/Smad. Although the approaches described here are quite different and the mechanisms are complicated, the utility of medicinal plants as antihypertrophic scar agents should be maximized. However, screening is necessary to minimize any potentially harmful side effects on human skin and health.
---
*Source: 101340-2015-03-11.xml* | 101340-2015-03-11_101340-2015-03-11.md | 38,689 | Medicinal Plants for the Treatment of Hypertrophic Scars | Qi Ye; Su-Juan Wang; Jian-Yu Chen; Khalid Rahman; Hai-Liang Xin; Hong Zhang | Evidence-Based Complementary and Alternative Medicine
(2015) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101340 | 101340-2015-03-11.xml | ---
## Abstract
Hypertrophic scar is a complication of wound healing and has a high recurrence rate which can lead to significant abnormity in aesthetics and functions. To date, no ideal treatment method has been established. Meanwhile, the underlying mechanism of hypertrophic scarring has not been clearly defined. Although a large amount of scientific research has been reported on the use of medicinal plants as a natural source of treatment for hypertrophic scarring, it is currently scattered across a wide range of publications. Therefore, a systematic summary and knowledge for future prospects are necessary to facilitate further medicinal plant research for their potential use as antihypertrophic scar agents. A bibliographic investigation was accomplished by focusing on medicinal plants which have been scientifically testedin vitro and/or in vivo and proved as potential agents for the treatment of hypertrophic scars. Although the chemical components and mechanisms of action of medicinal plants with antihypertrophic scarring potential have been investigated, many others remain unknown. More investigations and clinical trials are necessary to make use of these medical plants reasonably and phytotherapy is a promising therapeutic approach against hypertrophic scars.
---
## Body
## 1. Introduction
Scar formation strongly depends on the presence of contraction during healing and the nature of the scar is actually the uneven look of the healed tissue resulting from disfigured tissue deformation and overaligned collagen fibers [1]. Collagen in hypertrophic scars is found to be in a disorganized, whorl-like arrangement rather than in the normal parallel orientation manner. Therefore, hypertrophic scars are indurate, elevated, poorly extensible, and also characterized by hypervascularity, thereby providing their erythematous appearances [2]. HS can cause significant abnormality in aesthetic and functional symptoms and to date no recognized treatment has been established. It commonly occurs after surgical incision, thermal injury, and traumatic injuries to the dermis with a subsequent abnormal healing response [3]. Furthermore, it is often associated with contractures that can lead to considerably reduced functional performance in patients.The development of antihypertrophic scars is an unsolved problem in the process of scar treatment. For this reason, some undiscovered successful treatments are needed to prevent excessive hypertrophic scarring. The reported preventions include topical medical application, cryotherapy, use of silicone gel sheets, injection of steroids, radiotherapy, and an early surgical procedure for wound closure [2]. In the last decade, there has been a renewed interest in the use of indigenous medicine worldwide, arising from the realization that orthodox medicine is not widespread. Although modern medicine may be available in some communities, herbal medicines have often maintained popularity for historical and cultural reasons, in addition to their cheaper costs [4]. Recent research has introduced the uses of phytochemical compounds and extracts isolated from medicinal plants in an attempt to resolve these problems as a promising therapy.Many treatment strategies are sought to prevent scar formation without compromising the wound healing process [5]. The effectiveness of currently used therapy against hypertrophic scar arises most probably from the increase of the medicinal plants reported. In the modern system of medicine, about 25% of prescriptions contain active principle(s) derived from plants [4]. A significant correlation between medicinal plants and their use in the treatment of many types of scars has been shown in epidemiological data generated throughout the world. Published clinical trials have, as yet, largely focused on characterizing the pharmacokinetics and metabolism of medicinal plants. Despite experimental advances in medicinal plant research against scars, findings in humans are still limited. However, in recent years, diverse benefits of medicinal plants in the treatment of hypertrophic scars have been described [6–9].In line with the latest findings responsible for the increased recognition of medicinal plants as potential therapeutic and/or preventative agents, the aim of the present review is to focus on recent experimental findings and clinical trials of medicinal plants and other preparations with similar actions that could account for beneficial effects on hypertrophic scars in patients. Natural products, such as plant extracts, either as pure compounds or as standardized extracts, provide unlimited opportunities for control of hypertrophic scarring owing to their chemical diversity [10]. 
Currently, a great deal of effort is being expended to find alternative sources of safe, effective, and acceptable natural medicinal plants for the treatment or prevention of hypertrophic scars; hence, all literature available was reviewed.
## 2. Suggested Mechanism of Hypertrophic Scarring
The molecular mechanism of hypertrophic scarring is associated with the unusual proliferation of fibroblasts and overproduction of collagen and extracellular matrix [70]. An array of intra- and extracellular mechanisms is essential in the prevention of scar formation. With the help of molecular biology, cell biology technology, hypertrophic scar animal models, and the setting-up of scar tissue engineering, the mechanism of hypertrophic scarring has been clearly defined (Figure 1). It is usually considered as migration and proliferation of different cell types such as keratinocytes, myofibroblasts [59], and mast cells [71]. Fibroblasts play an essential role in new tissue formation during wound repair [33], but their abnormal low death rate and high proliferation rate can cause scar tissue formulation [11]. Meanwhile, keratinocytes are indispensable in signal transduction between paracrine secretion and epithelium matrix. When cultured in the presence of keratinocytes, fibroblasts exhibit significant proliferation activity [72], showing the contribution of keratinocytes to fibroblasts proliferation. Myofibroblasts, which are different from fibroblasts and are related to the composition, organization, and mechanical properties of ECM [73], increase collagen synthesis and retard cell migration [71], thus resulting in excessive and rigid scarring. Fibroblasts are transformed into myofibroblasts by heterocellular gap junction intercellular communications between mast cells (RMC-1) and fibroblasts [71, 74]. In the process of wound healing, the combination of fibroblasts and myofibroblasts triggers excessive production of abnormal extracellular matrix protein [75], eliciting scarring [1, 75]. With the assistance of keratinocytes and mast cells, proliferative fibroblasts produce massive collagen which makes extracellular matrix accumulate below dermis, leading to scar formation. The complex forming process consists of three different phases, inflammation, proliferation, and maturation, which leads to hypertrophic scarring in the end [76]. The ratio of I to III collagens in healthy adults ranges from 3.5 to 6.1, while in patients with hypertrophic scars, it could be down to 2 and in keloid patients it can be as high as 19, which is related to the abnormal metabolism of collagens I and III in pathological scars, including more collagen synthesis and less collagen degradation.Figure 1
The mechanism of hypertrophic scarring.Although many targets of action, by which scarring can be inhibited, have been experimentally studied or postulated, few are well known or defined for inhibition of hypertrophic scarring by plant-derived compounds. Figure2 and Tables 5 and 6 summarize and enumerate the suggested mechanisms and correlative medicinal plants.Figure 2
The mechanisms by which extracts and compounds from medicinal plants display antihypertrophic scar activity.The size of a scar is influenced by many factors, such as wound size, wound contraction, and healing time. Wound contraction makes an important contribution to scar formation also the larger the area of the wound, the more cells migrate, resulting in more prominent scarring [1]. Therefore, induction of fibroblast apoptosis and reduction of extracellular matrix and collagen I/III production may be the pivotal measures against hypertrophic scarring.Many kinds of test models are applied to investigate wound healing mechanisms and inhibition of scar formation, including 2D hybrid agent-based model [1], pig surgical injury model, fibroblast populated collagen lattice (FPCL) model, rat laminectomies at Lumbar-1 level [5], incisional wound healing model [6], and rabbit ear model [54]. These models provide a mean for detecting and evaluating the mechanobiology in wound healing and scar formation [1].However, the complex mechanism of hypertrophic scarring still remains unknown which raises the question of how to control scar hyperplasia.
## 3. Medicinal Plants against Hypertrophic Scarring
Many beneficial uses of medicinal plants are extensively documented in the traditional medicine systems in many cultures. To collect the data which supports this finding, we performed a systematic review using PubMed, Elsevier, Springer, and Google Scholar databases and peer-reviewed articles published in the last 10 years. The search terms included scar, scaring, fibroblast, extract, and preparation. The phytochemicals from medicinal plants against scar hyperplasia are presented in Tables1 and 2, respectively, whilst the medicinal plant extracts are listed in Table 3. Their activities and mechanisms for antihypertrophic scarring were also described, respectively, in Tables 1, 2, and 3. There are five preparations (Table 4) reported on their effects and mechanisms of antihypertrophic scarring, namely, liposome-encapsulated 10-HCPT, oxymatrine-phospholipid complex (OMT-PLC), solid lipid nanoparticle-enriched hydrogel (SLN-gel), Ginsenoside Rg3/poly (l-lactide) (G-Rg3/PLLA), andCentella asiatica extract capsule, which are composed of different medicinal plants and vehicles. Medicinal plants can be used for different therapeutic purposes or as precursors of useful drugs containing different types of phytochemicals.Table 1
The components from medicinal plants with antihypertrophic scar activity.
Component
Botanical name
Family
Medicinal part
Observation
Dose
Effect
Mechanism of action
References
Madecassoside
Centella asiatica
Umbelliferae
Whole plant
In vitro
10~100μM
Antiproliferation of HSFBsPromotion of apoptosis Diminishment of scar formationFacilitation of wound healing
Inhibition of HKF migration, F-actin filaments protein, and cytoskeletal protein. Promotion of nuclear shrinkage and mitochondrial membrane depolarization Condensation of chromatin and fragment of nuclei Inhibitory phosphorylation of p38, PI3K, AKT, and cofilin. Activation of caspase-3/caspase-9 Facilitation of Bax mRNA expression and decrease of Bcl-2 and MMP-13 mRNA expression
[11, 12]
Genistein
Glycine max
Leguminosae
Fruit
In vitro
25~100μg/mL
Anti-proliferation of HSFBsSuppression of mitosisPromotion of apoptosis
Inhibition of TPKs, increase of caspase-3, and decreases ofα-SMA and Bcl-2 protein Enhancement of Bax protein Inhibition of types I/III precollagen mRNA expression, down-regulation of collagen I/III mRNA, reduction of PCNA expression, and inhibitory phosphorylation of c-Raf, MEK1/2, ERK1/2, and p38 Induction of morphology changes of apoptosis cells Inhibitory transdifferentiation of fibroblasts into myofibroblasts
[13–17]
37~370µM
Decrease of G0-G1 phase and increase of G2-M phase Increase of C-JUN mRNA expression and decrease of FOS-B mRNA expression in skin keratinocytes Inhibitory mRNA expression of C-JUN and C-FOS in human fibroblasts In keloid fibroblasts, decrease of C-JUN and C-FOS mRNA expression at 37 µM, but enhancement at 370 µM
Astragaloside IV
Astragalus Membranaceus
Leguminosae
Root
In vitro
12.5~200μM
Antiproliferation of HSFBs
Decrease of collagen I/collagen III and TGF-β1 secretion
[18]
Tetrandrine
Stephania tetrandra
Menispermaceae
Root
In vitro
10~80 μM
Antiproliferation of HSFBs
Inhibition of TGF-β1 mRNA transcription, promotion of Smad7 and MMP-1 mRNA expression, and inhibition of Smad2 mRNA expression Decrease of protein expression of collagen I/collagen III, Bcl-2, and MKP-1. Reduction of total collagen volume and S phase, increase of G0/G1 phase, and prevention of G0/G1 into G2 phase Inhibitory phosphorylation of MEK1/2 and ERK1/2
[19–22]
In vivo
1~10 mg/L
Local injection
0.5~2 mg/L50 mg/mL, 20 μL
Aloe-emodin
Rheum palmatum
Polygonaceae
Root, rhizome
In vitro
20~80 mg/L
Antiproliferation of HSFBs
Increase of S phase
[23]
5F
Pteris semipinnata
Pteridaceae
Whole plant
In vitro
20~80 μg/mL
Antiproliferation of HPS
Blockage of fibroblasts from G1 to S phase Decreased protein expression of TGF-β1 and type I collagen, increase of caspase-3, and reduction of total collagen and fibroblasts PCNA protein (cyclin) Inhibitory mRNA expression of type I/type III procollagen in SSSF
[24–26]
In vivo
10~40 mg/L
Reduction of PS volume
40~120 mg/L
Antiproliferation of SSSF
Local injection
20~80 mg/L
Promotion of HPS apoptosis Decrease of hypertrophic index
Reduction of collagen fiber content
Oxymatrine
Sophora japonica
Leguminosae
Root
In vitro
0.125~1.0 mg/mL2 μM
Antiproliferation of KFb and HFb Promotion of KFb apoptosis
Increase of S phase, inhibitory mRNA expression of collagen I/collagen III and reduction of protein expression of Smad3 and ERK1 Promotion of Smad7 protein expression Inhibition of p-Smad3 and nuclear translocation of Smad3
[27, 28]
Ginsenoside Rg3 (G-Rg3)
Panax ginseng
Araliaceae
Root, rhizome
In vivo Local injection
3 mg/mL, 0.1 mL
Inhibition of HSDecrease of scar tissue fibrosis
Increase of protein expression of PCNA, Bax, caspase-3, and Cyt-c Decrease of Bcl-2 protein expression
[29, 30]
Osthole
Cnidium monnieri
Apiaceae
Fruit
In vitro
5~50μM
Antiproliferation of HSFBs and Induction of apoptosis
Promotion of Bax mRNA expression and inhibition of Bcl-2 mRNA expression Decreases of TGF-β1 protein expression and facilitation of HSFBs shrinkage, chromatin condensation, membrane blebbing, apoptotic body formation, and DNA ladder formation
[31]Table 2
Antihypertrophic scar displaying phytochemicals widely distributed in medicinal plants.
Phytochemicals
Observation
Dose
Effect
Mechanism of action
References
10-Hydroxycamptothecin (HCPT)
In vivo
0.01~0.1 mg/mL
Decrease of the area of epidural scar tissue and the number of fibroblasts. Reduction of epidural adhesion and inhibitory proliferation of RESF
Inhibition of topoisomerase I
[5]
Angelica naphtha
In vitro
1~16 mg/L
Antiproliferation of HSFBs and induction of HSFBs apoptosis
Inhibition of G0/G1 and G2/M phases, promotion of S phase, and reduction of collagen protein in fibroblasts
[32]
Asiaticoside
In vivo In vitro
25~50 mg/mLLocal injection25~1000 μM300 μg/mL
Reduction of scar hyperplasia of HSREDecrease of hypertrophic indexPromotion of keratinocytes migrationAnti-proliferation of HSFBs
Inhibition of the mRNA expression of TGF-β1, RhocA, ROCK-I, and CTGF, facilitation of TGF-β3 mRNA expression, and decrease of the expression of types I/III collagen and TIMP-1 proteins
[33–36]
Matrine
In vitro
0.01~5.00 g/L
Antiproliferation and induction of apoptosis in HSFBs
Promotion of G2-M phase, inhibition of lactate dehydrogenase and Hyp and enhancement of I/III collagen ratio
[37]
Quercetin
In vivo
0.05%~1%, w/oLocal Application
Inhibition of scarring in hairless mice
Increase of the protein and mRNA expression of MMP-1 and enhancement of the phosphorylation of JNK and ERK
[38]
In vitro
10~40μM
Antiproliferation of HSkF
Emodin
In vitro
50~200μg/mL
Antiproliferation of HSFBs
Inhibition of G0/G1 phase, increase of intracellular calcium, and decrease of collagen synthesis
[39–41]
Resveratrol
In vitro In vivo
25~400μM 150~400 μM Local injections
Antiproliferation of HSFBs Reduction of hypertrophic scar index
Inhibition of the mRNA expression of type I/type III procollagens
[42]
Tan IIA
In vitro
20~80μg/mL0.05~0.15 mg/mL
Antiproliferation of HSFBsInduction of HSFBs apoptosis
Facilitation of nuclei shrinkage, condensation and fragmentation, blockage of HSFBs from G1 to S phases, downregulation of MDA content and XOD activity, increase of T-SOD and GSH-Px activity, and promotion of MMP-1 mRNA expression
[43–45]
Curcumin
In vitro
12.5~100μM
Antiproliferation of HSFBs
Inhibition of procollagen 1 mRNA expression Reduction of hypertrophic index and collagen fiber area density
[46]
In vivo
0.5~2.0 mM, 0.1 mL/d Local injections
Dihydroartemisinin
In vivo
180 mg/kg
Inhibition of HSRE scarring
Inhibition of collagen fibers and hypertrophic index
[47]
10 mL intragastric administration
Antifibroblast proliferation of HSRE
Arteannuin
In vitro
0.103~0.206 mg/mL
Antiproliferation of HSFBs
Congregation of nuclear chromatin, promotion of calcium concentration, increase of G0-G1 phase, and reduction of collagen levels and hypertrophic index of HSRE
[48–51]
In vivo
60 mg/mL/2 d
Decrease of HSRE scarring
20μL local injection
Antiproliferation of mastocyte
Panax notoginseng saponins (PNS)
In vitro
400~800μg/mL
Antiproliferation of HSFBs
Inhibition of G2-M and G0-G1 phases, increase of S phase, reduction of the protein expression of TGF-β1 and α-SMA, and inhibition of intracellular free calcium concentration
[52, 53]
Oleanolic Acid
In vivo
Topical application of 2.5, 5, and10% for 28 consecutive days
Inhibition of hypertrophic scarring, induction of apoptosis, and reduction of scar elevation index
Inhibition of the mRNA expression of TGF-β1 mRNA, MMP-1, TIMP-1, and P311. Increase of the mRNA expression of MMP-2, caspase-3, and caspase-9. Reduction of the protein expression of TGF-β1 and collagen I/collagen III
[54]
Hirudin
In vitro
1~50μM
Promotion of apoptosis
Increase of Gl phase and inhibition of S phase Enhancement of the protein expression of MMP-2, MMP-9, and p27, reduction of the protein expression of cyclin E and TGF-β1, and inhibition of the mRNA expression of I/III procollagens
[55]
Xiamenmycin
In vivo
10 mg/kg·d−1, intraperitoneal injection for 10 days
Attenuation of hypertrophic scarring and suppression of local inflammation in a mechanical stretch-induced mouse mode
Reduction of CD4+ lymphocyte and monocyte/macrophage retention in fibrotic foci Blockage of fibroblast adhesion with monocytes.Inactivation of FAK, p38, and Rho guanosine triphosphatase signaling
[56]
In vitro
5–30μg/mL
Inhibition of proliferation of HSFBsTable 3
The extracts from medicinal plants displaying anti-hypertrophic scarring.
Extract
Botanical name
Family
Medicinal part
Observation
Dose administration
Effect
Mechanism of action
References
Ethanolic extract
Calotropis gigantea
Asclepiadaceae
Root, bark
In vivo
100~400 mg/kgintragastric administration
Increase of wound contraction and decrease of scar area and the time of epithelization
Increase of hydroxyproline and collagen synthesis
[6]
Ethanolic extract
Daucus carota
Apiaceae
Root
In vivo
1, 2, and 4%epidermal administration
Decrease of wound area, epithelization period, and scar width. Increase of wound contraction
Increase of hydroxyproline content. Antioxidant and antimicrobial activities
[7]
Methanolic extract
Pistia stratiotes
Araceae
Leave
In vivo
5 and 10%epidermal administration
Decrease of wound area
Inhibition of hydroxyl radical scavenging and increase of fibroblast blood vessels and collagen fibers
[8]
Ethyl acetate extract
Gelidium amansii
Gelidiaceae
Whole plant
In vitro
5~10 mg/mL
Antiproliferation of HSFBs
Decrease of the protein expression of I/III collagens and TGF-β1
[9]
Ethanolic extract
Carthamus tinctorius
Asteraceae
Flower
In vitro
2~8μg/mL
Antiproliferation of HSFBs
Inhibition of collagen protein synthesis and promotion of fibroblast shrinkage
[57]
Aqueous extract
Oenothera paradoxa
Onagraceae
Seed
In vitro
0.1~10μg/mL
Protection of normal dermal fibroblasts
Decrease of LDH and ROS
[58]
Aqueous extract
Cigarette Smoke
Unknown
Unknown
In vitro
100% saturated solution
Antiproliferation of skin fibroblasts and promotion of cellular senescence
Inhibition of SOD and GSH-Px and promotion of ROS
[59]
Ethyl acetate extract
Rheum palmatum
Polygonaceae
Root, rhizome
In vitro
25μg/mL
Antiproliferation of HSFBs
Increase of G0/G1 phase
[60]
Methanol extract
Broussonetia kazinoki
Moraceae
Bark, root
In vitro
Unknown
Inhibition of hyperpigmentation
Reduction of tyrosinase enzyme synthesis
[61]
Ethanol extract
Scutellaria baicalensis Georgi
Lamiaceae
Root
In vivo
10 mg/mL epidermal administration
Inhibition of scarring
Reduction of the protein expression of TGF-β1
[62]
Aqueous extract
Allium cepa
Liliaceae
Corm
In vivo
1~2.5%, v/vlocal application
Suppression of scarring in hairless mice
Upregulation of MMP-1 and type I collagen expression
[38]
In vitro
1~2.5%, v/v
Antiproliferation of fibroblasts
Aqueous extract
Tamarindus indica
Fabaceae
Bark, leave
In vivo
Unknown
Anti-inflammation
Elimination of death cells and necrotic tissues
[63]
Ethanol extract
Aneilema keisak
Commelinaceae
Whole plant
In vitro
40μg/mL
Decrease of scarring
Inhibition of TGF-β1-dependent signalling by reducing Smad2 protein. Reduction of various hKF pathological responses, including hyperplastic growth, collagen production, and migration without DNA damage
[64]Table 4
The preparations from different medicinal plants with antihypertrophic scar activity.
Preparations
Botanical name
Family
Medicinal part
Preparation
Vehicle
Delivery system
Observation
Effect
Mechanism of action
References
Hydroxycamptothecin (HCPT)
Camptotheca acuminata
Nyssaceae
Fruit, leave
Liposome-encapsulated 10-HCPT
Liposome
Liposome-encapsulated
In vivo Implant
Antiproliferation of fibroblasts and reduction of epidural adhesion
Decrease of epidural scar area and fibroblast number in the epidural scar tissue
[65, 66]
Oxymatrine (OMT)
Sophora flavescens, Sophora alopecuroides, and Sophora subprostrata
Leguminosae
Unknown
Oxymatrine-phospholipid complex (OMT-PLC)
Phospholipid
Microemulsion
In vitro In vivo topical delivery
Antiproliferation of fibroblasts
Improvement of OMT skin permeability and increase of retention ratio of OMT in skin.
[67]
Astragaloside IV
Astragalus membranaceus
Leguminosae
Root
Solid lipidnanoparticle-enriched hydrogel (SLN-gel)
Lipid hydrogel
Solid lipid nanoparticle, hydrogel
In vitro In vivo,topical delivery
Enhancement of keratinocytes migration and proliferation Increase of drug uptake in fibroblasts Promotion of wound healing and inhibition of scar formation
Caveolae endocytosis pathway. Increase of wound closure rate and angiogenesis Improvement of collagen regular organization
[68]
Ginsenoside Rg3 (G-Rg3)
Red Panax ginseng
Araliaceae
Root, rhizome
Ginsenoside Rg3/Poly (l-lactide) (G-Rg3/PLLA)
Electrospun poly(L-lactide) fiber
Electrospun fibrous scaffolds, nanofibers
In vitro In vivo
Inhibition of fibroblast cell growth, antiproliferation of fibroblasts, and prevention of scar formation
Improvement of dermis layer thickness, collagen fibers, and microvessels
[29]
Centella asiatica extract
Centella asiatica
Apiaceae
Whole plant
Centella asiatica extract capsule
Capsule
Nothing
In vivo
Inhibition of tissue overgrowth, reduction of scar and keloid, and anti-inflammation
Promotion of collagen I protein expression, collagen remodeling, and glycosaminoglycan synthesis Enhancement of collagen and acidic mucopolysaccharides
[69]Table 5
Summary of antiscarring mechanisms of medicinal plant components.
Mechanism
Medicinal plant component
MAPK pathway
Inhibition of p-p38 signaling
Madecassoside, Genistein, and Xiamenmycin
Inhibition of p-ERK1/2 signaling
Genistein, Tetrandrine, Cryptotanshinone, and Quercetin
Inhibition of p-JNK signaling
Quercetin
PI3K/AKT signaling
Madecassoside
Mitochondrial-dependent pathway
Increase of Bax
Madecassoside, Genistein, Ginsenoside Rg3, and Osthole
Decrease of Bcl-2
Madecassoside, Genistein, Tetrandrine, Ginsenoside Rg3, and Osthole
Increase of cytoplasm Cyt-c
Ginsenoside Rg3
Cell cycle
Decrease of G0-G1 phase
Genistein, Angelica naphtha, Emodin, and Panax notoginseng saponins
Increase of G2-M
Genistein
Decrease of S phase
10-Hydroxycamptothecin, Tetrandrine, Aloe emodin, and Hirudin
Prevention from G0/G1 into G2 phase
Tetrandrine
RhoA/ROCK-I signal pathway
Inhibitory secretion of RhocA, ROCK-I, and CTGF
Xiamenmycin
VEGF signal pathway
Cryptotanshinone
FAK signal pathway
Cryptotanshinone, Xiamenmycin
TGF-β/Smad signaling pathway
Oxymatrine
Downregulation of collagen I/III expression
Genistein, Astragaloside IV, Tetrandrine, Resveratrol, 5F, Curcumin, Oleanolic Acid, and Hirudin
Decrease ofα-SMA
Genistein, Panax notoginseng saponins
Activation of caspases
Activation of caspase-3
Madecassoside, Genistein, 5F, Cryptotanshinone, Oleanolic Acid, and Ginsenoside Rg3
Activation of caspase-9
Madecassoside, Oleanolic Acid
Suppression of TPK activation
Kazinol F
Inhibition of topoisomerase I
10-Hydroxycamptothecin
Decrease of TGF-β1 secretion
Tetrandrine, Panax notoginseng saponins, Osthole, and Hirudin
Inhibition of TGF-β1 transcription
Astragaloside IV, Oleanolic Acid
Downregulation of TIMP-l expression
Oleanolic Acid
Reduction of LDH and increase of the ratio of collagen I/collagen III
Matrine
Increase of T-SOD and GSH-Px activity
Tan IIA
MMP
Enhancement of MMP-1
Tetrandrine, Tan IIA, and Oleanolic Acid
Enhancement of MMP-2 and MMP-9
Hirudin
Enhancement of MMP-13
Madecassoside
Increase of intracellular calcium
Emodin, ArteannuinTable 6
Summary of antiscarring mechanisms of plant extracts.
| Mechanism | Medicinal plant extract |
| --- | --- |
| **Cell cycle** | |
| Increase of G0-G1 phase | Rhubarb |
| **Collagen** | |
| Downregulation of collagen I expression | Gelidium amansii |
| Downregulation of collagen III expression | Gelidium amansii, Scutellaria baicalensis Georgi |
| Enhancement of collagen synthesis | Calotropis gigantea |
| Inhibition of collagen synthesis | Carthamus tinctorius |
| Promotion of collagen I | Onion |
| **MMP** | |
| Enhancement of MMP-1 | Neonauclea reticulata, Onion |
| Increase of MMP-3 and MMP-9 | Neonauclea reticulata |
| Elimination of hydroxyl radical | Pistia stratiotes |
| Decrease of LDH | Oenothera paradoxa |
| Decrease of ROS | Oenothera paradoxa, Neonauclea reticulata |
| Increase of ROS and reduction of SOD and GSH-Px | Cigarette smoke |

The use of herbal remedies has been steadily increasing worldwide in recent years, as has the search for new phytochemicals that could be developed into useful drugs for the treatment of hypertrophic scars and other scar diseases [4]. The antihypertrophic scar activity of medicinal plants derives from the variety of components these plants contain (Tables 1 and 2). Many plant extracts (Table 3) have antihypertrophic scar activity owing to their phytochemical constituents. However, more work is needed on the purification and identification of active components and on elucidating the roles these play in scar inhibition when used alone or in combination. Moreover, many of them have not been tested for cytotoxicity to normal cells, which seriously hinders in vivo investigations. Notably, some active components have shown no toxic or side effects. For example, Genistein, which is easily obtained and commonly used for hypertrophic scar treatment, has strong pharmacological effects with no obvious toxicity or side effects [13].
## 4. New Preparations of Medicinal Plants
A large number of extracts and compounds from medicinal plants display antiscar activity. Nevertheless, drugs have difficulty passing through the stratum corneum, the natural barrier of the skin, which lowers their permeability. The oral bioavailability of drugs at the permissible dose is also very low, owing to their hydrophilicity (low permeability), poor absorption, and biotransformation, or to the compactness of scar tissue. An appropriate formulation can markedly improve drug permeability, lipid solubility, skin penetration, retention ratio, release time, and cytotoxicity profile. Hydroxycamptothecin (HCPT) is considered one of the most effective components against scars. However, its poor solubility and short half-life severely limit its clinical applications [65]. Compared with free HCPT, liposome-encapsulated HCPT (L-HCPT) reduces epidural fibrosis by preventing the proliferation of fibroblasts in scar tissue, with a longer half-life and better solubility [65]. The application of a silicone derivative to herbal extracts can improve skin pliability and alleviate the concomitant symptoms of scars, including pain and itching [2]. However, controlling the cytotoxicity of biomaterials is extremely important for their clinical application. Microemulsion, a transparent dispersion system, is a good vehicle for drug delivery owing to its many advantages, such as thermodynamic stability (long shelf life), easy formation (near-zero interfacial tension), low viscosity, high surface area (high solubilization capacity), and small droplet size [67]. Drug-free microemulsions have been shown to be a promising preparation owing to their negligible cytotoxicity [67]. Local or transdermal application of water-soluble pharmaceutical formulations may be suitable for medicinal plant extracts and compounds.

Owing to the compactness of scar tissue, natural products or crude extracts need to be combined with adjuvants in new dosage forms to increase their solubility, content, release time, uptake, and penetrability. These dosage forms include microemulsions [67], liposomes [66], solid lipid nanoparticles [68], and electrospun fibrous scaffolds [29]. Improving drug permeation may be a promising direction for future research on the known medicinal plants.

In addition, some of these plant extracts or purified chemical components are prepared as traditional medicinal injections for deep antiscar treatment. For example, Carthamus tinctorius injection, whose primary component is hydroxysafflor yellow A, softens hypertrophic scar tissue and inhibits fibroblast proliferation by decreasing the type I/type III collagen ratio and the TGF-β1 level after local treatment [77]. Radix astragali injection also inhibits proliferation and reduces scar thickness and hardness by reducing Smad3 and TGF-β1 levels [78].
## 5. Current Treatment and Prospects for Future Therapies
Currently, occlusive dressings, compression therapy, intralesional steroids, cryosurgery, laser therapy, radiation, surgical excision, and interferon therapy are curative for the majority of patients with hypertrophic scars [79]. Surgical therapy and excision of the fibrotic fraction are common approaches for treating hypertrophic scars. However, significant disadvantages have been reported, such as recurrence of adhesion after surgery in 45%–100% of cases [54], which seriously limits the broad application of surgery to scar prevention. Accordingly, physiotherapy has been established, including occlusive dressings, pressure therapy, cryosurgery, radiation therapy, and laser therapy. Meanwhile, pharmacotherapy is also frequently applied, such as intralesional corticosteroid injection and topical drug treatment with interferon, bleomycin, 5-fluorouracil, verapamil, vitamin E, imiquimod, TGF-β3, or interleukin-10 [79, 80]. Pharmacotherapy mainly inhibits the inflammation, proliferation, and remodeling phases [7] or modifies ECM metabolism by interfering with pivotal molecules of MAPK, TGF-β, and PI3K signal transduction.

However, there is as yet no ideal treatment for hypertrophic scars, and some chemical drugs also cause adverse effects. Many natural products from medicinal plants have good antiscar activity and show notable advantages owing to their fewer side effects. Therefore, in addition to the widespread use of surgical therapy, physiotherapy, and pharmacotherapy, there is a great need to develop new natural drugs that are more efficient than, or synergize with, existing ones. Purified natural products derived from medicinal plants, such as Ginsenoside Rg3 [29], Oleanolic Acid [54], Resveratrol [42], Asiaticoside [34], and Genistein [13], are abundant in nature and are popular as antiscar agents because they are readily obtainable and have fewer side effects. Hence, we have reviewed the major current herbs and their preparations applied to the treatment of hypertrophic scars.

It remains a challenge to identify and evaluate a safe, wholesome, and effective natural product against scars. Even though a number of new products have been characterized in pharmacological tests over the last decades, many others remain unknown or untested.
## 6. Discussion
In this review, we gathered publications on medicinal plants with antihypertrophic scar activity and addressed the question of whether treating scars with medicinal plants is effective in humans. Although in vivo and in vitro investigations play an important role in evaluating the safety and effectiveness of medicinal plants in preclinical trials, they are no guarantee of ultimate success as human drugs. Clearly, animal data are not sufficient to confirm the safety and efficacy of medicinal plants in humans, owing to differences in physiological structure. Furthermore, some conflicting clinical trials have been reported. For example, honey was reported to be effective in rapidly clearing infection and promoting wound healing, indicating anti-infective activity [81]. However, it was also reported that honey did not affect wound healing, scarring, or wound length [82]. Therefore, the effectiveness of some drugs needs further clarification.

On the other hand, only four of the publications we retrieved reported negative results. Genistein phosphorylated c-Raf, MEK1/2, ERK1/2, and p38 proteins, but not JNK protein [14]. Asiaticoside had no effect on the expression of Smad2, Smad3, and Smad4 [34], while madecassoside regulated keloid-derived fibroblast proliferation, migration, F-actin filaments, the cytoskeletal protein actin, and the phosphorylation of cofilin via p38 MAPK and PI3K/AKT signaling, but not ERK1/2 or caspase-8 signaling [12]. Quercetin promoted phosphorylation of JNK and ERK, but not p38; it increased the protein and mRNA expression of MMP-1, but not of type I collagen or TIMP-1 [38]. These studies indicate that the antiscar activity of medicinal plants needs further scrutiny.

Many traditional medicines used in folk medicine are reported to have antiscar activity, but only a few, such as rhubarb [60] and tamarind [63], have been studied systematically in vitro and/or in vivo. Although numerous in vitro studies have substantiated the antiscar activity of plant extracts and phytochemicals, there is very little evidence in humans. The number of clinical trials, and the results they have highlighted, are limited. Numerous traditional formulations used effectively and extensively in clinics have not been investigated. Also, the majority of the plants (Tables 1, 2, and 3) traditionally used as antiscar agents have not been investigated in animals. Phytochemicals with in vitro antiscar activity may have no effect in vivo because the in vitro doses are exceedingly high. Moreover, many of these phytochemicals have not been tested for cytotoxicity, acute toxicity, or long-term toxicity in normal cells and animals, which seriously limits in vivo investigations. Adverse reactions and cytotoxic effects have been reported for only two medicinal plants. Clinical efficacy and safety should be investigated simultaneously for medicinal plant extracts and compounds.

The natural barrier of the skin can prevent a drug from passing through the stratum corneum or decrease the amount of drug permeation, rendering drugs ineffective or poorly effective. Some adjuvants can significantly improve drug penetrability so that the desired therapeutic effects can be achieved. For example, hydroxycamptothecin (HCPT) is considered one of the most effective agents against scars, preventing fibroblast proliferation and reducing epidural adhesion, but its poor solubility and short half-life severely limit its clinical application [65]. Some new dosage forms, such as microemulsions [67], liposomes [66], solid lipid nanoparticles [68], and electrospun fibrous scaffolds [29], markedly improve these properties. Therefore, the development of new dosage forms is necessary to enhance drug effects.

Although enormous progress has been achieved in recent years, the impact of medicinal plants on individual types of scars needs to be explored in more detail. Polymechanistic phytochemicals such as Genistein, which tackle scarring from multiple angles simultaneously, may have an advantage over targeted therapeutics. Genistein acts on many targets, including suppression of PDGF-promoted TPK activation, decrease of type I/III procollagen and PCNA expression, reduction of c-Raf, MEK1/2, ERK1/2, and p38 protein phosphorylation, and inhibition of RTK-Ras-MAPK (ERK/p38) signaling [13]. Further insight into the molecular mechanisms of phytochemicals will facilitate the development of new drugs for the prevention and treatment of human scars.
## 7. Conclusion
In conclusion, the scarring process is complicated. An appropriate therapy for the prevention and treatment of scars should offer simple and easy delivery, effectiveness comparable to current therapies, minimal interaction with concomitant treatments, and a lack of significant side effects [83]. Many extracts and compounds from medicinal plants can inhibit scarring. The main mechanisms are suppression of proliferation and/or induction of apoptosis in scar fibroblasts through regulation of several pathways, such as MAPK, PI3K/AKT, RhoA/ROCK-I, VEGF, FAK, and TGF-β/Smad. Although the approaches described here are quite different and the mechanisms are complicated, the utility of medicinal plants as antihypertrophic scar agents should be maximized. However, screening is necessary to minimize any potentially harmful side effects on human skin and health.
---
*Source: 101340-2015-03-11.xml* | 2015 |
# Ameliorative Potential of Resveratrol in Dry Eye Disease by Restoring Mitochondrial Function
**Authors:** Jingyao Chen; Weijia Zhang; Yixin Zheng; Yanze Xu
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013444
---
## Abstract
Background and Significance. Dry eye disease (DED) is a prevalent ocular surface disease with a high incidence worldwide that is caused by a variety of factors, including mitochondrial dysfunction. Resveratrol has been confirmed to protect the ocular surface in DED, and as an antioxidant, resveratrol can maintain mitochondrial function. Therefore, we investigated whether resveratrol can improve DED by restoring mitochondrial function. Methods. Mitochondrial dysfunction in HCE-2 human corneal epithelial cells was induced by hyperosmotic exposure and treated with resveratrol (50 μM). Western blotting was used to detect the expression of the antioxidant proteins SOD2 and GPx and of SIRT1, and flow cytometry was used to detect cell apoptosis and ROS production. The DED mouse model was induced with 0.2% benzalkonium chloride (BAC) and treated with resveratrol. Tear production was measured by the phenol red cotton thread test, the density of goblet cells in the conjunctiva was measured by periodic acid-Schiff (PAS) staining, and the expression levels of SIRT1, GPx, and SOD2 in lacrimal glands were detected by Western blotting. Results. Under hypertonic conditions, the apoptosis of HCE-2 cells increased, the expression of the antioxidant proteins SOD2 and GPx decreased, ROS production increased, and the expression of SIRT1, an essential regulator of mitochondrial function, was downregulated. Treatment with resveratrol reversed the mitochondrial dysfunction mediated by high osmotic pressure. In the DED mouse model, resveratrol treatment promoted tear production and goblet cell numbers, decreased corneal fluorescein staining, upregulated SIRT1 expression, and induced SOD2 and GPx expression. Conclusion. Resveratrol alleviates mitochondrial dysfunction by promoting SIRT1 expression, thus reducing ocular surface injury in mice with dry eye. This study suggests a new path against DED.
---
## Body
## 1. Introduction
DED is a prevalent ocular surface disorder caused by inadequate production of tears and excessive tear evaporation. Of note, the prevalence of DED in the world population ranges from 6 to 34%, and it is higher in the aging population [1]. Thus, effective therapeutic strategies are urgently needed for remitting DED.

Emerging evidence indicates that mitochondrial dysfunction is responsible for pathological processes, including but not limited to neurodegenerative disease [2], cancer [3], and DED [4]. Studies have shown that mitochondrial function is a crucial component in the progression of DED. For example, DDIT4 knockdown restores mitochondrial function under hyperosmolarity and preserves the viability of human corneal epithelial cells [5]. Moreover, the modulation of mitochondrial homeostasis is related to the outcome of DED [6]. Recent studies suggest that antioxidant administration may restore mitochondria. Resveratrol (3,5,4′-trihydroxy-trans-stilbene), a natural plant product, has been reported to have antioxidant effects and to maintain mitochondrial function [7, 8]. The protective role of resveratrol in mitochondrial dysfunction-related diseases, such as cardiac diseases [9], hypoxic ischemic injury [10], and neurodegenerative disorders [11], has been well established. It is worth noting that resveratrol has been reported to protect the ocular surface in experimental DED [12]. However, the underlying mechanism by which resveratrol ameliorates DED remains obscure.

Mammalian sirtuin 1 (SIRT1) is a highly conserved NAD(+)-dependent deacetylase that has been reported to be engaged in the regulation of mitochondrial biogenesis [13]. Aberrant expression of SIRT1 leads to mitochondrial dysfunction, thereby enhancing pathological processes [14]. Earlier research revealed that SIRT1 expression is decreased in diabetic dry eye [15], indicating that SIRT1 may function in DED. It is well known that resveratrol is a potent activator of SIRT1 [16], and its antioxidative effect is achieved by upregulating SIRT1 expression [17]. For instance, resveratrol improves mitochondria and protects against metabolic disease by activating SIRT1 [18], and it activates SIRT1 to alleviate cardiac dysfunction through mitochondrial regulation [19]. However, the relationship between resveratrol and SIRT1 in DED is unknown.

Here, we demonstrate that resveratrol treatment attenuates hyperosmolarity-induced mitochondrial dysfunction in human corneal epithelial cells (HCEpiCs). SIRT1 is reduced in hyperosmolarity-treated HCEpiCs, while resveratrol upregulates SIRT1 expression. Moreover, we found that resveratrol restores mitochondrial function by inducing SIRT1 expression. Consistently, resveratrol ameliorated dry eye symptoms in the DED mouse model. Thus, our results establish a novel mechanism by which resveratrol attenuates DED by facilitating SIRT1 expression.
## 2. Materials and Methods
### 2.1. Cell Culture and Treatment
Human corneal epithelial cells HCE-2 [50.B1] (CRL-11135) were acquired from ATCC (Manassas, VA, USA). Cells were cultured at 37°C in a humidified 5% CO2 atmosphere in Dulbecco’s modified Eagle’s medium (DMEM, Gibco) supplemented with 10% fetal bovine serum (FBS, Gibco) and 1% v/v penicillin/streptomycin (Gibco). For the DED cell model, HCEpiCs were cultured for 24 h in medium with 0 or 94 mM added NaCl, corresponding to isotonic (312 mOsM) and hyperosmolar (500 mOsM) conditions, respectively. For resveratrol treatment, HCEpiCs were administered 50 μM resveratrol; the vehicle (alcohol) had a final concentration of 0.5% (nontoxic to cells) [20].
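As a quick consistency check on these conditions: NaCl dissociates into two osmotically active ions, so 94 mM added NaCl contributes roughly 188 mOsM, taking the 312 mOsM isotonic medium to about 500 mOsM. A minimal sketch of this arithmetic (the ideal-dissociation assumption is ours):

```python
# Back-of-envelope osmolarity bookkeeping for the hyperosmolar model
# (assumes ideal dissociation of NaCl into 2 osmotically active ions).
BASE_MOSM = 312        # isotonic medium, as stated in the text
ADDED_NACL_MM = 94     # mM NaCl added for the hyperosmolar condition

added_mosm = 2 * ADDED_NACL_MM   # Na+ and Cl- each contribute
print(BASE_MOSM + added_mosm)    # -> 500, matching the 500 mOsM condition
```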
### 2.2. Cell Apoptosis Assay
The apoptosis of the indicated cells was analyzed with an Annexin V-FITC apoptosis detection kit (C1062S, Beyotime). Briefly, cells were collected and resuspended in PBS. After centrifugation, the supernatant was discarded, and the cells were resuspended in binding buffer. Subsequently, 5 μl of Annexin V-FITC and 10 μl of propidium iodide staining solution were added. After incubation at room temperature in the dark for 10–20 minutes, the cells were placed on ice and analyzed by flow cytometry.
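Apoptosis in such assays is usually quantified from the four Annexin V/PI quadrants; a minimal sketch of that readout, assuming per-event fluorescence arrays and gate thresholds derived from controls (all names here are hypothetical, not taken from the kit software):

```python
import numpy as np

def apoptotic_fraction(annexin, pi, annexin_thr, pi_thr):
    """Fraction of apoptotic events: early (Annexin+/PI-) plus late
    (Annexin+/PI+). Thresholds would come from single-stained controls."""
    annexin, pi = np.asarray(annexin), np.asarray(pi)
    early = (annexin > annexin_thr) & (pi <= pi_thr)   # early apoptotic gate
    late = (annexin > annexin_thr) & (pi > pi_thr)     # late apoptotic gate
    return float(np.mean(early | late))
```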
### 2.3. Measurement of ROS Levels
ROS levels in the indicated cells were measured with an ROS assay kit (ab113851, Abcam). Briefly, HCEpiCs were stained with DCFDA for 30 minutes at 37°C and then analyzed by flow cytometry.
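DCFDA results are commonly summarized as the fold change in median DCF fluorescence relative to the isotonic control; a small sketch under that assumed convention (the paper does not state how the flow data were summarized):

```python
import numpy as np

def ros_fold_change(treated_dcf, control_dcf):
    """Fold change in median DCF fluorescence vs. control, a common
    summary statistic for DCFDA flow-cytometry data."""
    return float(np.median(treated_dcf) / np.median(control_dcf))
```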
### 2.4. Western Blot
Protein concentrations of lysates from HCEpiCs and lacrimal glands were determined with a BCA assay kit (P0012S, Beyotime). Approximately 40 μg of protein was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes (1620177, Bio-Rad). Membranes were blocked in 5% nonfat milk and incubated with primary antibodies at 4°C overnight. After washing with TBST three times, the membranes were incubated with secondary antibodies. Finally, the bands were visualized with an ECL reagent kit (A38555, Thermo Scientific™).
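Band intensities from blots like these are usually normalized to a loading control before comparison; a minimal sketch of that step, assuming densitometry values have already been extracted (the loading-control choice, e.g. beta-actin, is our assumption; the text does not name one):

```python
import numpy as np

def normalized_band_intensity(target, loading_control):
    """Densitometry normalization: target band intensity divided by the
    loading-control intensity from the same lane (the loading control is
    an assumed step; the paper does not specify one)."""
    return np.asarray(target, dtype=float) / np.asarray(loading_control, dtype=float)

# Hypothetical example: three lanes of SOD2 normalized to beta-actin.
print(normalized_band_intensity([120.0, 95.0, 110.0], [200.0, 190.0, 205.0]))
```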
### 2.5. Animal Model and Treatment
Seventy female C57BL/6 mice (certificate number: SCXK(Dian)K2020-0004) aged 6–8 weeks were purchased from the Animal Center of Kunming Medical University. To induce the mouse DED model, 5 μL of 0.2% BAC (Sigma-Aldrich) solution was instilled into both eyes twice a day for 2 consecutive weeks [21]. After successful establishment of the DED model, the mice were randomly divided into 3 groups (15 mice per group): a DED group, DED mice with alcohol (vehicle) administration, and DED mice with resveratrol administration; mice without BAC induction were used as the normal control group. Resveratrol (5 μL/eye) was administered 3 times per day in both eyes for two weeks. Finally, the mice were euthanized by CO2 asphyxiation, and the entire eye tissue, including the conjunctiva and eyeball, was removed for further analysis.
### 2.6. Corneal Fluorescein Staining
One microliter of 1% sodium fluorescein was dropped into the inferior conjunctival sac using a micropipette; punctate staining on the corneal surface was then evaluated in a blinded fashion. Cobalt blue light was used for inspection and photographic recording under a slit-lamp microscope, with 0 points for no corneal fluorescein staining, 1 point for staining of up to one-quarter of the cornea, 2 points for less than half, 3 points for more than half, and 4 points for staining of the entire corneal surface [22].
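Expressed as code, the grading reads as below (a sketch; the placement of the grade-3/grade-4 cutoff is our interpretation of the ambiguous source wording):

```python
def fluorescein_score(stained_fraction):
    """Map the stained fraction of the corneal surface (0.0-1.0) to the
    0-4 scale described above; exact cutoff placement is an assumption."""
    if stained_fraction <= 0.0:
        return 0
    if stained_fraction <= 0.25:
        return 1
    if stained_fraction < 0.5:
        return 2
    if stained_fraction < 1.0:
        return 3
    return 4
```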
### 2.7. Tear Production
The tear output was analyzed using phenol red cotton threads (Tianjin Jingming) [23]. A phenol red thread was positioned at the lateral canthus of the eye for 60 seconds, and the wetted length of the thread was then recorded.
### 2.8. Periodic Acid-Schiff (PAS) Staining
The eyeball was embedded and sliced into 5 μm thick sections. Each section was stained with periodic acid-Schiff (PAS) reagent [24], and the goblet cell density was quantified.
### 2.9. Statistical Analysis
All data are expressed as the mean ± SEM. GraphPad software was used for analysis and figure preparation. Statistical significance was evaluated by the two-tailed Student's t-test or two-way ANOVA. Differences were considered statistically significant at p < 0.05.
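For reference, a minimal sketch of the reported two-tailed t-test in Python with SciPy, using hypothetical placeholder values (n = 3 per group) rather than data from the study:

```python
# Minimal sketch of the reported analysis in SciPy; the numbers below are
# hypothetical placeholders, not measurements from this study.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.95, 1.05])   # e.g., normalized protein level
treated = np.array([0.55, 0.60, 0.50])

t, p = stats.ttest_ind(control, treated)  # two-tailed Student's t-test
print(f"t = {t:.2f}, p = {p:.4f}, significant: {p < 0.05}")
```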
## 3. Results
### 3.1. Environmental Hyperosmolarity Promotes Mitochondrial Dysfunction in HCEpiCs
To investigate the role of mitochondria in DED, a hyperosmolarity HCEpiC model was created using 500 mOsM medium, and HCEpiCs exposed to 312 mOsM medium served as controls. As shown in Figure 1(a), apoptosis of HCEpiCs increased after exposure to 500 mOsM medium. The expression levels of the antioxidant proteins SOD2 and GPx were reduced in HCEpiCs under hyperosmolarity (Figure 1(b)). Consistently, hyperosmolarity increased ROS production in HCEpiCs (Figure 1(c)).
Figure 1
Environmental hyperosmolarity promotes mitochondrial dysfunction in HCEpiCs. (a) Apoptosis of HCEpiCs under iso- and hyperosmolarities (312 and 500 mOsM) determined by flow cytometry. (b) The expression levels of the antioxidant proteins SOD2 and GPx measured by Western blotting. (c) ROS production under iso- and hyperosmolarities determined by flow cytometry. n = 3. ∗P<0.05 and ∗∗∗P<0.001.
(a)(b)(c)
### 3.2. Resveratrol Treatment Suppresses Mitochondrial Dysfunction in HCEpiCs
Resveratrol has been reported to modulate mitochondrial function in vitro and in vivo. To examine its effect on mitochondrial function in HCEpiCs, hyperosmolarity-treated HCEpiCs were administered 50 μM resveratrol. Apoptosis of HCEpiCs was reduced by resveratrol treatment (Figure 2(a)). Resveratrol administration promoted SOD2 and GPx expression (Figure 2(b)); in contrast, ROS production was reduced (Figure 2(c)).
Figure 2
Resveratrol treatment suppresses mitochondrial dysfunction in HCEpiCs. (a) Apoptosis of HCEpiCs under hyperosmolarity (500 mOsM) with resveratrol treatment determined by flow cytometry. (b) The expression levels of the antioxidant proteins SOD2 and GPx measured by Western blotting. (c) ROS production in HCEpiCs under hyperosmolar conditions with resveratrol treatment determined by flow cytometry. n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001.
(a)(b)(c)
### 3.3. Resveratrol Upregulates SIRT1 Expression in HCEpiCs
Previous studies suggested that SIRT1 contributes to the maintenance of mitochondrial function [25], and SIRT1 is involved in resveratrol-mediated mitochondrial regulation [19, 26]. Here, we showed that SIRT1 was suppressed in HCEpiCs under hyperosmolarity (Figure 3(a)). We then examined the effect of resveratrol on SIRT1 and found that SIRT1 expression recovered with resveratrol treatment (Figure 3(b)).
Figure 3
Resveratrol upregulates SIRT1 expression in HCEpiCs. (a) Expression of SIRT1 in HCEpiCs under iso- and hyperosmolarities (312 and 500 mOsM) detected by Western blotting. (b) Expression of SIRT1 in HCEpiCs under hyperosmolarity (500 mOsM) with resveratrol treatment determined by Western blotting. (c) Apoptosis of HCEpiCs under hyperosmolar conditions treated with resveratrol and/or the SIRT1 inhibitor EX527 determined by flow cytometry. (d) Expression of SIRT1, GPx, and SOD2 in HCEpiCs under hyperosmolarity with resveratrol and/or SIRT1 inhibitor EX527 treatment measured by Western blotting. (e) ROS production in HCEpiCs under hyperosmolar conditions treated with resveratrol and/or the SIRT1 inhibitor EX527 measured by flow cytometry. n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001.
(a)(b)(c)(d)(e)
We next asked whether SIRT1 was responsible for resveratrol-mediated mitochondrial regulation in HCEpiCs. To test this hypothesis, we introduced the SIRT1 inhibitor EX527. Treatment with EX527 counteracted the inhibitory effect of resveratrol on HCEpiC apoptosis (Figure 3(c)) and its induction of SOD2 and GPx expression (Figure 3(d)). We also observed that EX527 partially abolished the inhibitory effect of resveratrol on ROS production (Figure 3(e)).
### 3.4. Resveratrol Ameliorates DED Syndrome in Vivo via SIRT1
Next, a DED mouse model induced by benzalkonium chloride (BAC) was used to determine the role of resveratrol in DED progression. Tear output was measured by the phenol red cotton thread test, which indicated that resveratrol-treated DED mice produced more tears than alcohol-treated DED mice and untreated DED mice (Figure 4(a)). Corneal fluorescein staining was decreased in resveratrol-treated mice (Figure 4(b)). Moreover, the number of goblet cells was increased with resveratrol administration (Figure 4(c)). These data indicate that resveratrol attenuates DED progression.
Figure 4
Resveratrol ameliorates DED syndrome in vivo via SIRT1. (a) Tear production in control mice, DED mice, DED mice treated with alcohol, and DED mice treated with resveratrol, measured by the phenol red cotton thread test. (b) Corneal fluorescein staining in control mice, DED mice, DED mice with alcohol treatment, and DED mice treated with resveratrol. (c) Goblet cell density in the conjunctival epithelial layer measured by periodic acid-Schiff (PAS) staining. (d) Expression of SIRT1 in lacrimal glands determined by Western blotting. (e) Expression of GPx and SOD2 in lacrimal glands determined by Western blotting. n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001.
(a)(b)(c)(d)(e)
We then detected SIRT1 expression in lacrimal glands and found that SIRT1 was inhibited in DED mice and alcohol-treated DED mice, while resveratrol upregulated SIRT1 expression (Figure 4(d)). Moreover, resveratrol administration induced SOD2 and GPx expression in the DED mouse model (Figure 4(e)).
## 4. Discussion
In the current research, we demonstrated that hyperosmolarity induces apoptosis and mitochondrial dysfunction in HCEpiCs, while resveratrol restores the mitochondrial function of HCEpiCs under hyperosmolarity. Decreased expression of SIRT1 was observed in HCEpiCs cultured under hyperosmolarity. Importantly, our results further demonstrated that SIRT1 is responsible for resveratrol-mediated mitochondrial restoration. Consistently, in a DED mouse model, we found that resveratrol prevents DED syndrome. Thus, our data extend the role of resveratrol and illustrate the underlying mechanism by which it ameliorates DED.

DED is a multifactorial disease and is closely related to mitochondrial function. In diabetic mice, Qu et al. [27] demonstrated that hyperglycemia-induced mitochondrial bioenergetic inadequacy of the lacrimal gland contributes to the early onset of dry eye. Bogdan et al. [28] proposed that insulin-like growth factor binding protein-3 (IGFBP-3) is involved in hyperosmolar stress responses in the corneal epithelium by modulating mitochondrial function. Hyperosmolarity can increase ROS production and apoptosis in HCEpiCs, and our results confirm this (Figure 1). Since antioxidants are among the most common means of restoring mitochondrial function and can prevent mitochondria-associated pathology [29], we focused on resveratrol and set out to determine its role in DED development. Resveratrol, a common antioxidant, contributes to the maintenance of mitochondrial function. Kang et al. [30] showed that resveratrol protects neural cells from injury via regulation of mitochondrial biogenesis and mitophagy. In C6 astrocytes, Bobermin et al. [31] demonstrated that resveratrol prevents the increase in ROS production, reduction in mitochondrial membrane potential (ΔΨ), and bioenergetic insufficiency caused by ammonia. Importantly, several studies have indicated that resveratrol prevents DED syndrome [12, 32]. However, whether resveratrol attenuates DED development by regulating mitochondria has remained unknown. Here, we found that hyperosmolar culture reduces the expression of the antioxidant proteins SOD2 and GPx and increases ROS levels. Resveratrol administration inhibits HCEpiC apoptosis, increases SOD2 and GPx expression, and decreases ROS levels. Moreover, resveratrol attenuates DED syndrome and increases SOD2 and GPx expression in a DED mouse model. These results suggest that resveratrol may reduce oxidative stress and HCEpiC apoptosis by maintaining mitochondrial function.

SIRT1 contributes to the function and biogenesis of mitochondria [33]. Of note, Samadi et al. [34] described suppressed SIRT1 expression in a diabetic dry eye model. Here, we also observed decreased expression of SIRT1 in HCEpiCs under hyperosmolar culture and in the DED mouse model, accompanied by increased oxidative stress, apoptosis, or dry eye syndrome. In addition, resveratrol was previously shown to be critical for SIRT1 activation [35]. We demonstrated that resveratrol treatment restored SIRT1 expression in HCEpiCs under hyperosmolarity and in DED mice, while the SIRT1 inhibitor EX527 abolished the inhibitory effect of resveratrol on mitochondrial dysfunction in HCEpiCs. This finding suggests that resveratrol ameliorates mitochondrial dysfunction via SIRT1.

Early studies proposed that the antioxidant resveratrol is critical in preventing DED syndrome, but the mechanism remained unclear.
Our results demonstrate that resveratrol can restore mitochondrial function in HCEpiCs and inhibit their apoptosis. Furthermore, our findings indicate that SIRT1 is the major effector in resveratrol-regulated DED development. Therefore, our results show that the resveratrol/SIRT1 axis plays a significant role in DED development, which may be beneficial for DED therapy.
## 5. Conclusion
In summary, we found that resveratrol reversed hyperosmolarity-mediated mitochondrial dysfunction in HCEpiCs and demonstrated that resveratrol alleviates mitochondrial dysfunction by promoting SIRT1 expression. In animal experiments, resveratrol reduced ocular surface damage in a mouse model of DED. Finally, resveratrol alleviated DED by restoring mitochondrial function; this study provides new ideas for the treatment of DED.
---
*Source: 1013444-2022-05-26.xml* | 2022 |
## Abstract
Background and Significance. Dry eye disease (DED) is a prevalent optic surface illness with a high incidence worldwide that is caused by a variety of factors, including mitochondrial dysfunction. Resveratrol has been confirmed to protect the eye surface in DED, and as an antioxidant, resveratrol can maintain mitochondrial function. Therefore, we investigated whether resveratrol can improve DED by restoring mitochondrial function. Methods. The mitochondrial dysfunction of HCE-2 human corneal epithelial cells was induced by high osmotic pressure exposure and treated with resveratrol (50 μM). Western blotting was used to detect the expression of the antioxidant proteins SOD2, GPx, and SIRT1, and flow cytometry was used to detect cell apoptosis and ROS production. The DED mouse model was induced by 0.2% benzalkonium chloride (BAC) and treated with resveratrol. The tear yield was measured by the phenol cotton thread test, the density of cup cells in the conjunctiva was measured by periodic acid-Schiff (PAS) staining, and the expression levels of SIRT1, GPx, and SOD2 in lacrimal glands were detected by Western blotting. Results. In hypertonic conditions, the apoptosis of HCE-2 cells increased, the expression of the antioxidant proteins SOD2 and GPx decreased, ROS production increased, and the expression of SIRT1 protein, an essential regulator of mitochondrial function, was downregulated. Treatment with resveratrol reversed the mitochondrial dysfunction mediated by high osmotic pressure. In the DED mouse model, resveratrol treatment promoted tear production and goblet cell number in DED mice, decreased corneal fluorescein staining, upregulated SIRT1 expression, and induced SOD2 and GPx expression in DED mice. Conclusion. Resveratrol alleviates mitochondrial dysfunction by promoting SIRT1 expression, thus reducing ocular surface injury in mice with dry eye. This study suggests a new path against DED.
---
## Body
## 1. Introduction
DED is a prevalent ocular surface disorder caused by inadequate production of tears and excessive tear evaporation. Of note, the prevalence of DED in the world population ranges from 6 to 34%, and the prevalence of DED is higher in the aging population [1]. Thus, effective therapeutic strategies are urgently needed for remitting DED.Emerging evidence indicates that mitochondrial dysfunction is responsible for pathological processes, including but not limited to neurodegenerative disease [2], cancer [3], and DED [4]. Studies have shown that mitochondrial function is a crucial component in the progression of DED. For example, DDIT4 knockdown restores mitochondrial function under hyperosmolarity and preserves the viability of human corneal epithelial cells [5]. Moreover, the modulation of mitochondrial homeostasis is related to the outcome of DED [6]. Recent studies suggest that antioxidant administration may restore mitochondria. Resveratrol (3,5,4′-trihydroxy-trans-stilbene), a natural plant product, has been reported to have antioxidant effects and maintain mitochondrial function [7, 8]. The protective role of resveratrol in mitochondrial dysfunction-related diseases, such as cardiac diseases [9], hypoxic ischemic injury [10], and neurodegenerative disorders [11], has been well established. It is worth noting that the function of resveratrol in protecting the ocular surface in experimental DED has been reported [12]. However, the underlying mechanism by which resveratrol ameliorates DED remains obscure.Mammalian sirtuin 1 (SIRT1) is an exceedingly conserved NAD(+)-dependent deacetylase that has been reported to be engaged in the regulation of mitochondrial biogenesis [13]. Aberrant expression of SIRT1 leads to mitochondrial dysfunction, thereby enhancing pathological processes [14]. Earlier research revealed that the expression of SIRT1 is decreased in the condition of diabetic dry eye [15], indicating that SIRT1 may function in DED. It is well known that resveratrol is a potent activator of SIRT1 [16]. Currently, the antioxidative effect of resveratrol is achieved by upregulating SIRT1 expression [17]. For instance, resveratrol improves mitochondria and protects against metabolic disease by activating SIRT1 [18]. Resveratrol activates SIRT1 to alleviate cardiac dysfunction through mitochondrial regulation [19]. However, the correlation between resveratrol and SIRT1 in DED is unknown.Thus, we demonstrate that resveratrol treatment attenuates hyperosmolarity-induced mitochondrial dysfunction in human corneal epithelial cells (HCEpiCs). SIRT1 is reduced in hyperosmolarity-treated HCEpiCs, while resveratrol upregulates SIRT1 expression. Moreover, we found that resveratrol restores mitochondrial function by inducing SIRT1 expression. Consistently, resveratrol ameliorated dry eye symptoms in the DED mouse model. Thus, our results establish a novel mechanism by which resveratrol attenuates DED by facilitating SIRT1 expression.
## 2. Materials and Methods
### 2.1. Cell Culture and Treatment
Human corneal epithelial cells HCE-2[50.B1] (CRL-11135) were acquired from ATCC (Manassas, VA, USA). Cells were cultured at 37°C in 5% CO2 humidity in 10% fetal bovine serum (FBS, Gibco) and 1% v/v penicillin/streptomycin (Gibco) in Dulbecco’s modified Eagle’s medium (DMEM, Gibco). For the DED cell model, HCEpiCs were treated with 0 or 94 mM NaCl in the medium and treated at isotonic and high osmolarity (312 and 500 mOsM) for 24 h. For resveratrol treatment, HCEpiCs were administered at 50 μM, and the vehicle (alcohol) had a final concentration of 0.5% (nontoxic for cells) [20].
### 2.2. Cell Apoptosis Assay
The apoptosis of the indicated cells was analyzed by an Annexin V-FITC apoptosis detection kit (C1062S, Beyotime). Briefly, cells were collected and resuspended in PBS. After centrifugation, the suspension was discarded, and the cells were resuspended in buffer. Subsequently, 5μl of Annexin V-FITC and 10 μl of propidium iodide staining solution were added. After incubating at room temperature in the dark for 10–20 minutes, the cells were placed on ice and analyzed by flow cytometry.
### 2.3. Measurement of ROS Levels
The ROS level in the indicated cells was measured by an ROS assay kit (ab113851, Abcam). Briefly, HCEpiCs were stained with DCFDA for 30 minutes at 37°C.
### 2.4. Western Blot
Proteins isolated from HCEpiCs and lacrimal glands were measured by a BCA assay kit (P0012S, Beyotime). Approximately 40μg of protein was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes (1620177, BioRad). PVDF membranes were blocked in 5% nonfat milk and incubated with the primary antibodies at 4°C overnight. After washing with TBST three times, the membranes were incubated with secondary antibodies. Finally, the bands were measured with an ECL reagent kit (A38555, Thermo Scientific™).
### 2.5. Animal Model and Treatment
Seventy female C57BL/6 mice (Certificate number: SCXK(Dian)K2020-0004) aged 6–8 weeks were purchased from the Animal Center of Kunming Medical University. The mice were instilled with 5μL of 0.2% BAC (Sigma-Aldrich) solution in both the eyes, twice a day, for 2 consecutive weeks, to induce the mouse DED model [21]. After the successful establishment of the DED model, the mice were randomly divided into 3 groups (15 mice in each group): DED group, DED mice with alcohol administration, and DED mice with resveratrol administration, and the mice without BAC induction were used as the normal control group. Resveratrol (5 μL/eye) was administered 3 times/day in both the eyes for two weeks. Eventually, the mice were euthanized by CO2 asphyxiation, and the entire eye tissue, including the conjunctiva and eyeball, was removed for further analysis.
### 2.6. Corneal Fluorescein Staining
1μL of 1% sodium fluorescein was dropped into the inferior conjunctival sac using a micropipette; then, punctate staining on the corneal surface was evaluated in a blind fashion. Cobalt blue light was used for inspection and photographic recording under a slit-lamp microscope with 0 points for no staining of corneal fluorescein, 1 point for one-quarter staining, 2 points for less than half staining, 3 points for more than half staining, and 4 points for more than half staining [22].
### 2.7. Tear Production
The tear output was analyzed using phenol red cotton threads (Tianjin Jingming) [23]. The phenol red thread was positioned in the lateral canthus of the eye for 60 seconds, and then, thread wetting measurements were recorded.
### 2.8. Periodic Acid-Schiff (PAS) Staining
The eyeball was embedded and sliced into 5μm thick divisions. Each division was stained with periodic acid-Schiff (PAS) [24]. The goblet cell density was quantified.
### 2.9. Statistical Analysis
All data are expressed as the mean ± SEM. GraphPad software was used to analyze and draw figures. The statistical significance of differences was evaluated by the two-tailed Student'st-test or two-way ANOVA. All p values were considered statistically significant when values were <0.05.
## 2.1. Cell Culture and Treatment
Human corneal epithelial cells HCE-2[50.B1] (CRL-11135) were acquired from ATCC (Manassas, VA, USA). Cells were cultured at 37°C in 5% CO2 humidity in 10% fetal bovine serum (FBS, Gibco) and 1% v/v penicillin/streptomycin (Gibco) in Dulbecco’s modified Eagle’s medium (DMEM, Gibco). For the DED cell model, HCEpiCs were treated with 0 or 94 mM NaCl in the medium and treated at isotonic and high osmolarity (312 and 500 mOsM) for 24 h. For resveratrol treatment, HCEpiCs were administered at 50 μM, and the vehicle (alcohol) had a final concentration of 0.5% (nontoxic for cells) [20].
## 2.2. Cell Apoptosis Assay
The apoptosis of the indicated cells was analyzed by an Annexin V-FITC apoptosis detection kit (C1062S, Beyotime). Briefly, cells were collected and resuspended in PBS. After centrifugation, the suspension was discarded, and the cells were resuspended in buffer. Subsequently, 5μl of Annexin V-FITC and 10 μl of propidium iodide staining solution were added. After incubating at room temperature in the dark for 10–20 minutes, the cells were placed on ice and analyzed by flow cytometry.
## 2.3. Measurement of ROS Levels
The ROS level in the indicated cells was measured by an ROS assay kit (ab113851, Abcam). Briefly, HCEpiCs were stained with DCFDA for 30 minutes at 37°C.
## 2.4. Western Blot
Proteins isolated from HCEpiCs and lacrimal glands were measured by a BCA assay kit (P0012S, Beyotime). Approximately 40μg of protein was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes (1620177, BioRad). PVDF membranes were blocked in 5% nonfat milk and incubated with the primary antibodies at 4°C overnight. After washing with TBST three times, the membranes were incubated with secondary antibodies. Finally, the bands were measured with an ECL reagent kit (A38555, Thermo Scientific™).
## 2.5. Animal Model and Treatment
Seventy female C57BL/6 mice (Certificate number: SCXK(Dian)K2020-0004) aged 6–8 weeks were purchased from the Animal Center of Kunming Medical University. The mice were instilled with 5μL of 0.2% BAC (Sigma-Aldrich) solution in both the eyes, twice a day, for 2 consecutive weeks, to induce the mouse DED model [21]. After the successful establishment of the DED model, the mice were randomly divided into 3 groups (15 mice in each group): DED group, DED mice with alcohol administration, and DED mice with resveratrol administration, and the mice without BAC induction were used as the normal control group. Resveratrol (5 μL/eye) was administered 3 times/day in both the eyes for two weeks. Eventually, the mice were euthanized by CO2 asphyxiation, and the entire eye tissue, including the conjunctiva and eyeball, was removed for further analysis.
## 2.6. Corneal Fluorescein Staining
1μL of 1% sodium fluorescein was dropped into the inferior conjunctival sac using a micropipette; then, punctate staining on the corneal surface was evaluated in a blind fashion. Cobalt blue light was used for inspection and photographic recording under a slit-lamp microscope with 0 points for no staining of corneal fluorescein, 1 point for one-quarter staining, 2 points for less than half staining, 3 points for more than half staining, and 4 points for more than half staining [22].
## 2.7. Tear Production
The tear output was analyzed using phenol red cotton threads (Tianjin Jingming) [23]. The phenol red thread was positioned in the lateral canthus of the eye for 60 seconds, and then, thread wetting measurements were recorded.
## 2.8. Periodic Acid-Schiff (PAS) Staining
The eyeball was embedded and sliced into 5μm thick divisions. Each division was stained with periodic acid-Schiff (PAS) [24]. The goblet cell density was quantified.
## 2.9. Statistical Analysis
All data are expressed as the mean ± SEM. GraphPad software was used to analyze and draw figures. The statistical significance of differences was evaluated by the two-tailed Student'st-test or two-way ANOVA. All p values were considered statistically significant when values were <0.05.
## 3. Results
### 3.1. Environmental Hyperosmolarity Promotes Mitochondrial Dysfunction in HCEpiCs
To investigate the part of mitochondria in DED, a hyperosmolarity HCEpiCs model was created using 500 mOsM medium, and HCEpiCs exposed to 312 mOsM medium were regarded as controls. As shown in Figure1(a), after exposure to 500 mOsM medium, the apoptosis of HCEpiCs was increased. The expression levels of the antioxidant proteins SOD2 and GPx were reduced in HCEpiCs under hyperosmolarity (Figure 1(b)). Consistently, hyperosmolarity increased ROS production in HCEpiCs (Figure 1(c)).Figure 1
Environmental hyperosmolarity promotes mitochondrial dysfunction in HCEpiCs. (a) Apoptosis of HCEpiCs under iso and hyperosmolarities (312 and 500 mOsM) determined by flow cytometry. (b) The expression levels of the antioxidant proteins SOD2 and GPx measured by Western blotting. (c) ROS production under iso- and hyper-osmolarities determined by flow cytometry.n = 3. ∗P<0.05 and ∗∗∗P<0.001.
(a)(b)(c)
### 3.2. Resveratrol Treatment Suppresses Mitochondrial Dysfunction in HCEpiCs
Resveratrol is reported to modulate mitochondrial function in vitro and in vivo. To understand the function of resveratrol in mitochondrial function in HCEpiCs, hyperosmolarity-treated HCEpiCs were administered 50μm of resveratrol. The apoptosis of HCEpiCs was reduced by resveratrol treatment (Figure 2(a)). Resveratrol administration promoted SOD2 and GPx expression (Figure 2(b)); in contrast, ROS production was reduced (Figure 2(c)).Figure 2
Resveratrol treatment suppresses mitochondrial dysfunction in HCEpiCs. (a) Apoptosis of HCEpiCs under hyperosmolarity (500 mOsM) with resveratrol treatment determined by flow cytometry. (b) The expression levels of the antioxidant proteins SOD2 and GPx measured by Western blotting. (c) ROS production in HCEpiCs under hyperosmolar conditions with resveratrol treatment determined by flow cytometry.n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.01.
(a)(b)(c)
### 3.3. Resveratrol Upregulates SIRT1 Expression in HCEpiCs
Previous studies suggested that SIRT1 contributed to mitochondrial function maintenance [25]. SIRT1 is involved in resveratrol-mediated mitochondrial regulation [19, 26]. Here, we showed that SIRT1 was suppressed in HCEpiCs under hyperosmolarity (Figure 3(a)). We examined the effects of resveratrol on SIRT1 and found that the expression of SIRT1 was recovered with resveratrol treatment (Figure 3(b)).Figure 3
Resveratrol upregulates SIRT1 expression in HCEpiCs. (a) Expression of SIRT1 in HCEpiCs under iso- and hyper-osmolarities (312 and 500 mOsM) detected by Western blotting. (b) Expression of SIRT1 in HCEpiCs under hyperosmolarity (500 mOsM) with resveratrol treatment determined by Western blotting. (c) Apoptosis of HCEpiCs under hyperosmolar conditions treated with resveratrol and/or the SIRT1 inhibitor EX527 determined by flow cytometry. (d) Expression of SIRT1, GPx, and SOD2 in HCEpiCs under hyperosmolarity with resveratrol and/or SIRT1 inhibitor EX527 treatment measured by Western blotting. (e) ROS production in HCEpiCs under hyperosmolar conditions treated with resveratrol and/or the SIRT1 inhibitor EX527 measured by flow cytometry.n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001.
We next asked whether SIRT1 was responsible for resveratrol-mediated mitochondrial regulation in HCEpiCs. To test this hypothesis, we introduced the SIRT1 inhibitor EX527. Treatment with EX527 counteracted the effects of resveratrol on HCEpiCs apoptosis (Figure 3(c)) and on SOD2 and GPx expression (Figure 3(d)). We also observed that EX527 partially eliminated the inhibitory effect of resveratrol on ROS production (Figure 3(e)).
### 3.4. Resveratrol Ameliorates DED Syndrome in Vivo via SIRT1
Next, a DED mouse model induced by benzalkonium chloride (BAC) was used to determine the role of resveratrol in DED progression. Tear production was measured by the phenol red cotton thread test, which showed that resveratrol-treated DED mice produced more tears than alcohol-treated DED mice and untreated DED mice (Figure 4(a)). Corneal fluorescein staining was decreased in resveratrol-treated mice (Figure 4(b)). Moreover, the number of goblet cells was increased by resveratrol administration (Figure 4(c)). These data indicate that resveratrol attenuates DED progression.
Figure 4
Resveratrol ameliorates DED syndrome in vivo via SIRT1. (a) Tear production in control mice, DED mice, alcohol-treated DED mice, and resveratrol-treated DED mice measured by the phenol red cotton thread test. (b) Corneal fluorescein staining in control mice, DED mice, alcohol-treated DED mice, and resveratrol-treated DED mice. (c) Goblet cell density in the conjunctival epithelial layer measured by periodic acid-Schiff (PAS) staining. (d) Expression of SIRT1 in lacrimal glands determined by Western blotting. (e) Expression of GPx and SOD2 in lacrimal glands determined by Western blotting. n = 3. ∗P<0.05, ∗∗P<0.01, ∗∗∗P<0.001.
We then examined SIRT1 expression in lacrimal glands and found that SIRT1 was inhibited in DED mice and alcohol-treated DED mice, while resveratrol upregulated SIRT1 expression (Figure 4(d)). Moreover, resveratrol administration induced SOD2 and GPx expression in the DED mouse model (Figure 4(e)).
## 4. Discussion
In the current research, we demonstrated that hyperosmolarity induces apoptosis and mitochondrial dysfunction in HCEpiCs, while resveratrol restores the mitochondrial function of HCEpiCs under hyperosmolarity. Decreased expression of SIRT1 was observed in HCEpiCs cultured under hyperosmolarity. Importantly, our results further demonstrated that SIRT1 is responsible for resveratrol-mediated mitochondrial restoration. Consistently, by establishing a DED mouse model, we found that resveratrol prevents DED syndrome. Thus, our data extend the role of resveratrol and illustrate the underlying mechanism by which it ameliorates DED.

DED is a multifactorial disease and is closely related to mitochondrial function. In diabetic mice, Qu et al. [27] demonstrated that hyperglycemia-induced mitochondrial bioenergetic insufficiency of the lacrimal gland mediates the early onset of dry eye. Bogdan et al. [28] proposed that insulin-like growth factor binding protein-3 (IGFBP-3) is involved in hyperosmolar stress responses in the corneal epithelium by modulating mitochondrial function. Our results confirm that hyperosmolarity can increase ROS production and apoptosis in HCEpiCs (Figure 1). Since antioxidants are among the most common means of restoring mitochondrial function and can prevent mitochondria-associated pathology [29], we focused on resveratrol and set out to determine its role in DED development. Resveratrol, a common antioxidant, contributes to the maintenance of mitochondrial function. Kang et al. [30] showed that resveratrol protects neural cells from injury via regulation of mitochondrial biogenesis and mitophagy. In C6 astrocytes, Bobermin et al. [31] demonstrated that resveratrol prevents the increase in ROS production, the reduction in mitochondrial membrane potential (ΔΨ), and the bioenergetic insufficiency caused by ammonia. Importantly, several studies have indicated that resveratrol prevents DED syndrome [12, 32]. However, whether resveratrol attenuates DED development by regulating mitochondria remained unknown. Here, we found that hyperosmolar culture reduces the expression of the antioxidant proteins SOD2 and GPx and increases ROS levels. Resveratrol administration inhibits HCEpiCs apoptosis, increases SOD2 and GPx expression, and decreases ROS levels. Moreover, resveratrol attenuates DED syndrome and increases SOD2 and GPx expression in a DED mouse model. These results suggest that resveratrol may reduce oxidative stress and HCEpiCs apoptosis by maintaining mitochondrial function.

SIRT1 contributes to the function and biogenesis of mitochondria [33]. Of note, Samadi et al. [34] described suppressed SIRT1 expression in a diabetic dry eye model. Here, we also observed decreased expression of SIRT1 in HCEpiCs under hyperosmolar culture and in the DED mouse model, accompanied by increased oxidative stress, apoptosis, or dry eye syndrome. In addition, resveratrol was previously shown to be critical in SIRT1 activation [35]. We demonstrated that resveratrol treatment restored SIRT1 expression in HCEpiCs under hyperosmolarity and in DED mice, while the SIRT1 inhibitor EX527 reversed the inhibitory effect of resveratrol on mitochondrial dysfunction in HCEpiCs. This finding suggests that resveratrol ameliorates mitochondrial dysfunction via SIRT1.

Early studies proposed that the antioxidant resveratrol is critical in preventing DED syndrome, but the mechanism remained unclear.
Our results demonstrate that resveratrol can restore mitochondrial function in HCEpiCs and inhibit HCEpiCs apoptosis. Furthermore, our findings indicate that SIRT1 is the major effector in resveratrol-regulated DED development. Therefore, the resveratrol/SIRT1 axis plays a significant role in DED development, which may be beneficial for DED therapy.
## 5. Conclusion
In summary, we found that resveratrol reversed hyperosmolarity-mediated mitochondrial dysfunction in HCEpiCs and demonstrated that it alleviated this dysfunction by promoting SIRT1 expression. In animal experiments, resveratrol reduced ocular surface damage in a mouse model of DED. Taken together, resveratrol ameliorated DED by restoring mitochondrial function, and this study provides new ideas for the treatment of DED.
---
*Source: 1013444-2022-05-26.xml* | 2022 |
# Evaluating the Longitudinal Item and Category Stability of the SF-36 Full and Summary Scales Using Rasch Analysis
**Authors:** Reinie Cordier; Ted Brown; Lindy Clemson; Julie Byles
**Journal:** BioMed Research International
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1013453
---
## Abstract
Introduction. The Medical Outcome Study Short Form 36 (SF-36) is widely used for measuring Health-Related Quality of Life (HRQoL) and has undergone rigorous psychometric evaluation using Classic Test Theory (CTT). However, Item Response Theory-based evaluation of the SF-36 has been limited, with an overwhelming focus on individual scales and cross-sectional data. Purpose. This study aimed to examine the longitudinal item and category stability of the SF-36 using Rasch analysis. Method. Using data from the 1921-1926 cohort of the Australian Longitudinal Study on Women’s Health, responses to the SF-36 from six waves of data collection were analysed. Rasch analysis using Winsteps version 3.92.0 was performed on all 36 items of the SF-36 and on the items that constitute the physical health and mental health scales. Results. Rasch analysis revealed issues with the SF-36 not detected using classical methods. Redundancy was seen for items on the total measure and both scales across all waves of data. Person separation indexes indicate that the measure lacks sensitivity to discriminate between high and low performances in this sample. The presence of Differential Item Functioning suggests that responses to items were influenced by locality and marital status. Conclusion. Previous evaluations of the SF-36 have relied on cross-sectional data; the current study examines the longitudinal performance of the measure. Application of the Rasch Measurement Model indicated issues with internal consistency, generalisability, and sensitivity when the measure was evaluated as a whole and as both physical and mental health summary scales. Implications for future research are discussed.
---
## Body
## 1. Introduction
To be deemed effective and useful, health measures must fulfil several requirements, including validity, reliability, interpretability, and responsiveness to change [1]. Measurement invariance is another important characteristic, ensuring that the same construct is being measured consistently across different populations and settings, and over time. Considerations of measurement invariance are important for longitudinal studies that seek to gauge change in a construct across a broad population and over time. When studies involve an older population, measurements may be vulnerable to instability as the participants age, their living circumstances change, and their physical and cognitive abilities decline [2, 3].

The Medical Outcome Study Short Form 36 (SF-36) is one of the most commonly used questionnaires for monitoring Health-Related Quality of Life (HRQoL) across a multitude of populations and settings, including client groups and healthy populations [4–10]. HRQoL refers to aspects of quality of life that are affected by an individual’s mental and physical health [11].

Development of the SF-36 came about following difficulties during the Health Insurance Experiment (HIE), in which participants refused to complete a lengthy health survey [9]. In response, Ware et al. [9] constructed a health survey that was both comprehensive and relatively short. The initial survey, the SF-18, comprised 18 items measuring physical functioning, role limitations relating to poor health, mental health, and health perceptions [9]. Subsequently, additional items were added to create the 20-item SF-20 and the 36-item SF-36, the latter now being the most commonly used version.

The SF-36 measures eight key health concepts: (1) physical functioning (PF); (2) role limitations due to physical health problems (RL-P); (3) bodily pain (BP); (4) general health (GH); (5) vitality (V); (6) social functioning (SF); (7) role limitations due to emotional problems (RL-E); and (8) mental health (MH) [9]. From the eight scales, the survey generates overall physical and mental health component summary scores. Both summary measures include scores from all eight subscales; however, particular correlations are expected: the physical functioning, role limitations-physical, and bodily pain scales should correlate highest with the physical component score (PCS) and lowest with the mental component score (MCS) [12]. The mental health, role limitations-emotional, and social functioning scales should correlate highest with the MCS and lowest with the PCS, while the remaining general health and vitality scales correlate moderately with both the PCS and MCS [12]. Summary score results can be compared with gender and age-group norms derived from the general population, e.g., United States population norms [12].

The SF-36 is now widely used for both research and clinical purposes and has undergone rigorous psychometric evaluation nationally and internationally using Classic Test Theory (CTT) [6, 7, 9, 10]. CTT seeks to determine the reliability of a whole instrument by evaluating the degree of variance in terms of the ratio between true and observed scores; observed results are therefore the product of the respondent’s “true score” in combination with error [13].

A relatively new approach to psychometric test design is Item Response Theory (IRT; Edelen and Reeve) [14].
IRT models are typically unidimensional, assessing instrument reliability at the item level rather than the instrument level by determining the unique contribution of each item to the construct or trait being measured. IRT considers the pattern of participants’ responses, whereby the probability of a person answering a particular item correctly is modelled in relation to their responses to other items of greater or lesser difficulty [14]. Within IRT, the Rasch Measurement Model (RMM) is the most frequently applied approach to investigating the unidimensionality of the items that make up scales and to determining, through the examination of item fit statistics, whether responses are indeed measuring a single dimension only [15].
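For orientation, the dichotomous Rasch model that underlies the RMM can be written out explicitly. This is the model's standard textbook form, not an equation taken from the studies cited here: the probability that person n with ability θ_n answers item i with difficulty b_i correctly is

```latex
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
```

Both θ_n and b_i are expressed in logits, so the log-odds of a correct response reduces to the difference θ_n − b_i, which is what allows persons and items to be placed on a single common scale.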
### 1.1. Application of IRT/RMM to the SF-36
Under the assumptions of an IRT model, instruments deemed reliable should meet the following properties: unidimensionality, hierarchical ordering of items, and reproducibility of scale items across client populations [16].

Unidimensionality assumes that a collection of items represents and assesses a single construct, that is, fits a single one-dimensional model [16]. Item hierarchy refers to a hypothesised continuum along which instrument items should progress in difficulty from easier to more challenging to answer. In other words, the probability of answering the more difficult items is higher for those individuals with higher levels of the latent trait being measured, while those with lower levels of the trait have a lower probability of answering items at the upper end [16]. Reproducibility relates to item hierarchy, whereby item order and calibrations along the continuum remain relatively stable across different groups of assessment respondents and assessment occasions [16]. Item reproducibility or stability is considered essential for accurately measuring between-group differences and within-group changes over time [16].

IRT-based evaluation of the SF-36 has overwhelmingly focused on individual scales, particularly the Physical Functioning-10 subscale, with only some studies having examined particular psychometric properties of the SF-36 as a whole instrument or by component summary scores [5, 6, 12, 16, 17].
### 1.2. Unidimensionality and Item Fit
Only a few analyses have investigated the model fit of the SF-36 as a whole. A prospective cohort study involving a sample of 583 opioid-dependent participants assessed item-model fit and latent trait factors for the eight SF-36 subscales and for the whole instrument [6]. The RMM reliability estimates of all eight SF-36 subscales (including a revised PF-10 subscale) established that each measured a single latent trait [6]. Investigation of the dimensional structure of the instrument as a whole confirmed the presence of an eight-factor model; that is, the SF-36 measured eight distinct latent traits [6].

Analysis confirming a two-factor structure, reflecting the SF-36 physical and mental health components, has also been conducted using principal component analysis, with the physical and mental health domains accounting for 70% of the total variance across both standard and acute forms [12]. A single-administration survey with a general U.S.A. population sample (n = 634) evaluated the item fit of the SF-36 physical and mental HRQoL domains using the RMM [5]. In this analysis, eight items in the physical domain had disordered thresholds, whereby a person responding at a higher or lower level of a categorical scale did not necessarily possess a higher or lower level of the trait being assessed [5]. The authors suggested collapsing some category options to overcome this issue [5]. In terms of the HRQoL domains’ unidimensionality, the mental health items fit RMM expectations, whereas the physical domain required discarding seven misfitting items to produce a 14-item domain that met RMM requirements. Survey data from 395 Taiwanese patients with chronic lung disease were analysed to conduct similar assessments of the SF-36 mental and physical health domains, with the authors concluding that each domain was unidimensional [7].

Differential item functioning (DIF) analysis using IRT-based techniques has also been undertaken with the SF-36. DIF refers to the unequal endorsement of instrument items by respondents of different groups, given that the items are intended to measure the same latent trait [10]. The presence of DIF undermines instrument construct validity and may compromise the ability to compare instrument scores across different groups of respondents [10]. Yu et al. [10] utilised the multiple-indicator, multiple-causes (MIMIC) technique and an IRT-based methodology to detect whether DIF existed in the SF-36 physical and mental health domains. Data were extracted from the 1994-95 cohort of the Southern California Kaiser Permanente database (n = 7,538), which evaluated the health outcomes of patients receiving pharmacist consultations. DIF across the SF-36 physical and mental health domains was analysed in relation to the presence of five key disease types: hypertension, rheumatic conditions, respiratory diseases, depression, and diabetes. Results indicated statistically significant DIF for a total of five items, both physical and mental health-based, across the hypertension, respiratory, and diabetes groups [10]. The authors concluded that the presence of DIF for only five of 36 items did not warrant significant concern regarding the overall construct validity of the SF-36; however, they cautioned against using the SF-36 to compare groups based on hypertension in particular, which returned a DIF effect for two items in the physical health domain [10].
### 1.3. Cross-Cultural Item Response Patterns
Rasch modelling has also been applied to translated versions of the SF-36 to examine its cross-cultural validity. An assessment of the appropriateness of a Korean version of the SF-36 with 510 elderly Korean adults was conducted using the RMM [17]. The authors verified the unidimensionality of the instrument and determined through step calibration that the three- and five-point response scales used for items were appropriate for this population [17]. Goodness-of-fit statistics, however, indicated that nine items across the instrument were not appropriate for this population, being incongruent with other items, overlapping substantially with other items, or creating confusion due to misinterpretation of item meaning [17].
### 1.4. Item Stability
While item-model fit and determination of the presence of DIF are important, these properties can mean very little if item responses are inconsistent or changeable over time. Evaluation of the stability of item responses is important for determining the rigour of an instrument. Most IRT evaluations of SF-36 data have been cross-sectional, and therefore the stability of item responses has not been evaluated [5–7, 10, 17].

Two studies assessed performance across repeated administrations, following pre-post designs [18, 19]. Martin et al. [18] utilised the SF-36 as one of three evaluation tools pre- and post-treatment for rheumatoid arthritis (n = 339), but with the aim of comparing the measurement properties of these tools and determining sensitivity to change rather than stability. IRT analysis of the PF-10 revealed weaknesses in sensitivity to treatment response at 6 and 12 months, with the authors suggesting construction of a more comprehensive measure. McHorney et al. [19] compared IRT and Likert scoring methods for the SF-36 Physical Functioning-10 scale, using a pre-post design. The findings showed apparent differences in patients with very high and low physical functioning, suggesting that Rasch scoring may have important implications for clinical interpretation of the scale [19].

Only one longitudinal study has evaluated properties of the SF-36 using IRT methodologies. The first administration of the standardised SF-36 was conducted as part of a four-year longitudinal Medical Outcomes Study of patients (N = 3,445) with chronic medical and psychiatric conditions [16]. The reproducibility of the item calibrations of the Physical Functioning-10 scale was examined from baseline to two years [16]. A high degree of consistency in item calibration between the two time points was found, both in order and in magnitude [16]. However, this longitudinal study only evaluated the stability and structural validity of the Physical Functioning-10 scale using IRT. The stability of the remaining SF-36 subscales, the physical and mental health domains, and the measure as a whole over time has not been examined using IRT to date.

A lack of evaluation regarding the performance of the SF-36 over time presents a significant gap in the literature, with unanswered questions about its measurement stability. It is vital that the long-term reliability of the SF-36 is examined to determine its true suitability for inclusion in large-scale longitudinal studies tracking participants, particularly as they age over extended periods of time. This study therefore uses an IRT-based methodology to evaluate the item stability of the SF-36 total and component summaries in a large longitudinal data set. The following questions guided this research:
(1) Is there disordering or dysfunction within the SF-36 items against the construct being measured?
(2) Do the SF-36 items have a consistent hierarchy of difficulty and good distribution across all waves of a longitudinal survey?
(3) Is the SF-36 differentiating discrete subgroups of people reliably (e.g., urban vs. regional)?
(4) Does the SF-36 measure one or more constructs?
(5) Were all items in the SF-36 instrument used by all participant subgroups in the same way?
## 2. Methods
Data were from an Australian prospective, population-based survey. The Australian Longitudinal Study on Women’s Health (ALSWH) aims to assess physical and emotional health, use of health services, health risk factors and behaviours, life stages, and demographic characteristics. The ALSWH is conducted by researchers from the University of Newcastle and the University of Queensland and is funded by the Australian Government Department of Health. The study commenced in 1996 and has been running for over 20 years.
### 2.1. Participants
Three cohorts of women born in 1973-78 (aged 18-23 in 1996), 1946-51 (aged 45-50), and 1921-26 (aged 70-75) were randomly selected from the Medicare database, which includes all Australian citizens and permanent residents. Women living in regional and remote areas were sampled at twice the rate of women living in urban areas in order to allow for meaningful statistical comparisons between urban and country-dwelling women.

Over 40,000 respondents initially responded to the baseline postal survey in 1996, with response rates across the three age groups ranging between 37% and 52% [20]. Although some immigrant groups were underrepresented and tertiary-educated women were overrepresented, the responding samples were considered to be “reasonably representative” of the Australian female adult population following a comparison to census data [21]. Each cohort has since been surveyed every three years on a rolling basis, commencing with the 1946-51 cohort in 1998, the 1921-26 cohort in 1999, and the 1973-78 cohort in 2000. Only data from the 12,432 respondents in the 1921-26 cohort were analysed in the current study. At the commencement of the longitudinal survey, these women were aged 70-75 years, and at the time of survey six they were aged in their early nineties (N = 4,055), with most attrition being due to death (N = 5,273).

A study analysed potential biases introduced through the attrition of participants from this cohort between survey one and survey five [22]. Nondeath attrition was related to having less education, not being born in Australia, being a current smoker, and having poorer health. Analysis comparing the survey population to the Australian Census data collected over the same period showed an increase in the underrepresentation of women from non-English-speaking backgrounds and an increase in the overrepresentation of current and ex-smokers. Differences between the study population and the national population were considered to have changed “only slightly” between survey one and survey five.
### 2.2. Instrument
The SF-36 HRQoL scale is included in each survey. At baseline in 1996, mean scores for the 1921-26 cohort were lower than for the other cohorts on the physical health subscales (PF, RP, and BP) and higher than for the other cohorts on the mental health subscales (MH, RE, and SF) [23]. Over time, mean PF scale scores have declined, but with significant variation across different subgroups within the cohort [24]. Mean MH scores have remained relatively stable [25].
### 2.3. Data Analysis
A two-stepped approach was taken to evaluate the reliability and validity of the SF-36 across surveys one to six. First, Rasch analyses using Winsteps version 3.92.0 [26], with the joint maximum likelihood estimation method [27], were performed on all 36 items for each of the six waves of data collection, and then on the items that constitute the physical health scales (PF 10 items, RP 4 items, BP 2 items, and GH 5 items), the mental health scales (V 4 items, SF 2 items, RE 3 items, and MH 5 items), and the item measuring health transition for each wave of data. The RMM was adopted for the data analysis since the 6-point Likert response scale was invariant across all 36 items. The RMM adopts a “the data fit the model” approach: “The empirical data must meet the prior requirements of Rasch model in order to achieve objective measurement” [28, p. 65]. Several criteria, including item infit and outfit statistics, reliability measures, rating scale functioning, and differential item functioning (DIF), were used to investigate the quality of the SF-36 total scale, physical health scale, and mental health scale. Item fit statistics indicate the extent to which the data match the expectations of the RMM; outfit and infit mean squares (MNSQ) as well as their standardized forms (ZSTD) are used.
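Because the SF-36 items analysed here are polytomous Likert responses, the applicable RMM variant is Andrich's rating scale model. In its standard form (again, stated for orientation rather than quoted from the paper), the probability that person n responds in category k of item i is

```latex
P(X_{ni} = k) = \frac{\exp\left[\sum_{j=0}^{k} (\theta_n - \delta_i - \tau_j)\right]}{\sum_{m=0}^{M} \exp\left[\sum_{j=0}^{m} (\theta_n - \delta_i - \tau_j)\right]}, \qquad \tau_0 \equiv 0,
```

where θ_n is the person measure, δ_i the item difficulty, τ_j the Andrich thresholds shared by all items (the step calibrations examined in Section 2.3.1), and M the highest category.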
#### 2.3.1. Is There Disordering or Dysfunction within the SF-36 Items against the Construct Being Measured? Response Scale
Category and step (threshold) disordering of the response scale was examined. To determine whether the rating response scales were being used in the expected manner, the rate at which average measure scores (frequency endorsed) increased in relation to category increases was examined for even distribution. A uniform category distribution is achieved when average measure scores increase monotonically as the category increases; non-uniformity occurs if categories are poorly defined or if items that do not fit the construct are included. Fit mean squares (MNSQ) below 0.7 or above 1.4 indicate a category misfit. When disordered categories are detected, consideration should be given to collapsing them with adjacent categories [29].

The distance between categories is indicated by Andrich thresholds, or step calibrations. If there is no overlap, categories should progress monotonically. Disordered steps indicate that a category defines only a narrow segment of the variable, rather than a problem with the sequencing of category definitions. An increase of at least 1.0 logit indicates distinct average measure categories on a 5-category scale, and gaps in the variable are indicated by an increase of >5.0 logits [30].
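The checks just described can be sketched programmatically. The short Python example below assumes that category average measures, Andrich thresholds, and category outfit statistics have already been exported from a Rasch program such as Winsteps; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical category diagnostics for one 5-category rating scale.
avg_measures = np.array([-1.8, -0.6, 0.4, 1.5, 2.8])  # average measure per category
thresholds = np.array([-2.1, -0.7, 0.5, 2.3])          # Andrich thresholds (step calibrations)
outfit_mnsq = np.array([1.1, 0.9, 1.0, 1.2, 1.3])      # category outfit mean squares

# Average measures should increase monotonically with category.
print("categories ordered:", bool(np.all(np.diff(avg_measures) > 0)))

# Step calibrations should also advance monotonically (no disordered thresholds).
steps = np.diff(thresholds)
print("thresholds ordered:", bool(np.all(steps > 0)))

# Advances of at least 1.0 logit indicate distinct categories on a 5-category
# scale; very large advances (> 5.0 logits) point to gaps in the variable.
print("distinct categories:", bool(np.all(steps >= 1.0)))
print("possible gaps:", bool(np.any(steps > 5.0)))

# MNSQ outside 0.7-1.4 flags a misfitting category (a candidate for collapsing).
print("misfitting categories:", np.where((outfit_mnsq < 0.7) | (outfit_mnsq > 1.4))[0])
```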
#### 2.3.2. Do the SF-36 Items Have a Consistent Hierarchy and Good Distribution across All Waves? Person and Item Fit Statistics
Misfitting items and the pattern of responses for each survey respondent were identified using fit statistics, which are used to determine whether an instrument is a valid measure of the construct it claims to measure. Fit statistics, reported in log-odds units (logits), were examined to determine whether the items contribute to the measurement of a single construct, as well as the reliability of any one person's responses. The item constructs reviewed in this study are health-related quality of life as a whole, as well as quality of life related to physical health and to mental health. Two unstandardized statistics, MNSQ and Z-Standard (Z-STD), were used to measure item and person infit and outfit. MNSQ values for infit and outfit should be close to 1.0 to fit the model for rating scales, but values within the range 0.7-1.4 are considered acceptable [15]. The model is degraded by underfit (i.e., values > 1.0), which indicates possible additional sources of variance in the model and requires further investigation. Conversely, overfit (values < 1.0) does not always degrade the model but could lead to the misinterpretation that the model worked better than expected [15]. Z-STD values for outfit are expected to approach 0; a value exceeding ±2 is deemed to fall outside the predicted model [15].

The person reliability statistic is equivalent to Cronbach's alpha used in CTT and indicates a measure's internal consistency (the relatedness amongst items) [15]. When person reliability values are low (i.e., < 0.8), the implications are twofold: (1) the instrument may not be sensitive enough to distinguish between high and low performers, and more items are required; or (2) there were not enough persons in the sample with both high and low extreme values (a narrow range of person measures).

Person separation (if the outlying measures are accidental) and the person separation index (PSI)/strata (if the outlying measures represent true performances; (4 × person separation + 1)/3) are used to classify people. Person separation reports whether the test separates the sample into enough levels, with a reliability of 0.5 separating the sample into only one or two levels. Low person separation suggests that the instrument is not sensitive enough to separate high and low performers, with 0.8 indicating separation into 2-3 levels and 0.9 indicating separation into 3-4 levels [29]. A PSI/strata of 3 is needed to consistently identify three different levels of performance (i.e., the minimum level required to attain a reliability of 0.9). Item reliability verifies the item hierarchy across three levels (high, medium, and low), with item reliability < 0.9 indicating that the sample is too small to confirm the construct validity (item difficulty) of the instrument.
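To make the fit and separation statistics concrete, the sketch below computes outfit MNSQ (the unweighted mean of squared standardized residuals), infit MNSQ (the information-weighted version), and the strata formula from hypothetical residuals; the input arrays are invented, and a real analysis would take these quantities directly from Winsteps output.

```python
import numpy as np

# Hypothetical responses to one item across six persons.
observed = np.array([3.0, 4.0, 2.0, 5.0, 4.0, 3.0])   # observed ratings
expected = np.array([3.2, 3.8, 2.5, 4.6, 3.9, 3.1])   # model-expected scores
variance = np.array([0.9, 0.8, 1.0, 0.6, 0.8, 0.9])   # model variance of each response

resid = observed - expected
z2 = resid**2 / variance                        # squared standardized residuals

outfit_mnsq = z2.mean()                         # unweighted: sensitive to outliers
infit_mnsq = (resid**2).sum() / variance.sum()  # information-weighted
print(f"outfit MNSQ = {outfit_mnsq:.2f}, infit MNSQ = {infit_mnsq:.2f}")
# Values within 0.7-1.4 are acceptable; > 1.0 suggests underfit, < 1.0 overfit.

# Person separation G, strata, and the separation-reliability relationship.
G = 2.0                            # person separation (e.g., as reported by Winsteps)
strata = (4 * G + 1) / 3           # = 3 statistically distinct performance levels
reliability = G**2 / (1 + G**2)    # = 0.8 for G = 2; reliability 0.9 needs G = 3
print(f"strata = {strata:.1f}, person reliability = {reliability:.2f}")
```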
#### 2.3.3. Does the SF-36 Measure One or More Constructs? Dimensionality of the Scale
Dimensionality was tested by: (a) finding potentially problematic items by checking for negative point-biserial correlations; (b) identifying misfitting persons or items using Rasch fit statistics; and (c) conducting Rasch factor analysis using principal components analysis (PCA) of the standardised residuals [31]. PCA of residuals checks that there are no further principal components (dimensions) after the intended or Rasch dimension is removed. No further dimensions are indicated if the residuals for pairs of items are uncorrelated and normally distributed. The criteria indicating the absence of further dimensions in the residuals were as follows: (1) >60% of the variance is explained by the Rasch factor; (2) an eigenvalue of <3 on the first contrast; and (3) the variance explained by the first contrast is <10% [32].

The person-item dimensionality map provides a schematic representation of how person abilities and item difficulties are distributed along a logit scale. Items of similar difficulty occupy the same place on the logit scale. If a person is represented on the logit scale with no corresponding item, there are gaps in the item difficulty continuum. Another indicator of overall distribution is the person measure score. If people in the sample are more able than the most difficult item on a scale, the person measure score location will be lower than the centralised item mean measure score (i.e., <50). If people in the sample are less able than the items on a scale, the mean person location will be higher (i.e., >50).
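The residual PCA check can likewise be sketched in a few lines. The example below assumes a persons × items matrix of standardized Rasch residuals is already available; the random matrix is a stand-in for real residuals, so with this input the criteria should be comfortably met.

```python
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.standard_normal((500, 36))  # placeholder standardized residuals

# Principal components of the inter-item residual correlations.
corr = np.corrcoef(residuals, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

first_contrast = eigvals[0]                       # eigenvalue of the first contrast
pct_first = first_contrast / eigvals.sum() * 100  # its share of the residual variance

print(f"first contrast eigenvalue = {first_contrast:.2f}")
print(f"residual variance in first contrast = {pct_first:.1f}%")
# Unidimensionality is supported when the first-contrast eigenvalue is < 3
# (weaker than about three items' worth of variance) and its variance share
# stays below the 10% criterion noted above.
```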
#### 2.3.4. Were All Items in the SF-36 Instrument Used by All Groups in the Same Way? Differential Item Analysis
A differential item functioning (DIF) analysis was performed to investigate whether items in the instrument were used by all groups in the same way. DIF is noticeable when the response to an item is influenced by a characteristic of the respondent other than their ability on the underlying trait. For the DIF analysis, the sample was categorised by marital status (single, widowed, divorced, married, de facto, and other) and location (urban vs. regional). In determining DIF when comparing two groups (i.e., urban and regional), the hypothesis “this item has the same difficulty for two groups” is used. The difference in item difficulty between the two groups, indicated by the DIF contrast, should be at least 0.5 logits, with a p-value < 0.05, for DIF to be noticeable. In determining DIF when comparing more than two groups (i.e., marital status), the hypothesis “this item has no overall DIF across all groups” is used, and DIF is then determined using the chi-square statistic and a p-value < 0.05 [29].
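Finally, the two-group DIF rule can be expressed as a short calculation, assuming item difficulty estimates and standard errors for each group have been exported from the Rasch analysis; the numbers below are illustrative, and the normal approximation stands in for the t-test reported by Winsteps.

```python
import numpy as np
from scipy import stats

# Hypothetical difficulty (logits) and standard errors for one item, by group.
b_urban, se_urban = 0.85, 0.06
b_regional, se_regional = 0.25, 0.07

dif_contrast = b_urban - b_regional
se_contrast = np.hypot(se_urban, se_regional)  # combined SE of the difference

# Two-sided test of "this item has the same difficulty for two groups".
z = dif_contrast / se_contrast
p = 2 * stats.norm.sf(abs(z))

noticeable = abs(dif_contrast) >= 0.5 and p < 0.05
print(f"DIF contrast = {dif_contrast:.2f} logits, p = {p:.4f}, noticeable: {noticeable}")
```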
## 2.1. Participants
Three cohorts of women born in 1973-78 (aged 18-23 in 1996), 1946-51 (aged 45-50), and 1921-26 (aged 70-75) were randomly selected from the Medicare database, which includes all Australian citizens and permanent residents. Women living in regional and remote areas were sampled at twice the rate of women living in urban areas in order to allow for meaningful statistical comparisons between urban and country-dwelling women.Over 40,000 respondents initially responded to the baseline postal survey in 1996 with response rates across the three age groups ranging between 37% and 52% [20]. Although some immigrant groups were underrepresented and tertiary educated women were overrepresented, the responding samples were considered to be “reasonably representative” of the Australian female adult population following a comparison to census data [21]. Each cohort has since been surveyed every three years on a rolling basis, commencing with the 1946-51 cohort in 2018, the 1921-26 cohort in 1999, and the 1973-78 cohort in 2000. Only data from the 12,432 respondents in the 1921-26 cohort were analysed in the current study. At the commencement of the longitudinal survey, these women were aged 70-75 years, and at the time of survey six, they were aged in their early nineties (N = 4,055), with most attrition being due to death (N = 5,273).A study analysed potential biases introduced through the attrition of participants from this cohort between survey one and survey five [22]. Nondeath attrition was related to having less education, not being born in Australia, being a current smoker, and having poorer health in this cohort. Analysis comparing the survey population to the Australian Census data collected over the same time period showed an increase in the underrepresentation of women from non-English speaking backgrounds and an increase in the overrepresentation of current and ex-smokers. Differences between the study population and the national population were considered to have changed “only slightly” between survey one and survey five.
## 2.2. Instrument
The SF-36 HRQoL scale is included in each survey. At baseline in 1996, mean scores for the 1921-26 cohort were lower than for other cohorts for the physical health subscales (PF, RP, and BP) and higher than for other cohorts for the mental health subscales (MH, RE, and BP) [23]. Over time, mean PF scores scale have declined, but with significant variation across different subgroups within the cohort [24]. Mean MH scores have remained relatively stable [25].
## 2.3. Data Analysis
A two-stepped approach was taken to evaluate the reliability and validity of the SF-36. Across surveys one to six. First, Rasch analyses using Winsteps version 3.92.0 [26], with the joint maximum likelihood estimation method [27] were performed on all 36 items for each of the six waves of data collection and then on the items that constitute the physical health scales (PF 10-items, RP 4 items, BP 2 items, and GH 5 items), the mental health scales (V, SF 2 items, RE 3 items, and MH 5 items) and the item measuring health transition for each wave of data. The RMM was adopted for the data analysis since the 6-point response Likert scale was invariant across all the 36 items. The RMM adopts a “the data fit the model” approach. “The empirical data must meet the prior requirements of Rasch model in order to achieve objective measurement” [28, p. 65]. Several criteria including item infit and outfit statistics, reliability measures, rating scale functioning, and differential item functioning (DIF) were used to investigate the quality of the SF-36 total scale, physical health scale, and mental health scale. Item fit statistics indicate the extent to which the data match the expectations of the RMM. Outfit and Infit mean square (MNSQ) as well as their standardized forms (ZSTD) are used.
### 2.3.1. Is There Disordering or Dysfunction within the SF-36 Items against the Construct Being Measured? Response Scale
Category and step (threshold) disordering of the response scale was examined. To determine whether the rating response scales were being used in the expected manner, the rate at which average measure scores (frequency endorsed) increased in relation to category increases was examined for even distribution. A uniform category distribution is achieved when average measure scores increase monotonically as the category increases. If categories are poorly defined or items are included that do not fit the construct, then non-uniformity occurs. Fit mean squares (MNSQ) below 0.7 or above 1.4 indicate a category misfit. When disordered categories are measured then a consideration should be made to collapse it with an adjacent category [29].The distance between categories is indicated by Andrich thresholds, or step calibrations. If there is no overlap, then categories should progress monotonically. Disordered steps indicate that the category defines only a narrow definition of the variable, rather than a problem with the sequencing of category definitions. An increase of at least 1.0 logit indicates distinct average measure categories on a 5-category scale, and gaps in the variable are indicated by an increase of >0.5 logits [30].
### 2.3.2. Do the SF-36 Items Have a Consistent Hierarchy and Good Distribution across All Waves? Person and Item Fit Statistics
Misfitting items and the pattern of responses for each survey respondent were identified using fit statistics. These are used to determine whether an instrument is a valid measure of the construct it claims to measure. Fit statistics, reported as log odd units (logits), will be examined to determine whether the items contribute to the measurement a single construct, and the reliability of any one person’s responses. The item constructs reviewed in this study are health related quality of life as a whole, as well as quality of life related to physical health and mental health. Two unstandardized statistics, MNSQ and Z-Standard (Z-STD), were used to measure item and person infit and outfit. MNSQ values for infit and outfit should have a value close to 1.0 to fit the model for rating scales, but values within the range of 0.7-1.4 are considered acceptable [15]. The model is degraded by underfit (i.e., values > 1.0), indicating the possibility for other sources of variance in the model and further investigation is required to determine the reason for the underfit. Conversely, overfit (values < 1.0) does not always degrade the model and could result in a misinterpretation that the model worked better than expected [15]. Z-STD values for outfit are expected to reach 0. If a value exceeds ±2, it is deemed to fall outside of the predicted model [15].The person reliability statistic is equivalent to Cronbach’s alpha used in CTT and indicates a measure’s internal consistency (the relatedness amongst items) [15]. When person reliability values are low (i.e., < 0.8), the implications are twofold: (1) an instrument may not be sensitive enough to distinguish between high and low performers and more items are required; or (2) there were not enough persons in the sample with both high and low extreme values (a narrow range of person measures).Person separation (if the outlying measures are accidental) and person separation index (PSI)/strata (if the outlying measures represent true performances; 4∗person separation +1/3) are used to classify people. Person separation reports whether the test separates the sample into enough levels with reliability of 0.5 separating into only one or two levels. Low person separation suggests that the instrument is not sensitive enough to separate high and low performers, 0.8 indicating separation into 2-3 levels and 0.9 indicating separation into 3 or 4 levels [29]. PSI/strata of 3 are needed to consistently identify three different levels of performance (i.e., the minimum level required to attain a reliability of 0.9). Item reliability verifies item hierarchy with <3 levels (high, medium, and low) with item reliability < 0.9 indicating the sample is too small to confirm the construct validity (item difficulty) of the instrument.
### 2.3.3. Does the SF-36 Measure One or More Constructs? Dimensionality of the Scale
Dimensionality is tested by the following: (a) finding potentially problematic items by checking negative point-biserial correlations; (b) identifying misfitting persons or items using Rasch fit statistics; and (c) conducting Rasch factor analysis using principal components analysis (PCA) of the standardised residuals [31]. PCA of residuals checks that there are no further principal components (dimensions) after the intended or Rasch dimension is removed. No further dimensions are indicated if the residuals for pairs of items are uncorrelated and normally distributed. The criteria for determining the presences of further dimensions in the residuals were as follows: (1) >60% of the variance is explained by the Rasch factor; (2) an eigenvalue of <3 on first contrast; and (3) variance explained by the first contrast is <10% [32].The person-item dimensionality map provides a schematic representation of how person abilities and item difficulties are distributed using a logit scale. Items that represent similar difficulty will occupy the same place on the logit scale. If a person is represented on the logit scale with no corresponding item, then there are gaps in the item difficulty continuum. Another indicator of overall distribution is the person measure score. If people in the sample are more able than the most difficult item on a scale, then the person measure score location will be lower than the centralised item mean measure score (i.e., <50). If people in the samples are less able than the items on a scale, then the mean person location will be higher (i.e. >50).
### 2.3.4. Were All Items in the SF-36 Instrument Used by All Groups in the Same Way? Differential Item Analysis
A differential item functioning (DIF) analysis was performed to investigate whether items in the instrument were used by all groups in the same way. DIF is present when a response to an item is influenced by a characteristic of the respondent other than their ability on the underlying trait. For the DIF analysis, the sample was categorised by marital status (single, widowed, divorced, married, de facto, and other) and location (urban vs. regional). In determining DIF when comparing two groups (i.e., urban and regional), the hypothesis “this item has the same difficulty for two groups” is used. The difference in the difficulty of the item between the two groups, indicated by the DIF contrast, should be at least 0.5 logits, with a p-value < 0.05, for DIF to be noticeable. In determining DIF when comparing more than two groups (i.e., marital status), the hypothesis “this item has no overall DIF across all groups” is used. DIF is then determined using the chi-square statistic and a p-value < 0.05 [29].
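A minimal sketch of the two-group decision rule, assuming hypothetical contrast and p-values (the study itself obtained these quantities from its Rasch software output):

```python
# Minimal sketch (not this study's code): the two-group DIF rule described
# above; a DIF contrast of at least 0.5 logits with p < .05 is noticeable.
def noticeable_dif(dif_contrast: float, p_value: float) -> bool:
    return abs(dif_contrast) >= 0.5 and p_value < 0.05

print(noticeable_dif(0.62, 0.010))  # True: large and statistically significant
print(noticeable_dif(0.62, 0.200))  # False: large but not significant
print(noticeable_dif(0.10, 0.001))  # False: significant but trivially small
```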
## 3. Results
SF-36 data were gathered over six waves: Wave 1, N = 12,077; Wave 2, N = 10,411; Wave 3, N = 8,577; Wave 4, N = 7,112; Wave 5, N = 5,534; and Wave 6, N = 4,032. The sample size decreased with each subsequent phase of data collection as participants died or were lost to follow-up.
### 3.1. SF-36 Total Scale Rasch Analysis for Six Waves of Data Collection
Total Rasch scale item statistics for six waves of data collection are shown in Table 1. When all 36 SF-36 items were calibrated using the RMM for the six waves of data collection, MNSQ infit statistics ranged from 0.13 to 2.43 and outfit statistics ranged from 0.22 to 2.64 (see Table 2). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being –3.01 and the highest value being +2.31. This resulted in an average item separation index of 77.98 and an average item reliability of 1.00 over the six waves (see Table 3).
Table 1
SF-36 total scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 Wave 4 Wave 5 Wave 6 SF36 ITEM LOGIT MEASURE MODEL S.E PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR 1: Q1 -0.36 0.01 -0.16 -0.37 0.01 -0.14 -0.37 0.01 -0.09 -0.52 0.01 -0.06 -0.60 0.01 -0.05 -0.68 0.01 0.01 2: Q2 -0.39 0.01 0.04 -0.44 0.01 0.03 -0.51 0.01 0.03 -0.56 0.01 0.08 -0.66 0.01 0.06 -0.73 0.01 0.11 3: Q3A 1.94 0.02 0.24 1.96 0.02 0.26 2.05 0.02 0.29 2.08 0.02 0.30 2.09 0.03 0.29 2.31 0.04 0.27 4: Q3B 0.36 0.01 0.46 0.39 0.01 0.44 0.53 0.01 0.44 0.67 0.01 0.45 0.77 0.02 0.45 0.95 0.02 0.41 5: Q3C 0.30 0.01 0.47 0.32 0.01 0.46 0.34 0.01 0.44 0.44 0.01 0.46 0.49 0.02 0.44 0.57 0.02 0.41 6: Q3D 0.82 0.01 0.45 0.84 0.01 0.44 0.93 0.01 0.46 1.00 0.02 0.47 1.11 0.02 0.46 1.25 0.02 0.44 7: Q3E 0.13 0.01 0.51 0.15 0.01 0.50 0.24 0.01 0.50 0.28 0.01 0.49 0.36 0.02 0.49 0.46 0.02 0.47 8: Q3F 0.54 0.01 0.44 0.63 0.01 0.41 0.72 0.01 0.41 0.73 0.02 0.43 0.73 0.02 0.43 0.81 0.02 0.40 9: Q3G 0.44 0.01 0.49 0.53 0.01 0.46 0.73 0.01 0.46 0.87 0.02 0.47 1.05 0.02 0.44 1.24 0.02 0.44 10: Q3H 0.05 0.01 0.52 0.10 0.01 0.49 0.23 0.01 0.49 0.36 0.01 0.50 0.48 0.02 0.48 0.65 0.02 0.48 11: Q3I -0.14 0.01 0.48 -0.13 0.01 0.45 -0.07 0.01 0.46 -0.02 0.01 0.44 0.05 0.01 0.44 0.11 0.02 0.45 12: Q3J -0.28 0.01 0.36 -0.31 0.01 0.35 -0.29 0.01 0.32 -0.24 0.01 0.32 -0.23 0.01 0.32 -0.23 0.02 0.32 13: Q4A 1.26 0.01 0.35 1.31 0.01 0.33 1.41 0.02 0.29 1.41 0.02 0.28 1.46 0.02 0.26 1.47 0.03 0.27 14: Q4B 1.63 0.01 0.35 1.74 0.02 0.32 1.87 0.02 0.29 1.89 0.02 0.28 1.92 0.03 0.27 1.94 0.03 0.26 15: Q4C 1.47 0.01 0.36 1.53 0.02 0.36 1.60 0.02 0.31 1.69 0.02 0.30 1.74 0.02 0.28 1.78 0.03 0.24 16: Q4D 1.50 0.01 0.36 1.58 0.02 0.35 1.67 0.02 0.31 1.73 0.02 0.30 1.77 0.02 0.28 1.80 0.03 0.28 17: Q5A 1.02 0.01 0.37 1.01 0.01 0.35 1.00 0.01 0.31 0.96 0.02 0.30 0.92 0.02 0.30 0.89 0.02 0.27 18: Q5B 1.22 0.01 0.36 1.22 0.01 0.35 1.23 0.02 0.31 1.21 0.02 0.30 1.18 0.02 0.29 1.15 0.02 0.27 19: Q5C 1.05 0.01 0.35 1.03 0.01 0.33 1.01 0.01 0.31 0.99 0.02 0.29 0.95 0.02 0.29 0.91 0.02 0.26 20: Q6 1.17 0.01 -0.22 1.35 0.01 -0.20 0.93 0.01 -0.16 0.69 0.01 -0.16 0.48 0.02 -0.12 0.37 0.02 -0.08 21: Q7 -0.28 0.01 -0.06 -0.26 0.01 -0.04 -0.40 0.01 0.01 -0.50 0.01 0.01 -0.57 0.01 0.03 -0.67 0.01 0.07 22: Q8 0.68 0.01 -0.18 0.73 0.01 -0.14 0.44 0.01 -0.11 0.26 0.01 -0.09 0.12 0.01 -0.04 -0.01 0.02 -0.02 23: Q9A -0.59 0.01 -0.05 -0.67 0.01 -0.06 -0.79 0.01 0.00 -0.82 0.01 -0.02 -0.93 0.01 0.01 -1.06 0.01 0.05 24: Q9B -2.04 0.01 0.39 -2.30 0.01 0.36 -2.39 0.01 0.33 -2.40 0.01 0.31 -2.38 0.02 0.31 -2.42 0.02 0.27 25: Q9C -2.64 0.01 0.40 -2.92 0.02 0.35 -2.98 0.02 0.34 -3.01 0.02 0.30 -2.86 0.02 0.31 -2.89 0.02 0.27 26: Q9D -0.30 0.01 0.06 -0.20 0.01 0.01 -0.28 0.01 0.09 -0.29 0.01 0.07 -0.30 0.01 0.08 -0.37 0.01 0.12 27: Q9E -0.77 0.01 -0.05 -0.89 0.01 -0.09 -1.00 0.01 -0.04 -1.06 0.01 -0.05 -1.17 0.01 -0.02 -1.35 0.01 0.00 28: Q9F -2.01 0.01 0.40 -2.15 0.01 0.34 -2.15 0.01 0.35 -2.16 0.01 0.34 -2.13 0.02 0.33 -2.13 0.02 0.31 29: Q9G -1.63 0.01 0.44 -1.70 0.01 0.41 -1.67 0.01 0.40 -1.60 0.01 0.40 -1.56 0.01 0.40 -1.57 0.01 0.37 30: Q9H 0.24 0.01 0.14 0.33 0.01 0.09 0.26 0.01 0.13 0.23 0.01 0.14 0.14 0.01 0.12 0.08 0.02 0.15 31: Q9I -1.23 0.01 0.39 -1.22 0.01 0.37 -1.15 0.01 0.34 -1.07 0.01 0.36 -1.03 0.01 0.34 -1.04 0.01 0.31 32: Q10 -1.34 0.01 0.35 -1.39 0.01 0.31 -1.30 0.01 0.28 -1.23 0.01 0.26 -1.20 0.01 0.24 -1.18 0.01 0.24 33: Q11A -1.39 0.01 0.33 -1.48 0.01 0.31 -1.49 0.01 
0.28 -1.50 0.01 0.25 -1.49 0.01 0.26 -1.52 0.01 0.20 34: Q11B 0.28 0.01 0.03 0.43 0.01 -0.01 0.32 0.01 0.08 0.25 0.01 0.07 0.10 0.01 0.10 -0.01 0.02 0.14 35: Q11C -0.68 0.01 0.29 -0.75 0.01 0.27 -0.65 0.01 0.29 -0.59 0.01 0.25 -0.50 0.01 0.25 -0.48 0.01 0.24 36: Q11D -0.02 0.01 -0.06 0.01 0.01 -0.09 -0.03 0.01 0.00 -0.17 0.01 -0.01 -0.31 0.01 0.04 -0.40 0.01 0.09 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 2
SF-36 total scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM INFIT Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.92 -6.7 0.96 -2.9 0.86 -9.9 0.90 -7.3 0.83 -9.9 0.86 -9.8 2: Q2 0.51 -9.9 0.55 -9.9 0.53 -9.9 0.57 -9.9 0.54 -9.9 0.56 -9.9 3: Q3A 1.03 2.1 1.02 1.4 1.12 7.0 1.09 5.5 1.06 3.0 1.02 1.2 4: Q3B 0.59 -9.9 0.61 -9.9 0.64 -9.9 0.65 -9.9 0.72 -9.9 0.73 -9.9 5: Q3C 0.54 -9.9 0.56 -9.9 0.56 -9.9 0.57 -9.9 0.59 -9.9 0.60 -9.9 6: Q3D 0.82 -9.9 0.82 -9.9 0.84 -9.9 0.84 -9.9 0.86 -8.6 0.86 -8.7 7: Q3E 0.46 -9.9 0.48 -9.9 0.48 -9.9 0.50 -9.9 0.54 -9.9 0.56 -9.9 8: Q3F 0.66 -9.9 0.67 -9.9 0.72 -9.9 0.72 -9.9 0.75 -9.9 0.75 -9.9 9: Q3G 0.76 -9.9 0.78 -9.9 0.83 -9.9 0.85 -9.9 0.94 -3.7 0.94 -3.4 10: Q3H 0.46 -9.9 0.50 -9.9 0.53 -9.9 0.56 -9.9 0.65 -9.9 0.68 -9.9 11: Q3I 0.27 -9.9 0.30 -9.9 0.29 -9.9 0.31 -9.9 0.37 -9.9 0.39 -9.9 12: Q3J 0.17 -9.9 0.19 -9.9 0.13 -9.9 0.15 -9.9 0.17 -9.9 0.18 -9.9 13: Q4A 0.40 -9.9 0.41 -9.9 0.41 -9.9 0.42 -9.9 0.49 -9.9 0.50 -9.9 14: Q4B 0.57 -9.9 0.58 -9.9 0.61 -9.9 0.61 -9.9 0.66 -9.9 0.66 -9.9 15: Q4C 0.50 -9.9 0.51 -9.9 0.51 -9.9 0.52 -9.9 0.57 -9.9 0.58 -9.9 16: Q4D 0.51 -9.9 0.53 -9.9 0.54 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 17: Q5A 0.25 -9.9 0.27 -9.9 0.22 -9.9 0.23 -9.9 0.25 -9.9 0.26 -9.9 18: Q5B 0.37 -9.9 0.39 -9.9 0.36 -9.9 0.37 -9.9 0.39 -9.9 0.41 -9.9 19: Q5C 0.27 -9.9 0.29 -9.9 0.24 -9.9 0.26 -9.9 0.26 -9.9 0.28 -9.9 20: Q6 2.43 9.9 2.59 9.9 2.48 9.9 2.64 9.9 2.36 9.9 2.48 9.9 21: Q7 1.84 9.9 1.90 9.9 1.96 9.9 2.01 9.9 1.70 9.9 1.73 9.9 22: Q8 2.09 9.9 2.17 9.9 2.14 9.9 2.21 9.9 1.93 9.9 1.99 9.9 23: Q9A 1.48 9.9 1.52 9.9 1.48 9.9 1.52 9.9 1.44 9.9 1.46 9.9 24: Q9B 1.47 9.9 1.40 9.9 1.64 9.9 1.55 9.9 1.64 9.9 1.55 9.9 25: Q9C 1.67 9.9 1.53 9.9 1.82 9.9 1.66 9.9 1.74 9.9 1.57 9.9 26: Q9D 1.69 9.9 1.73 9.9 1.60 9.9 1.64 9.9 1.50 9.9 1.52 9.9 27: Q9E 1.55 9.9 1.58 9.9 1.55 9.9 1.58 9.9 1.49 9.9 1.50 9.9 28: Q9F 1.07 5.1 1.03 2.4 1.19 9.9 1.15 9.0 1.13 6.9 1.09 4.9 29: Q9G 1.02 2.0 1.00 0.2 1.04 3.0 1.02 1.4 1.02 1.1 1.00 -0.1 30: Q9H 1.63 9.9 1.62 9.9 1.49 9.9 1.50 9.9 1.35 9.9 1.36 9.9 31: Q9I 0.84 -9.9 0.84 -9.9 0.88 -9.9 0.88 -9.9 0.84 -9.9 0.84 -9.9 32: Q10 0.79 -9.9 0.79 -9.9 0.84 -9.9 0.83 -9.9 0.92 -6.6 0.92 -6.6 33: Q11A 0.77 -9.9 0.76 -9.9 0.64 -9.9 0.63 -9.9 0.66 -9.9 0.65 -9.9 34: Q11B 1.88 9.9 1.90 9.9 1.77 9.9 1.80 9.9 1.62 9.9 1.62 9.9 35: Q11C 1.04 3.3 1.05 4.1 0.95 -4.5 0.95 -3.8 0.99 -0.9 0.99 -0.5 36: Q11D 1.78 9.9 1.83 9.9 1.81 9.9 1.86 9.9 1.68 9.9 1.70 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.72 -9.9 0.73 -9.9 0.69 -9.9 0.70 -9.9 0.65 -9.9 0.65 -9.9 2: Q2 0.50 -9.9 0.52 -9.9 0.49 -9.9 0.50 -9.9 0.46 -9.9 0.47 -9.9 3: Q3A 1.11 4.4 1.06 2.3 1.18 6.0 1.11 3.9 1.26 6.4 1.17 4.3 4: Q3B 0.78 -9.9 0.78 -9.9 0.81 -9.1 0.81 -9.4 0.91 -3.6 0.90 -4.1 5: Q3C 0.62 -9.9 0.64 -9.9 0.65 -9.9 0.66 -9.9 0.72 -9.9 0.72 -9.9 6: Q3D 0.87 -7.1 0.86 -7.7 0.92 -3.6 0.90 -4.6 0.97 -0.9 0.93 -2.4 7: Q3E 0.56 -9.9 0.57 -9.9 0.61 -9.9 0.63 -9.9 0.70 -9.9 0.70 -9.9 8: Q3F 0.73 -9.9 0.73 -9.9 0.71 -9.9 0.72 -9.9 0.76 -9.9 0.76 -9.9 9: Q3G 0.98 -1.0 0.97 -1.3 1.03 1.5 1.01 0.6 1.09 3.3 1.04 1.6 10: Q3H 0.74 -9.9 0.76 -9.9 0.83 -8.0 0.85 -7.4 0.94 -2.4 0.93 -2.6 11: Q3I 0.41 -9.9 0.43 -9.9 0.48 -9.9 0.51 -9.9 0.55 -9.9 0.57 -9.9 12: Q3J 0.24 -9.9 0.26 -9.9 0.29 -9.9 0.31 -9.9 0.33 -9.9 0.34 -9.9 13: Q4A 0.52 -9.9 0.54 -9.9 0.58 -9.9 0.60 -9.9 0.60 -9.9 0.61 -9.9 14: Q4B 0.69 -9.9 0.69 -9.9 0.73 -9.9 0.72 -9.9 0.74 -8.7 0.74 -8.9 15: Q4C 
0.62 -9.9 0.63 -9.9 0.67 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 16: Q4D 0.64 -9.9 0.64 -9.9 0.68 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 17: Q5A 0.27 -9.9 0.29 -9.9 0.30 -9.9 0.32 -9.9 0.32 -9.9 0.34 -9.9 18: Q5B 0.42 -9.9 0.43 -9.9 0.45 -9.9 0.47 -9.9 0.47 -9.9 0.49 -9.9 19: Q5C 0.29 -9.9 0.31 -9.9 0.32 -9.9 0.34 -9.9 0.33 -9.9 0.35 -9.9 20: Q6 2.17 9.9 2.27 9.9 2.06 9.9 2.14 9.9 2.00 9.9 2.06 9.9 21: Q7 1.61 9.9 1.63 9.9 1.52 9.9 1.53 9.9 1.40 9.9 1.41 9.9 22: Q8 1.75 9.9 1.80 9.9 1.65 9.9 1.68 9.9 1.56 9.9 1.59 9.9 23: Q9A 1.40 9.9 1.41 9.9 1.36 9.9 1.36 9.9 1.34 9.9 1.35 9.9 24: Q9B 1.62 9.9 1.53 9.9 1.61 9.9 1.53 9.9 1.58 9.9 1.51 9.9 25: Q9C 1.73 9.9 1.60 9.9 1.75 9.9 1.62 9.9 1.65 9.9 1.57 9.9 26: Q9D 1.51 9.9 1.53 9.9 1.47 9.9 1.49 9.9 1.41 9.9 1.42 9.9 27: Q9E 1.51 9.9 1.52 9.9 1.44 9.9 1.45 9.9 1.46 9.9 1.47 9.9 28: Q9F 1.15 7.5 1.11 5.5 1.16 7.0 1.12 5.3 1.12 4.6 1.09 3.6 29: Q9G 1.02 1.6 1.01 0.5 1.03 1.6 1.01 0.8 1.06 2.7 1.04 2.1 30: Q9H 1.36 9.9 1.36 9.9 1.35 9.9 1.37 9.9 1.29 9.9 1.29 9.9 31: Q9I 0.89 -8.2 0.89 -7.9 0.92 -5.2 0.92 -4.9 0.91 -5.1 0.91 -4.8 32: Q10 0.93 -5.1 0.94 -4.7 1.00 -0.2 1.00 0.0 1.06 3.0 1.06 3.1 33: Q11A 0.67 -9.9 0.66 -9.9 0.67 -9.9 0.66 -9.9 0.70 -9.9 0.69 -9.9 34: Q11B 1.58 9.9 1.59 9.9 1.44 9.9 1.45 9.9 1.38 9.9 1.38 9.9 35: Q11C 1.01 0.5 1.01 0.9 0.99 -0.6 1.00 -0.2 0.98 -1.0 0.98 -0.9 36: Q11D 1.61 9.9 1.64 9.9 1.51 9.9 1.53 9.9 1.43 9.9 1.43 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; Z-STD ≤-2.0 or ≥2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 3
SF-36 total scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| Mean | -.69 | -.68 | -.72 | -.75 | -.80 | -.85 |
| S.D. | .24 | .22 | .24 | .23 | .23 | .24 |
| MAX | .82 | .29 | .64 | .67 | .03 | .15 |
| MIN | -4.33 | -2.70 | -2.60 | -2.59 | -2.54 | -2.76 |
| Infit-MNSQ | 1.03 | 1.02 | 1.03 | 1.04 | 1.04 | 1.03 |
| Infit-ZSTD | -.30 | -.40 | -.30 | -.20 | -.20 | -.10 |
| Outfit-MNSQ | 1.01 | 1.01 | 1.00 | 1.00 | .99 | .99 |
| Outfit-ZSTD | -.40 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Person separation | .81c | .60c | .72c | .71c | .75c | .78c |
| Person reliability | .40a | .26a | .34a | .33a | .36a | .38a |
| **Items** | | | | | | |
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.11 | 1.19 | 1.20 | 1.21 | 1.22 | 1.26 |
| MAX | 1.94 | 1.96 | 2.05 | 2.08 | 2.09 | 2.31 |
| MIN | -2.64 | -2.92 | -2.98 | -3.01 | -2.86 | -2.89 |
| Infit-MNSQ | .98 | .99 | .98 | .98 | .98 | .99 |
| Infit-ZSTD | -2.30 | -2.30 | -2.20 | -1.90 | -1.40 | -.90 |
| Outfit-MNSQ | .99 | 1.00 | .98 | .98 | .98 | .98 |
| Outfit-ZSTD | -2.30 | -2.30 | -2.30 | -2.00 | -1.50 | -1.10 |
| Item separation | 93.40 | 89.72 | 82.81 | 76.45 | 67.43 | 58.09 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. a = person or item reliability <0.8; b = item separation <3.0; c = person separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 total scale person-item map in Supplemental Figure 1 shows evidence of consistent hierarchical ordering of the SF-36 total scale items. Items which were less difficult are located at the bottom of the person-item map while more difficult items are located at the top of the map. The figure also shows that while each of the waves had a reasonable distribution of items in relation to item difficulty, several of the SF-36 total scale items have the same level of difficulty.

Rasch analysis reports that the calibrations for the six rating scale categories increase monotonically: -3.15, -1.36, -.25, .48, 1.31, and 2.82 for wave one and -2.96, -1.30, -.31, .42, 1.29, and 2.78 for wave six.

The average person measure was -0.75 logits (SD = 0.23) over the six waves of data collection (see Table 3). The mean person separation was 0.73 with a mean reliability of 0.35 (see Table 3). When examining the overall RMM output of the SF-36 total scale, the average person measure (-0.75 logits) was lower than the average item measure (0.00 logits). The range of logit values for items was from +2.31 to -3.01 logits. The person reliability was 0.35 and the item reliability was 1.00. This places the item reliability for the SF-36 total scale in the acceptable range and the person reliability in the unacceptable range. The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured. However, the separation index for persons was less than 2.0, indicating inadequate separation of participants on the construct.

Item fit to the unidimensionality requirement of the RMM was also examined. Eleven of the 36 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to -2 range. Specifically, items CH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, MH04:Q9F, VT03:Q9G, VT04:Q9I, SF02:Q10, CH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 2). In other words, only 30.6% (11 of 36) of the SF-36 total scale items met the RMM requirements. The following items had an Infit MNSQ statistic that was less than 0.70: HT:Q2, PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, RE01:Q5A, RE02:Q5B, and RE03:Q5C.
The following items had an Infit MNSQ statistic that was greater than 1.30: FO01:Q6, BP01:Q7, BP02:Q8, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH05:Q9H, GH02:Q11B, and GH05:Q11D.

The Winsteps RMM program determines the dimensionality of a scale by using a Rasch-residual principal components analysis. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 4). This indicated that the unidimensionality requirement of the SF-36 total scale was met. The raw variance explained by the SF-36 total scale over the six waves of data collection ranged from 58.5% to 62.1% and the unexplained variance in the first contrast ranged from 11.9% to 14.5%. The residual analysis indicated that no second dimension or factor existed. Linacre [32] suggests that a first single factor accounting for 60% or more of the variance is considered a reasonable unidimensional construct. “A second factor or residual factor should not indicate a substantial amount of variance if unidimensionality is tenable” [33, p. 192].
Table 4
SF-36 total scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
Wave 1 Wave 2 Wave 3 Eigenvalue Observed Expected Eigenvalue Observed Expected Eigenvalue Observed Expected Total raw variance in observations 86.71 100.00% 100.00% 92.22 100.00% 100.00% 92.42 100.00% 100.00% Raw variance explained by measures 50.71 58.50% 58.70% 56.22 61.00% 61.10% 56.42 61.00% 61.30% Raw variance explained by persons 3.47 4.00% 4.00% 1.93 2.10% 2.10% 2.04 2.20% 2.20% Raw Variance explained by items 47.24 54.50% 54.70% 54.30 58.90% 59.00% 54.38 58.80% 59.10% Raw unexplained variance (total) 36.00 41.50% 41.30% 36.00 39.00% 38.90% 36.00 39.00% 38.70% Unexplained variance in 1st contrast 12.60 14.50% 35.00% 12.57 13.60% 34.90% 12.26 13.30% 34.10% Unexplained variance in 2nd contrast 3.02 3.50% 8.40% 3.05 3.30% 8.50% 3.03 3.30% 8.40% Unexplained variance in 3rd contrast 1.89 2.20% 5.20% 1.78 1.90% 4.90% 1.84 2.00% 5.10% Unexplained variance in 4th contrast 1.59 1.80% 4.40% 1.54 1.70% 4.30% 1.50 1.60% 4.20% Unexplained variance in 5th contrast 1.24 1.40% 3.40% 1.27 1.40% 3.50% 1.26 1.40% 3.50% Wave 4 Wave 5 Wave 6 Eigenvalue Observed Expected Eigenvalue Observed Expected Eigenvalue Observed Expected Total raw variance in observations 92.10 100.00% 100.00% 91.96 100.00% 100.00% 94.92 100.00% 100.00% Raw variance explained by measures 56.10 60.90% 61.50% 55.96 60.90% 61.70% 58.92 62.10% 63.00% Raw variance explained by persons 3.59 3.90% 3.90% 4.05 4.40% 4.50% 4.57 4.80% 4.90% Raw Variance explained by items 52.51 57.00% 57.60% 51.91 56.40% 57.20% 54.35 57.30% 58.10% Raw unexplained variance (total) 36.00 39.10% 38.50% 36.00 39.10% 38.30% 36.00 37.90% 37.00% Unexplained variance in 1st contrast 12.41 13.50% 34.50% 12.08 13.10% 33.60% 11.33 11.90% 31.50% Unexplained variance in 2nd contrast 3.06 3.30% 8.50% 3.20 3.50% 8.90% 3.22 3.40% 8.90% Unexplained variance in 3rd contrast 1.88 2.00% 5.20% 1.95 2.10% 5.40% 2.17 2.30% 6.00% Unexplained variance in 4th contrast 1.50 1.60% 4.20% 1.53 1.70% 4.30% 1.55 1.60% 4.30% Unexplained variance in 5th contrast 1.27 1.40% 3.50% 1.25 1.40% 3.50% 1.30 1.40% 3.60% Notes. a > 60% unexplained variance in the Rasch factor; b Eigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.The point-measure correlation (PTMEA) ranges from +1 to -1 “with negative items suggesting improper scoring or not functioning as expected” [33, p. 192]. An inspection of the PTMEAs for the SF-36 total scale indicated that items GH01:Q1, SF01:Q6, BP01:Q7, and VT02:Q9E had consistent negative PTMEAs over the six waves of data collection. The rest of the SF-36 total scale items had PTMEAs that were positive, supporting item-level polarity. For all other items, the PTMEA correlations had acceptable values.The functioning of the six rating scale categories was examined for the SF-36 total scale. Rating scale frequency and percent indicated that all categories were used by the participants. The category use statistics are presented in Table5. The category logit measures ranged from -3.19 to 2.86 (see Table 5). None of the infit MNSQ scores fell outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range. The results indicated that the six-level rating scale used in the SF-36 total scale fits appropriately to the predictive RMM (see Supplemental Figure 2); however, the full range of ratings were used by the participants who completed the SF-36 total scale. 
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and each response category was the most probable category for some part of the continuum.
Table 5
SF-36 total scale Rasch analysis summary of category structure for six waves of data collection.
Wave 1 Wave 2 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 84119 19 (-3.15) 1.30 1.17 NONE 72530 19 (-3.19) 1.28 1.17 NONE 2 133566 30 -1.36 .99 1.01 -1.88 114964 31 -1.40 1.02 1.06 -1.93 3 96735 22 -.25 .66 .61 -.54 82817 22 -.26 .67 .63 -.57 4 40204 9 .48 .98 1.04 .61 34325 9 .50 .97 1.03 .61 5 44040 10 1.31 1.06 1.17 .28b 37154 10 1.34 1.07 1.20 .34b 6 25211 6 (2.82) .93 1.01 1.53 23593 6 (2.86) .92 .99 1.56 Wave 3 Wave 4 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 61757 20 (-3.14) 1.25 1.15 NONE 55809 22 (-3.06) 1.24 1.15 NONE 2 88701 29 -1.39 1.04 1.08 -1.87 72286 28 -1.35 1.05 1.10 -1.79 3 63285 20 -.28 .73 .67 -.57 50125 19 -.30 .78 .71 -.54 4 29486 9 .48 .94 .96 .48 25807 10 .45 .90 .88 .38 5 28991 9 1.34 1.09 1.16 .41b 24560 10 1.32 1.10 1.13 .41 6 18470 6 (2.86) .87 .93 1.55 15297 6 (2.85) .86 .91 1.54 Wave 5 Wave 6 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 47041 24 (-3.00) 1.26 1.16 NONE 36377 25 (-2.96) 1.23 1.14 NONE 2 54905 27 -1.31 1.06 1.09 -1.72 37987 26 -1.30 1.06 1.07 -1.67 3 36216 18 -.30 .83 .76 -.49 24952 17 -.31 .88 .83 -.49 4 21172 11 .43 .87 .81 .27 15554 11 .42 .87 .79 .23 5 18847 9 1.29 1.13 1.15 .46 13560 9 1.29 1.15 1.15 .50 6 11583 6 (2.80) .83 .88 1.48 8495 6 (2.78) .83 .88 1.44 Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.To investigate the possibility of item bias, differential item functioning (DIF) analysis was conducted to determine whether different groups of participants based on marital status and area of residence (urban versus regional; see Table6) responded differently on the SF-36 total scale items, despite having the same level of the latent trait being measured [34]. Three of the SF-36 items exhibited a consistent pattern of DIF over the six waves of data collection for both marital status and area of residence, those being MH01:Q9B, MH02:Q9C, and MH05:Q9H. It should be noted that these three items also exhibited MNSQ infit scores outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range.Table 6
Differential Item Functioning (DIF) for SF-36 total scale Rasch analysis for six waves of data collection based on marital status and area of residence.
WAVE 1 WAVE 2 Wave 3 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 8.68 . 01 3 ∗ ∗ 0.00 .924 1.78 .407 0.19 .452 0.25 .882 0.00 .861 2 3.17 .202 0.00 .295 0.55 .760 0.00 .791 0.14 .936 0.00 .134 3 4.95 .083 0.00 .811 0.21 .903 0.20 .326 5.47 .064 0.00 .907 4 0.87 .647 0.00 .492 0.44 .804 0.00 . 01 0 ∗ 0.34 .847 0.00 .438 5 0.89 .640 0.05 . 00 1 ∗ ∗ ∗ 0.09 .959 -0.14 .983 0.40 .818 0.02 . 00 2 ∗ ∗ 6 0.47 .792 0.00 .142 4.67 .095 0.00 .288 0.57 .750 0.00 .619 7 0.06 .971 0.00 .687 2.94 .227 -0.01 .800 1.39 .496 -0.02 .054 8 0.32 .855 0.00 .362 0.27 .875 0.02 . 03 3 ∗ 0.06 .974 0.00 .243 9 13.66 . 00 1 ∗ ∗ ∗ -0.06 . 00 3 ∗ ∗ 2.06 .354 -0.14 .603 0.70 .704 -0.06 .072 10 11.27 .004∗∗ 0.00 . 03 0 ∗ 7.04 . 02 9 ∗ 0.06 . 00 1 ∗ ∗ ∗ 0.00 1.000 -0.06 . 00 6 ∗ ∗ 11 3.41 .179 0.00 .071 6.25 . 04 3 ∗ 0.04 .503 0.00 1.000 0.00 .473 12 0.16 .926 0.00 .906 0.10 .952 0.00 .981 0.69 .706 0.00 .722 13 2.93 .227 0.00 .845 0.09 .959 -0.19 .474 0.04 .982 0.00 .159 14 0.96 .618 0.00 .327 3.10 .210 0.00 .822 6.13 . 04 6 ∗ -0.05 .126 15 2.37 .303 0.00 .366 0.06 .970 0.07 .660 0.38 .828 0.00 .815 16 0.00 1.000 0.00 .591 0.05 .976 0.00 .358 4.14 .124 0.00 .317 17 1.80 .404 0.00 .581 0.10 .952 -0.02 .956 0.03 .987 0.00 .475 18 3.78 .149 0.00 .704 0.52 .770 -0.02 .238 0.62 .731 0.00 .571 19 1.54 .460 0.00 .892 0.23 .893 -0.10 .836 0.06 .971 0.00 .882 20 55.71 .001∗∗∗ -0.07 .036 7.62 . 02 2 ∗ 0.00 .526 0.06 .970 0.00 .088 21 3.24 .195 -0.06 .011∗ 1.17 .554 0.15 .087 0.00 1.000 0.00 .784 22 33.92 . 00 1 ∗ ∗ ∗ 0.00 .239 4.09 .127 0.00 .661 1.52 .465 0.03 .649 23 7.12 . 02 8 ∗ 0.00 .100 2.90 .231 -0.06 .436 0.18 .916 0.00 .498 24 23.59 . 00 1 ∗ ∗ ∗ 0.11 . 00 1 ∗ ∗ ∗ 0.12 .942 0.00 .993 13.40 . 00 1 ∗ ∗ ∗ 0.00 .106 25 64.84 . 00 1 ∗ ∗ ∗ 0.15 . 00 1 ∗ ∗ ∗ 30.23 . 00 1 ∗ ∗ ∗ -0.38 .099 0.01 .997 0.07 . 02 0 ∗ 26 10.47 . 00 5 ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 2.13 .341 0.00 .512 28.34 . 00 1 ∗ ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 27 13.71 . 00 1 ∗ ∗ ∗ 0.00 .778 9.09 . 01 0 ∗ ∗ -0.07 .924 0.85 .651 0.00 .914 28 18.73 . 00 1 ∗ ∗ ∗ 0.10 . 00 1 ∗ ∗ ∗ 18.70 . 00 1 ∗ ∗ ∗ 0.00 .590 0.79 .671 0.05 . 00 3 ∗ ∗ 29 9.31 . 00 9 ∗ ∗ 0.00 . 04 7 ∗ 10.32 . 00 6 ∗ ∗ -0.43 .214 13.34 . 00 1 ∗ ∗ ∗ 0.00 .720 30 14.58 . 00 1 ∗ ∗ -0.06 . 00 8 ∗ ∗ 14.57 . 00 1 ∗ ∗ ∗ 0.00 .403 3.83 .145 -0.07 . 00 1 ∗ ∗ ∗ 31 7.38 . 02 4 ∗ 0.02 . 01 0 ∗ ∗ 1.40 .493 -0.23 .687 2.57 .273 0.00 .108 32 18.09 . 00 1 ∗ ∗ ∗ 0.00 . 02 8 ∗ 0.65 .720 0.00 .908 6.31 . 04 2 ∗ 0.00 .422 33 15.01 . 00 1 ∗ ∗ ∗ 0.02 . 00 5 ∗ ∗ 0.40 .820 -0.14 .541 0.00 1.000 0.02 . 00 6 ∗ ∗ 34 30.39 . 00 1 ∗ ∗ ∗ 0.00 .963 1.61 .443 0.00 .937 6.57 . 03 7 ∗ 0.00 .823 35 9.62 . 00 8 ∗ ∗ 0.00 .606 0.37 .833 -0.26 .284 0.31 .859 0.02 . 02 0 ∗ 36 29.49 . 00 1 ∗ ∗ ∗ -0.06 . 01 6 ∗ 1.11 .571 0.00 .357 5.88 .052 -0.02 . 04 2 ∗ Wave 4 Wave 5 Wave 6 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 0.22 .898 0.00 .261 4.26 .117 0.14 .769 1.14 .560 0.11 .198 2 0.35 .841 0.00 . 
00 9 ∗ ∗ 2.92 .229 0.00 .976 0.15 .930 0.00 .275 3 0.25 .882 0.00 .078 2.24 .323 -0.02 .403 5.55 .060 0.00 .613 4 0.03 .987 0.00 .145 1.97 .369 0.00 .756 4.67 .100 0.00 .027 5 0.66 .716 0.06 . 00 1 ∗ ∗ ∗ 0.46 .795 0.27 .618 1.40 .490 0.43 .083 6 6.42 . 03 9 ∗ 0.00 .555 5.78 .054 0.00 .271 11.47 . 0 1 ∗ ∗ ∗ -0.14 .427 7 2.74 .251 0.00 .705 4.04 .130 -0.20 .574 8.64 . 0 1 ∗ ∗ 0.04 .948 8 1.19 .549 0.00 .165 0.87 .645 0.05 . 03 9 ∗ 1.17 .560 0.00 .117 9 4.04 .130 -0.04 .998 1.92 .379 -0.20 .371 3.26 .190 -0.04 .752 10 1.85 .392 0.00 .894 2.52 .280 0.11 . 00 1 ∗ ∗ ∗ 2.21 .330 0.10 . 00 1 ∗ ∗ ∗ 11 3.41 .179 0.00 .186 2.23 .324 -0.31 .823 1.92 .380 -0.08 .821 12 0.08 .965 0.00 .394 0.00 1.000 -0.02 .598 1.52 .460 0.00 .357 13 0.16 .927 0.00 .214 0.01 .998 0.01 .649 1.11 .570 -0.04 .916 14 0.03 .986 0.00 .368 1.12 .569 -0.06 . 03 3 ∗ 1.21 .540 -0.07 .274 15 3.06 .214 0.00 .611 0.00 1.000 -0.06 .860 0.86 .650 -0.09 .225 16 2.99 .221 0.00 .578 1.01 .602 0.00 .833 1.66 .430 0.00 .499 17 0.57 .753 0.00 .475 1.03 .594 -0.10 .754 0.64 .730 0.05 .290 18 0.08 .961 0.00 .671 4.27 .116 -0.07 .210 0.13 .940 -0.08 .987 19 0.11 .947 0.00 .420 2.78 .246 -0.08 .828 0.19 .910 -0.08 .986 20 0.36 .837 0.00 .089 5.27 .070 -0.05 .120 4.98 .080 -0.07 .758 21 1.67 .430 0.00 .169 1.16 .556 0.10 .439 0.21 .900 0.10 . 04 6 ∗ 22 0.54 .762 0.00 . 04 9 ∗ 2.89 .233 0.00 .446 1.95 .370 0.00 .874 23 21.23 .001∗∗∗ 0.07 . 00 2 ∗ ∗ 0.50 .777 -0.07 .442 1.02 .600 0.00 .409 24 0.63 .730 0.00 .143 22.80 . 00 1 ∗ ∗ ∗ 0.00 .897 6.77 .030∗ 0.02 .084 25 13.68 . 00 1 ∗ ∗ ∗ 0.00 .098 11.59 . 00 3 ∗ ∗ 0.00 .638 1.33 .510 -0.06 .169 26 0.41 .817 0.00 . 02 1 ∗ 8.24 . 01 6 ∗ 0.02 .274 1.17 .550 -0.03 .566 27 0.48 .787 0.00 .163 1.40 .494 0.22 .323 4.61 .100 0.18 .521 28 9.62 . 00 8 ∗ ∗ 0.00 .890 3.16 .203 -0.07 .109 0.04 .980 -0.11 .169 29 0.05 .979 -0.06 . 00 8 ∗ ∗ 4.42 .108 0.30 .161 2.07 .350 0.30 .104 30 2.04 .357 0.00 . 03 5 ∗ 6.85 . 03 2 ∗ 0.00 .859 1.79 .400 0.00 .517 31 0.47 .789 0.00 .068 6.88 . 03 1 ∗ 0.33 .165 0.00 1.000 0.12 .985 32 0.16 .923 0.00 .477 2.37 .302 0.00 .478 0.07 .970 -0.05 .889 33 3.33 .186 0.00 .180 5.40 .066 0.05 .851 0.45 .800 -0.20 . 00 1 ∗ ∗ ∗ 34 0.00 1.000 -0.03 . 01 0 ∗ 1.00 .605 0.00 .477 0.49 .780 0.00 .999 35 0.00 1.000 -0.04 .808 0.54 .764 0.27 .217 2.08 .350 -0.21 . 00 6 ∗ ∗ 36 2.74 .251 0.00 .065 5.82 .054 0.00 .495 1.44 .480 -0.03 .508 Notes. PROB. = probability; p∗≤.05; p∗∗≤.01; p∗∗∗≤.001.
### 3.2. SF-36 Physical Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 physical health items were included in the initial analysis using the RMM: GH01:Q1, PF01:Q3A, PF02:Q3B, PF03:Q3C, PF04:Q3D, PF05:Q3E, PF06:Q3F, PF07:Q3G, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, BP01:Q7, BP02:Q8, GH02:Q11A, GH03:Q11B, GH04:Q11C, and GH05:Q11D (see Table 7). When the 21 SF-36 items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.18 to 2.66 and outfit statistics ranging from 0.19 to 2.77 (see Table 8). The mean item measure was 0.00 logits (SD = 0.99). With respect to logit measures, there was a broad range, the lowest value being –2.49 and the highest value being +1.79 (see Table 9). This resulted in an average item separation index of 60.32 and an average reliability of 1.00 over the six waves of data collection (see Table 9). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
Table 7
SF-36 Physical health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -0.84 0.01 -0.21 -0.92 0.01 -0.24 -0.91 0.01 -0.18 3:Q3A 1.62 0.02 0.37 1.58 0.02 0.44 1.63 0.02 0.43 4:Q3B 0.02 0.01 0.59 0.00 0.01 0.61 0.13 0.01 0.60 5:Q3C -0.05 0.01 0.59 -0.08 0.01 0.60 -0.08 0.01 0.59 6:Q3D 0.52 0.01 0.60 0.48 0.01 0.63 0.55 0.01 0.62 7:Q3E -0.24 0.01 0.65 -0.27 0.01 0.67 -0.19 0.01 0.67 8:Q3F 0.21 0.01 0.57 0.26 0.01 0.58 0.33 0.01 0.54 9:Q3G 0.11 0.01 0.64 0.15 0.01 0.66 0.34 0.01 0.63 10:Q3H -0.34 0.01 0.67 -0.34 0.01 0.68 -0.19 0.01 0.67 11:Q3I -0.57 0.01 0.59 -0.62 0.01 0.59 -0.54 0.01 0.61 12:Q3J -0.74 0.01 0.43 -0.84 0.01 0.40 -0.80 0.01 0.40 13:Q4A 0.96 0.01 0.42 0.95 0.01 0.41 1.02 0.02 0.38 14:Q4B 1.32 0.01 0.42 1.37 0.02 0.43 1.46 0.02 0.39 15:Q4C 1.16 0.01 0.46 1.17 0.01 0.50 1.20 0.02 0.42 16:Q4D 1.20 0.01 0.44 1.22 0.02 0.47 1.28 0.02 0.42 21:Q7 -0.74 0.01 -0.05 0.99 0.01 -0.19 -0.95 0.01 -0.02 22:Q8 0.36 0.01 -0.18 -0.78 0.01 -0.15 0.04 0.01 -0.14 33:Q11A -2.22 0.01 0.34 -2.49 0.01 0.34 -2.46 0.01 0.32 34:Q11B -0.07 0.01 0.02 0.04 0.01 -0.07 -0.09 0.01 0.02 35:Q11C -1.24 0.01 0.38 -1.42 0.01 0.38 -1.27 0.01 0.37 36:Q11D -0.42 0.01 -0.09 -0.45 0.01 -0.18 -0.50 0.01 -0.08 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -1.10 0.01 -0.14 -1.21 0.01 -0.13 -1.32 0.02 -0.06 3:Q3A 1.64 0.02 0.42 1.63 0.03 0.42 1.79 0.04 0.39 4:Q3B 0.26 0.02 0.60 0.33 0.02 0.60 0.47 0.02 0.57 5:Q3C 0.02 0.01 0.60 0.05 0.02 0.58 0.09 0.02 0.56 6:Q3D 0.59 0.02 0.62 0.67 0.02 0.62 0.77 0.02 0.59 7:Q3E -0.16 0.01 0.65 -0.09 0.02 0.65 -0.02 0.02 0.64 8:Q3F 0.32 0.02 0.57 0.30 0.02 0.57 0.34 0.02 0.53 9:Q3G 0.46 0.02 0.63 0.62 0.02 0.61 0.76 0.02 0.60 10:Q3H -0.07 0.01 0.66 0.04 0.02 0.65 0.17 0.02 0.63 11:Q3I -0.50 0.01 0.58 -0.43 0.02 0.58 -0.40 0.02 0.58 12:Q3J -0.76 0.01 0.42 -0.75 0.01 0.41 -0.79 0.02 0.39 13:Q4A 1.01 0.02 0.37 1.03 0.02 0.34 0.99 0.03 0.34 14:Q4B 1.47 0.02 0.38 1.47 0.03 0.35 1.45 0.03 0.33 15:Q4C 1.27 0.02 0.43 1.29 0.02 0.40 1.29 0.03 0.35 16:Q4D 1.31 0.02 0.41 1.33 0.02 0.39 1.32 0.03 0.36 21:Q7 -1.08 0.01 -0.01 -1.17 0.01 0.01 -1.31 0.02 0.08 22:Q8 -0.19 0.01 -0.13 -0.35 0.01 -0.06 -0.53 0.02 -0.02 33:Q11A -2.45 0.01 0.28 -2.43 0.02 0.26 -2.49 0.02 0.21 34:Q11B -0.20 0.01 0.01 -0.38 0.02 0.06 -0.53 0.02 0.11 35:Q11C -1.19 0.01 0.34 -1.08 0.01 0.33 -1.08 0.02 0.31 36:Q11D -0.68 0.01 -0.09 -0.85 0.01 -0.04 -0.98 0.02 0.02 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 8
SF-36 Physical health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.24 9.9 1.30 9.9 1.24 9.9 1.29 9.9 1.16 9.9 1.20 9.9 3:Q3A 0.93 -4.6 0.90 -6.3 0.97 -1.8 0.90 -6.2 0.93 -3.7 0.85 -7.6 4:Q3B 0.57 -9.9 0.59 -9.9 0.59 -9.9 0.60 -9.9 0.64 -9.9 0.65 -9.9 5:Q3C 0.53 -9.9 0.54 -9.9 0.53 -9.9 0.54 -9.9 0.54 -9.9 0.56 -9.9 6:Q3D 0.72 -9.9 0.73 -9.9 0.71 -9.9 0.71 -9.9 0.72 -9.9 0.71 -9.9 7:Q3E 0.44 -9.9 0.46 -9.9 0.45 -9.9 0.47 -9.9 0.48 -9.9 0.50 -9.9 8:Q3F 0.62 -9.9 0.63 -9.9 0.64 -9.9 0.64 -9.9 0.67 -9.9 0.67 -9.9 9:Q3G 0.71 -9.9 0.72 -9.9 0.73 -9.9 0.75 -9.9 0.81 -9.9 0.81 -9.9 10:Q3H 0.45 -9.9 0.49 -9.9 0.50 -9.9 0.53 -9.9 0.59 -9.9 0.62 -9.9 11:Q3I 0.28 -9.9 0.32 -9.9 0.30 -9.9 0.33 -9.9 0.36 -9.9 0.39 -9.9 12:Q3J 0.21 -9.9 0.23 -9.9 0.18 -9.9 0.19 -9.9 0.21 -9.9 0.23 -9.9 13:Q4A 0.36 -9.9 0.40 -9.9 0.37 -9.9 0.40 -9.9 0.44 -9.9 0.48 -9.9 14:Q4B 0.51 -9.9 0.53 -9.9 0.53 -9.9 0.54 -9.9 0.59 -9.9 0.60 -9.9 15:Q4C 0.44 -9.9 0.47 -9.9 0.43 -9.9 0.45 -9.9 0.49 -9.9 0.52 -9.9 16:Q4D 0.46 -9.9 0.49 -9.9 0.46 -9.9 0.48 -9.9 0.52 -9.9 0.55 -9.9 21:Q7 2.33 9.9 2.40 9.9 2.51 9.9 2.77 9.9 2.20 9.9 2.23 9.9 22:Q8 2.29 9.9 2.39 9.9 2.66 9.9 2.72 9.9 2.23 9.9 2.29 9.9 33:Q11A 1.24 9.9 1.20 9.9 1.10 7.1 1.06 3.9 1.12 7.2 1.08 4.4 34:Q11B 2.18 9.9 2.20 9.9 2.07 9.9 2.09 9.9 1.88 9.9 1.89 9.9 35:Q11C 1.25 9.9 1.26 9.9 1.16 9.9 1.17 9.9 1.17 9.9 1.18 9.9 36:Q11D 2.21 9.9 2.28 9.9 2.36 9.9 2.41 9.9 2.12 9.9 2.15 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.02 1.4 1.04 2.9 0.99 -0.8 1.00 0.0 0.92 -4.1 0.93 -3.4 3:Q3A 0.98 -0.9 0.87 -5.2 1.04 1.5 0.92 -2.8 1.12 3.1 0.95 -1.1 4:Q3B 0.67 -9.9 0.67 -9.9 0.69 -9.9 0.69 -9.9 0.76 -9.9 0.74 -9.9 5:Q3C 0.56 -9.9 0.57 -9.9 0.57 -9.9 0.58 -9.9 0.62 -9.9 0.63 -9.9 6:Q3D 0.72 -9.9 0.71 -9.9 0.76 -9.9 0.73 -9.9 0.80 -8.2 0.74 -9.9 7:Q3E 0.49 -9.9 0.51 -9.9 0.53 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 8:Q3F 0.63 -9.9 0.64 -9.9 0.62 -9.9 0.62 -9.9 0.65 -9.9 0.66 -9.9 9:Q3G 0.83 -9.9 0.81 -9.9 0.86 -7.0 0.83 -8.8 0.90 -3.9 0.83 -6.7 10:Q3H 0.66 -9.9 0.68 -9.9 0.72 -9.9 0.73 -9.9 0.79 -9.4 0.79 -9.5 11:Q3I 0.39 -9.9 0.42 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 0.53 -9.9 12:Q3J 0.28 -9.9 0.30 -9.9 0.33 -9.9 0.35 -9.9 0.37 -9.9 0.40 -9.9 13:Q4A 0.47 -9.9 0.51 -9.9 0.53 -9.9 0.58 -9.9 0.55 -9.9 0.59 -9.9 14:Q4B 0.62 -9.9 0.63 -9.9 0.65 -9.9 0.66 -9.9 0.68 -9.9 0.69 -9.9 15:Q4C 0.54 -9.9 0.57 -9.9 0.59 -9.9 0.61 -9.9 0.63 -9.9 0.65 -9.9 16:Q4D 0.56 -9.9 0.59 -9.9 0.60 -9.9 0.62 -9.9 0.63 -9.9 0.65 -9.9 21:Q7 2.08 9.9 2.09 9.9 1.96 9.9 1.96 9.9 1.79 9.9 1.79 9.9 22:Q8 2.07 9.9 2.11 9.9 1.95 9.9 1.97 9.9 1.85 9.9 1.86 9.9 33:Q11A 1.13 7.2 1.11 5.6 1.13 6.3 1.11 5.1 1.18 7.2 1.20 7.5 34:Q11B 1.85 9.9 1.86 9.9 1.69 9.9 1.69 9.9 1.62 9.9 1.59 9.9 35:Q11C 1.18 9.9 1.19 9.9 1.15 8.0 1.16 8.4 1.13 5.9 1.13 6.1 36:Q11D 2.05 9.9 2.09 9.9 1.95 9.9 1.96 9.9 1.83 9.9 1.82 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 9
SF-36 physical health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| MEAN | -1.77 | -1.85 | -1.90 | -1.92 | -1.95 | -2.07 |
| S.D. | .38 | .37 | .40 | .39 | .39 | .40 |
| MAX | 1.54 | -.37 | .40 | .92 | -.52 | .40 |
| MIN | -5.13 | -4.11 | -.09 | -5.08 | -4.52 | -.79 |
| Infit-MNSQ | 1.05 | 1.04 | 1.05 | 1.05 | 1.05 | 1.05 |
| Infit-ZSTD | -.10 | -.20 | -.10 | .00 | .00 | .00 |
| Outfit-MNSQ | 1.00 | 1.01 | .98 | .97 | .96 | .96 |
| Outfit-ZSTD | -.30 | -.30 | -.30 | -.20 | -.20 | -.20 |
| Person separation | .86c | .88c | .97c | .96c | .96c | .96c |
| Person reliability | .43a | .43a | .48a | .48a | .48a | .48a |
| **Items** | | | | | | |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | .91 | .99 | .98 | 1.00 | 1.02 | 1.08 |
| MAX | 1.62 | 1.58 | 1.63 | 1.64 | 1.63 | 1.79 |
| MIN | -2.22 | -2.49 | -2.46 | -2.45 | -2.43 | -2.49 |
| Infit-MNSQ | .95 | .98 | .95 | .94 | .94 | .95 |
| Infit-ZSTD | -3.00 | -3.00 | -3.10 | -3.40 | -3.40 | -3.30 |
| Outfit-MNSQ | .98 | 1.00 | .96 | .95 | .94 | .94 |
| Outfit-ZSTD | -3.10 | -3.40 | -3.50 | -3.60 | -3.70 | -3.60 |
| Item separation | 71.24 | 69.37 | 63.25 | 59.41 | 52.87 | 45.77 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. a = person or item reliability <0.8; b = item separation <3.0; c = person separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 physical health scale person-item map is located in Supplemental Figure 3 and reports evidence of the hierarchical ordering of the SF-36 physical health scale items. Items which are easier are located at the bottom of the SF-36 physical health person-item map while more difficult items are located at the top of the map. The patterns of more challenging items and less difficult items on the person-item map for each of the six waves of data collection appear to be fairly consistent. It should also be noted that several of the SF-36 physical health scale items have the same level of difficulty.

The average person measure was -1.91 logits (SD = 0.39) over the six waves of data collection (see Table 9). The mean person separation was 0.93 with a mean reliability of 0.46 (see Table 9). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 physical health construct. When examining the overall RMM output of the SF-36 physical health total scale, the average person measure (-1.91 logits) was lower than the average item measure (0.00 logits). The range of logit values for items was from +1.79 to -2.49 logits. The person reliability was 0.46 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 physical health scale in the acceptable range and the person reliability in the less than desired range.

The SF-36 physical health scale has a six-category rating scale, which generates five thresholds. Rasch analysis reports that the calibrations for the six rating scale categories increase monotonically: -3.86, -2.13, -.83, .10, 1.96, and 5.32 for wave one and -3.64, -2.02, -.91, .01, 2.00, and 5.24 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Seven of the 21 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to -2 range. Therefore items 1:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 8). In other words, only 7 of 21, or 33.3%, of the SF-36 physical health scale items met the RMM requirements. The following items had an Infit MNSQ statistic that was less than 0.70: PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, and RP04:Q4D.
The following items had an Infit MNSQ statistic that was greater than 1.30: BP01:Q7, BP02:Q8, GH03:Q11B, and GH05:Q11D.

An inspection of the PTMEAs for the SF-36 physical health scale indicated that items GH01:Q1, BP01:Q7, BP02:Q8, and GH05:Q11D had consistent negative PTMEAs over the six waves of data collection. For all other items, the PTMEA correlations had acceptable values.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 10). This indicated that the unidimensionality requirement of the SF-36 physical health scale was met. The raw variance explained by the SF-36 physical health scale over the six waves of data collection ranged from 41.8% to 48.9% and the unexplained variance in the first contrast ranged from 17.4% to 22.2%. The residual analysis indicated that no second dimension or factor existed.
Table 10
SF-36 physical health scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
Wave 1 Wave 2 Wave 3 Eigenvalue Observed Expected Eigenvalue Observed Expected Eigenvalue Observed Expected Total raw variance in observations 36.07 100.00% 100.00% 36.07 100.00% 100.00% 37.89 100.00% 100.00% Raw variance explained by measures 15.07 41.80% 42.50% 15.07 41.80% 42.50% 16.89 44.60% 45.70% Raw variance explained by persons 1.90 5.30% 5.40% 1.90 5.30% 5.40% 0.96 2.50% 2.60% Raw Variance explained by items 13.16 36.50% 37.10% 13.16 36.50% 37.10% 15.94 42.10% 43.10% Raw unexplained variance (total) 21.00 58.20% 57.50% 21.00 58.20% 57.50% 21.00 55.40% 54.30% Unexplained variance in 1st contrast 8.00 22.20% 38.10% 8.00 22.20% 38.10% 7.70 20.30% 36.70% Unexplained variance in 2nd contrast 2.02 5.60% 9.60% 2.02 5.60% 9.60% 1.96 5.20% 9.40% Unexplained variance in 3rd contrast 1.51 4.20% 7.20% 1.51 4.20% 7.20% 1.44 3.80% 6.90% Unexplained variance in 4th contrast 1.31 3.60% 6.20% 1.31 3.60% 6.20% 1.23 3.20% 5.80% Unexplained variance in 5th contrast 0.99 2.80% 4.70% 0.99 2.80% 4.70% 0.99 2.60% 4.70% Wave 4 Wave 5 Wave 6 Eigenvalue Observed Expected Eigenvalue Observed Expected Eigenvalue Observed Expected Total raw variance in observations 37.07 100.00% 100.00% 39.34 100.00% 100.00% 41.08 100.00% 100.00% Raw variance explained by measures 17.07 46.10% 48.00% 18.34 46.60% 48.60% 20.08 48.90% 51.10% Raw variance explained by persons 2.45 6.60% 6.90% 2.42 6.10% 6.40% 2.68 6.50% 6.80% Raw Variance explained by items 14.62 39.40% 41.10% 15.92 40.50% 42.20% 17.40 42.30% 44.20% Raw unexplained variance (total) 20.00 53.90% 52.00% 21.00 53.40% 51.40% 21.00 51.10% 48.90% Unexplained variance in 1st contrast 6.64 17.90% 33.20% 7.50 19.10% 35.70% 7.14 17.40% 34.00% Unexplained variance in 2nd contrast 2.10 5.70% 10.50% 2.06 5.20% 9.80% 2.24 5.50% 10.70% Unexplained variance in 3rd contrast 1.54 4.20% 7.70% 1.58 4.00% 7.50% 1.56 3.80% 7.40% Unexplained variance in 4th contrast 1.26 3.40% 6.30% 1.21 3.10% 5.80% 1.20 2.90% 5.70% Unexplained variance in 5th contrast 1.07 2.90% 5.30% 1.03 2.60% 4.90% 1.04 2.50% 4.90% Notes. a > 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.The functioning of the six rating scale categories was examined for the SF-36 physical health scale. The category logit measures ranged from -3.86 to 5.43 (see Table11). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range over the six waves of data collection, this being category six. The infit MNSQ scores for this rating category ranged from 2.03 to 3.18 (see Table 11). The results indicated that the six-level rating scale used in the SF-36 physical health scale might not be the most robust to use (see Supplemental Figure 3); however, the full range of ratings were used by the participants who completed the SF-36 physical health scale. The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and the first five response categories were the most probable category for some part of the continuum. Rating category six was problematic.Table 11
SF-36 physical health scale Rasch analysis summary of category structure for six waves of data collection.
WAVE 1 WAVE 2 WAVE 3 CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds 1 60721 23 (-3.86) 1.18 1.11 NONE 55350 25 ( -3.92) 1.12 1.07 NONE 46995 28 ( -3.86) 1.14 1.08 NONE 2 83039 32 -2.13 .93 .94 -2.56 70454 32 -2.19 .93 .92 -2.60 54692 32 -2.17 1.00 .99 -2.54 3 73299 28 -.83 .66 .59 -1.54 62780 29 -.85 .66 .60 -1.61 46905 28 -.90 .72 .62 -1.61 4 12957 5 .10 1.19 1.26 .59 11389 5 .14 1.21 1.38 .53 9720 6 .07 1.11 1.11 .38 5 15144 6 1.96 1.05 1.15 -.71b 12634 6 2.03 1.13 1.33 -.55b 9942 6 2.05 1.08 1.17 -.56b 6 238 0 (5.32) 2.67 2.61 4.21 233 0 (5.34) 3.18 2.98 4.23 155 0 (5.43) 2.77 2.30 4.33 WAVE 4 WAVE 5 WAVE 6 CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds 1 42924 29 ( -3.75) 1.15 1.08 NONE 36502 31 ( -3.66) 1.15 1.08 NONE 28787 36 ( -3.64) 1.15 1.08 NONE 2 44603 30 -2.09 1.05 1.02 -2.42 33751 29 -2.01 1.08 1.01 -2.33 23233 29 -2.02 1.10 1.01 -2.30 3 36071 24 -.89 .76 .64 -1.52 25389 22 -.87 .82 .68 -1.44 16930 21 -.91 .86 .73 -1.46 4 8958 6 .06 1.00 .96 .24 7310 6 .06 .94 .86 .13 5353 7 .01 .91 .80 .03 5 8592 6 2.01 1.10 1.16 -.48b 6464 6 1.96 1.09 1.12 -.38b 4694 6 2.00 1.09 1.10 -.39b 6 150 0 (5.29) 2.54 2.02 4.18 129 0 (5.12) 2.25 1.75 4.02 80 0 (5.24) 2.03 1.52 4.13 Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.The Rasch output logit performance scores for the participants were compared to determine if any of the SF-36 physical scale items exhibited differential item functioning (DIF), based on marital status and area of residence (urban versus regional) (see Table12). Four of the SF-36 physical health items exhibited a consistent pattern of DIF over the six waves of data collection. Item PF03:Q3C demonstrated DIF based on marital status alone while items GH02:Q11A, GH04:Q11C, and GH05:Q11D exhibited DIF based on both marital status and area of residence (see Table 12). It should be noted that items GH02:Q11A and GH04:Q11C had infit MNSQ statistics that fell within the 0.70-1.30 range while items PF03:Q3C and GH05:Q11D also had MNSQ infit scores outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range. SF-36 physical health items PF03:Q3C and GH05:Q11D appear to be particularly problematic items based on the RMM analysis findings.Table 12
Differential Item Functioning (DIF) for SF-36 physical health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
In each wave, χ² and Prob. give the marital-status summary DIF chi-square (DIF = 2) and its probability; Contrast and M-H p give the urban-versus-regional DIF contrast and Mantel-Haenszel probability.

**Waves 1-3**

| SF-36 item | χ² (W1) | Prob. | Contrast | M-H p | χ² (W2) | Prob. | Contrast | M-H p | χ² (W3) | Prob. | Contrast | M-H p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1:Q1 | 13.25 | .001∗∗∗ | 0.00 | .639 | 1.62 | .442 | 0.14 | .713 | 0.41 | .816 | 0.00 | .185 |
| 3:Q3A | 5.79 | .054 | 0.00 | .725 | 1.27 | .527 | 0.00 | .330 | 5.69 | .057 | 0.00 | .073 |
| 4:Q3B | 1.94 | .376 | 0.00 | .069 | 0.56 | .754 | 0.15 | .835 | 0.50 | .779 | 0.06 | .001∗∗∗ |
| 5:Q3C | 1.84 | .394 | 0.06 | .001∗∗∗ | 0.03 | .984 | 0.00 | .009∗∗ | 0.56 | .756 | 0.00 | .442 |
| 6:Q3D | 0.97 | .614 | 0.00 | .947 | 2.13 | .342 | -0.20 | .778 | 0.44 | .804 | 0.00 | .700 |
| 7:Q3E | 0.41 | .816 | 0.00 | .287 | 2.75 | .250 | 0.00 | .153 | 1.26 | .529 | 0.00 | .143 |
| 8:Q3F | 0.06 | .970 | 0.00 | .684 | 0.18 | .917 | -0.08 | .599 | 0.03 | .988 | -0.04 | .964 |
| 9:Q3G | 13.16 | .001∗∗∗ | -0.02 | .076 | 1.33 | .512 | 0.03 | .006∗∗ | 0.57 | .750 | 0.00 | .847 |
| 10:Q3H | 12.78 | .002∗∗ | 0.00 | .324 | 5.72 | .056 | -0.22 | .320 | 0.02 | .990 | 0.00 | .225 |
| 11:Q3I | 7.45 | .024∗ | 0.00 | .357 | 5.55 | .061 | 0.07 | .001∗∗∗ | 0.00 | 1.000 | 0.00 | .631 |
| 12:Q3J | 0.95 | .620 | 0.00 | .306 | 0.03 | .988 | -0.03 | .836 | 0.73 | .693 | 0.00 | .251 |
| 13:Q4A | 2.34 | .306 | 0.00 | .519 | 0.08 | .962 | 0.00 | .461 | 0.17 | .919 | 0.00 | .360 |
| 14:Q4B | 1.45 | .481 | 0.00 | .782 | 4.22 | .119 | -0.30 | .206 | 6.61 | .036∗ | 0.00 | .520 |
| 15:Q4C | 2.47 | .288 | 0.00 | .982 | 0.08 | .961 | 0.00 | .240 | 0.37 | .831 | 0.00 | .524 |
| 16:Q4D | 0.08 | .965 | 0.00 | .845 | 0.06 | .973 | 0.00 | .873 | 4.54 | .101 | 0.00 | .053 |
| 21:Q7 | 2.54 | .277 | -0.05 | .005∗∗ | 9.34 | .009∗∗ | 0.00 | .131 | 0.13 | .941 | 0.00 | .145 |
| 22:Q8 | 37.29 | .001∗∗∗ | 0.00 | .114 | 1.00 | .605 | -0.09 | .651 | 1.85 | .394 | 0.02 | .081 |
| 33:Q11A | 27.3 | .001∗∗∗ | 0.07 | .001∗∗∗ | 1.11 | .572 | 0.00 | .521 | 0.02 | .990 | 0.00 | .275 |
| 34:Q11B | 36.38 | .001∗∗∗ | 0.00 | .905 | 1.41 | .490 | -0.20 | .309 | 6.66 | .035∗ | 0.00 | .170 |
| 35:Q11C | 12.2 | .002∗∗ | 0.00 | .204 | 0.68 | .710 | 0.00 | .963 | 0.58 | .749 | -0.05 | .002∗∗ |
| 36:Q11D | 35.29 | .001∗∗∗ | -0.05 | .006∗∗ | 1.38 | .500 | 0.12 | .444 | 7.03 | .029∗ | 0.00 | .724 |

**Waves 4-6**

| SF-36 item | χ² (W4) | Prob. | Contrast | M-H p | χ² (W5) | Prob. | Contrast | M-H p | χ² (W6) | Prob. | Contrast | M-H p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1:Q1 | 0.67 | .714 | 0.00 | .185 | 4.44 | .107 | 0.23 | .707 | 2.87 | .235 | 0.09 | .418 |
| 3:Q3A | 0.22 | .897 | 0.00 | .073 | 1.32 | .515 | 0.00 | .375 | 3.73 | .153 | 0.00 | .176 |
| 4:Q3B | 0.26 | .878 | 0.06 | .001∗∗∗ | 0.59 | .744 | 0.34 | .229 | 2.71 | .254 | 0.38 | .098 |
| 5:Q3C | 3.48 | .173 | 0.00 | .442 | 0.65 | .720 | 0.00 | .342 | 1.81 | .400 | -0.13 | .270 |
| 6:Q3D | 1.39 | .496 | 0.00 | .700 | 2.83 | .240 | -0.16 | .573 | 7.85 | .019∗ | 0.00 | .761 |
| 7:Q3E | 0.10 | .953 | 0.00 | .143 | 1.73 | .418 | 0.03 | .025∗ | 6.17 | .045∗ | 0.00 | .278 |
| 8:Q3F | 2.31 | .311 | -0.04 | .964 | 0.00 | 1.000 | -0.17 | .456 | 0.43 | .808 | -0.08 | .248 |
| 9:Q3G | 0.95 | .621 | 0.00 | .847 | 0.39 | .824 | 0.12 | .001∗∗∗ | 1.20 | .547 | 0.11 | .001∗∗∗ |
| 10:Q3H | 1.89 | .384 | 0.00 | .225 | 0.73 | .695 | -0.26 | .443 | 0.68 | .712 | -0.12 | .739 |
| 11:Q3I | 0.00 | 1.000 | 0.00 | .631 | 0.42 | .809 | 0.00 | .961 | 0.55 | .761 | 0.00 | .387 |
| 12:Q3J | 0.05 | .975 | 0.00 | .251 | 0.10 | .953 | 0.06 | .252 | 0.65 | .722 | -0.03 | .664 |
| 13:Q4A | 0.45 | .798 | 0.00 | .360 | 0.00 | 1.000 | -0.06 | .042∗ | 0.17 | .922 | -0.07 | .282 |
| 14:Q4B | 1.60 | .447 | 0.00 | .520 | 1.98 | .367 | -0.05 | .861 | 0.82 | .663 | -0.12 | .138 |
| 15:Q4C | 2.50 | .283 | 0.00 | .524 | 0.01 | .996 | 0.00 | .453 | 0.20 | .908 | 0.00 | .255 |
| 16:Q4D | 0.68 | .711 | 0.00 | .053 | 0.24 | .889 | -0.06 | .733 | 0.73 | .692 | 0.02 | .431 |
| 21:Q7 | 3.61 | .162 | 0.00 | .145 | 0.00 | 1.000 | -0.06 | .413 | 1.75 | .413 | -0.07 | .650 |
| 22:Q8 | 1.03 | .595 | 0.02 | .081 | 3.03 | .217 | 0.00 | .310 | 4.23 | .119 | -0.10 | .644 |
| 33:Q11A | 0.14 | .934 | 0.00 | .275 | 13.77 | .001∗∗∗ | -0.05 | .170 | 1.34 | .509 | -0.07 | .729 |
| 34:Q11B | 4.03 | .131 | 0.00 | .170 | 1.80 | .403 | 0.20 | .048∗ | 2.28 | .317 | 0.07 | .252 |
| 35:Q11C | 0.00 | 1.000 | -0.05 | .002∗∗ | 0.49 | .783 | 0.00 | .280 | 3.68 | .156 | 0.00 | .681 |
| 36:Q11D | 0.37 | .831 | 0.00 | .724 | 7.48 | .023∗ | 0.00 | .941 | 1.55 | .457 | -0.03 | .897 |

Notes. PROB. = probability; ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
## 3.3. SF-36 Mental Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 mental health items were included in the initial analysis using the RMM: RE01:Q5A, RE02:Q5B, RE03:Q5C, SF01:Q6, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10. When the 14 SF-36 mental health items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.26 to 2.91 and outfit statistics ranging from 0.28 to 3.06 (see Table 14). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being −2.08 and the highest value being +2.13 (see Table 15). This resulted in an average item separation index of 79.17 and an average item reliability of 1.00 over the six waves (see Table 15). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
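The infit and outfit values reported in these tables follow the standard Rasch mean-square definitions: outfit is the unweighted mean of squared standardized residuals, and infit is its information-weighted counterpart. The sketch below is a minimal illustration only, not the authors' code; the function name, the NumPy dependency, and the persons-by-items array layout are assumptions, and Winsteps additionally converts these mean-squares to the reported ZSTD values, commonly via a cube-root (Wilson-Hilferty) transformation.

```python
import numpy as np

def rasch_fit_statistics(observed, expected, variance):
    """Per-item infit/outfit mean-squares from persons-by-items arrays.

    observed : observed item scores
    expected : model-expected scores under the Rasch model
    variance : model variance of each observation
    """
    residual = observed - expected
    z_sq = residual ** 2 / variance              # squared standardized residuals
    outfit = np.nanmean(z_sq, axis=0)            # unweighted mean-square
    infit = (np.nansum(residual ** 2, axis=0)    # information-weighted mean-square
             / np.nansum(variance, axis=0))
    return infit, outfit
```

Values near 1.0 indicate data that fit the model; the 0.70-1.30 band used throughout this paper is one common acceptance window.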
Table 13. SF-36 mental health scale Rasch analysis item statistics for six waves of data collection.
**Waves 1-3**

| SF-36 item | Measure (W1) | S.E. | PTMEA | Measure (W2) | S.E. | PTMEA | Measure (W3) | S.E. | PTMEA |
|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 1.35 | 0.01 | 0.31 | 1.38 | 0.01 | 0.30 | 1.49 | 0.02 | 0.27 |
| 18:Q5B | 1.57 | 0.01 | 0.31 | 1.62 | 0.02 | 0.29 | 1.75 | 0.02 | 0.27 |
| 19:Q5C | 1.38 | 0.01 | 0.30 | 1.41 | 0.01 | 0.28 | 1.50 | 0.02 | 0.26 |
| 20:Q6 | 1.51 | 0.01 | -0.09 | 1.78 | 0.02 | -0.02 | 1.41 | 0.01 | -0.02 |
| 23:Q9A | -0.03 | 0.01 | 0.17 | -0.06 | 0.01 | 0.22 | -0.12 | 0.01 | 0.27 |
| 24:Q9B | -1.28 | 0.01 | 0.46 | -1.47 | 0.01 | 0.43 | -1.54 | 0.01 | 0.41 |
| 25:Q9C | -1.84 | 0.01 | 0.45 | -2.04 | 0.01 | 0.40 | -2.08 | 0.02 | 0.40 |
| 26:Q9D | 0.21 | 0.01 | 0.20 | 0.30 | 0.01 | 0.18 | 0.30 | 0.01 | 0.26 |
| 27:Q9E | -0.16 | 0.01 | 0.22 | -0.24 | 0.01 | 0.26 | -0.29 | 0.01 | 0.30 |
| 28:Q9F | -1.25 | 0.01 | 0.46 | -1.33 | 0.01 | 0.39 | -1.32 | 0.01 | 0.40 |
| 29:Q9G | -0.90 | 0.01 | 0.44 | -0.93 | 0.01 | 0.39 | -0.88 | 0.01 | 0.39 |
| 30:Q9H | 0.63 | 0.01 | 0.27 | 0.72 | 0.01 | 0.25 | 0.74 | 0.01 | 0.28 |
| 31:Q9I | -0.55 | 0.01 | 0.37 | -0.50 | 0.01 | 0.31 | -0.42 | 0.01 | 0.31 |
| 32:Q10 | -0.65 | 0.01 | 0.28 | -0.65 | 0.01 | 0.22 | -0.55 | 0.01 | 0.19 |

**Waves 4-6**

| SF-36 item | Measure (W4) | S.E. | PTMEA | Measure (W5) | S.E. | PTMEA | Measure (W6) | S.E. | PTMEA |
|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 1.47 | 0.02 | 0.28 | 1.48 | 0.02 | 0.30 | 1.51 | 0.02 | 0.29 |
| 18:Q5B | 1.76 | 0.02 | 0.28 | 1.77 | 0.02 | 0.30 | 1.81 | 0.03 | 0.32 |
| 19:Q5C | 1.51 | 0.02 | 0.27 | 1.51 | 0.02 | 0.28 | 1.53 | 0.02 | 0.28 |
| 20:Q6 | 1.19 | 0.01 | 0.04 | 1.01 | 0.02 | 0.03 | 0.96 | 0.02 | 0.05 |
| 23:Q9A | -0.14 | 0.01 | 0.23 | -0.21 | 0.01 | 0.24 | -0.29 | 0.01 | 0.26 |
| 24:Q9B | -1.52 | 0.01 | 0.40 | -1.49 | 0.02 | 0.40 | -1.47 | 0.02 | 0.37 |
| 25:Q9C | -2.07 | 0.02 | 0.35 | -1.92 | 0.02 | 0.39 | -1.91 | 0.02 | 0.35 |
| 26:Q9D | 0.30 | 0.01 | 0.23 | 0.31 | 0.01 | 0.19 | 0.29 | 0.01 | 0.22 |
| 27:Q9E | -0.34 | 0.01 | 0.27 | -0.41 | 0.01 | 0.27 | -0.53 | 0.01 | 0.29 |
| 28:Q9F | -1.30 | 0.01 | 0.40 | -1.25 | 0.01 | 0.42 | -1.22 | 0.02 | 0.41 |
| 29:Q9G | -0.80 | 0.01 | 0.39 | -0.75 | 0.01 | 0.44 | -0.72 | 0.01 | 0.43 |
| 30:Q9H | 0.75 | 0.01 | 0.29 | 0.69 | 0.01 | 0.23 | 0.69 | 0.02 | 0.27 |
| 31:Q9I | -0.34 | 0.01 | 0.33 | -0.30 | 0.01 | 0.35 | -0.27 | 0.01 | 0.32 |
| 32:Q10 | -0.48 | 0.01 | 0.17 | -0.44 | 0.01 | 0.17 | -0.39 | 0.01 | 0.17 |

Note. Measure = item logit measure; S.E. = model standard error; PTMEA = point measure correlation.
Table 14. SF-36 mental health scale Rasch analysis infit and outfit statistics for six waves of data collection.
**Waves 1-3**

| SF-36 item | Infit MNSQ (W1) | ZSTD | Outfit MNSQ | ZSTD | Infit MNSQ (W2) | ZSTD | Outfit MNSQ | ZSTD | Infit MNSQ (W3) | ZSTD | Outfit MNSQ | ZSTD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 0.29 | -9.9 | 0.32 | -9.9 | 0.26 | -9.9 | 0.28 | -9.9 | 0.30 | -9.9 | 0.32 | -9.9 |
| 18:Q5B | 0.43 | -9.9 | 0.46 | -9.9 | 0.43 | -9.9 | 0.46 | -9.9 | 0.48 | -9.9 | 0.50 | -9.9 |
| 19:Q5C | 0.31 | -9.9 | 0.34 | -9.9 | 0.29 | -9.9 | 0.31 | -9.9 | 0.32 | -9.9 | 0.34 | -9.9 |
| 20:Q6 | 2.64 | 9.9 | 2.87 | 9.9 | 2.91 | 9.9 | 3.06 | 9.9 | 2.54 | 9.9 | 2.74 | 9.9 |
| 23:Q9A | 1.08 | 7.7 | 1.15 | 9.9 | 1.03 | 3.2 | 1.12 | 9.9 | 1.04 | 3.5 | 1.09 | 6.7 |
| 24:Q9B | 1.25 | 9.9 | 1.17 | 9.9 | 1.40 | 9.9 | 1.29 | 9.9 | 1.41 | 9.9 | 1.31 | 9.9 |
| 25:Q9C | 1.44 | 9.9 | 1.30 | 9.9 | 1.59 | 9.9 | 1.39 | 9.9 | 1.51 | 9.9 | 1.33 | 9.9 |
| 26:Q9D | 1.22 | 9.9 | 1.33 | 9.9 | 1.14 | 9.9 | 1.28 | 9.9 | 1.10 | 7.1 | 1.19 | 9.9 |
| 27:Q9E | 1.12 | 9.9 | 1.17 | 9.9 | 1.08 | 7.8 | 1.12 | 9.9 | 1.07 | 5.3 | 1.08 | 6.3 |
| 28:Q9F | 0.90 | -6.6 | 0.88 | -8.3 | 1.02 | 1.0 | 0.98 | -0.9 | 0.96 | -2.0 | 0.93 | -3.6 |
| 29:Q9G | 0.88 | -9.9 | 0.87 | -9.9 | 0.90 | -7.4 | 0.89 | -7.8 | 0.88 | -7.8 | 0.87 | -8.2 |
| 30:Q9H | 1.22 | 9.9 | 1.29 | 9.9 | 1.15 | 8.5 | 1.24 | 9.9 | 1.09 | 4.6 | 1.16 | 7.9 |
| 31:Q9I | 0.72 | -9.9 | 0.73 | -9.9 | 0.76 | -9.9 | 0.77 | -9.9 | 0.73 | -9.9 | 0.74 | -9.9 |
| 32:Q10 | 0.72 | -9.9 | 0.77 | -9.9 | 0.78 | -9.9 | 0.81 | -9.9 | 0.87 | -9.9 | 0.91 | -6.8 |

**Waves 4-6**

| SF-36 item | Infit MNSQ (W4) | ZSTD | Outfit MNSQ | ZSTD | Infit MNSQ (W5) | ZSTD | Outfit MNSQ | ZSTD | Infit MNSQ (W6) | ZSTD | Outfit MNSQ | ZSTD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 0.31 | -9.9 | 0.34 | -9.9 | 0.34 | -9.9 | 0.37 | -9.9 | 0.36 | -9.9 | 0.39 | -9.9 |
| 18:Q5B | 0.50 | -9.9 | 0.52 | -9.9 | 0.52 | -9.9 | 0.54 | -9.9 | 0.53 | -9.9 | 0.55 | -9.9 |
| 19:Q5C | 0.34 | -9.9 | 0.36 | -9.9 | 0.37 | -9.9 | 0.39 | -9.9 | 0.38 | -9.9 | 0.41 | -9.9 |
| 20:Q6 | 2.16 | 9.9 | 2.33 | 9.9 | 1.96 | 9.9 | 2.15 | 9.9 | 1.91 | 9.9 | 2.07 | 9.9 |
| 23:Q9A | 1.04 | 2.7 | 1.07 | 5.3 | 1.02 | 1.7 | 1.05 | 3.4 | 1.03 | 1.8 | 1.04 | 2.2 |
| 24:Q9B | 1.37 | 9.9 | 1.25 | 9.9 | 1.36 | 9.9 | 1.25 | 9.5 | 1.33 | 9.9 | 1.25 | 8.3 |
| 25:Q9C | 1.50 | 9.9 | 1.36 | 9.9 | 1.51 | 9.9 | 1.36 | 9.9 | 1.42 | 9.9 | 1.32 | 9.1 |
| 26:Q9D | 1.15 | 9.3 | 1.23 | 9.9 | 1.16 | 8.7 | 1.26 | 9.9 | 1.13 | 6.2 | 1.20 | 8.8 |
| 27:Q9E | 1.11 | 7.9 | 1.11 | 8.0 | 1.08 | 5.1 | 1.08 | 4.8 | 1.10 | 5.2 | 1.09 | 4.4 |
| 28:Q9F | 0.97 | -1.6 | 0.94 | -3.2 | 0.95 | -2.2 | 0.91 | -4.2 | 0.91 | -3.5 | 0.89 | -4.4 |
| 29:Q9G | 0.87 | -8.1 | 0.86 | -8.5 | 0.84 | -9.2 | 0.83 | -9.7 | 0.85 | -7.3 | 0.84 | -7.8 |
| 30:Q9H | 1.12 | 5.6 | 1.16 | 7.3 | 1.13 | 5.8 | 1.22 | 9.3 | 1.09 | 3.6 | 1.15 | 5.3 |
| 31:Q9I | 0.76 | -9.9 | 0.77 | -9.9 | 0.76 | -9.9 | 0.77 | -9.9 | 0.76 | -9.9 | 0.78 | -9.9 |
| 32:Q10 | 0.86 | -9.9 | 0.91 | -6.9 | 0.90 | -6.3 | 0.94 | -3.9 | 0.96 | -2.4 | 1.00 | 0.2 |

Notes. MNSQ = mean square residual fit statistic; ZSTD = standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.
Table 15. SF-36 mental health scale Rasch analysis summary item and person infit and outfit statistics for six waves of data collection.
**Persons**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | -.08 | -.04 | -.02 | -.03 | -.06 | -.06 |
| S.D. | .30 | .28 | .31 | .30 | .29 | .30 |
| Max | .30 | 1.64 | 1.38 | .30 | .29 | .30 |
| Min | 1.87 | -3.54 | -2.99 | 2.37 | 2.16 | 2.12 |
| Infit MNSQ | 1.01 | 1.01 | 1.02 | 1.02 | 1.02 | 1.02 |
| Infit ZSTD | -.30 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Outfit MNSQ | 1.06 | 1.08 | 1.06 | 1.03 | 1.02 | 1.01 |
| Outfit ZSTD | -.20 | -.20 | -.20 | -.20 | -.20 | -.20 |
| Separation | .53c | .33c | .45c | .36c | .36c | .41c |
| Reliability | .22a | .10a | .17a | .11a | .12a | .14a |

**Items**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.17 | 1.20 | 1.19 | 1.13 | 1.31 | 1.07 |
| Max | 1.59 | 1.78 | 1.75 | 1.75 | 2.13 | 1.67 |
| Min | -1.88 | -2.04 | -2.08 | -2.04 | -1.94 | -1.95 |
| Infit MNSQ | 1.02 | 1.05 | 1.02 | 1.00 | .99 | .98 |
| Infit ZSTD | .10 | .20 | -.60 | -.30 | -.50 | -.50 |
| Outfit MNSQ | 1.05 | 1.07 | 1.04 | 1.02 | 1.01 | 1.00 |
| Outfit ZSTD | .10 | .80 | .20 | .10 | -.10 | -.30 |
| Separation | 95.77 | 89.12 | 83.98 | 77.85 | 68.89 | 59.38 |
| Reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 mental health scale person-item map is shown in Supplemental Figure 5 and provides evidence of the hierarchical ordering of the SF-36 mental health scale items. It should also be noted that several of the SF-36 mental health scale items have the same level of difficulty. The average person measure was −0.05 logits (SD = 0.30) over the six waves of data collection (see Table 15). The mean person separation was 0.41 with a mean reliability of 0.14 (see Table 15). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 mental health construct. When examining the overall RMM output of the SF-36 mental health scale, the average person measure (−0.05 logits) was slightly below the average item measure (0.00 logits). The range of logit values for items was from +2.13 to −2.08. The person reliability was 0.14 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 mental health scale in the acceptable range and the person reliability in the less than desired range. The SF-36 mental health scale has a six-category rating scale, which generates five thresholds. The Rasch analysis shows that the average category measures increase monotonically, from −3.07, −1.06, −.17, .40, and 1.14 to 2.54 for wave one and from −2.98, −1.09, −.19, .41, and 1.15 to 2.51 for wave six. Item fit to the unidimensionality requirement of the RMM was also examined. Nine of the 14 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to −2 range; thus, items VT01:Q9A, MH01:Q9B, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10 met the RMM requirements (see Table 14). In other words, only 9/14 (64.3%) of the SF-36 mental health scale items met the RMM requirements. The following items had an infit MNSQ statistic of less than 0.70: RE01:Q5A, RE02:Q5B, and RE03:Q5C. Item SF01:Q6 had an infit MNSQ statistic greater than 1.30. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 16). This indicated that the unidimensionality requirement of the SF-36 mental health scale was met.
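The separation and reliability indices quoted above are linked by a fixed relationship, so either can be recovered from the other. The sketch below, a minimal illustration with an assumed function name and array inputs rather than the authors' procedure, follows the usual Winsteps-style definitions:

```python
import numpy as np

def separation_and_reliability(measures, standard_errors):
    """Separation index G and Rasch reliability from measures and their SEs.

    Error variance = mean squared standard error; 'true' variance =
    observed variance of the measures minus the error variance;
    G = true SD / RMSE; reliability = G**2 / (1 + G**2).
    """
    measures = np.asarray(measures, dtype=float)
    mse = np.mean(np.asarray(standard_errors, dtype=float) ** 2)
    true_var = max(np.var(measures) - mse, 0.0)
    g = np.sqrt(true_var / mse)
    return g, g ** 2 / (1 + g ** 2)
```

Under these definitions, the near-80 average item separation in Table 15 necessarily pairs with an item reliability of 1.00, while person separations of .33 to .53 pair with the low person reliabilities of .10 to .22.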
The raw variance explained by the SF-36 mental health scale over the six waves of data collection ranged from 62.5% to 66.1%, and the unexplained variance in the first contrast ranged from 15.1% to 16.5%.
Table 16. SF-36 mental health scale Rasch analysis of standardised residual variance in eigenvalue units for six waves of data collection.
**Waves 1-3**

| Variance component | Eigenvalue (W1) | Observed | Expected | Eigenvalue (W2) | Observed | Expected | Eigenvalue (W3) | Observed | Expected |
|---|---|---|---|---|---|---|---|---|---|
| Total raw variance in observations | 38.55 | 100.00% | 100.00% | 41.29 | 100.00% | 100.00% | 42.62 | 100.00% | 100.00% |
| Raw variance explained by measures | 24.55 | 63.70% | 63.70% | 27.29 | 66.10% | 66.20% | 26.62 | 62.50% | 62.50% |
| Raw variance explained by persons | 2.85 | 7.40% | 7.40% | 2.06 | 5.00% | 5.00% | 2.68 | 6.30% | 6.30% |
| Raw variance explained by items | 21.70 | 56.30% | 56.30% | 25.23 | 61.10% | 61.20% | 23.94 | 56.20% | 56.20% |
| Raw unexplained variance (total) | 14.00 | 36.30% | 36.30% | 14.00 | 33.90% | 33.80% | 16.00 | 37.50% | 37.50% |
| Unexplained variance in 1st contrast | 6.22 | 16.10% | 44.50% | 6.22 | 15.10% | 44.40% | 7.02 | 16.50% | 43.90% |
| Unexplained variance in 2nd contrast | 1.49 | 3.90% | 10.60% | 1.47 | 3.60% | 10.50% | 1.62 | 3.80% | 10.10% |
| Unexplained variance in 3rd contrast | 1.29 | 3.30% | 9.20% | 1.32 | 3.20% | 9.40% | 1.29 | 3.00% | 8.10% |
| Unexplained variance in 4th contrast | 0.81 | 2.10% | 5.80% | 0.85 | 2.00% | 6.00% | 1.05 | 2.50% | 6.60% |
| Unexplained variance in 5th contrast | 0.68 | 1.80% | 4.90% | 0.71 | 1.70% | 5.00% | 0.71 | 1.70% | 4.40% |

**Waves 4-6**

| Variance component | Eigenvalue (W4) | Observed | Expected | Eigenvalue (W5) | Observed | Expected | Eigenvalue (W6) | Observed | Expected |
|---|---|---|---|---|---|---|---|---|---|
| Total raw variance in observations | 39.19 | 100.00% | 100.00% | 37.79 | 100.00% | 100.00% | 37.65 | 100.00% | 100.00% |
| Raw variance explained by measures | 25.19 | 64.30% | 64.50% | 23.79 | 62.90% | 63.30% | 23.65 | 62.80% | 63.20% |
| Raw variance explained by persons | 2.43 | 6.20% | 6.20% | 1.73 | 4.60% | 4.60% | 2.44 | 6.50% | 6.50% |
| Raw variance explained by items | 22.76 | 58.10% | 58.30% | 22.06 | 58.40% | 58.70% | 21.21 | 56.30% | 56.60% |
| Raw unexplained variance (total) | 14.00 | 35.70% | 35.50% | 14.00 | 37.10% | 36.70% | 14.00 | 37.20% | 36.80% |
| Unexplained variance in 1st contrast | 6.16 | 15.70% | 44.00% | 6.10 | 16.10% | 43.60% | 5.75 | 15.30% | 41.10% |
| Unexplained variance in 2nd contrast | 1.52 | 3.90% | 10.90% | 1.61 | 4.20% | 11.50% | 1.67 | 4.40% | 11.90% |
| Unexplained variance in 3rd contrast | 1.32 | 3.40% | 9.40% | 1.31 | 3.50% | 9.30% | 1.35 | 3.60% | 9.60% |
| Unexplained variance in 4th contrast | 0.80 | 2.00% | 5.70% | 0.79 | 2.10% | 5.60% | 0.85 | 2.30% | 6.10% |
| Unexplained variance in 5th contrast | 0.68 | 1.70% | 4.90% | 0.69 | 1.80% | 4.90% | 0.68 | 1.80% | 4.80% |

Notes. a> 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c< 10% unexplained variance in the first contrast.

An inspection of the PTMEAs for the SF-36 mental health scale indicated that item SF01:Q6 had negative PTMEAs over the first three waves of data collection (see Table 13); for all other items, the PTMEA correlations were positive and had acceptable values, supporting item-level polarity. The functioning of the six rating scale categories was examined for the SF-36 mental health scale. Items which are easier are located at the bottom of the SF-36 mental health person-item map while more difficult items are located at the top of the map, and the patterns of more challenging and less difficult items for each of the six waves of data collection appear to be fairly consistent. The category logit measures ranged from −3.18 to 2.60 (see Table 17). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7-1.30 range and/or a z-score outside the +2 to −2 range over the six waves of data collection, this being category one. The infit MNSQ scores for this rating category ranged from 1.37 to 1.41 (see Table 17). The results indicated that the six-level rating scale used in the SF-36 mental health scale might not be the most robust to use (see Supplemental Figure 6); however, the full range of ratings was used by the participants who completed the SF-36 mental health scale.
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the latter five response categories were each the most probable category for some part of the continuum. Rating category one was problematic.
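These probability curves come from the Andrich rating scale model, under which each of an item's six response categories should dominate some interval of the latent continuum when the thresholds are ordered. The following is a minimal sketch of the category probabilities for a single item, assuming a difficulty `delta` and the five Andrich thresholds; the function name and inputs are illustrative, not the authors' implementation:

```python
import numpy as np

def rsm_category_probabilities(theta, delta, thresholds):
    """Andrich rating scale model: P(X = k) is proportional to
    exp(sum_{j<=k}(theta - delta - tau_j)), with an empty sum for the
    lowest category; `thresholds` holds tau_1..tau_5."""
    terms = theta - delta - np.asarray(thresholds, dtype=float)
    cum = np.concatenate(([0.0], np.cumsum(terms)))
    p = np.exp(cum - cum.max())          # shift for numerical stability
    return p / p.sum()

# Sweeping theta over a grid and plotting each category's probability
# reproduces the kind of curves described in the text.
```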
Table 17. SF-36 mental health scale Rasch analysis summary of category structure for six waves of data collection.
**Waves 1-3**

| Category | N (W1) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W2) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W3) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 22667 | 14 | (-3.07) | 1.38 | 1.20 | NONE | 18463 | 13 | (-3.18) | 1.41 | 1.22 | NONE | 14323 | 12 | (-3.18) | 1.38 | 1.22 | NONE |
| 2 | 49420 | 30 | -1.06 | .75 | .78 | -1.91 | 43019 | 30 | -1.08 | .76 | .81 | -2.03 | 33416 | 28 | -1.12 | .78 | .85 | -2.02 |
| 3 | 15086 | 9 | -.17 | .96 | .86 | .66 | 12291 | 8 | -.15 | .97 | .89 | .71 | 10845 | 9 | -.17 | .98 | .85 | .57 |
| 4 | 25646 | 15 | .40 | 1.02 | 1.11 | -.41b | 20753 | 14 | .43 | 1.00 | 1.12 | -.38b | 18002 | 15 | .44 | 1.00 | 1.06 | -.38b |
| 5 | 28636 | 17 | 1.14 | 1.06 | 1.31 | .51b | 24231 | 17 | 1.16 | 1.08 | 1.38 | .53b | 18787 | 16 | 1.20 | 1.13 | 1.28 | .63 |
| 6 | 24973 | 15 | (2.54) | 1.00 | 1.07 | 1.15 | 23360 | 16 | (2.56) | 1.00 | 1.08 | 1.17 | 18313 | 15 | (2.60) | .95 | 1.02 | 1.19 |

**Waves 4-6**

| Category | N (W4) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W5) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W6) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 12561 | 13 | (-3.08) | 1.37 | 1.21 | NONE | 10333 | 14 | (-3.00) | 1.38 | 1.23 | NONE | 7471 | 14 | (-2.98) | 1.37 | 1.21 | NONE |
| 2 | 27233 | 29 | -1.11 | .80 | .88 | -1.91 | 20854 | 28 | -1.08 | .82 | .89 | -1.82 | 14529 | 27 | -1.09 | .83 | .91 | -1.80 |
| 3 | 9548 | 10 | -.18 | .98 | .82 | .51 | 7515 | 10 | -.19 | .94 | .76 | .50 | 5675 | 11 | -.19 | .94 | .78 | .43 |
| 4 | 15240 | 16 | .42 | 1.00 | 1.00 | -.36b | 12348 | 17 | .40 | .97 | .94 | -.40b | 9024 | 17 | .41 | .98 | .94 | -.35b |
| 5 | 15741 | 16 | 1.17 | 1.14 | 1.22 | .60 | 12183 | 16 | 1.15 | 1.19 | 1.27 | .61 | 8698 | 16 | 1.15 | 1.19 | 1.24 | .64 |
| 6 | 15147 | 16 | (2.57) | .93 | .99 | 1.16 | 11454 | 15 | (2.53) | .90 | .99 | 1.11 | 8415 | 16 | (2.51) | .90 | .96 | 1.07 |

Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The Rasch output logit performance scores for the participants were compared to determine whether any of the SF-36 mental health scale items exhibited differential item functioning (DIF) based on marital status and area of residence (urban versus regional) (see Table 18). Six of the SF-36 mental health items exhibited a consistent pattern of DIF over the six waves of data collection. Items SF01:Q6, MH01:Q9B, MH02:Q9C, MH03:Q9D, MH04:Q9F, and MH05:Q9H exhibited DIF based on both marital status and area of residence (see Table 18). It should be noted that items MH01:Q9B and MH03:Q9D also had infit MNSQ statistics that fell outside the 0.7-1.30 range. SF-36 mental health items MH01:Q9B and MH03:Q9D therefore appear to be particularly problematic items based on the RMM analysis findings.
Table 18. Differential item functioning (DIF) for the SF-36 mental health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
In each wave, χ² and Prob. give the marital-status summary DIF chi-square (DIF = 2) and its probability; Contrast and M-H p give the urban-versus-regional DIF contrast and Mantel-Haenszel probability.

**Waves 1-3**

| SF-36 item | χ² (W1) | Prob. | Contrast | M-H p | χ² (W2) | Prob. | Contrast | M-H p | χ² (W3) | Prob. | Contrast | M-H p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 3.13 | .206 | 0.00 | .720 | 0.05 | .975 | -0.17 | .122 | 0.07 | .969 | 0.00 | .978 |
| 18:Q5B | 6.16 | .045∗ | 0.00 | .799 | 0.41 | .814 | 0.00 | .347 | 0.59 | .745 | 0.00 | .165 |
| 19:Q5C | 4.23 | .119 | 0.00 | .505 | 0.17 | .922 | -0.30 | .066 | 0.79 | .673 | 0.00 | .484 |
| 20:Q6 | 62.55 | .001∗∗∗ | -0.09 | .058 | 6.62 | .036∗ | 0.00 | .056 | 0.00 | 1.000 | 0.00 | .415 |
| 23:Q9A | 8.45 | .014∗ | 0.00 | .101 | 0.00 | 1.000 | -0.05 | .498 | 0.05 | .979 | 0.00 | .725 |
| 24:Q9B | 14.83 | .001∗∗∗ | 0.09 | .001∗∗∗ | 0.41 | .813 | 0.00 | .553 | 11.22 | .004∗∗ | 0.02 | .093 |
| 25:Q9C | 62.48 | .001∗∗∗ | 0.12 | .001∗∗∗ | 29.94 | .001∗∗∗ | 0.48 | .087 | 0.01 | .996 | 0.07 | .009∗∗ |
| 26:Q9D | 9.01 | .011∗ | -0.07 | .001∗∗∗ | 0.16 | .925 | -0.07 | .476 | 22.89 | .001∗∗∗ | -0.06 | .001∗∗∗ |
| 27:Q9E | 8.72 | .013∗ | 0.00 | .741 | 0.49 | .782 | 0.37 | .010∗ | 0.57 | .750 | 0.00 | .207 |
| 28:Q9F | 17.18 | .001∗∗∗ | 0.08 | .001∗∗∗ | 20.01 | .001∗∗∗ | 0.00 | .401 | 0.73 | .694 | 0.05 | .004 |
| 29:Q9G | 5.04 | .079 | 0.00 | .719 | 3.76 | .150 | 0.52 | .003∗∗ | 11.42 | .003∗∗ | 0.00 | .815 |
| 30:Q9H | 13.46 | .001∗∗∗ | -0.06 | .001∗∗ | 8.62 | .013∗ | 0.00 | .176 | 3.70 | .155 | -0.07 | .002∗∗ |
| 31:Q9I | 1.75 | .414 | 0.00 | .224 | 0.51 | .773 | -0.18 | .308 | 0.00 | 1.000 | 0.00 | .299 |
| 32:Q10 | 14.70 | .001∗∗∗ | 0.00 | .207 | 0.11 | .947 | 0.00 | .165 | 2.75 | .250 | 0.00 | .978 |

**Waves 4-6**

| SF-36 item | χ² (W4) | Prob. | Contrast | M-H p | χ² (W5) | Prob. | Contrast | M-H p | χ² (W6) | Prob. | Contrast | M-H p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 0.00 | 1.000 | 0.00 | .618 | 0.07 | .966 | -0.03 | .986 | 0.94 | .623 | -0.18 | .001∗∗∗ |
| 18:Q5B | 0.00 | 1.000 | 0.00 | .639 | 2.43 | .294 | 0.00 | .395 | 0.39 | .824 | 0.00 | .543 |
| 19:Q5C | 0.00 | 1.000 | 0.00 | .497 | 0.71 | .701 | 0.20 | .271 | 0.59 | .744 | -0.19 | .003∗∗ |
| 20:Q6 | 0.00 | 1.000 | 0.00 | .779 | 6.37 | .040∗ | -0.02 | .337 | 2.26 | .320 | -0.03 | .162 |
| 23:Q9A | 0.00 | 1.000 | 0.00 | .900 | 1.95 | .373 | 0.19 | .254 | 1.18 | .551 | -0.04 | .176 |
| 24:Q9B | 6.95 | .008∗∗ | 0.00 | .384 | 13.76 | .001∗∗∗ | 0.00 | .784 | 3.06 | .213 | 0.00 | .580 |
| 25:Q9C | 0.00 | 1.000 | 0.06 | .030∗ | 6.84 | .032∗ | -0.68 | .078 | 0.77 | .678 | 0.08 | .371 |
| 26:Q9D | 0.00 | 1.000 | 0.00 | .544 | 13.70 | .001∗∗∗ | -0.02 | .118 | 2.06 | .354 | 0.00 | .923 |
| 27:Q9E | 0.00 | 1.000 | 0.00 | .537 | 3.30 | .189 | -0.08 | .720 | 1.67 | .430 | 0.06 | .215 |
| 28:Q9F | 0.00 | 1.000 | 0.00 | .687 | 0.87 | .644 | 0.00 | .819 | 0.43 | .806 | 0.00 | .408 |
| 29:Q9G | 0.00 | 1.000 | 0.00 | .694 | 0.20 | .908 | 0.27 | .570 | 0.63 | .729 | 0.03 | .278 |
| 30:Q9H | 6.10 | .014∗ | 0.00 | .297 | 4.86 | .086 | 0.05 | .065 | 0.08 | .962 | 0.00 | .419 |
| 31:Q9I | 0.00 | 1.000 | -0.05 | .112 | 1.10 | .574 | 0.48 | .170 | 0.04 | .981 | -0.04 | .664 |
| 32:Q10 | 0.00 | 1.000 | 0.00 | .414 | 1.56 | .456 | 0.08 | .019∗ | 0.05 | .979 | 0.00 | .434 |

Notes. PROB. = probability; ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
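The Mantel-Haenszel probabilities in these DIF tables test whether two groups matched on the underlying trait respond differently to an item. The sketch below shows the classic dichotomous Mantel-Haenszel chi-square as an illustration only; Winsteps applies a polytomous extension for these six-category items, and the function name and table layout here are assumptions:

```python
def mantel_haenszel_chi_square(strata):
    """Mantel-Haenszel chi-square (no continuity correction) for DIF.

    `strata` is a sequence of 2x2 tables, one per ability stratum:
    rows = reference/focal group, columns = high/low item response.
    """
    deviation, variance = 0.0, 0.0
    for table in strata:
        (a, b), (c, d) = table
        n = a + b + c + d
        deviation += a - (a + b) * (a + c) / n       # observed minus expected
        variance += ((a + b) * (c + d) * (a + c) * (b + d)
                     / (n ** 2 * (n - 1)))
    return deviation ** 2 / variance
```

Large chi-square values (small probabilities) indicate that group membership predicts the response even after matching on ability, which is the pattern flagged for the starred items above.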
## 3.1. SF-36 Total Scale Rasch Analysis for Six Waves of Data Collection
SF-36 total scale Rasch analysis item statistics for the six waves of data collection are shown in Table 1. When all 36 SF-36 items were calibrated using the RMM for the six waves of data collection, MNSQ infit statistics ranged from 0.13 to 2.43 and outfit statistics ranged from 0.22 to 2.64 (see Table 2). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being −3.01 and the highest value being +2.31. This resulted in an average item separation index of 77.98 and an average item reliability of 1.00 over the six waves (see Table 3).
Table 1. SF-36 total scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 Wave 4 Wave 5 Wave 6 SF36 ITEM LOGIT MEASURE MODEL S.E PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR 1: Q1 -0.36 0.01 -0.16 -0.37 0.01 -0.14 -0.37 0.01 -0.09 -0.52 0.01 -0.06 -0.60 0.01 -0.05 -0.68 0.01 0.01 2: Q2 -0.39 0.01 0.04 -0.44 0.01 0.03 -0.51 0.01 0.03 -0.56 0.01 0.08 -0.66 0.01 0.06 -0.73 0.01 0.11 3: Q3A 1.94 0.02 0.24 1.96 0.02 0.26 2.05 0.02 0.29 2.08 0.02 0.30 2.09 0.03 0.29 2.31 0.04 0.27 4: Q3B 0.36 0.01 0.46 0.39 0.01 0.44 0.53 0.01 0.44 0.67 0.01 0.45 0.77 0.02 0.45 0.95 0.02 0.41 5: Q3C 0.30 0.01 0.47 0.32 0.01 0.46 0.34 0.01 0.44 0.44 0.01 0.46 0.49 0.02 0.44 0.57 0.02 0.41 6: Q3D 0.82 0.01 0.45 0.84 0.01 0.44 0.93 0.01 0.46 1.00 0.02 0.47 1.11 0.02 0.46 1.25 0.02 0.44 7: Q3E 0.13 0.01 0.51 0.15 0.01 0.50 0.24 0.01 0.50 0.28 0.01 0.49 0.36 0.02 0.49 0.46 0.02 0.47 8: Q3F 0.54 0.01 0.44 0.63 0.01 0.41 0.72 0.01 0.41 0.73 0.02 0.43 0.73 0.02 0.43 0.81 0.02 0.40 9: Q3G 0.44 0.01 0.49 0.53 0.01 0.46 0.73 0.01 0.46 0.87 0.02 0.47 1.05 0.02 0.44 1.24 0.02 0.44 10: Q3H 0.05 0.01 0.52 0.10 0.01 0.49 0.23 0.01 0.49 0.36 0.01 0.50 0.48 0.02 0.48 0.65 0.02 0.48 11: Q3I -0.14 0.01 0.48 -0.13 0.01 0.45 -0.07 0.01 0.46 -0.02 0.01 0.44 0.05 0.01 0.44 0.11 0.02 0.45 12: Q3J -0.28 0.01 0.36 -0.31 0.01 0.35 -0.29 0.01 0.32 -0.24 0.01 0.32 -0.23 0.01 0.32 -0.23 0.02 0.32 13: Q4A 1.26 0.01 0.35 1.31 0.01 0.33 1.41 0.02 0.29 1.41 0.02 0.28 1.46 0.02 0.26 1.47 0.03 0.27 14: Q4B 1.63 0.01 0.35 1.74 0.02 0.32 1.87 0.02 0.29 1.89 0.02 0.28 1.92 0.03 0.27 1.94 0.03 0.26 15: Q4C 1.47 0.01 0.36 1.53 0.02 0.36 1.60 0.02 0.31 1.69 0.02 0.30 1.74 0.02 0.28 1.78 0.03 0.24 16: Q4D 1.50 0.01 0.36 1.58 0.02 0.35 1.67 0.02 0.31 1.73 0.02 0.30 1.77 0.02 0.28 1.80 0.03 0.28 17: Q5A 1.02 0.01 0.37 1.01 0.01 0.35 1.00 0.01 0.31 0.96 0.02 0.30 0.92 0.02 0.30 0.89 0.02 0.27 18: Q5B 1.22 0.01 0.36 1.22 0.01 0.35 1.23 0.02 0.31 1.21 0.02 0.30 1.18 0.02 0.29 1.15 0.02 0.27 19: Q5C 1.05 0.01 0.35 1.03 0.01 0.33 1.01 0.01 0.31 0.99 0.02 0.29 0.95 0.02 0.29 0.91 0.02 0.26 20: Q6 1.17 0.01 -0.22 1.35 0.01 -0.20 0.93 0.01 -0.16 0.69 0.01 -0.16 0.48 0.02 -0.12 0.37 0.02 -0.08 21: Q7 -0.28 0.01 -0.06 -0.26 0.01 -0.04 -0.40 0.01 0.01 -0.50 0.01 0.01 -0.57 0.01 0.03 -0.67 0.01 0.07 22: Q8 0.68 0.01 -0.18 0.73 0.01 -0.14 0.44 0.01 -0.11 0.26 0.01 -0.09 0.12 0.01 -0.04 -0.01 0.02 -0.02 23: Q9A -0.59 0.01 -0.05 -0.67 0.01 -0.06 -0.79 0.01 0.00 -0.82 0.01 -0.02 -0.93 0.01 0.01 -1.06 0.01 0.05 24: Q9B -2.04 0.01 0.39 -2.30 0.01 0.36 -2.39 0.01 0.33 -2.40 0.01 0.31 -2.38 0.02 0.31 -2.42 0.02 0.27 25: Q9C -2.64 0.01 0.40 -2.92 0.02 0.35 -2.98 0.02 0.34 -3.01 0.02 0.30 -2.86 0.02 0.31 -2.89 0.02 0.27 26: Q9D -0.30 0.01 0.06 -0.20 0.01 0.01 -0.28 0.01 0.09 -0.29 0.01 0.07 -0.30 0.01 0.08 -0.37 0.01 0.12 27: Q9E -0.77 0.01 -0.05 -0.89 0.01 -0.09 -1.00 0.01 -0.04 -1.06 0.01 -0.05 -1.17 0.01 -0.02 -1.35 0.01 0.00 28: Q9F -2.01 0.01 0.40 -2.15 0.01 0.34 -2.15 0.01 0.35 -2.16 0.01 0.34 -2.13 0.02 0.33 -2.13 0.02 0.31 29: Q9G -1.63 0.01 0.44 -1.70 0.01 0.41 -1.67 0.01 0.40 -1.60 0.01 0.40 -1.56 0.01 0.40 -1.57 0.01 0.37 30: Q9H 0.24 0.01 0.14 0.33 0.01 0.09 0.26 0.01 0.13 0.23 0.01 0.14 0.14 0.01 0.12 0.08 0.02 0.15 31: Q9I -1.23 0.01 0.39 -1.22 0.01 0.37 -1.15 0.01 0.34 -1.07 0.01 0.36 -1.03 0.01 0.34 -1.04 0.01 0.31 32: Q10 -1.34 0.01 0.35 -1.39 0.01 0.31 -1.30 0.01 0.28 -1.23 0.01 0.26 -1.20 0.01 0.24 -1.18 0.01 0.24 33: Q11A -1.39 0.01 0.33 -1.48 0.01 0.31 -1.49 0.01 
0.28 -1.50 0.01 0.25 -1.49 0.01 0.26 -1.52 0.01 0.20 34: Q11B 0.28 0.01 0.03 0.43 0.01 -0.01 0.32 0.01 0.08 0.25 0.01 0.07 0.10 0.01 0.10 -0.01 0.02 0.14 35: Q11C -0.68 0.01 0.29 -0.75 0.01 0.27 -0.65 0.01 0.29 -0.59 0.01 0.25 -0.50 0.01 0.25 -0.48 0.01 0.24 36: Q11D -0.02 0.01 -0.06 0.01 0.01 -0.09 -0.03 0.01 0.00 -0.17 0.01 -0.01 -0.31 0.01 0.04 -0.40 0.01 0.09 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 2
SF-36 total scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM INFIT Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.92 -6.7 0.96 -2.9 0.86 -9.9 0.90 -7.3 0.83 -9.9 0.86 -9.8 2: Q2 0.51 -9.9 0.55 -9.9 0.53 -9.9 0.57 -9.9 0.54 -9.9 0.56 -9.9 3: Q3A 1.03 2.1 1.02 1.4 1.12 7.0 1.09 5.5 1.06 3.0 1.02 1.2 4: Q3B 0.59 -9.9 0.61 -9.9 0.64 -9.9 0.65 -9.9 0.72 -9.9 0.73 -9.9 5: Q3C 0.54 -9.9 0.56 -9.9 0.56 -9.9 0.57 -9.9 0.59 -9.9 0.60 -9.9 6: Q3D 0.82 -9.9 0.82 -9.9 0.84 -9.9 0.84 -9.9 0.86 -8.6 0.86 -8.7 7: Q3E 0.46 -9.9 0.48 -9.9 0.48 -9.9 0.50 -9.9 0.54 -9.9 0.56 -9.9 8: Q3F 0.66 -9.9 0.67 -9.9 0.72 -9.9 0.72 -9.9 0.75 -9.9 0.75 -9.9 9: Q3G 0.76 -9.9 0.78 -9.9 0.83 -9.9 0.85 -9.9 0.94 -3.7 0.94 -3.4 10: Q3H 0.46 -9.9 0.50 -9.9 0.53 -9.9 0.56 -9.9 0.65 -9.9 0.68 -9.9 11: Q3I 0.27 -9.9 0.30 -9.9 0.29 -9.9 0.31 -9.9 0.37 -9.9 0.39 -9.9 12: Q3J 0.17 -9.9 0.19 -9.9 0.13 -9.9 0.15 -9.9 0.17 -9.9 0.18 -9.9 13: Q4A 0.40 -9.9 0.41 -9.9 0.41 -9.9 0.42 -9.9 0.49 -9.9 0.50 -9.9 14: Q4B 0.57 -9.9 0.58 -9.9 0.61 -9.9 0.61 -9.9 0.66 -9.9 0.66 -9.9 15: Q4C 0.50 -9.9 0.51 -9.9 0.51 -9.9 0.52 -9.9 0.57 -9.9 0.58 -9.9 16: Q4D 0.51 -9.9 0.53 -9.9 0.54 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 17: Q5A 0.25 -9.9 0.27 -9.9 0.22 -9.9 0.23 -9.9 0.25 -9.9 0.26 -9.9 18: Q5B 0.37 -9.9 0.39 -9.9 0.36 -9.9 0.37 -9.9 0.39 -9.9 0.41 -9.9 19: Q5C 0.27 -9.9 0.29 -9.9 0.24 -9.9 0.26 -9.9 0.26 -9.9 0.28 -9.9 20: Q6 2.43 9.9 2.59 9.9 2.48 9.9 2.64 9.9 2.36 9.9 2.48 9.9 21: Q7 1.84 9.9 1.90 9.9 1.96 9.9 2.01 9.9 1.70 9.9 1.73 9.9 22: Q8 2.09 9.9 2.17 9.9 2.14 9.9 2.21 9.9 1.93 9.9 1.99 9.9 23: Q9A 1.48 9.9 1.52 9.9 1.48 9.9 1.52 9.9 1.44 9.9 1.46 9.9 24: Q9B 1.47 9.9 1.40 9.9 1.64 9.9 1.55 9.9 1.64 9.9 1.55 9.9 25: Q9C 1.67 9.9 1.53 9.9 1.82 9.9 1.66 9.9 1.74 9.9 1.57 9.9 26: Q9D 1.69 9.9 1.73 9.9 1.60 9.9 1.64 9.9 1.50 9.9 1.52 9.9 27: Q9E 1.55 9.9 1.58 9.9 1.55 9.9 1.58 9.9 1.49 9.9 1.50 9.9 28: Q9F 1.07 5.1 1.03 2.4 1.19 9.9 1.15 9.0 1.13 6.9 1.09 4.9 29: Q9G 1.02 2.0 1.00 0.2 1.04 3.0 1.02 1.4 1.02 1.1 1.00 -0.1 30: Q9H 1.63 9.9 1.62 9.9 1.49 9.9 1.50 9.9 1.35 9.9 1.36 9.9 31: Q9I 0.84 -9.9 0.84 -9.9 0.88 -9.9 0.88 -9.9 0.84 -9.9 0.84 -9.9 32: Q10 0.79 -9.9 0.79 -9.9 0.84 -9.9 0.83 -9.9 0.92 -6.6 0.92 -6.6 33: Q11A 0.77 -9.9 0.76 -9.9 0.64 -9.9 0.63 -9.9 0.66 -9.9 0.65 -9.9 34: Q11B 1.88 9.9 1.90 9.9 1.77 9.9 1.80 9.9 1.62 9.9 1.62 9.9 35: Q11C 1.04 3.3 1.05 4.1 0.95 -4.5 0.95 -3.8 0.99 -0.9 0.99 -0.5 36: Q11D 1.78 9.9 1.83 9.9 1.81 9.9 1.86 9.9 1.68 9.9 1.70 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.72 -9.9 0.73 -9.9 0.69 -9.9 0.70 -9.9 0.65 -9.9 0.65 -9.9 2: Q2 0.50 -9.9 0.52 -9.9 0.49 -9.9 0.50 -9.9 0.46 -9.9 0.47 -9.9 3: Q3A 1.11 4.4 1.06 2.3 1.18 6.0 1.11 3.9 1.26 6.4 1.17 4.3 4: Q3B 0.78 -9.9 0.78 -9.9 0.81 -9.1 0.81 -9.4 0.91 -3.6 0.90 -4.1 5: Q3C 0.62 -9.9 0.64 -9.9 0.65 -9.9 0.66 -9.9 0.72 -9.9 0.72 -9.9 6: Q3D 0.87 -7.1 0.86 -7.7 0.92 -3.6 0.90 -4.6 0.97 -0.9 0.93 -2.4 7: Q3E 0.56 -9.9 0.57 -9.9 0.61 -9.9 0.63 -9.9 0.70 -9.9 0.70 -9.9 8: Q3F 0.73 -9.9 0.73 -9.9 0.71 -9.9 0.72 -9.9 0.76 -9.9 0.76 -9.9 9: Q3G 0.98 -1.0 0.97 -1.3 1.03 1.5 1.01 0.6 1.09 3.3 1.04 1.6 10: Q3H 0.74 -9.9 0.76 -9.9 0.83 -8.0 0.85 -7.4 0.94 -2.4 0.93 -2.6 11: Q3I 0.41 -9.9 0.43 -9.9 0.48 -9.9 0.51 -9.9 0.55 -9.9 0.57 -9.9 12: Q3J 0.24 -9.9 0.26 -9.9 0.29 -9.9 0.31 -9.9 0.33 -9.9 0.34 -9.9 13: Q4A 0.52 -9.9 0.54 -9.9 0.58 -9.9 0.60 -9.9 0.60 -9.9 0.61 -9.9 14: Q4B 0.69 -9.9 0.69 -9.9 0.73 -9.9 0.72 -9.9 0.74 -8.7 0.74 -8.9 15: Q4C 
0.62 -9.9 0.63 -9.9 0.67 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 16: Q4D 0.64 -9.9 0.64 -9.9 0.68 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 17: Q5A 0.27 -9.9 0.29 -9.9 0.30 -9.9 0.32 -9.9 0.32 -9.9 0.34 -9.9 18: Q5B 0.42 -9.9 0.43 -9.9 0.45 -9.9 0.47 -9.9 0.47 -9.9 0.49 -9.9 19: Q5C 0.29 -9.9 0.31 -9.9 0.32 -9.9 0.34 -9.9 0.33 -9.9 0.35 -9.9 20: Q6 2.17 9.9 2.27 9.9 2.06 9.9 2.14 9.9 2.00 9.9 2.06 9.9 21: Q7 1.61 9.9 1.63 9.9 1.52 9.9 1.53 9.9 1.40 9.9 1.41 9.9 22: Q8 1.75 9.9 1.80 9.9 1.65 9.9 1.68 9.9 1.56 9.9 1.59 9.9 23: Q9A 1.40 9.9 1.41 9.9 1.36 9.9 1.36 9.9 1.34 9.9 1.35 9.9 24: Q9B 1.62 9.9 1.53 9.9 1.61 9.9 1.53 9.9 1.58 9.9 1.51 9.9 25: Q9C 1.73 9.9 1.60 9.9 1.75 9.9 1.62 9.9 1.65 9.9 1.57 9.9 26: Q9D 1.51 9.9 1.53 9.9 1.47 9.9 1.49 9.9 1.41 9.9 1.42 9.9 27: Q9E 1.51 9.9 1.52 9.9 1.44 9.9 1.45 9.9 1.46 9.9 1.47 9.9 28: Q9F 1.15 7.5 1.11 5.5 1.16 7.0 1.12 5.3 1.12 4.6 1.09 3.6 29: Q9G 1.02 1.6 1.01 0.5 1.03 1.6 1.01 0.8 1.06 2.7 1.04 2.1 30: Q9H 1.36 9.9 1.36 9.9 1.35 9.9 1.37 9.9 1.29 9.9 1.29 9.9 31: Q9I 0.89 -8.2 0.89 -7.9 0.92 -5.2 0.92 -4.9 0.91 -5.1 0.91 -4.8 32: Q10 0.93 -5.1 0.94 -4.7 1.00 -0.2 1.00 0.0 1.06 3.0 1.06 3.1 33: Q11A 0.67 -9.9 0.66 -9.9 0.67 -9.9 0.66 -9.9 0.70 -9.9 0.69 -9.9 34: Q11B 1.58 9.9 1.59 9.9 1.44 9.9 1.45 9.9 1.38 9.9 1.38 9.9 35: Q11C 1.01 0.5 1.01 0.9 0.99 -0.6 1.00 -0.2 0.98 -1.0 0.98 -0.9 36: Q11D 1.61 9.9 1.64 9.9 1.51 9.9 1.53 9.9 1.43 9.9 1.43 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; Z-STD ≤-2.0 or ≥2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 3
SF-36 total scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
**Persons**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | -.69 | -.68 | -.72 | -.75 | -.80 | -.85 |
| S.D. | .24 | .22 | .24 | .23 | .23 | .24 |
| Max | .82 | .29 | .64 | .67 | .03 | .15 |
| Min | -4.33 | -2.70 | -2.60 | -2.59 | -2.54 | -2.76 |
| Infit MNSQ | 1.03 | 1.02 | 1.03 | 1.04 | 1.04 | 1.03 |
| Infit ZSTD | -.30 | -.40 | -.30 | -.20 | -.20 | -.10 |
| Outfit MNSQ | 1.01 | 1.01 | 1.00 | 1.00 | .99 | .99 |
| Outfit ZSTD | -.40 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Separation | .81c | .60c | .72c | .71c | .75c | .78c |
| Reliability | .40a | .26a | .34a | .33a | .36a | .38a |

**Items**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.11 | 1.19 | 1.20 | 1.21 | 1.22 | 1.26 |
| Max | 1.94 | 1.96 | 2.05 | 2.08 | 2.09 | 2.31 |
| Min | -2.64 | -2.92 | -2.98 | -3.01 | -2.86 | -2.89 |
| Infit MNSQ | .98 | .99 | .98 | .98 | .98 | .99 |
| Infit ZSTD | -2.30 | -2.30 | -2.20 | -1.90 | -1.40 | -.90 |
| Outfit MNSQ | .99 | 1.00 | .98 | .98 | .98 | .98 |
| Outfit ZSTD | -2.30 | -2.30 | -2.30 | -2.00 | -1.50 | -1.10 |
| Separation | 93.40 | 89.72 | 82.81 | 76.45 | 67.43 | 58.09 |
| Reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 total scale person-item map in Supplemental Figure 1 shows evidence of consistent hierarchical ordering of the SF-36 total scale items. Items which were less difficult are located at the bottom of the person-item map while more difficult items are located at the top of the map. The figure also shows that while each of the waves had a reasonable distribution of items in relation to item difficulty, several of the SF-36 total scale items have the same level of difficulty. The Rasch analysis shows that the average category measures (for the six-category rating scale, which generates five thresholds) increase monotonically, from −3.15, −1.36, −.25, .48, and 1.31 to 2.82 for wave one and from −2.96, −1.30, −.31, .42, and 1.29 to 2.78 for wave six. The average person measure was −0.75 logits (SD = 0.23) over the six waves of data collection (see Table 3). The mean person separation was 0.73 with a mean reliability of 0.35 (see Table 3). When examining the overall RMM output of the SF-36 total scale, the average person measure (−0.75 logits) was lower than the average item measure (0.00 logits). The range of logit values for items was from +2.31 to −3.01. The person reliability was 0.35 and the item reliability was 1.00. This places the item reliability for the SF-36 total scale in the acceptable range and the person reliability in the unacceptable range. The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured. However, the separation index for persons was less than 2.0, indicating inadequate separation of participants on the construct. Item fit to the unidimensionality requirement of the RMM was also examined. Eleven of the 36 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to −2 range. Specifically, items GH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, MH04:Q9F, VT03:Q9G, VT04:Q9I, SF02:Q10, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 2). In other words, only 30.6% (11 of 36) of the SF-36 total scale items met the RMM requirements. The following items had an infit MNSQ statistic of less than 0.70: HT:Q2, PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, RE01:Q5A, RE02:Q5B, and RE03:Q5C.
The following items had an infit MNSQ statistic greater than 1.30: SF01:Q6, BP01:Q7, BP02:Q8, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH05:Q9H, GH03:Q11B, and GH05:Q11D. The Winsteps RMM program determines the dimensionality of a scale by using a Rasch-residual principal components analysis. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 4). This indicated that the unidimensionality requirement of the SF-36 total scale was met. The raw variance explained by the SF-36 total scale over the six waves of data collection ranged from 58.5% to 62.1%, and the unexplained variance in the first contrast ranged from 11.9% to 14.5%. The residual analysis indicated that no second dimension or factor existed. Linacre [32] suggests that a first factor accounting for 60% or more of the variance indicates a reasonably unidimensional construct. “A second factor or residual factor should not indicate a substantial amount of variance if unidimensionality is tenable” [33, p. 192].
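The "contrasts" reported in the residual-variance tables are eigenvalues from this principal components analysis of standardized Rasch residuals. The sketch below is a minimal illustration under the same persons-by-items assumptions as the fit-statistic example; the function name is illustrative, and Winsteps performs this decomposition internally:

```python
import numpy as np

def residual_contrast_eigenvalues(observed, expected, variance):
    """Eigenvalues of the correlation matrix of standardized Rasch residuals.

    The largest eigenvalue corresponds to the 'first contrast'; by the
    criterion used in this paper, a first-contrast eigenvalue below 3.0
    is read as supporting unidimensionality.
    """
    z = (observed - expected) / np.sqrt(variance)   # standardized residuals
    corr = np.corrcoef(z, rowvar=False)             # item-by-item correlations
    return np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues
```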
Table 4. SF-36 total scale Rasch analysis of standardised residual variance in eigenvalue units for six waves of data collection.
**Waves 1-3**

| Variance component | Eigenvalue (W1) | Observed | Expected | Eigenvalue (W2) | Observed | Expected | Eigenvalue (W3) | Observed | Expected |
|---|---|---|---|---|---|---|---|---|---|
| Total raw variance in observations | 86.71 | 100.00% | 100.00% | 92.22 | 100.00% | 100.00% | 92.42 | 100.00% | 100.00% |
| Raw variance explained by measures | 50.71 | 58.50% | 58.70% | 56.22 | 61.00% | 61.10% | 56.42 | 61.00% | 61.30% |
| Raw variance explained by persons | 3.47 | 4.00% | 4.00% | 1.93 | 2.10% | 2.10% | 2.04 | 2.20% | 2.20% |
| Raw variance explained by items | 47.24 | 54.50% | 54.70% | 54.30 | 58.90% | 59.00% | 54.38 | 58.80% | 59.10% |
| Raw unexplained variance (total) | 36.00 | 41.50% | 41.30% | 36.00 | 39.00% | 38.90% | 36.00 | 39.00% | 38.70% |
| Unexplained variance in 1st contrast | 12.60 | 14.50% | 35.00% | 12.57 | 13.60% | 34.90% | 12.26 | 13.30% | 34.10% |
| Unexplained variance in 2nd contrast | 3.02 | 3.50% | 8.40% | 3.05 | 3.30% | 8.50% | 3.03 | 3.30% | 8.40% |
| Unexplained variance in 3rd contrast | 1.89 | 2.20% | 5.20% | 1.78 | 1.90% | 4.90% | 1.84 | 2.00% | 5.10% |
| Unexplained variance in 4th contrast | 1.59 | 1.80% | 4.40% | 1.54 | 1.70% | 4.30% | 1.50 | 1.60% | 4.20% |
| Unexplained variance in 5th contrast | 1.24 | 1.40% | 3.40% | 1.27 | 1.40% | 3.50% | 1.26 | 1.40% | 3.50% |

**Waves 4-6**

| Variance component | Eigenvalue (W4) | Observed | Expected | Eigenvalue (W5) | Observed | Expected | Eigenvalue (W6) | Observed | Expected |
|---|---|---|---|---|---|---|---|---|---|
| Total raw variance in observations | 92.10 | 100.00% | 100.00% | 91.96 | 100.00% | 100.00% | 94.92 | 100.00% | 100.00% |
| Raw variance explained by measures | 56.10 | 60.90% | 61.50% | 55.96 | 60.90% | 61.70% | 58.92 | 62.10% | 63.00% |
| Raw variance explained by persons | 3.59 | 3.90% | 3.90% | 4.05 | 4.40% | 4.50% | 4.57 | 4.80% | 4.90% |
| Raw variance explained by items | 52.51 | 57.00% | 57.60% | 51.91 | 56.40% | 57.20% | 54.35 | 57.30% | 58.10% |
| Raw unexplained variance (total) | 36.00 | 39.10% | 38.50% | 36.00 | 39.10% | 38.30% | 36.00 | 37.90% | 37.00% |
| Unexplained variance in 1st contrast | 12.41 | 13.50% | 34.50% | 12.08 | 13.10% | 33.60% | 11.33 | 11.90% | 31.50% |
| Unexplained variance in 2nd contrast | 3.06 | 3.30% | 8.50% | 3.20 | 3.50% | 8.90% | 3.22 | 3.40% | 8.90% |
| Unexplained variance in 3rd contrast | 1.88 | 2.00% | 5.20% | 1.95 | 2.10% | 5.40% | 2.17 | 2.30% | 6.00% |
| Unexplained variance in 4th contrast | 1.50 | 1.60% | 4.20% | 1.53 | 1.70% | 4.30% | 1.55 | 1.60% | 4.30% |
| Unexplained variance in 5th contrast | 1.27 | 1.40% | 3.50% | 1.25 | 1.40% | 3.50% | 1.30 | 1.40% | 3.60% |

Notes. a> 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c< 10% unexplained variance in the first contrast.

The point-measure correlation (PTMEA) ranges from +1 to −1, “with negative items suggesting improper scoring or not functioning as expected” [33, p. 192]. An inspection of the PTMEAs for the SF-36 total scale indicated that items GH01:Q1, SF01:Q6, BP01:Q7, and VT02:Q9E returned negative PTMEAs across most waves of data collection. The rest of the SF-36 total scale items had positive PTMEAs with acceptable values, supporting item-level polarity. The functioning of the six rating scale categories was examined for the SF-36 total scale. Rating scale frequency and percent indicated that all categories were used by the participants. The category use statistics are presented in Table 5. The category logit measures ranged from −3.19 to 2.86 (see Table 5). None of the infit MNSQ scores fell outside the 0.7-1.30 range, nor did any z-score fall outside the +2 to −2 range. The results indicated that the six-level rating scale used in the SF-36 total scale fits the predicted RMM appropriately (see Supplemental Figure 2), and the full range of ratings was used by the participants who completed the SF-36 total scale.
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that each response category was the most probable category for some part of the continuum.
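The note-"b" flags in the category-structure tables mark Andrich thresholds that decrease where an increase is expected. A small helper, with illustrative naming rather than any Winsteps facility, makes the check explicit:

```python
def disordered_thresholds(andrich_thresholds):
    """Return the (1-based) positions where a threshold decreases
    although an increase is expected, the condition flagged with
    note 'b' in the category-structure tables."""
    return [i + 1
            for i, (lo, hi) in enumerate(zip(andrich_thresholds,
                                             andrich_thresholds[1:]))
            if hi <= lo]

# Wave 1 of the SF-36 total scale (Table 5): -1.88, -.54, .61, .28, 1.53
print(disordered_thresholds([-1.88, -0.54, 0.61, 0.28, 1.53]))
# -> [3]: the fourth threshold (.28) falls below the third (.61),
# matching the 'b' flag on category 5 in wave 1.
```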
Table 5. SF-36 total scale Rasch analysis summary of category structure for six waves of data collection.
**Waves 1-3**

| Category | N (W1) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W2) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W3) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 84119 | 19 | (-3.15) | 1.30 | 1.17 | NONE | 72530 | 19 | (-3.19) | 1.28 | 1.17 | NONE | 61757 | 20 | (-3.14) | 1.25 | 1.15 | NONE |
| 2 | 133566 | 30 | -1.36 | .99 | 1.01 | -1.88 | 114964 | 31 | -1.40 | 1.02 | 1.06 | -1.93 | 88701 | 29 | -1.39 | 1.04 | 1.08 | -1.87 |
| 3 | 96735 | 22 | -.25 | .66 | .61 | -.54 | 82817 | 22 | -.26 | .67 | .63 | -.57 | 63285 | 20 | -.28 | .73 | .67 | -.57 |
| 4 | 40204 | 9 | .48 | .98 | 1.04 | .61 | 34325 | 9 | .50 | .97 | 1.03 | .61 | 29486 | 9 | .48 | .94 | .96 | .48 |
| 5 | 44040 | 10 | 1.31 | 1.06 | 1.17 | .28b | 37154 | 10 | 1.34 | 1.07 | 1.20 | .34b | 28991 | 9 | 1.34 | 1.09 | 1.16 | .41b |
| 6 | 25211 | 6 | (2.82) | .93 | 1.01 | 1.53 | 23593 | 6 | (2.86) | .92 | .99 | 1.56 | 18470 | 6 | (2.86) | .87 | .93 | 1.55 |

**Waves 4-6**

| Category | N (W4) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W5) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W6) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 55809 | 22 | (-3.06) | 1.24 | 1.15 | NONE | 47041 | 24 | (-3.00) | 1.26 | 1.16 | NONE | 36377 | 25 | (-2.96) | 1.23 | 1.14 | NONE |
| 2 | 72286 | 28 | -1.35 | 1.05 | 1.10 | -1.79 | 54905 | 27 | -1.31 | 1.06 | 1.09 | -1.72 | 37987 | 26 | -1.30 | 1.06 | 1.07 | -1.67 |
| 3 | 50125 | 19 | -.30 | .78 | .71 | -.54 | 36216 | 18 | -.30 | .83 | .76 | -.49 | 24952 | 17 | -.31 | .88 | .83 | -.49 |
| 4 | 25807 | 10 | .45 | .90 | .88 | .38 | 21172 | 11 | .43 | .87 | .81 | .27 | 15554 | 11 | .42 | .87 | .79 | .23 |
| 5 | 24560 | 10 | 1.32 | 1.10 | 1.13 | .41 | 18847 | 9 | 1.29 | 1.13 | 1.15 | .46 | 13560 | 9 | 1.29 | 1.15 | 1.15 | .50 |
| 6 | 15297 | 6 | (2.85) | .86 | .91 | 1.54 | 11583 | 6 | (2.80) | .83 | .88 | 1.48 | 8495 | 6 | (2.78) | .83 | .88 | 1.44 |

Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

To investigate the possibility of item bias, differential item functioning (DIF) analysis was conducted to determine whether different groups of participants based on marital status and area of residence (urban versus regional; see Table 6) responded differently on the SF-36 total scale items, despite having the same level of the latent trait being measured [34]. Three of the SF-36 items exhibited a consistent pattern of DIF over the six waves of data collection for both marital status and area of residence, those being MH01:Q9B, MH02:Q9C, and MH05:Q9H. It should be noted that these three items also exhibited MNSQ infit scores outside the 0.7-1.30 range and/or a z-score outside the +2 to −2 range.
Table 6. Differential item functioning (DIF) for the SF-36 total scale Rasch analysis for six waves of data collection based on marital status and area of residence.
WAVE 1 WAVE 2 Wave 3 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 8.68 . 01 3 ∗ ∗ 0.00 .924 1.78 .407 0.19 .452 0.25 .882 0.00 .861 2 3.17 .202 0.00 .295 0.55 .760 0.00 .791 0.14 .936 0.00 .134 3 4.95 .083 0.00 .811 0.21 .903 0.20 .326 5.47 .064 0.00 .907 4 0.87 .647 0.00 .492 0.44 .804 0.00 . 01 0 ∗ 0.34 .847 0.00 .438 5 0.89 .640 0.05 . 00 1 ∗ ∗ ∗ 0.09 .959 -0.14 .983 0.40 .818 0.02 . 00 2 ∗ ∗ 6 0.47 .792 0.00 .142 4.67 .095 0.00 .288 0.57 .750 0.00 .619 7 0.06 .971 0.00 .687 2.94 .227 -0.01 .800 1.39 .496 -0.02 .054 8 0.32 .855 0.00 .362 0.27 .875 0.02 . 03 3 ∗ 0.06 .974 0.00 .243 9 13.66 . 00 1 ∗ ∗ ∗ -0.06 . 00 3 ∗ ∗ 2.06 .354 -0.14 .603 0.70 .704 -0.06 .072 10 11.27 .004∗∗ 0.00 . 03 0 ∗ 7.04 . 02 9 ∗ 0.06 . 00 1 ∗ ∗ ∗ 0.00 1.000 -0.06 . 00 6 ∗ ∗ 11 3.41 .179 0.00 .071 6.25 . 04 3 ∗ 0.04 .503 0.00 1.000 0.00 .473 12 0.16 .926 0.00 .906 0.10 .952 0.00 .981 0.69 .706 0.00 .722 13 2.93 .227 0.00 .845 0.09 .959 -0.19 .474 0.04 .982 0.00 .159 14 0.96 .618 0.00 .327 3.10 .210 0.00 .822 6.13 . 04 6 ∗ -0.05 .126 15 2.37 .303 0.00 .366 0.06 .970 0.07 .660 0.38 .828 0.00 .815 16 0.00 1.000 0.00 .591 0.05 .976 0.00 .358 4.14 .124 0.00 .317 17 1.80 .404 0.00 .581 0.10 .952 -0.02 .956 0.03 .987 0.00 .475 18 3.78 .149 0.00 .704 0.52 .770 -0.02 .238 0.62 .731 0.00 .571 19 1.54 .460 0.00 .892 0.23 .893 -0.10 .836 0.06 .971 0.00 .882 20 55.71 .001∗∗∗ -0.07 .036 7.62 . 02 2 ∗ 0.00 .526 0.06 .970 0.00 .088 21 3.24 .195 -0.06 .011∗ 1.17 .554 0.15 .087 0.00 1.000 0.00 .784 22 33.92 . 00 1 ∗ ∗ ∗ 0.00 .239 4.09 .127 0.00 .661 1.52 .465 0.03 .649 23 7.12 . 02 8 ∗ 0.00 .100 2.90 .231 -0.06 .436 0.18 .916 0.00 .498 24 23.59 . 00 1 ∗ ∗ ∗ 0.11 . 00 1 ∗ ∗ ∗ 0.12 .942 0.00 .993 13.40 . 00 1 ∗ ∗ ∗ 0.00 .106 25 64.84 . 00 1 ∗ ∗ ∗ 0.15 . 00 1 ∗ ∗ ∗ 30.23 . 00 1 ∗ ∗ ∗ -0.38 .099 0.01 .997 0.07 . 02 0 ∗ 26 10.47 . 00 5 ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 2.13 .341 0.00 .512 28.34 . 00 1 ∗ ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 27 13.71 . 00 1 ∗ ∗ ∗ 0.00 .778 9.09 . 01 0 ∗ ∗ -0.07 .924 0.85 .651 0.00 .914 28 18.73 . 00 1 ∗ ∗ ∗ 0.10 . 00 1 ∗ ∗ ∗ 18.70 . 00 1 ∗ ∗ ∗ 0.00 .590 0.79 .671 0.05 . 00 3 ∗ ∗ 29 9.31 . 00 9 ∗ ∗ 0.00 . 04 7 ∗ 10.32 . 00 6 ∗ ∗ -0.43 .214 13.34 . 00 1 ∗ ∗ ∗ 0.00 .720 30 14.58 . 00 1 ∗ ∗ -0.06 . 00 8 ∗ ∗ 14.57 . 00 1 ∗ ∗ ∗ 0.00 .403 3.83 .145 -0.07 . 00 1 ∗ ∗ ∗ 31 7.38 . 02 4 ∗ 0.02 . 01 0 ∗ ∗ 1.40 .493 -0.23 .687 2.57 .273 0.00 .108 32 18.09 . 00 1 ∗ ∗ ∗ 0.00 . 02 8 ∗ 0.65 .720 0.00 .908 6.31 . 04 2 ∗ 0.00 .422 33 15.01 . 00 1 ∗ ∗ ∗ 0.02 . 00 5 ∗ ∗ 0.40 .820 -0.14 .541 0.00 1.000 0.02 . 00 6 ∗ ∗ 34 30.39 . 00 1 ∗ ∗ ∗ 0.00 .963 1.61 .443 0.00 .937 6.57 . 03 7 ∗ 0.00 .823 35 9.62 . 00 8 ∗ ∗ 0.00 .606 0.37 .833 -0.26 .284 0.31 .859 0.02 . 02 0 ∗ 36 29.49 . 00 1 ∗ ∗ ∗ -0.06 . 01 6 ∗ 1.11 .571 0.00 .357 5.88 .052 -0.02 . 04 2 ∗ Wave 4 Wave 5 Wave 6 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 0.22 .898 0.00 .261 4.26 .117 0.14 .769 1.14 .560 0.11 .198 2 0.35 .841 0.00 . 
00 9 ∗ ∗ 2.92 .229 0.00 .976 0.15 .930 0.00 .275 3 0.25 .882 0.00 .078 2.24 .323 -0.02 .403 5.55 .060 0.00 .613 4 0.03 .987 0.00 .145 1.97 .369 0.00 .756 4.67 .100 0.00 .027 5 0.66 .716 0.06 . 00 1 ∗ ∗ ∗ 0.46 .795 0.27 .618 1.40 .490 0.43 .083 6 6.42 . 03 9 ∗ 0.00 .555 5.78 .054 0.00 .271 11.47 . 0 1 ∗ ∗ ∗ -0.14 .427 7 2.74 .251 0.00 .705 4.04 .130 -0.20 .574 8.64 . 0 1 ∗ ∗ 0.04 .948 8 1.19 .549 0.00 .165 0.87 .645 0.05 . 03 9 ∗ 1.17 .560 0.00 .117 9 4.04 .130 -0.04 .998 1.92 .379 -0.20 .371 3.26 .190 -0.04 .752 10 1.85 .392 0.00 .894 2.52 .280 0.11 . 00 1 ∗ ∗ ∗ 2.21 .330 0.10 . 00 1 ∗ ∗ ∗ 11 3.41 .179 0.00 .186 2.23 .324 -0.31 .823 1.92 .380 -0.08 .821 12 0.08 .965 0.00 .394 0.00 1.000 -0.02 .598 1.52 .460 0.00 .357 13 0.16 .927 0.00 .214 0.01 .998 0.01 .649 1.11 .570 -0.04 .916 14 0.03 .986 0.00 .368 1.12 .569 -0.06 . 03 3 ∗ 1.21 .540 -0.07 .274 15 3.06 .214 0.00 .611 0.00 1.000 -0.06 .860 0.86 .650 -0.09 .225 16 2.99 .221 0.00 .578 1.01 .602 0.00 .833 1.66 .430 0.00 .499 17 0.57 .753 0.00 .475 1.03 .594 -0.10 .754 0.64 .730 0.05 .290 18 0.08 .961 0.00 .671 4.27 .116 -0.07 .210 0.13 .940 -0.08 .987 19 0.11 .947 0.00 .420 2.78 .246 -0.08 .828 0.19 .910 -0.08 .986 20 0.36 .837 0.00 .089 5.27 .070 -0.05 .120 4.98 .080 -0.07 .758 21 1.67 .430 0.00 .169 1.16 .556 0.10 .439 0.21 .900 0.10 . 04 6 ∗ 22 0.54 .762 0.00 . 04 9 ∗ 2.89 .233 0.00 .446 1.95 .370 0.00 .874 23 21.23 .001∗∗∗ 0.07 . 00 2 ∗ ∗ 0.50 .777 -0.07 .442 1.02 .600 0.00 .409 24 0.63 .730 0.00 .143 22.80 . 00 1 ∗ ∗ ∗ 0.00 .897 6.77 .030∗ 0.02 .084 25 13.68 . 00 1 ∗ ∗ ∗ 0.00 .098 11.59 . 00 3 ∗ ∗ 0.00 .638 1.33 .510 -0.06 .169 26 0.41 .817 0.00 . 02 1 ∗ 8.24 . 01 6 ∗ 0.02 .274 1.17 .550 -0.03 .566 27 0.48 .787 0.00 .163 1.40 .494 0.22 .323 4.61 .100 0.18 .521 28 9.62 . 00 8 ∗ ∗ 0.00 .890 3.16 .203 -0.07 .109 0.04 .980 -0.11 .169 29 0.05 .979 -0.06 . 00 8 ∗ ∗ 4.42 .108 0.30 .161 2.07 .350 0.30 .104 30 2.04 .357 0.00 . 03 5 ∗ 6.85 . 03 2 ∗ 0.00 .859 1.79 .400 0.00 .517 31 0.47 .789 0.00 .068 6.88 . 03 1 ∗ 0.33 .165 0.00 1.000 0.12 .985 32 0.16 .923 0.00 .477 2.37 .302 0.00 .478 0.07 .970 -0.05 .889 33 3.33 .186 0.00 .180 5.40 .066 0.05 .851 0.45 .800 -0.20 . 00 1 ∗ ∗ ∗ 34 0.00 1.000 -0.03 . 01 0 ∗ 1.00 .605 0.00 .477 0.49 .780 0.00 .999 35 0.00 1.000 -0.04 .808 0.54 .764 0.27 .217 2.08 .350 -0.21 . 00 6 ∗ ∗ 36 2.74 .251 0.00 .065 5.82 .054 0.00 .495 1.44 .480 -0.03 .508 Notes. PROB. = probability; p∗≤.05; p∗∗≤.01; p∗∗∗≤.001.
## 3.2. SF-36 Physical Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 physical health items were included in the initial analysis using the RMM: GH01:Q1, PF01:Q3A, PF02:Q3B, PF03:Q3C, PF04:Q3D, PF05:Q3E, PF06:Q3F, PF07:Q3G, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, BP01:Q7, BP02:Q8, GH02:Q11A, GH03:Q11B, GH04:Q11C, and GH05:Q11D (see Table 7). When the 21 SF-36 physical health items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.18 to 2.66 and outfit statistics ranging from 0.19 to 2.77 (see Table 8). The mean item measure was 0.00 logits (SD = 0.99). With respect to logit measures, there was a broad range, the lowest value being −2.49 and the highest value being +1.79 (see Table 9). This resulted in an average item separation index of 60.32 and an average item reliability of 1.00 over the six waves of data collection (see Table 9). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
Table 7. SF-36 physical health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -0.84 0.01 -0.21 -0.92 0.01 -0.24 -0.91 0.01 -0.18 3:Q3A 1.62 0.02 0.37 1.58 0.02 0.44 1.63 0.02 0.43 4:Q3B 0.02 0.01 0.59 0.00 0.01 0.61 0.13 0.01 0.60 5:Q3C -0.05 0.01 0.59 -0.08 0.01 0.60 -0.08 0.01 0.59 6:Q3D 0.52 0.01 0.60 0.48 0.01 0.63 0.55 0.01 0.62 7:Q3E -0.24 0.01 0.65 -0.27 0.01 0.67 -0.19 0.01 0.67 8:Q3F 0.21 0.01 0.57 0.26 0.01 0.58 0.33 0.01 0.54 9:Q3G 0.11 0.01 0.64 0.15 0.01 0.66 0.34 0.01 0.63 10:Q3H -0.34 0.01 0.67 -0.34 0.01 0.68 -0.19 0.01 0.67 11:Q3I -0.57 0.01 0.59 -0.62 0.01 0.59 -0.54 0.01 0.61 12:Q3J -0.74 0.01 0.43 -0.84 0.01 0.40 -0.80 0.01 0.40 13:Q4A 0.96 0.01 0.42 0.95 0.01 0.41 1.02 0.02 0.38 14:Q4B 1.32 0.01 0.42 1.37 0.02 0.43 1.46 0.02 0.39 15:Q4C 1.16 0.01 0.46 1.17 0.01 0.50 1.20 0.02 0.42 16:Q4D 1.20 0.01 0.44 1.22 0.02 0.47 1.28 0.02 0.42 21:Q7 -0.74 0.01 -0.05 0.99 0.01 -0.19 -0.95 0.01 -0.02 22:Q8 0.36 0.01 -0.18 -0.78 0.01 -0.15 0.04 0.01 -0.14 33:Q11A -2.22 0.01 0.34 -2.49 0.01 0.34 -2.46 0.01 0.32 34:Q11B -0.07 0.01 0.02 0.04 0.01 -0.07 -0.09 0.01 0.02 35:Q11C -1.24 0.01 0.38 -1.42 0.01 0.38 -1.27 0.01 0.37 36:Q11D -0.42 0.01 -0.09 -0.45 0.01 -0.18 -0.50 0.01 -0.08 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -1.10 0.01 -0.14 -1.21 0.01 -0.13 -1.32 0.02 -0.06 3:Q3A 1.64 0.02 0.42 1.63 0.03 0.42 1.79 0.04 0.39 4:Q3B 0.26 0.02 0.60 0.33 0.02 0.60 0.47 0.02 0.57 5:Q3C 0.02 0.01 0.60 0.05 0.02 0.58 0.09 0.02 0.56 6:Q3D 0.59 0.02 0.62 0.67 0.02 0.62 0.77 0.02 0.59 7:Q3E -0.16 0.01 0.65 -0.09 0.02 0.65 -0.02 0.02 0.64 8:Q3F 0.32 0.02 0.57 0.30 0.02 0.57 0.34 0.02 0.53 9:Q3G 0.46 0.02 0.63 0.62 0.02 0.61 0.76 0.02 0.60 10:Q3H -0.07 0.01 0.66 0.04 0.02 0.65 0.17 0.02 0.63 11:Q3I -0.50 0.01 0.58 -0.43 0.02 0.58 -0.40 0.02 0.58 12:Q3J -0.76 0.01 0.42 -0.75 0.01 0.41 -0.79 0.02 0.39 13:Q4A 1.01 0.02 0.37 1.03 0.02 0.34 0.99 0.03 0.34 14:Q4B 1.47 0.02 0.38 1.47 0.03 0.35 1.45 0.03 0.33 15:Q4C 1.27 0.02 0.43 1.29 0.02 0.40 1.29 0.03 0.35 16:Q4D 1.31 0.02 0.41 1.33 0.02 0.39 1.32 0.03 0.36 21:Q7 -1.08 0.01 -0.01 -1.17 0.01 0.01 -1.31 0.02 0.08 22:Q8 -0.19 0.01 -0.13 -0.35 0.01 -0.06 -0.53 0.02 -0.02 33:Q11A -2.45 0.01 0.28 -2.43 0.02 0.26 -2.49 0.02 0.21 34:Q11B -0.20 0.01 0.01 -0.38 0.02 0.06 -0.53 0.02 0.11 35:Q11C -1.19 0.01 0.34 -1.08 0.01 0.33 -1.08 0.02 0.31 36:Q11D -0.68 0.01 -0.09 -0.85 0.01 -0.04 -0.98 0.02 0.02 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 8
SF-36 Physical health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.24 9.9 1.30 9.9 1.24 9.9 1.29 9.9 1.16 9.9 1.20 9.9 3:Q3A 0.93 -4.6 0.90 -6.3 0.97 -1.8 0.90 -6.2 0.93 -3.7 0.85 -7.6 4:Q3B 0.57 -9.9 0.59 -9.9 0.59 -9.9 0.60 -9.9 0.64 -9.9 0.65 -9.9 5:Q3C 0.53 -9.9 0.54 -9.9 0.53 -9.9 0.54 -9.9 0.54 -9.9 0.56 -9.9 6:Q3D 0.72 -9.9 0.73 -9.9 0.71 -9.9 0.71 -9.9 0.72 -9.9 0.71 -9.9 7:Q3E 0.44 -9.9 0.46 -9.9 0.45 -9.9 0.47 -9.9 0.48 -9.9 0.50 -9.9 8:Q3F 0.62 -9.9 0.63 -9.9 0.64 -9.9 0.64 -9.9 0.67 -9.9 0.67 -9.9 9:Q3G 0.71 -9.9 0.72 -9.9 0.73 -9.9 0.75 -9.9 0.81 -9.9 0.81 -9.9 10:Q3H 0.45 -9.9 0.49 -9.9 0.50 -9.9 0.53 -9.9 0.59 -9.9 0.62 -9.9 11:Q3I 0.28 -9.9 0.32 -9.9 0.30 -9.9 0.33 -9.9 0.36 -9.9 0.39 -9.9 12:Q3J 0.21 -9.9 0.23 -9.9 0.18 -9.9 0.19 -9.9 0.21 -9.9 0.23 -9.9 13:Q4A 0.36 -9.9 0.40 -9.9 0.37 -9.9 0.40 -9.9 0.44 -9.9 0.48 -9.9 14:Q4B 0.51 -9.9 0.53 -9.9 0.53 -9.9 0.54 -9.9 0.59 -9.9 0.60 -9.9 15:Q4C 0.44 -9.9 0.47 -9.9 0.43 -9.9 0.45 -9.9 0.49 -9.9 0.52 -9.9 16:Q4D 0.46 -9.9 0.49 -9.9 0.46 -9.9 0.48 -9.9 0.52 -9.9 0.55 -9.9 21:Q7 2.33 9.9 2.40 9.9 2.51 9.9 2.77 9.9 2.20 9.9 2.23 9.9 22:Q8 2.29 9.9 2.39 9.9 2.66 9.9 2.72 9.9 2.23 9.9 2.29 9.9 33:Q11A 1.24 9.9 1.20 9.9 1.10 7.1 1.06 3.9 1.12 7.2 1.08 4.4 34:Q11B 2.18 9.9 2.20 9.9 2.07 9.9 2.09 9.9 1.88 9.9 1.89 9.9 35:Q11C 1.25 9.9 1.26 9.9 1.16 9.9 1.17 9.9 1.17 9.9 1.18 9.9 36:Q11D 2.21 9.9 2.28 9.9 2.36 9.9 2.41 9.9 2.12 9.9 2.15 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.02 1.4 1.04 2.9 0.99 -0.8 1.00 0.0 0.92 -4.1 0.93 -3.4 3:Q3A 0.98 -0.9 0.87 -5.2 1.04 1.5 0.92 -2.8 1.12 3.1 0.95 -1.1 4:Q3B 0.67 -9.9 0.67 -9.9 0.69 -9.9 0.69 -9.9 0.76 -9.9 0.74 -9.9 5:Q3C 0.56 -9.9 0.57 -9.9 0.57 -9.9 0.58 -9.9 0.62 -9.9 0.63 -9.9 6:Q3D 0.72 -9.9 0.71 -9.9 0.76 -9.9 0.73 -9.9 0.80 -8.2 0.74 -9.9 7:Q3E 0.49 -9.9 0.51 -9.9 0.53 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 8:Q3F 0.63 -9.9 0.64 -9.9 0.62 -9.9 0.62 -9.9 0.65 -9.9 0.66 -9.9 9:Q3G 0.83 -9.9 0.81 -9.9 0.86 -7.0 0.83 -8.8 0.90 -3.9 0.83 -6.7 10:Q3H 0.66 -9.9 0.68 -9.9 0.72 -9.9 0.73 -9.9 0.79 -9.4 0.79 -9.5 11:Q3I 0.39 -9.9 0.42 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 0.53 -9.9 12:Q3J 0.28 -9.9 0.30 -9.9 0.33 -9.9 0.35 -9.9 0.37 -9.9 0.40 -9.9 13:Q4A 0.47 -9.9 0.51 -9.9 0.53 -9.9 0.58 -9.9 0.55 -9.9 0.59 -9.9 14:Q4B 0.62 -9.9 0.63 -9.9 0.65 -9.9 0.66 -9.9 0.68 -9.9 0.69 -9.9 15:Q4C 0.54 -9.9 0.57 -9.9 0.59 -9.9 0.61 -9.9 0.63 -9.9 0.65 -9.9 16:Q4D 0.56 -9.9 0.59 -9.9 0.60 -9.9 0.62 -9.9 0.63 -9.9 0.65 -9.9 21:Q7 2.08 9.9 2.09 9.9 1.96 9.9 1.96 9.9 1.79 9.9 1.79 9.9 22:Q8 2.07 9.9 2.11 9.9 1.95 9.9 1.97 9.9 1.85 9.9 1.86 9.9 33:Q11A 1.13 7.2 1.11 5.6 1.13 6.3 1.11 5.1 1.18 7.2 1.20 7.5 34:Q11B 1.85 9.9 1.86 9.9 1.69 9.9 1.69 9.9 1.62 9.9 1.59 9.9 35:Q11C 1.18 9.9 1.19 9.9 1.15 8.0 1.16 8.4 1.13 5.9 1.13 6.1 36:Q11D 2.05 9.9 2.09 9.9 1.95 9.9 1.96 9.9 1.83 9.9 1.82 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 9
SF-36 physical health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
**Persons**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | -1.77 | -1.85 | -1.90 | -1.92 | -1.95 | -2.07 |
| S.D. | .38 | .37 | .40 | .39 | .39 | .40 |
| Max | 1.54 | -.37 | .40 | .92 | -.52 | .40 |
| Min | -5.13 | -4.11 | -.09 | -5.08 | -4.52 | -.79 |
| Infit MNSQ | 1.05 | 1.04 | 1.05 | 1.05 | 1.05 | 1.05 |
| Infit ZSTD | -.10 | -.20 | -.10 | .00 | .00 | .00 |
| Outfit MNSQ | 1.00 | 1.01 | .98 | .97 | .96 | .96 |
| Outfit ZSTD | -.30 | -.30 | -.30 | -.20 | -.20 | -.20 |
| Separation | .86c | .88c | .97c | .96c | .96c | .96c |
| Reliability | .43a | .43a | .48a | .48a | .48a | .48a |

**Items**

| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
|---|---|---|---|---|---|---|
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | .91 | .99 | .98 | 1.00 | 1.02 | 1.08 |
| Max | 1.62 | 1.58 | 1.63 | 1.64 | 1.63 | 1.79 |
| Min | -2.22 | -2.49 | -2.46 | -2.45 | -2.43 | -2.49 |
| Infit MNSQ | .95 | .98 | .95 | .94 | .94 | .95 |
| Infit ZSTD | -3.00 | -3.00 | -3.10 | -3.40 | -3.40 | -3.30 |
| Outfit MNSQ | .98 | 1.00 | .96 | .95 | .94 | .94 |
| Outfit ZSTD | -3.10 | -3.40 | -3.50 | -3.60 | -3.70 | -3.60 |
| Separation | 71.24 | 69.37 | 63.25 | 59.41 | 52.87 | 45.77 |
| Reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 physical health scale person-item map is located in Supplemental Figure 3 and provides evidence of the hierarchical ordering of the SF-36 physical health scale items. Items which are easier are located at the bottom of the SF-36 physical health person-item map while more difficult items are located at the top of the map. The patterns of more challenging items and less difficult items on the person-item map for each of the six waves of data collection appear to be fairly consistent. It should also be noted that several of the SF-36 physical health scale items have the same level of difficulty. The average person measure was −1.91 logits (SD = 0.39) over the six waves of data collection (see Table 9). The mean person separation was 0.93 with a mean reliability of 0.46 (see Table 9). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 physical health construct. When examining the overall RMM output of the SF-36 physical health scale, the average person measure (−1.91 logits) was lower than the average item measure (0.00 logits). The range of logit values for items was from +1.79 to −2.49. The person reliability was 0.46 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 physical health scale in the acceptable range and the person reliability in the less than desired range. The SF-36 physical health scale has a six-category rating scale, which generates five thresholds. The Rasch analysis shows that the average category measures increase monotonically, from −3.86, −2.13, −.83, .10, and 1.96 to 5.32 for wave one and from −3.64, −2.02, −.91, .01, and 2.00 to 5.24 for wave six. Item fit to the unidimensionality requirement of the RMM was also examined. Seven of the 21 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to −2 range. Therefore, items GH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 8). In other words, only 7/21 (33.3%) of the SF-36 physical health scale items met the RMM requirements. The following items had an infit MNSQ statistic of less than 0.70: PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, and RP04:Q4D.
The following items had an infit MNSQ statistic greater than 1.30: BP01:Q7, BP02:Q8, GH03:Q11B, and GH05:Q11D. An inspection of the PTMEAs for the SF-36 physical health scale indicated that items GH01:Q1, BP01:Q7, BP02:Q8, and GH05:Q11D had negative PTMEAs across most waves of data collection. For all other items, the PTMEA correlations had acceptable values. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 10). This indicated that the unidimensionality requirement of the SF-36 physical health scale was met. The raw variance explained by the SF-36 physical health scale over the six waves of data collection ranged from 41.6% to 48.9%, and the unexplained variance in the first contrast ranged from 17.4% to 22.4%. The residual analysis indicated that no second dimension or factor existed.
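The PTMEA correlations inspected here are ordinary Pearson correlations between observed item scores and the Rasch person measures. A minimal sketch follows; the function name is illustrative, and Winsteps' version may exclude the target item when forming the person measure:

```python
import numpy as np

def point_measure_correlation(item_scores, person_measures):
    """PTMEA CORR: correlation between one item's observed scores and the
    Rasch person measures; negative values suggest the item is not working
    in the same direction as the construct."""
    x = np.asarray(item_scores, dtype=float)
    y = np.asarray(person_measures, dtype=float)
    keep = ~np.isnan(x) & ~np.isnan(y)
    return np.corrcoef(x[keep], y[keep])[0, 1]
```

A persistently negative PTMEA, as seen for the pain and general health items above, is usually a cue to check item scoring direction before drawing substantive conclusions.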
Table 10: SF-36 physical health scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
**Waves 1–3:**

| Variance component | W1 Eigenvalue | W1 Observed | W1 Expected | W2 Eigenvalue | W2 Observed | W2 Expected | W3 Eigenvalue | W3 Observed | W3 Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 36.07 | 100.00% | 100.00% | 36.07 | 100.00% | 100.00% | 37.89 | 100.00% | 100.00% |
| Raw variance explained by measures | 15.07 | 41.80% | 42.50% | 15.07 | 41.80% | 42.50% | 16.89 | 44.60% | 45.70% |
| Raw variance explained by persons | 1.90 | 5.30% | 5.40% | 1.90 | 5.30% | 5.40% | 0.96 | 2.50% | 2.60% |
| Raw variance explained by items | 13.16 | 36.50% | 37.10% | 13.16 | 36.50% | 37.10% | 15.94 | 42.10% | 43.10% |
| Raw unexplained variance (total) | 21.00 | 58.20% | 57.50% | 21.00 | 58.20% | 57.50% | 21.00 | 55.40% | 54.30% |
| Unexplained variance in 1st contrast | 8.00 | 22.20% | 38.10% | 8.00 | 22.20% | 38.10% | 7.70 | 20.30% | 36.70% |
| Unexplained variance in 2nd contrast | 2.02 | 5.60% | 9.60% | 2.02 | 5.60% | 9.60% | 1.96 | 5.20% | 9.40% |
| Unexplained variance in 3rd contrast | 1.51 | 4.20% | 7.20% | 1.51 | 4.20% | 7.20% | 1.44 | 3.80% | 6.90% |
| Unexplained variance in 4th contrast | 1.31 | 3.60% | 6.20% | 1.31 | 3.60% | 6.20% | 1.23 | 3.20% | 5.80% |
| Unexplained variance in 5th contrast | 0.99 | 2.80% | 4.70% | 0.99 | 2.80% | 4.70% | 0.99 | 2.60% | 4.70% |

**Waves 4–6:**

| Variance component | W4 Eigenvalue | W4 Observed | W4 Expected | W5 Eigenvalue | W5 Observed | W5 Expected | W6 Eigenvalue | W6 Observed | W6 Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 37.07 | 100.00% | 100.00% | 39.34 | 100.00% | 100.00% | 41.08 | 100.00% | 100.00% |
| Raw variance explained by measures | 17.07 | 46.10% | 48.00% | 18.34 | 46.60% | 48.60% | 20.08 | 48.90% | 51.10% |
| Raw variance explained by persons | 2.45 | 6.60% | 6.90% | 2.42 | 6.10% | 6.40% | 2.68 | 6.50% | 6.80% |
| Raw variance explained by items | 14.62 | 39.40% | 41.10% | 15.92 | 40.50% | 42.20% | 17.40 | 42.30% | 44.20% |
| Raw unexplained variance (total) | 20.00 | 53.90% | 52.00% | 21.00 | 53.40% | 51.40% | 21.00 | 51.10% | 48.90% |
| Unexplained variance in 1st contrast | 6.64 | 17.90% | 33.20% | 7.50 | 19.10% | 35.70% | 7.14 | 17.40% | 34.00% |
| Unexplained variance in 2nd contrast | 2.10 | 5.70% | 10.50% | 2.06 | 5.20% | 9.80% | 2.24 | 5.50% | 10.70% |
| Unexplained variance in 3rd contrast | 1.54 | 4.20% | 7.70% | 1.58 | 4.00% | 7.50% | 1.56 | 3.80% | 7.40% |
| Unexplained variance in 4th contrast | 1.26 | 3.40% | 6.30% | 1.21 | 3.10% | 5.80% | 1.20 | 2.90% | 5.70% |
| Unexplained variance in 5th contrast | 1.07 | 2.90% | 5.30% | 1.03 | 2.60% | 4.90% | 1.04 | 2.50% | 4.90% |

Notes. ^a > 60% unexplained variance in the Rasch factor; ^b Eigenvalue in the first contrast < 3.0; ^c < 10% unexplained variance in the first contrast.

The functioning of the six rating scale categories was examined for the SF-36 physical health scale. The category logit measures ranged from -3.86 to 5.43 (see Table 11). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7–1.30 range and/or a z-score that fell outside the -2 to +2 range over the six waves of data collection, this being category six. The infit MNSQ scores for this rating category ranged from 2.03 to 3.18 (see Table 11). The results indicated that the six-level rating scale used in the SF-36 physical health scale might not be the most robust to use (see Supplemental Figure 3); however, the full range of ratings was used by the participants who completed the SF-36 physical health scale. The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the first five response categories were each the most probable category for some part of the continuum. Rating category six was problematic.
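The disordering flagged with note (b) in Table 11 below can be checked mechanically, since Andrich thresholds are expected to advance from category to category. A small sketch of such a check, using the wave 1 thresholds from Table 11:

```python
def disordered_steps(thresholds: list[float]) -> list[int]:
    """Return the positions at which an Andrich threshold fails to
    increase over its predecessor (category disordering)."""
    return [i for i in range(1, len(thresholds))
            if thresholds[i] <= thresholds[i - 1]]

# Wave 1 Andrich thresholds for categories 2-6 (Table 11):
wave1 = [-2.56, -1.54, 0.59, -0.71, 4.21]
print(disordered_steps(wave1))  # [3]: the step to category 5 (-0.71) is disordered
```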
Table 11: SF-36 physical health scale Rasch analysis of summary of category structure for six waves of data collection.
Avg. = average measure; Infit/Outfit = MnSq; Thresh. = Andrich threshold.

**Waves 1–3:**

| Cat. | W1 N | W1 % | W1 Avg. | W1 Infit | W1 Outfit | W1 Thresh. | W2 N | W2 % | W2 Avg. | W2 Infit | W2 Outfit | W2 Thresh. | W3 N | W3 % | W3 Avg. | W3 Infit | W3 Outfit | W3 Thresh. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 60721 | 23 | (-3.86) | 1.18 | 1.11 | NONE | 55350 | 25 | (-3.92) | 1.12 | 1.07 | NONE | 46995 | 28 | (-3.86) | 1.14 | 1.08 | NONE |
| 2 | 83039 | 32 | -2.13 | .93 | .94 | -2.56 | 70454 | 32 | -2.19 | .93 | .92 | -2.60 | 54692 | 32 | -2.17 | 1.00 | .99 | -2.54 |
| 3 | 73299 | 28 | -.83 | .66 | .59 | -1.54 | 62780 | 29 | -.85 | .66 | .60 | -1.61 | 46905 | 28 | -.90 | .72 | .62 | -1.61 |
| 4 | 12957 | 5 | .10 | 1.19 | 1.26 | .59 | 11389 | 5 | .14 | 1.21 | 1.38 | .53 | 9720 | 6 | .07 | 1.11 | 1.11 | .38 |
| 5 | 15144 | 6 | 1.96 | 1.05 | 1.15 | -.71^b | 12634 | 6 | 2.03 | 1.13 | 1.33 | -.55^b | 9942 | 6 | 2.05 | 1.08 | 1.17 | -.56^b |
| 6 | 238 | 0 | (5.32) | 2.67 | 2.61 | 4.21 | 233 | 0 | (5.34) | 3.18 | 2.98 | 4.23 | 155 | 0 | (5.43) | 2.77 | 2.30 | 4.33 |

**Waves 4–6:**

| Cat. | W4 N | W4 % | W4 Avg. | W4 Infit | W4 Outfit | W4 Thresh. | W5 N | W5 % | W5 Avg. | W5 Infit | W5 Outfit | W5 Thresh. | W6 N | W6 % | W6 Avg. | W6 Infit | W6 Outfit | W6 Thresh. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 42924 | 29 | (-3.75) | 1.15 | 1.08 | NONE | 36502 | 31 | (-3.66) | 1.15 | 1.08 | NONE | 28787 | 36 | (-3.64) | 1.15 | 1.08 | NONE |
| 2 | 44603 | 30 | -2.09 | 1.05 | 1.02 | -2.42 | 33751 | 29 | -2.01 | 1.08 | 1.01 | -2.33 | 23233 | 29 | -2.02 | 1.10 | 1.01 | -2.30 |
| 3 | 36071 | 24 | -.89 | .76 | .64 | -1.52 | 25389 | 22 | -.87 | .82 | .68 | -1.44 | 16930 | 21 | -.91 | .86 | .73 | -1.46 |
| 4 | 8958 | 6 | .06 | 1.00 | .96 | .24 | 7310 | 6 | .06 | .94 | .86 | .13 | 5353 | 7 | .01 | .91 | .80 | .03 |
| 5 | 8592 | 6 | 2.01 | 1.10 | 1.16 | -.48^b | 6464 | 6 | 1.96 | 1.09 | 1.12 | -.38^b | 4694 | 6 | 2.00 | 1.09 | 1.10 | -.39^b |
| 6 | 150 | 0 | (5.29) | 2.54 | 2.02 | 4.18 | 129 | 0 | (5.12) | 2.25 | 1.75 | 4.02 | 80 | 0 | (5.24) | 2.03 | 1.52 | 4.13 |

Notes. ^a Andrich threshold category increase of > 5; ^b Andrich threshold decrease where an increase is expected; values in italics in the source have Infit or Outfit MnSq > 1.34; underlined values have Infit or Outfit MnSq < 0.64.

The Rasch output logit performance scores for the participants were compared to determine whether any of the SF-36 physical health scale items exhibited differential item functioning (DIF) based on marital status and area of residence (urban versus regional) (see Table 12). Four of the SF-36 physical health items exhibited a consistent pattern of DIF over the six waves of data collection. Item PF03:Q3C demonstrated DIF based on marital status alone, while items GH02:Q11A, GH04:Q11C, and GH05:Q11D exhibited DIF based on both marital status and area of residence (see Table 12). It should be noted that items GH02:Q11A and GH04:Q11C had infit MNSQ statistics that fell within the 0.70–1.30 range, while items PF03:Q3C and GH05:Q11D had MNSQ infit scores outside the 0.7–1.30 range and/or a z-score outside the -2 to +2 range. SF-36 physical health items PF03:Q3C and GH05:Q11D appear to be particularly problematic items based on the RMM analysis findings.
Table 12: Differential Item Functioning (DIF) for the SF-36 physical health scale Rasch analysis for six waves of data collection, based on marital status and area of residence.
For each wave, χ² and p give the summary DIF chi-square (DIF = 2) and its probability for marital status; Contrast and MH p give the DIF contrast and Mantel-Haenszel probability for urban versus regional residence.

**Waves 1–3:**

| SF36 item | W1 χ² | W1 p | W1 Contrast | W1 MH p | W2 χ² | W2 p | W2 Contrast | W2 MH p | W3 χ² | W3 p | W3 Contrast | W3 MH p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1:Q1 | 13.25 | .001∗∗∗ | 0.00 | .639 | 1.62 | .442 | 0.14 | .713 | 0.41 | .816 | 0.00 | .185 |
| 3:Q3A | 5.79 | .054 | 0.00 | .725 | 1.27 | .527 | 0.00 | .330 | 5.69 | .057 | 0.00 | .073 |
| 4:Q3B | 1.94 | .376 | 0.00 | .069 | 0.56 | .754 | 0.15 | .835 | 0.50 | .779 | 0.06 | .001∗∗∗ |
| 5:Q3C | 1.84 | .394 | 0.06 | .001∗∗∗ | 0.03 | .984 | 0.00 | .009∗∗ | 0.56 | .756 | 0.00 | .442 |
| 6:Q3D | 0.97 | .614 | 0.00 | .947 | 2.13 | .342 | -0.20 | .778 | 0.44 | .804 | 0.00 | .700 |
| 7:Q3E | 0.41 | .816 | 0.00 | .287 | 2.75 | .250 | 0.00 | .153 | 1.26 | .529 | 0.00 | .143 |
| 8:Q3F | 0.06 | .970 | 0.00 | .684 | 0.18 | .917 | -0.08 | .599 | 0.03 | .988 | -0.04 | .964 |
| 9:Q3G | 13.16 | .001∗∗∗ | -0.02 | .076 | 1.33 | .512 | 0.03 | .006∗∗ | 0.57 | .750 | 0.00 | .847 |
| 10:Q3H | 12.78 | .002∗∗ | 0.00 | .324 | 5.72 | .056 | -0.22 | .320 | 0.02 | .990 | 0.00 | .225 |
| 11:Q3I | 7.45 | .024∗ | 0.00 | .357 | 5.55 | .061 | 0.07 | .001∗∗∗ | 0.00 | 1.000 | 0.00 | .631 |
| 12:Q3J | 0.95 | .620 | 0.00 | .306 | 0.03 | .988 | -0.03 | .836 | 0.73 | .693 | 0.00 | .251 |
| 13:Q4A | 2.34 | .306 | 0.00 | .519 | 0.08 | .962 | 0.00 | .461 | 0.17 | .919 | 0.00 | .360 |
| 14:Q4B | 1.45 | .481 | 0.00 | .782 | 4.22 | .119 | -0.30 | .206 | 6.61 | .036∗ | 0.00 | .520 |
| 15:Q4C | 2.47 | .288 | 0.00 | .982 | 0.08 | .961 | 0.00 | .240 | 0.37 | .831 | 0.00 | .524 |
| 16:Q4D | 0.08 | .965 | 0.00 | .845 | 0.06 | .973 | 0.00 | .873 | 4.54 | .101 | 0.00 | .053 |
| 21:Q7 | 2.54 | .277 | -0.05 | .005∗∗ | 9.34 | .009∗∗ | 0.00 | .131 | 0.13 | .941 | 0.00 | .145 |
| 22:Q8 | 37.29 | .001∗∗∗ | 0.00 | .114 | 1.00 | .605 | -0.09 | .651 | 1.85 | .394 | 0.02 | .081 |
| 33:Q11A | 27.3 | .001∗∗∗ | 0.07 | .001∗∗∗ | 1.11 | .572 | 0.00 | .521 | 0.02 | .990 | 0.00 | .275 |
| 34:Q11B | 36.38 | .001∗∗∗ | 0.00 | .905 | 1.41 | .490 | -0.20 | .309 | 6.66 | .035∗ | 0.00 | .170 |
| 35:Q11C | 12.2 | .002∗∗ | 0.00 | .204 | 0.68 | .710 | 0.00 | .963 | 0.58 | .749 | -0.05 | .002∗∗ |
| 36:Q11D | 35.29 | .001∗∗∗ | -0.05 | .006∗∗ | 1.38 | .500 | 0.12 | .444 | 7.03 | .029∗ | 0.00 | .724 |

**Waves 4–6:**

| SF36 item | W4 χ² | W4 p | W4 Contrast | W4 MH p | W5 χ² | W5 p | W5 Contrast | W5 MH p | W6 χ² | W6 p | W6 Contrast | W6 MH p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1:Q1 | 0.67 | .714 | 0.00 | .185 | 4.44 | .107 | 0.23 | .707 | 2.87 | .235 | 0.09 | .418 |
| 3:Q3A | 0.22 | .897 | 0.00 | .073 | 1.32 | .515 | 0.00 | .375 | 3.73 | .153 | 0.00 | .176 |
| 4:Q3B | 0.26 | .878 | 0.06 | .001∗∗∗ | 0.59 | .744 | 0.34 | .229 | 2.71 | .254 | 0.38 | .098 |
| 5:Q3C | 3.48 | .173 | 0.00 | .442 | 0.65 | .720 | 0.00 | .342 | 1.81 | .400 | -0.13 | .270 |
| 6:Q3D | 1.39 | .496 | 0.00 | .700 | 2.83 | .240 | -0.16 | .573 | 7.85 | .019∗ | 0.00 | .761 |
| 7:Q3E | 0.10 | .953 | 0.00 | .143 | 1.73 | .418 | 0.03 | .025∗ | 6.17 | .045∗ | 0.00 | .278 |
| 8:Q3F | 2.31 | .311 | -0.04 | .964 | 0.00 | 1.000 | -0.17 | .456 | 0.43 | .808 | -0.08 | .248 |
| 9:Q3G | 0.95 | .621 | 0.00 | .847 | 0.39 | .824 | 0.12 | .001∗∗∗ | 1.20 | .547 | 0.11 | .001∗∗∗ |
| 10:Q3H | 1.89 | .384 | 0.00 | .225 | 0.73 | .695 | -0.26 | .443 | 0.68 | .712 | -0.12 | .739 |
| 11:Q3I | 0.00 | 1.000 | 0.00 | .631 | 0.42 | .809 | 0.00 | .961 | 0.55 | .761 | 0.00 | .387 |
| 12:Q3J | 0.05 | .975 | 0.00 | .251 | 0.10 | .953 | 0.06 | .252 | 0.65 | .722 | -0.03 | .664 |
| 13:Q4A | 0.45 | .798 | 0.00 | .360 | 0.00 | 1.000 | -0.06 | .042∗ | 0.17 | .922 | -0.07 | .282 |
| 14:Q4B | 1.60 | .447 | 0.00 | .520 | 1.98 | .367 | -0.05 | .861 | 0.82 | .663 | -0.12 | .138 |
| 15:Q4C | 2.50 | .283 | 0.00 | .524 | 0.01 | .996 | 0.00 | .453 | 0.20 | .908 | 0.00 | .255 |
| 16:Q4D | 0.68 | .711 | 0.00 | .053 | 0.24 | .889 | -0.06 | .733 | 0.73 | .692 | 0.02 | .431 |
| 21:Q7 | 3.61 | .162 | 0.00 | .145 | 0.00 | 1.000 | -0.06 | .413 | 1.75 | .413 | -0.07 | .650 |
| 22:Q8 | 1.03 | .595 | 0.02 | .081 | 3.03 | .217 | 0.00 | .310 | 4.23 | .119 | -0.10 | .644 |
| 33:Q11A | 0.14 | .934 | 0.00 | .275 | 13.77 | .001∗∗∗ | -0.05 | .170 | 1.34 | .509 | -0.07 | .729 |
| 34:Q11B | 4.03 | .131 | 0.00 | .170 | 1.80 | .403 | 0.20 | .048∗ | 2.28 | .317 | 0.07 | .252 |
| 35:Q11C | 0.00 | 1.000 | -0.05 | .002∗∗ | 0.49 | .783 | 0.00 | .280 | 3.68 | .156 | 0.00 | .681 |
| 36:Q11D | 0.37 | .831 | 0.00 | .724 | 7.48 | .023∗ | 0.00 | .941 | 1.55 | .457 | -0.03 | .897 |

Notes. ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
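The Mantel-Haenszel probabilities in Table 12 come from the standard stratified two-by-two test for DIF: persons are grouped into ability strata, and endorsement of the item is compared between the two groups within each stratum. The sketch below shows the statistic in its simplest, dichotomous form; the counts are invented for illustration and are not taken from the study data.

```python
def mantel_haenszel_chi2(tables) -> float:
    """Mantel-Haenszel chi-square (1 df, with continuity correction).

    tables: one 2x2 table [[a, b], [c, d]] per ability stratum, where rows
    are the reference/focal groups and columns are item endorsed / not.
    """
    sum_a = sum_e = sum_v = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        sum_a += a
        sum_e += (a + b) * (a + c) / n
        sum_v += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    return (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v

# Two ability strata; rows: married / not married, columns: endorsed / not:
strata = [([30, 10], [25, 15]), ([40, 5], [35, 10])]
print(f"MH chi-square = {mantel_haenszel_chi2(strata):.2f}")
```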
## 3.3. SF-36 Mental Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 mental health items were included in the initial analysis using the RMM: RE01:Q5A, RE02:Q5B, RE03:Q5C, SF01:Q6, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10. When the 14 SF-36 mental health items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.13 to 2.43 and outfit statistics ranging from 0.22 to 2.64 (see Table 14). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being -3.01 and the highest value being +2.31 (see Table 13). This resulted in an average item separation index of 79.17 and an average reliability of 1.00 over the six waves (see Table 15). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
Table 13: SF-36 mental health scale Rasch analysis item statistics for six waves of data collection.
**Waves 1–3:**

| SF36 item | W1 Measure | W1 S.E. | W1 PTMEA | W2 Measure | W2 S.E. | W2 PTMEA | W3 Measure | W3 S.E. | W3 PTMEA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 1.35 | 0.01 | 0.31 | 1.38 | 0.01 | 0.30 | 1.49 | 0.02 | 0.27 |
| 18:Q5B | 1.57 | 0.01 | 0.31 | 1.62 | 0.02 | 0.29 | 1.75 | 0.02 | 0.27 |
| 19:Q5C | 1.38 | 0.01 | 0.30 | 1.41 | 0.01 | 0.28 | 1.50 | 0.02 | 0.26 |
| 20:Q6 | 1.51 | 0.01 | -0.09 | 1.78 | 0.02 | -0.02 | 1.41 | 0.01 | -0.02 |
| 23:Q9A | -0.03 | 0.01 | 0.17 | -0.06 | 0.01 | 0.22 | -0.12 | 0.01 | 0.27 |
| 24:Q9B | -1.28 | 0.01 | 0.46 | -1.47 | 0.01 | 0.43 | -1.54 | 0.01 | 0.41 |
| 25:Q9C | -1.84 | 0.01 | 0.45 | -2.04 | 0.01 | 0.40 | -2.08 | 0.02 | 0.40 |
| 26:Q9D | 0.21 | 0.01 | 0.20 | 0.30 | 0.01 | 0.18 | 0.30 | 0.01 | 0.26 |
| 27:Q9E | -0.16 | 0.01 | 0.22 | -0.24 | 0.01 | 0.26 | -0.29 | 0.01 | 0.30 |
| 28:Q9F | -1.25 | 0.01 | 0.46 | -1.33 | 0.01 | 0.39 | -1.32 | 0.01 | 0.40 |
| 29:Q9G | -0.90 | 0.01 | 0.44 | -0.93 | 0.01 | 0.39 | -0.88 | 0.01 | 0.39 |
| 30:Q9H | 0.63 | 0.01 | 0.27 | 0.72 | 0.01 | 0.25 | 0.74 | 0.01 | 0.28 |
| 31:Q9I | -0.55 | 0.01 | 0.37 | -0.50 | 0.01 | 0.31 | -0.42 | 0.01 | 0.31 |
| 32:Q10 | -0.65 | 0.01 | 0.28 | -0.65 | 0.01 | 0.22 | -0.55 | 0.01 | 0.19 |

**Waves 4–6:**

| SF36 item | W4 Measure | W4 S.E. | W4 PTMEA | W5 Measure | W5 S.E. | W5 PTMEA | W6 Measure | W6 S.E. | W6 PTMEA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 1.47 | 0.02 | 0.28 | 1.48 | 0.02 | 0.30 | 1.51 | 0.02 | 0.29 |
| 18:Q5B | 1.76 | 0.02 | 0.28 | 1.77 | 0.02 | 0.30 | 1.81 | 0.03 | 0.32 |
| 19:Q5C | 1.51 | 0.02 | 0.27 | 1.51 | 0.02 | 0.28 | 1.53 | 0.02 | 0.28 |
| 20:Q6 | 1.19 | 0.01 | 0.04 | 1.01 | 0.02 | 0.03 | 0.96 | 0.02 | 0.05 |
| 23:Q9A | -0.14 | 0.01 | 0.23 | -0.21 | 0.01 | 0.24 | -0.29 | 0.01 | 0.26 |
| 24:Q9B | -1.52 | 0.01 | 0.40 | -1.49 | 0.02 | 0.40 | -1.47 | 0.02 | 0.37 |
| 25:Q9C | -2.07 | 0.02 | 0.35 | -1.92 | 0.02 | 0.39 | -1.91 | 0.02 | 0.35 |
| 26:Q9D | 0.30 | 0.01 | 0.23 | 0.31 | 0.01 | 0.19 | 0.29 | 0.01 | 0.22 |
| 27:Q9E | -0.34 | 0.01 | 0.27 | -0.41 | 0.01 | 0.27 | -0.53 | 0.01 | 0.29 |
| 28:Q9F | -1.30 | 0.01 | 0.40 | -1.25 | 0.01 | 0.42 | -1.22 | 0.02 | 0.41 |
| 29:Q9G | -0.80 | 0.01 | 0.39 | -0.75 | 0.01 | 0.44 | -0.72 | 0.01 | 0.43 |
| 30:Q9H | 0.75 | 0.01 | 0.29 | 0.69 | 0.01 | 0.23 | 0.69 | 0.02 | 0.27 |
| 31:Q9I | -0.34 | 0.01 | 0.33 | -0.30 | 0.01 | 0.35 | -0.27 | 0.01 | 0.32 |
| 32:Q10 | -0.48 | 0.01 | 0.17 | -0.44 | 0.01 | 0.17 | -0.39 | 0.01 | 0.17 |

Note: S.E. = model standard error; PTMEA = point-measure correlation.
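The PTMEA column of Table 13 is the point-measure correlation: a plain Pearson correlation between the observed responses to one item and the Rasch person measures. Negative values, such as those of item 20:Q6 in the early waves, flag items that run against the direction of the latent variable. A minimal sketch with invented data:

```python
import numpy as np

def point_measure_correlation(item_scores: np.ndarray,
                              person_measures: np.ndarray) -> float:
    """PTMEA: correlation of one item's scores with the person measures."""
    return float(np.corrcoef(item_scores, person_measures)[0, 1])

# Five persons' ratings on one item and their measures (illustrative only):
scores = np.array([1, 2, 2, 3, 5])
measures = np.array([-1.2, -0.4, 0.1, 0.8, 1.9])
print(round(point_measure_correlation(scores, measures), 2))
```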
Table 14: SF-36 mental health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
**Waves 1–3:**

| SF36 item | W1 Infit MNSQ | W1 Infit ZSTD | W1 Outfit MNSQ | W1 Outfit ZSTD | W2 Infit MNSQ | W2 Infit ZSTD | W2 Outfit MNSQ | W2 Outfit ZSTD | W3 Infit MNSQ | W3 Infit ZSTD | W3 Outfit MNSQ | W3 Outfit ZSTD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 0.29 | -9.9 | 0.32 | -9.9 | 0.26 | -9.9 | 0.28 | -9.9 | 0.30 | -9.9 | 0.32 | -9.9 |
| 18:Q5B | 0.43 | -9.9 | 0.46 | -9.9 | 0.43 | -9.9 | 0.46 | -9.9 | 0.48 | -9.9 | 0.50 | -9.9 |
| 19:Q5C | 0.31 | -9.9 | 0.34 | -9.9 | 0.29 | -9.9 | 0.31 | -9.9 | 0.32 | -9.9 | 0.34 | -9.9 |
| 20:Q6 | 2.64 | 9.9 | 2.87 | 9.9 | 2.91 | 9.9 | 3.06 | 9.9 | 2.54 | 9.9 | 2.74 | 9.9 |
| 23:Q9A | 1.08 | 7.7 | 1.15 | 9.9 | 1.03 | 3.2 | 1.12 | 9.9 | 1.04 | 3.5 | 1.09 | 6.7 |
| 24:Q9B | 1.25 | 9.9 | 1.17 | 9.9 | 1.40 | 9.9 | 1.29 | 9.9 | 1.41 | 9.9 | 1.31 | 9.9 |
| 25:Q9C | 1.44 | 9.9 | 1.30 | 9.9 | 1.59 | 9.9 | 1.39 | 9.9 | 1.51 | 9.9 | 1.33 | 9.9 |
| 26:Q9D | 1.22 | 9.9 | 1.33 | 9.9 | 1.14 | 9.9 | 1.28 | 9.9 | 1.10 | 7.1 | 1.19 | 9.9 |
| 27:Q9E | 1.12 | 9.9 | 1.17 | 9.9 | 1.08 | 7.8 | 1.12 | 9.9 | 1.07 | 5.3 | 1.08 | 6.3 |
| 28:Q9F | 0.90 | -6.6 | 0.88 | -8.3 | 1.02 | 1.0 | 0.98 | -0.9 | 0.96 | -2.0 | 0.93 | -3.6 |
| 29:Q9G | 0.88 | -9.9 | 0.87 | -9.9 | 0.90 | -7.4 | 0.89 | -7.8 | 0.88 | -7.8 | 0.87 | -8.2 |
| 30:Q9H | 1.22 | 9.9 | 1.29 | 9.9 | 1.15 | 8.5 | 1.24 | 9.9 | 1.09 | 4.6 | 1.16 | 7.9 |
| 31:Q9I | 0.72 | -9.9 | 0.73 | -9.9 | 0.76 | -9.9 | 0.77 | -9.9 | 0.73 | -9.9 | 0.74 | -9.9 |
| 32:Q10 | 0.72 | -9.9 | 0.77 | -9.9 | 0.78 | -9.9 | 0.81 | -9.9 | 0.87 | -9.9 | 0.91 | -6.8 |

**Waves 4–6:**

| SF36 item | W4 Infit MNSQ | W4 Infit ZSTD | W4 Outfit MNSQ | W4 Outfit ZSTD | W5 Infit MNSQ | W5 Infit ZSTD | W5 Outfit MNSQ | W5 Outfit ZSTD | W6 Infit MNSQ | W6 Infit ZSTD | W6 Outfit MNSQ | W6 Outfit ZSTD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 0.31 | -9.9 | 0.34 | -9.9 | 0.34 | -9.9 | 0.37 | -9.9 | 0.36 | -9.9 | 0.39 | -9.9 |
| 18:Q5B | 0.50 | -9.9 | 0.52 | -9.9 | 0.52 | -9.9 | 0.54 | -9.9 | 0.53 | -9.9 | 0.55 | -9.9 |
| 19:Q5C | 0.34 | -9.9 | 0.36 | -9.9 | 0.37 | -9.9 | 0.39 | -9.9 | 0.38 | -9.9 | 0.41 | -9.9 |
| 20:Q6 | 2.16 | 9.9 | 2.33 | 9.9 | 1.96 | 9.9 | 2.15 | 9.9 | 1.91 | 9.9 | 2.07 | 9.9 |
| 23:Q9A | 1.04 | 2.7 | 1.07 | 5.3 | 1.02 | 1.7 | 1.05 | 3.4 | 1.03 | 1.8 | 1.04 | 2.2 |
| 24:Q9B | 1.37 | 9.9 | 1.25 | 9.9 | 1.36 | 9.9 | 1.25 | 9.5 | 1.33 | 9.9 | 1.25 | 8.3 |
| 25:Q9C | 1.50 | 9.9 | 1.36 | 9.9 | 1.51 | 9.9 | 1.36 | 9.9 | 1.42 | 9.9 | 1.32 | 9.1 |
| 26:Q9D | 1.15 | 9.3 | 1.23 | 9.9 | 1.16 | 8.7 | 1.26 | 9.9 | 1.13 | 6.2 | 1.20 | 8.8 |
| 27:Q9E | 1.11 | 7.9 | 1.11 | 8.0 | 1.08 | 5.1 | 1.08 | 4.8 | 1.10 | 5.2 | 1.09 | 4.4 |
| 28:Q9F | 0.97 | -1.6 | 0.94 | -3.2 | 0.95 | -2.2 | 0.91 | -4.2 | 0.91 | -3.5 | 0.89 | -4.4 |
| 29:Q9G | 0.87 | -8.1 | 0.86 | -8.5 | 0.84 | -9.2 | 0.83 | -9.7 | 0.85 | -7.3 | 0.84 | -7.8 |
| 30:Q9H | 1.12 | 5.6 | 1.16 | 7.3 | 1.13 | 5.8 | 1.22 | 9.3 | 1.09 | 3.6 | 1.15 | 5.3 |
| 31:Q9I | 0.76 | -9.9 | 0.77 | -9.9 | 0.76 | -9.9 | 0.77 | -9.9 | 0.76 | -9.9 | 0.78 | -9.9 |
| 32:Q10 | 0.86 | -9.9 | 0.91 | -6.9 | 0.90 | -6.3 | 0.94 | -3.9 | 0.96 | -2.4 | 1.00 | 0.2 |

Notes. MNSQ = mean square residual fit statistic; ZSTD = standardized mean square residual fit statistic; values in italics in the source have Infit or Outfit MnSq > 1.34; underlined values have Infit or Outfit MnSq < 0.64.
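For orientation, the two mean-square statistics in Table 14 differ only in weighting: outfit is the unweighted mean of the squared standardized residuals, so it reacts strongly to surprising responses from poorly targeted persons, while infit weights each squared residual by the model variance of the observation. A minimal sketch, assuming the squared standardized residuals and model variances are already in hand:

```python
import numpy as np

def infit_outfit(z2: np.ndarray, w: np.ndarray) -> tuple[float, float]:
    """Mean-square fit statistics for one item across persons.

    z2: squared standardized residuals z_ni^2, one per person.
    w:  model variances W_ni of the observations (information weights).
    """
    outfit = float(z2.mean())                # unweighted: outlier-sensitive
    infit = float((w * z2).sum() / w.sum())  # information-weighted
    return infit, outfit

# One surprising response from a poorly targeted person (illustrative):
z2 = np.array([0.8, 1.1, 0.9, 1.0, 4.0])
w = np.array([0.25, 0.24, 0.25, 0.23, 0.05])
print(infit_outfit(z2, w))  # infit stays near 1.1; outfit jumps to ~1.56
```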
Table 15: SF-36 mental health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** |  |  |  |  |  |  |
| MEAN | -.08 | -.04 | -.02 | -.03 | -.06 | -.06 |
| S.D. | .30 | .28 | .31 | .30 | .29 | .30 |
| MAX | .30 | 1.64 | 1.38 | .30 | .29 | .30 |
| MIN | 1.87 | -3.54 | -2.99 | 2.37 | 2.16 | 2.12 |
| Infit-MNSQ | 1.01 | 1.01 | 1.02 | 1.02 | 1.02 | 1.02 |
| Infit-ZSTD | -.30 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Outfit-MNSQ | 1.06 | 1.08 | 1.06 | 1.03 | 1.02 | 1.01 |
| Outfit-ZSTD | -.20 | -.20 | -.20 | -.20 | -.20 | -.20 |
| Person separation | .53 | .33 | .45 | .36 | .36 | .41 |
| Person reliability | .22^a | .10^a | .17^a | .11^a | .12^a | .14^a |
| **Items** |  |  |  |  |  |  |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.17 | 1.20 | 1.19 | 1.13 | 1.31 | 1.07 |
| MAX | 1.59 | 1.78 | 1.75 | 1.75 | 2.13 | 1.67 |
| MIN | -1.88 | -2.04 | -2.08 | -2.04 | -1.94 | -1.95 |
| Infit-MNSQ | 1.02 | 1.05 | 1.02 | 1.00 | .99 | .98 |
| Infit-ZSTD | .10 | .20 | -.60 | -.30 | -.50 | -.50 |
| Outfit-MNSQ | 1.05 | 1.07 | 1.04 | 1.02 | 1.01 | 1.00 |
| Outfit-ZSTD | .10 | .80 | .20 | .10 | -.10 | -.30 |
| Item separation | 95.77 | 89.12 | 83.98 | 77.85 | 68.89 | 59.38 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. ^a Person or item reliability < 0.8; ^b item separation < 3.0; ^c person separation < 2.0; values in italics in the source have Infit or Outfit MnSq > 1.34; underlined values have Infit or Outfit MnSq < 0.64.

The SF-36 mental health scale person-item map is shown in Supplemental Figure 5 and provides evidence of the hierarchical ordering of the SF-36 mental health scale items. Easier items are located at the bottom of the map, while more difficult items are located at the top. The patterns of more and less challenging items on the person-item map for each of the six waves of data collection appear to be fairly consistent. It should also be noted that several of the SF-36 mental health scale items have the same level of difficulty.

The average person measure was 0.75 logits (SD = 0.23) over the six waves of data collection (see Table 15). The mean person separation was 0.73, with a mean reliability of 0.35 (see Table 15). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 mental health construct. When examining the overall RMM output of the SF-36 mental health scale, the average person measure (0.75 logits) was higher than the average item measure (0.00 logits). The range of logit values for items was from +2.13 to -2.08. The person reliability was 0.35 and the item reliability was 1.00. Reliabilities of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 mental health scale in the acceptable range and the person reliability in the less-than-desired range.

The SF-36 mental health scale has a six-category rating scale, which generates five thresholds. The Rasch analysis shows that the average measures of the six rating categories increase monotonically, from -3.07, -1.06, -.17, .40, and 1.14 to 2.54 for wave one and from -2.98, -1.09, -.19, .41, and 1.15 to 2.51 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Nine out of the 14 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the -2 to +2 range; thus, items VT01:Q9A, MH01:Q9B, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10 met the RMM requirements (see Table 14). In other words, only 9 of 14 (64.3%) of the SF-36 mental health scale items met the RMM requirements. The following items had an Infit MNSQ statistic that was less than 0.70: RE01:Q5A, RE02:Q5B, and RE03:Q5C. Item SF01:Q6 had an Infit MNSQ statistic that was greater than 1.30.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 16). This indicated that the unidimensionality requirement of the SF-36 mental health scale was met.
The raw variance explained by the SF-36 mental health scale over the six waves of data collection ranged from 62.5% to 66.1%, and the unexplained variance in the first contrast ranged from 15.1% to 16.5%.
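These percentages can be recovered directly from the eigenvalue units reported in Table 16 below, since the total raw variance is simply the variance explained by the measures plus the unexplained (residual) variance, nominally one unit per item. A one-line sketch of the conversion:

```python
def pct_variance_explained(measure_units: float, unexplained_units: float) -> float:
    """Percent of total raw variance explained by the Rasch measures,
    where total = explained + unexplained eigenvalue units."""
    return 100.0 * measure_units / (measure_units + unexplained_units)

# Wave 1 of Table 16: 24.55 units explained, 14.00 unexplained:
print(f"{pct_variance_explained(24.55, 14.00):.1f}%")  # 63.7%, as tabulated
```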
Table 16: SF-36 mental health scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
**Waves 1–3:**

| Variance component | W1 Eigenvalue | W1 Observed | W1 Expected | W2 Eigenvalue | W2 Observed | W2 Expected | W3 Eigenvalue | W3 Observed | W3 Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 38.55 | 100.00% | 100.00% | 41.29 | 100.00% | 100.00% | 42.62 | 100.00% | 100.00% |
| Raw variance explained by measures | 24.55 | 63.70% | 63.70% | 27.29 | 66.10% | 66.20% | 26.62 | 62.50% | 62.50% |
| Raw variance explained by persons | 2.85 | 7.40% | 7.40% | 2.06 | 5.00% | 5.00% | 2.68 | 6.30% | 6.30% |
| Raw variance explained by items | 21.70 | 56.30% | 56.30% | 25.23 | 61.10% | 61.20% | 23.94 | 56.20% | 56.20% |
| Raw unexplained variance (total) | 14.00 | 36.30% | 36.30% | 14.00 | 33.90% | 33.80% | 16.00 | 37.50% | 37.50% |
| Unexplained variance in 1st contrast | 6.22 | 16.10% | 44.50% | 6.22 | 15.10% | 44.40% | 7.02 | 16.50% | 43.90% |
| Unexplained variance in 2nd contrast | 1.49 | 3.90% | 10.60% | 1.47 | 3.60% | 10.50% | 1.62 | 3.80% | 10.10% |
| Unexplained variance in 3rd contrast | 1.29 | 3.30% | 9.20% | 1.32 | 3.20% | 9.40% | 1.29 | 3.00% | 8.10% |
| Unexplained variance in 4th contrast | 0.81 | 2.10% | 5.80% | 0.85 | 2.00% | 6.00% | 1.05 | 2.50% | 6.60% |
| Unexplained variance in 5th contrast | 0.68 | 1.80% | 4.90% | 0.71 | 1.70% | 5.00% | 0.71 | 1.70% | 4.40% |

**Waves 4–6:**

| Variance component | W4 Eigenvalue | W4 Observed | W4 Expected | W5 Eigenvalue | W5 Observed | W5 Expected | W6 Eigenvalue | W6 Observed | W6 Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 39.19 | 100.00% | 100.00% | 37.79 | 100.00% | 100.00% | 37.65 | 100.00% | 100.00% |
| Raw variance explained by measures | 25.19 | 64.30% | 64.50% | 23.79 | 62.90% | 63.30% | 23.65 | 62.80% | 63.20% |
| Raw variance explained by persons | 2.43 | 6.20% | 6.20% | 1.73 | 4.60% | 4.60% | 2.44 | 6.50% | 6.50% |
| Raw variance explained by items | 22.76 | 58.10% | 58.30% | 22.06 | 58.40% | 58.70% | 21.21 | 56.30% | 56.60% |
| Raw unexplained variance (total) | 14.00 | 35.70% | 35.50% | 14.00 | 37.10% | 36.70% | 14.00 | 37.20% | 36.80% |
| Unexplained variance in 1st contrast | 6.16 | 15.70% | 44.00% | 6.10 | 16.10% | 43.60% | 5.75 | 15.30% | 41.10% |
| Unexplained variance in 2nd contrast | 1.52 | 3.90% | 10.90% | 1.61 | 4.20% | 11.50% | 1.67 | 4.40% | 11.90% |
| Unexplained variance in 3rd contrast | 1.32 | 3.40% | 9.40% | 1.31 | 3.50% | 9.30% | 1.35 | 3.60% | 9.60% |
| Unexplained variance in 4th contrast | 0.80 | 2.00% | 5.70% | 0.79 | 2.10% | 5.60% | 0.85 | 2.30% | 6.10% |
| Unexplained variance in 5th contrast | 0.68 | 1.70% | 4.90% | 0.69 | 1.80% | 4.90% | 0.68 | 1.80% | 4.80% |

Notes. ^a > 60% unexplained variance in the Rasch factor; ^b Eigenvalue in the first contrast < 3.0; ^c < 10% unexplained variance in the first contrast.

An inspection of the PTMEAs for the SF-36 mental health scale indicated that, with the exception of item SF01:Q6, which had slightly negative PTMEAs in the first three waves (see Table 13), the PTMEA correlations had acceptable, positive values, supporting item-level polarity.

The functioning of the six rating scale categories was examined for the SF-36 mental health scale. The category logit measures ranged from -3.18 to 2.60 (see Table 17). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7–1.30 range and/or a z-score that fell outside the -2 to +2 range over the six waves of data collection, this being category one. The infit MNSQ scores for this rating category ranged from 1.37 to 1.41 (see Table 17). The results indicated that the six-level rating scale used in the SF-36 mental health scale might not be the most robust to use (see Supplemental Figure 6); however, the full range of ratings was used by the participants who completed the SF-36 mental health scale.
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the latter five response categories were each the most probable category for some part of the continuum. Rating category one was problematic.
Table 17: SF-36 mental health scale Rasch analysis of summary of category structure for six waves of data collection.
Avg. = average measure; Infit/Outfit = MnSq; Thresh. = Andrich threshold.

**Waves 1–3:**

| Cat. | W1 N | W1 % | W1 Avg. | W1 Infit | W1 Outfit | W1 Thresh. | W2 N | W2 % | W2 Avg. | W2 Infit | W2 Outfit | W2 Thresh. | W3 N | W3 % | W3 Avg. | W3 Infit | W3 Outfit | W3 Thresh. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 22667 | 14 | (-3.07) | 1.38 | 1.20 | NONE | 18463 | 13 | (-3.18) | 1.41 | 1.22 | NONE | 14323 | 12 | (-3.18) | 1.38 | 1.22 | NONE |
| 2 | 49420 | 30 | -1.06 | .75 | .78 | -1.91 | 43019 | 30 | -1.08 | .76 | .81 | -2.03 | 33416 | 28 | -1.12 | .78 | .85 | -2.02 |
| 3 | 15086 | 9 | -.17 | .96 | .86 | .66 | 12291 | 8 | -.15 | .97 | .89 | .71 | 10845 | 9 | -.17 | .98 | .85 | .57 |
| 4 | 25646 | 15 | .40 | 1.02 | 1.11 | -.41^b | 20753 | 14 | .43 | 1.00 | 1.12 | -.38^b | 18002 | 15 | .44 | 1.00 | 1.06 | -.38^b |
| 5 | 28636 | 17 | 1.14 | 1.06 | 1.31 | .51^b | 24231 | 17 | 1.16 | 1.08 | 1.38 | .53^b | 18787 | 16 | 1.20 | 1.13 | 1.28 | .63 |
| 6 | 24973 | 15 | (2.54) | 1.00 | 1.07 | 1.15 | 23360 | 16 | (2.56) | 1.00 | 1.08 | 1.17 | 18313 | 15 | (2.60) | .95 | 1.02 | 1.19 |

**Waves 4–6:**

| Cat. | W4 N | W4 % | W4 Avg. | W4 Infit | W4 Outfit | W4 Thresh. | W5 N | W5 % | W5 Avg. | W5 Infit | W5 Outfit | W5 Thresh. | W6 N | W6 % | W6 Avg. | W6 Infit | W6 Outfit | W6 Thresh. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 12561 | 13 | (-3.08) | 1.37 | 1.21 | NONE | 10333 | 14 | (-3.00) | 1.38 | 1.23 | NONE | 7471 | 14 | (-2.98) | 1.37 | 1.21 | NONE |
| 2 | 27233 | 29 | -1.11 | .80 | .88 | -1.91 | 20854 | 28 | -1.08 | .82 | .89 | -1.82 | 14529 | 27 | -1.09 | .83 | .91 | -1.80 |
| 3 | 9548 | 10 | -.18 | .98 | .82 | .51 | 7515 | 10 | -.19 | .94 | .76 | .50 | 5675 | 11 | -.19 | .94 | .78 | .43 |
| 4 | 15240 | 16 | .42 | 1.00 | 1.00 | -.36^b | 12348 | 17 | .40 | .97 | .94 | -.40^b | 9024 | 17 | .41 | .98 | .94 | -.35^b |
| 5 | 15741 | 16 | 1.17 | 1.14 | 1.22 | .60 | 12183 | 16 | 1.15 | 1.19 | 1.27 | .61 | 8698 | 16 | 1.15 | 1.19 | 1.24 | .64 |
| 6 | 15147 | 16 | (2.57) | .93 | .99 | 1.16 | 11454 | 15 | (2.53) | .90 | .99 | 1.11 | 8415 | 16 | (2.51) | .90 | .96 | 1.07 |

Notes. ^a Andrich threshold category increase of > 5; ^b Andrich threshold decrease where an increase is expected; values in italics in the source have Infit or Outfit MnSq > 1.34; underlined values have Infit or Outfit MnSq < 0.64.

The Rasch output logit performance scores for the participants were compared to determine whether any of the SF-36 mental health scale items exhibited differential item functioning (DIF) based on marital status and area of residence (urban versus regional) (see Table 18). Six of the SF-36 mental health items exhibited a consistent pattern of DIF over the six waves of data collection. Items SF01:Q6, MH01:Q9B, MH02:Q9C, MH03:Q9D, MH04:Q9F, and MH05:Q9H exhibited DIF based on both marital status and area of residence (see Table 18). It should be noted that items MH01:Q9B and MH03:Q9D had infit MNSQ statistics that fell outside the 0.7–1.30 range. SF-36 mental health items MH01:Q9B and MH03:Q9D appear to be particularly problematic items based on the RMM analysis findings.
Table 18: Differential Item Functioning (DIF) for the SF-36 mental health scale Rasch analysis for six waves of data collection, based on marital status and area of residence.
For each wave, χ² and p give the summary DIF chi-square (DIF = 2) and its probability for marital status; Contrast and MH p give the DIF contrast and Mantel-Haenszel probability for urban versus regional residence.

**Waves 1–3:**

| SF36 item | W1 χ² | W1 p | W1 Contrast | W1 MH p | W2 χ² | W2 p | W2 Contrast | W2 MH p | W3 χ² | W3 p | W3 Contrast | W3 MH p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 3.13 | .206 | 0.00 | .720 | 0.05 | .975 | -0.17 | .122 | 0.07 | .969 | 0.00 | .978 |
| 18:Q5B | 6.16 | .045∗ | 0.00 | .799 | 0.41 | .814 | 0.00 | .347 | 0.59 | .745 | 0.00 | .165 |
| 19:Q5C | 4.23 | .119 | 0.00 | .505 | 0.17 | .922 | -0.30 | .066 | 0.79 | .673 | 0.00 | .484 |
| 20:Q6 | 62.55 | .001∗∗∗ | -0.09 | .058 | 6.62 | .036∗ | 0.00 | .056 | 0.00 | 1.000 | 0.00 | .415 |
| 23:Q9A | 8.45 | .014∗ | 0.00 | .101 | 0.00 | 1.000 | -0.05 | .498 | 0.05 | .979 | 0.00 | .725 |
| 24:Q9B | 14.83 | .001∗∗∗ | 0.09 | .001∗∗∗ | 0.41 | .813 | 0.00 | .553 | 11.22 | .004∗∗ | 0.02 | .093 |
| 25:Q9C | 62.48 | .001∗∗∗ | 0.12 | .001∗∗∗ | 29.94 | .001∗∗∗ | 0.48 | .087 | 0.01 | .996 | 0.07 | .009∗∗ |
| 26:Q9D | 9.01 | .011∗ | -0.07 | .001∗∗∗ | 0.16 | .925 | -0.07 | .476 | 22.89 | .001∗∗∗ | -0.06 | .001∗∗∗ |
| 27:Q9E | 8.72 | .013∗ | 0.00 | .741 | 0.49 | .782 | 0.37 | .010∗ | 0.57 | .750 | 0.00 | .207 |
| 28:Q9F | 17.18 | .001∗∗∗ | 0.08 | .001∗∗∗ | 20.01 | .001∗∗∗ | 0.00 | .401 | 0.73 | .694 | 0.05 | .004 |
| 29:Q9G | 5.04 | .079 | 0.00 | .719 | 3.76 | .150 | 0.52 | .003∗∗ | 11.42 | .003∗∗ | 0.00 | .815 |
| 30:Q9H | 13.46 | .001∗∗∗ | -0.06 | .001∗∗ | 8.62 | .013∗ | 0.00 | .176 | 3.70 | .155 | -0.07 | .002∗∗ |
| 31:Q9I | 1.75 | .414 | 0.00 | .224 | 0.51 | .773 | -0.18 | .308 | 0.00 | 1.000 | 0.00 | .299 |
| 32:Q10 | 14.70 | .001∗∗∗ | 0.00 | .207 | 0.11 | .947 | 0.00 | .165 | 2.75 | .250 | 0.00 | .978 |

**Waves 4–6:**

| SF36 item | W4 χ² | W4 p | W4 Contrast | W4 MH p | W5 χ² | W5 p | W5 Contrast | W5 MH p | W6 χ² | W6 p | W6 Contrast | W6 MH p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 17:Q5A | 0.00 | 1.000 | 0.00 | .618 | 0.07 | .966 | -0.03 | .986 | 0.94 | .623 | -0.18 | .001∗∗∗ |
| 18:Q5B | 0.00 | 1.000 | 0.00 | .639 | 2.43 | .294 | 0.00 | .395 | 0.39 | .824 | 0.00 | .543 |
| 19:Q5C | 0.00 | 1.000 | 0.00 | .497 | 0.71 | .701 | 0.20 | .271 | 0.59 | .744 | -0.19 | .003∗∗ |
| 20:Q6 | 0.00 | 1.000 | 0.00 | .779 | 6.37 | .040∗ | -0.02 | .337 | 2.26 | .320 | -0.03 | .162 |
| 23:Q9A | 0.00 | 1.000 | 0.00 | .900 | 1.95 | .373 | 0.19 | .254 | 1.18 | .551 | -0.04 | .176 |
| 24:Q9B | 6.95 | .008∗∗ | 0.00 | .384 | 13.76 | .001∗∗∗ | 0.00 | .784 | 3.06 | .213 | 0.00 | .580 |
| 25:Q9C | 0.00 | 1.000 | 0.06 | .030∗ | 6.84 | .032∗ | -0.68 | .078 | 0.77 | .678 | 0.08 | .371 |
| 26:Q9D | 0.00 | 1.000 | 0.00 | .544 | 13.70 | .001∗∗∗ | -0.02 | .118 | 2.06 | .354 | 0.00 | .923 |
| 27:Q9E | 0.00 | 1.000 | 0.00 | .537 | 3.30 | .189 | -0.08 | .720 | 1.67 | .430 | 0.06 | .215 |
| 28:Q9F | 0.00 | 1.000 | 0.00 | .687 | 0.87 | .644 | 0.00 | .819 | 0.43 | .806 | 0.00 | .408 |
| 29:Q9G | 0.00 | 1.000 | 0.00 | .694 | 0.20 | .908 | 0.27 | .570 | 0.63 | .729 | 0.03 | .278 |
| 30:Q9H | 6.10 | .014∗ | 0.00 | .297 | 4.86 | .086 | 0.05 | .065 | 0.08 | .962 | 0.00 | .419 |
| 31:Q9I | 0.00 | 1.000 | -0.05 | .112 | 1.10 | .574 | 0.48 | .170 | 0.04 | .981 | -0.04 | .664 |
| 32:Q10 | 0.00 | 1.000 | 0.00 | .414 | 1.56 | .456 | 0.08 | .019∗ | 0.05 | .979 | 0.00 | .434 |

Notes. ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
## 4. Discussion
### 4.1. Is There Disordering or Dysfunction within the SF-36 Items against the Construct Being Measured?
For the SF-36 as a total measure, the rating scale categories increased monotonically, indicating that the rating response scales were being used as expected and are appropriate for measurement across all waves. Previous longitudinal evaluation of the measure using CTT methods found poor test-retest reliability between two time points two weeks apart [36]. Previous research using IRT methods has been largely cross-sectional, providing little longitudinal evaluation of the measure using this method [5, 6, 10, 17]. In this sample, the pattern of more and less difficult items is consistent, indicating that item difficulty remained stable across each wave. Despite this consistency across time, redundancy emerged as an issue, with several total scale items displaying the same level of difficulty across all waves of data. This was seen again in both the SF-36 mental and physical health summary scores. Redundant items appear across all uses of the measure, which suggests that item descriptors need to be more specific to avoid overlap across similar items.

Category six of the SF-36 physical health summary scale and category one of the SF-36 mental health scale had scores outside the acceptable range, which may indicate that these rating categories are not robust for use in longitudinal studies. Disordered categories have been seen in a previous evaluation of the SF-36, with the authors suggesting collapsing some category response options [5]. The current findings support this concern with the SF-36. Further investigation into the category disordering in the SF-36 mental and physical health response scales is warranted, and collapsing the response option categories may improve this, as suggested in previous literature [5, 17].

When examining summary statistics for the total SF-36 items, the mean person reliability fell in the unacceptable range. Inadequate person separation reliability was also seen across all waves of data in both summary scales. The person separation index indicates that the instrument, used as a whole and as summary scales, is not sensitive enough to separate high and low performances in this sample [29]. This presents an issue with internal consistency across all presentations of the measure. Comparatively, using classical methods, the measure was seen to discriminate between patients pre- and postoperation [37]. Results using IRT suggest that the measure is unable to discriminate between high and low performances.

While the results of IRT have raised doubts about the measure's internal consistency, results from classical testing methods report strong internal consistency, reflected in high Cronbach's alpha scores. When validating the measure in patients with endometriosis, Cronbach's alpha for the total scale was above acceptable cut-offs [38]. Internal consistency scores have also been seen to be above .9 for the full scale and above .7 for each subscale [39]. In addition to internal consistency, the measure displayed acceptable content validity, correlating strongly with similar measures [38]. IRT assesses instrument reliability at the item level rather than the instrument level, and it also considers the importance of participant responses.

The contrast between the results from IRT and CTT could be due to the item-level focus that is characteristic of IRT. It is possible that the overlapping items identified in the person-item map are contributing to the lack of sensitivity in the scale.
Adding more items, or altering current items to improve sensitivity, may improve the person reliability. Further investigation into the similarity and specificity of these items is warranted to ensure that the items capture the full variable being measured.
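For comparison with the Rasch person reliability discussed above, the CTT statistic cited in [38, 39] — Cronbach's alpha — is computed from the item and total-score variances. A minimal sketch (the data matrix is invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a persons x items matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

ratings = np.array([[1, 2, 2, 1], [3, 3, 4, 3], [2, 2, 3, 2],
                    [4, 5, 4, 4], [2, 3, 2, 2], [5, 4, 5, 5]])
print(round(cronbach_alpha(ratings), 2))  # high alpha for these correlated items
```

Because alpha rewards inter-item correlation, redundant, overlapping items of the kind identified in the person-item maps tend to inflate it, which is one plausible reason the CTT and IRT verdicts diverge here.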
### 4.2. Do the SF-36 Items Have a Consistent Hierarchy and Good Distribution across All Waves?
Several items on the total scale and on both summary scales were found to have Infit statistics outside of the acceptable range. Many of the items remained problematic regardless of whether they were investigated as part of the whole measure or by summary scale. The number of misfitting items was slightly lower in the summary scales; however, this may be due to the smaller number of items included in the summary scale analysis. The underfitting items create concerns about degradation of the model and the validity of the measure as a measure of health-related quality of life [15]. Further investigation into such items is required to determine the reason for the underfit. While overfitting items do not degrade the model, they can result in the model being misinterpreted as working better than expected, and they also warrant further investigation [15].
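The underfit/overfit distinction drawn here follows the mean-square cut-offs used throughout this study; a compact sketch of that decision rule (the cut-off values are the study's, the label wording is illustrative):

```python
def classify_fit(infit_mnsq: float, lower: float = 0.70, upper: float = 1.30) -> str:
    """Label an item by its infit mean-square against the 0.70-1.30 range."""
    if infit_mnsq > upper:
        return "underfit: noisy, degrades the model"
    if infit_mnsq < lower:
        return "overfit: redundant, flatters the model"
    return "productive fit"

print(classify_fit(2.64))  # e.g. item 20:Q6 in wave 1 (Table 14) -> underfit
```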
### 4.3. Does the SF-36 Measure One or More Constructs?
The measure proved to be unidimensional across the total scale and summary score analyses, indicating that responses to each scale are likely to be determined by a single trait. As a total scale, the first single factor accounted for close to 60% of the variance across all six waves, and the factor was considered unidimensional [32]. Residual analysis also indicated that no second dimension or factor existed, further confirming the unidimensionality of the total scale [33]. Analysis of all eight subscales revealed that each scale measured a single latent trait [6]. Principal components analysis of the physical and mental health summary scores has previously confirmed the presence of a two-factor model, which is further corroborated by the results of the current study supporting the mental and physical health scales [12].

The results suggest that responses to the measure are determined by a single factor. While the responses may be determined by a single factor, the previously identified misfitting and overlapping items may degrade the model and its validity, suggesting that it may not be health-related quality of life alone that determines the responses to these items. Further research should aim to correct misfitting items and reassess unidimensionality.
### 4.4. Were All Items in the SF-36 Instrument Used by All Groups in the Same Way?
It appears that marital status and area of residence influence responses to both total and summary scale items. Differential item functioning has been identified in the SF-36 previously, with health issues such as hypertension, respiratory issues, and diabetes influencing responses on five items in the measure [10]. Previously, the presence of DIF has been considered negligible, as it was only present for a small number of items [10]. As the SF-36 is a health-related quality of life measure, it is plausible that marital status or area of residence would have an impact in this domain, as these factors can influence healthcare use and quality of life. However, the presence of DIF limits the comparability of scores across different populations.

While several items on each summary scale and on the total scale exhibited DIF, only item 24:Q9B demonstrated DIF across the analyses of both the total scale and the summary scales. This particular item also demonstrated Infit statistics outside the acceptable range, proving to be particularly problematic in every presentation of the measure. Several other items demonstrated both DIF and misfit. Given the number of items exhibiting DIF and misfit across all presentations of the measure, further investigation into these specific items is needed.
### 4.5. Limitations and Future Research
While the current study revealed differences between the IRT and CTT evaluations of the SF-36, it did not compare each method in the same sample. Future research could apply both methods to the same sample in order to explain the differences between the methods and the advantages of applying different frameworks when developing and evaluating measures. It may also be beneficial to compare the methods longitudinally. A further limitation is the rate of attrition in the sample. While attrition is to be expected in a longitudinal study, results between waves should be interpreted in light of this.

The results suggest that the SF-36 is not as sound as previously suggested. It can be delivered as eight subscales, and future research may apply the RMM to each subscale to evaluate the efficacy of the measure in this form. Based on the RMM findings in the current study, future research should further evaluate this measure using IRT methods. The results suggest that multiple items need to be reassessed to avoid degrading the model and to improve the performance of the SF-36 as a reliable measure of health-related quality of life.
## 5. Conclusions
Previous evaluations of the SF-36 have relied on cross-sectional data; however, the findings of the current study demonstrate the longitudinal efficacy of the measure. While use of the measure remained consistent across time for both the whole measure and the summary scales, several issues were identified. Previous studies evaluating the SF-36 using CTT methods describe the measure as reliable and valid. However, evaluating the measure by applying the RMM indicated issues with internal consistency, generalisability, and sensitivity when the measure was evaluated as a whole and as both physical and mental health summary scales.
---
*Source: 1013453-2018-11-04.xml* | 1013453-2018-11-04_1013453-2018-11-04.md | 209,091 | Evaluating the Longitudinal Item and Category Stability of the SF-36 Full and Summary Scales Using Rasch Analysis | Reinie Cordier; Ted Brown; Lindy Clemson; Julie Byles | BioMed Research International (2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1013453 | 1013453-2018-11-04.xml | ---
## Abstract
Introduction. The Medical Outcome Study Short Form 36 (SF-36) is widely used for measuring Health-Related Quality of Life (HRQoL) and has undergone rigorous psychometric evaluation using Classic Test Theory (CTT). However, Item Response Theory-based evaluation of the SF-36 has been limited with an overwhelming focus on individual scales and cross-sectional data.Purpose. This study aimed to examine the longitudinal item and category stability of the SF-36 using Rasch analysis.Method. Using data from the 1921-1926 cohort of the Australian Longitudinal Study on Women’s Health, responses of the SF-36 from six waves of data collection were analysed. Rasch analysis using Winsteps version 3.92.0 was performed on all 36 items of the SF-36 and items that constitute the physical health and mental health scales.Results. Rasch analysis revealed issues with the SF-36 not detected using classical methods. Redundancy was seen for items on the total measure and both scales across all waves of data. Person separation indexes indicate that the measure lacks sensitivity to discriminate between high and low performances in this sample. The presence of Differential Item Functioning suggests that responses to items were influenced by locality and marital status.Conclusion. Previous evaluations of the SF-36 have relied on cross-sectional data; however, the findings of the current study demonstrate the longitudinal efficacy of the measure. Application of the Rasch Measurement Model indicated issues with internal consistency, generalisability, and sensitivity when the measure was evaluated as a whole and as both physical and mental health summary scales. Implications for future research are discussed.
---
## Body
## 1. Introduction
To be deemed effective and useful, health measures must fulfil several requirements, including validity, reliability, interpretability, and responsiveness to change [1]. Measurement invariance is another important characteristic, ensuring that the same construct is being measured consistently across different populations and settings, and over time. Considerations of measurement invariance are important for longitudinal studies that seek to gauge change in a construct across a broad population and over time. When studies involve an older population, measurements may be vulnerable to instability as the participants age, their living circumstances may change, and their physical and cognitive abilities may decline [2, 3].

The Medical Outcome Study Short Form 36 (SF-36) is one of the most commonly used questionnaires for monitoring Health-Related Quality of Life (HRQoL) across a multitude of populations and settings, including client groups and healthy populations [4–10]. HRQoL refers to aspects of quality of life that are impacted by an individual's mental and physical health [11].

Development of the SF-36 came about following difficulties during the Health Insurance Experiment (HIE), whereby participants refused to complete a lengthy health survey [9]. In response to this need, Ware et al. [9] constructed a health survey that was both comprehensive and relatively short. The initial survey, the SF-18, comprised 18 items measuring physical functioning, role limitations relating to poor health, mental health, and health perceptions [9]. Subsequently, additional items have been added to create the 20-item SF-20 version and the 36-item SF-36 version, which is now the most commonly used.

The SF-36 measures eight key health concepts: (1) physical functioning (PF); (2) role limitations due to physical health problems (RL-P); (3) bodily pain (BP); (4) general health (GH); (5) vitality (V); (6) social functioning (SF); (7) role limitations due to emotional problems (RL-E); and (8) mental health (MH) [9]. From the eight scales, the survey generates overall physical and mental health component summary scores. Both summary measures include scores from all eight subscales; however, particular correlations are expected: the physical functioning, role limitations-physical, and bodily pain scales should correlate highest with the physical component score (PCS) and lowest with the mental component score (MCS) [12]. The mental health, role limitations-emotional, and social functioning scales should correlate highest with the MCS and lowest with the PCS, with the remaining general health and vitality scales found to correlate moderately with both the PCS and MCS [12]. Summary score results can be compared with gender and age-group norms derived from the general population, e.g., United States population norms [12].

The SF-36 is now widely used for both research and clinical purposes and has undergone rigorous psychometric evaluation nationally and internationally using Classic Test Theory (CTT) [6, 7, 9, 10]. CTT seeks to determine the reliability of a whole instrument by evaluating the degree of variance in terms of the ratio between true and observed scores; observed results are therefore the product of the respondent's "true score" in combination with error [13].

A relatively new approach to psychometric test design is Item Response Theory (IRT; Edelen and Reeve) [14].
IRT models are typically considered to be unidimensional, assessing instrument reliability at item-level rather than instrument-level, by determining the unique contribution of each item to the construct or trait being measured. IRT considers the importance of participants’ responses, whereby the probability of their answering a particular item correctly is based on their responses to other items of greater or lesser levels of difficulty or challenge [14]. Within IRT, the Rasch Measurement Model (RMM) is the most frequently applied IRT approach to investigating the unidimensionality of items that make up scales and to determining if responses are indeed measuring a single dimension only, through the examination of item fit statistics [15].
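The RMM referred to here can be stated concretely. For polytomous Likert-type items such as those of the SF-36, the Andrich rating scale formulation of the Rasch model — a standard expression reproduced here for reference; it is not printed in the original source — gives the probability that person n responds in category x of item i as

```latex
P(X_{ni} = x) \;=\;
  \frac{\exp \sum_{j=0}^{x} \left( \theta_n - \delta_i - \tau_j \right)}
       {\sum_{m=0}^{M} \exp \sum_{j=0}^{m} \left( \theta_n - \delta_i - \tau_j \right)},
\qquad \tau_0 \equiv 0, \quad x = 0, 1, \ldots, M,
```

where θ_n is the person measure, δ_i the item difficulty, and τ_1, ..., τ_M the Andrich thresholds shared by all items of a scale; a six-category scale therefore carries M = 5 thresholds, which is the convention followed in the analyses reported below.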
### 1.1. Application of IRT/RMM to the SF-36
Under the assumptions of an IRT model, instruments deemed reliable should meet the following properties: unidimensionality, hierarchical ordering of items, and reproducibility of scale items across client populations [16].

Unidimensionality assumes that a collection of items represents and assesses a single construct, that is, fits a single one-dimensional model [16].

Item hierarchy refers to a hypothesised continuum along which instrument items should progress in difficulty, from easier to more challenging to answer. In other words, the probability of answering the more difficult items is higher for those individuals with higher levels of the latent trait being measured, while those with lower levels of the trait have a lower probability of answering items at the upper end [16].

Reproducibility relates to item hierarchy, whereby item order and calibrations along the continuum remain relatively stable or constant across different groups of assessment respondents and assessment occasions [16]. Item reproducibility or stability is considered essential to the ability to accurately measure between-group differences and within-group changes over time [16].

IRT-based evaluation of the SF-36 has overwhelmingly focused on individual scales, particularly the Physical Functioning-10 subscale, with only some studies having examined particular psychometric properties of the SF-36 as a whole instrument or by component summary scores [5, 6, 12, 16, 17].
### 1.2. Unidimensionality and Item Fit
Only a few analyses have investigated the model-fit of the SF-36 as a whole. A prospective cohort study, involving a sample of 583 participants who were opioid-dependent, assessed item-model fit and latent trait factors for the eight SF-36 subscales and for the whole instrument [6]. The RMM reliability estimates of all eight SF-36 subscales (including a revised PF-10 subscale) established that each measured a single latent trait [6]. Investigation of the dimensional structure of the instrument as a whole confirmed the presence of an eight-factor model; that is, the SF-36 measured eight distinct latent traits [6].

Analysis confirming a two-factor structure, reflecting the SF-36 physical and mental health components, has also been conducted using principal component analysis, with the physical and mental health domains accounting for 70% of the total variance across both standard and acute forms [12]. A single-administration survey with a general U.S.A. population sample (n = 634) evaluated the item-fit of the SF-36 physical and mental HRQoL domains using RMM modelling [5]. In this analysis, eight items in the physical domain had disordered thresholds, whereby a person responding at higher or lower levels of a categorical scale did not necessarily possess higher or lower levels of the trait being assessed [5]. The authors suggested collapsing some category options to overcome this issue [5]. In terms of the HRQoL domains' unidimensionality, the mental health items were seen to fit RMM expectations, whereas the physical domain required discarding of the seven misfitting items to produce a 14-item domain that met RMM requirements. Survey data for 395 Taiwanese patients with chronic lung disease were analysed to conduct similar assessments of the SF-36 mental and physical health domains, with the authors concluding that each domain was unidimensional [7].

Differential item functioning (DIF) analysis using IRT-based techniques has also been undertaken with the SF-36. DIF refers to the unequal endorsement of instrument items by respondents of different groups, given that the items intend to measure the same latent trait [10]. The presence of DIF undermines instrument construct validity and may compromise the ability to compare instrument scores across different groups of respondents [10]. Yu et al. [10] utilised the multiple-indicator, multiple-causes (MIMIC) technique, an IRT-based methodology, to detect whether DIF existed in the SF-36 physical and mental health domains. Data were extracted from the 1994-95 cohort of the Southern California Kaiser Permanente database (n = 7,538), which evaluated the health outcomes of patients receiving pharmacist consultations. DIF across the SF-36 physical and mental health domains was analysed in relation to the presence of five key disease types: hypertension, rheumatic conditions, respiratory diseases, depression, and diabetes. Results indicated the presence of statistically significant DIF for a total of five items, both physical and mental health-based, for the hypertension, respiratory, and diabetes groups, respectively [10]. The authors concluded that the presence of DIF for only five of 36 items did not warrant significant concern regarding the overall construct validity of the SF-36; however, they cautioned regarding the use of the SF-36 for comparing groups based on hypertension in particular, which returned a DIF effect for two items in the physical health domain [10].
### 1.3. Cross Cultural Item Response Patterns
Rasch modelling has also been applied to translated versions of the SF-36 to examine its cross-cultural validity. An assessment of the appropriateness of a Korean version of the SF-36 with 510 elderly Korean adults was conducted using the RMM [17]. The authors verified the presence of unidimensionality in the instrument and determined through step calibration that the three- and five-point response scales for items were appropriate for this population [17]. However, goodness-of-fit statistics indicated that nine items across the instrument were not appropriate for this population, being incongruent with other items, overlapping significantly with other items, or creating confusion due to misinterpretation of their meaning [17].
### 1.4. Item Stability
While item-model fit and determination of the presence of DIF are important, these properties can mean very little if item responses are inconsistent or changeable over time. Evaluation of the stability of item responses is therefore important to determining the rigour of an instrument. Most IRT evaluations of SF-36 data have been cross-sectional, so the stability of item responses has not been evaluated [5–7, 10, 17]. Two studies assessed performance across repeated administrations, following pre-post designs [18, 19]. Martin et al. [18] utilised the SF-36 as one of three evaluation tools pre- and posttreatment for rheumatoid arthritis (n = 339), but with the aim of comparing the measurement properties of these tools and determining sensitivity to change rather than stability. IRT analysis of the PF-10 revealed weaknesses in sensitivity to treatment response at 6 and 12 months, with the authors suggesting construction of a more comprehensive measure. McHorney et al. [19] compared IRT and Likert scoring methods for the SF-36 Physical Functioning-10 scale, using a pre-post design. The findings showed apparent differences in patients with very high and very low physical functioning, suggesting that Rasch-based scoring may have important implications for clinical interpretations of the scale [19].

Only one longitudinal study has evaluated properties of the SF-36 using IRT methodologies. The first administration of the standardised SF-36 was conducted as part of a four-year longitudinal Medical Outcomes Study of patients (N = 3,445) with chronic medical and psychiatric conditions [16]. Examination of the reproducibility of the item calibrations of the Physical Functioning-10 scale was conducted from baseline to two years [16]. A high degree of consistency in item calibration between the two time points was found, both in order and in magnitude [16]. However, this longitudinal study only evaluated the stability and structural validity of the Physical Functioning-10 scale using IRT. The stability of the remaining SF-36 subscales, the physical and mental health domains, and the measure as a whole over time has not been examined using IRT to date.

A lack of evaluation regarding the performance of the SF-36 over time presents a significant gap in the literature, with unanswered questions about its measurement stability. It is vital that the long-term reliability of the SF-36 is examined, to determine its true suitability for inclusion in large-scale longitudinal studies tracking participants, particularly as they age over extended periods of time. This study therefore seeks to use an IRT-based methodology to evaluate the item stability of the SF-36 total and component summaries in a large, longitudinal data set. The following questions guided this research:

(1) Is there disordering or dysfunction within the SF-36 items against the construct being measured?
(2) Do the SF-36 items have a consistent hierarchy of difficulty and good distribution across all waves of a longitudinal survey?
(3) Is the SF-36 differentiating discrete subgroups of people reliably (e.g., urban vs. regional)?
(4) Does the SF-36 measure one or more constructs?
(5) Were all items in the SF-36 instrument used by all participant subgroups in the same way?
## 2. Methods
Data were from an Australian prospective, population-based survey. The Australian Longitudinal Study on Women’s Health (ALSWH) aims to assess physical and emotional health, use of health services, health risk factors and behaviours, life stages, and demographic characteristics. The ALSWH is conducted by researchers from the University of Newcastle and the University of Queensland and is funded by the Australian Government Department of Health. The study commenced in 1996 and has been running for over 20 years.
### 2.1. Participants
Three cohorts of women born in 1973-78 (aged 18-23 in 1996), 1946-51 (aged 45-50), and 1921-26 (aged 70-75) were randomly selected from the Medicare database, which includes all Australian citizens and permanent residents. Women living in regional and remote areas were sampled at twice the rate of women living in urban areas in order to allow for meaningful statistical comparisons between urban and country-dwelling women.

Over 40,000 women responded to the baseline postal survey in 1996, with response rates across the three age groups ranging between 37% and 52% [20]. Although some immigrant groups were underrepresented and tertiary-educated women were overrepresented, the responding samples were considered "reasonably representative" of the Australian female adult population following a comparison with census data [21]. Each cohort has since been surveyed every three years on a rolling basis, commencing with the 1946-51 cohort in 1998, the 1921-26 cohort in 1999, and the 1973-78 cohort in 2000. Only data from the 12,432 respondents in the 1921-26 cohort were analysed in the current study. At the commencement of the longitudinal survey, these women were aged 70-75 years; by the time of survey six, they were aged in their early nineties (N = 4,055), with most attrition being due to death (N = 5,273).

A study analysed potential biases introduced through the attrition of participants from this cohort between survey one and survey five [22]. Nondeath attrition was related to having less education, not being born in Australia, being a current smoker, and having poorer health. Analysis comparing the survey population to Australian Census data collected over the same period showed an increase in the underrepresentation of women from non-English-speaking backgrounds and an increase in the overrepresentation of current and ex-smokers. Differences between the study population and the national population were considered to have changed "only slightly" between survey one and survey five.
### 2.2. Instrument
The SF-36 HRQoL scale is included in each survey. At baseline in 1996, mean scores for the 1921-26 cohort were lower than for the other cohorts on the physical health subscales (PF, RP, and BP) and higher than for the other cohorts on the mental health subscales (MH, RE, and VT) [23]. Over time, mean PF scores have declined, but with significant variation across different subgroups within the cohort [24]. Mean MH scores have remained relatively stable [25].
### 2.3. Data Analysis
A two-step approach was taken to evaluate the reliability and validity of the SF-36 across surveys one to six. First, Rasch analyses using Winsteps version 3.92.0 [26], with the joint maximum likelihood estimation method [27], were performed on all 36 items for each of the six waves of data collection, and then on the items that constitute the physical health scales (PF, 10 items; RP, 4 items; BP, 2 items; GH, 5 items), the mental health scales (VT, 4 items; SF, 2 items; RE, 3 items; MH, 5 items), and the single item measuring health transition, for each wave of data. The RMM was adopted for the data analysis since the 6-point Likert response scale was invariant across all 36 items. The RMM adopts a "the data fit the model" approach: "The empirical data must meet the prior requirements of Rasch model in order to achieve objective measurement" [28, p. 65]. Several criteria, including item infit and outfit statistics, reliability measures, rating scale functioning, and differential item functioning (DIF), were used to investigate the quality of the SF-36 total scale, physical health scale, and mental health scale. Item fit statistics indicate the extent to which the data match the expectations of the RMM; outfit and infit mean squares (MNSQ), as well as their standardized forms (ZSTD), are used.
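As background to the estimation step, the sketch below illustrates joint maximum likelihood for the simplest, dichotomous Rasch case. It is a minimal illustration only: the analyses reported here were run in Winsteps on a polytomous rating scale model, and the function and variable names below are ours, not Winsteps'.

```python
import numpy as np

def rasch_jml(X, max_iter=500, tol=1e-6):
    """Joint maximum likelihood (JML) estimation for a dichotomous Rasch
    model. X is a persons-by-items 0/1 matrix; persons or items with all-0
    or all-1 scores must be dropped first, since their estimates diverge.
    Returns person measures and item difficulties in logits, with item
    difficulties centred at 0, mirroring the Winsteps convention."""
    theta = np.zeros(X.shape[0])
    b = np.zeros(X.shape[1])
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        info = p * (1.0 - p)                     # Fisher information per cell
        d_theta = (X - p).sum(axis=1) / info.sum(axis=1)  # Newton step, persons
        d_b = -(X - p).sum(axis=0) / info.sum(axis=0)     # Newton step, items
        theta += d_theta
        b += d_b - (b + d_b).mean()              # re-centre items at 0 logits
        if max(np.abs(d_theta).max(), np.abs(d_b).max()) < tol:
            break
    return theta, b

# Toy check on simulated data (500 persons, 10 items of increasing difficulty)
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, 500)
true_b = np.linspace(-2.0, 2.0, 10)
p_true = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)
keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < X.shape[1])  # drop extreme scores
theta_hat, b_hat = rasch_jml(X[keep])
```

The alternating Newton updates estimate person and item parameters jointly, which is what distinguishes JML from marginal or conditional approaches.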
#### 2.3.1. Is There Disordering or Dysfunction within the SF-36 Items against the Construct Being Measured? Response Scale
Category and step (threshold) disordering of the response scale was examined. To determine whether the rating response scales were being used in the expected manner, the rate at which average measure scores (frequency endorsed) increased in relation to category increases was examined for even distribution. A uniform category distribution is achieved when average measure scores increase monotonically as the category increases; if categories are poorly defined, or items are included that do not fit the construct, non-uniformity occurs. Fit mean squares (MNSQ) below 0.7 or above 1.4 indicate a category misfit. When disordered categories are found, consideration should be given to collapsing them with adjacent categories [29].

The distance between categories is indicated by Andrich thresholds, or step calibrations. If there is no overlap, then categories should progress monotonically. Disordered steps indicate that a category captures only a narrow segment of the variable, rather than a problem with the sequencing of category definitions. An increase of at least 1.0 logit indicates distinct average measure categories on a 5-category scale, and gaps in the variable are indicated by an increase of >0.5 logits [30].
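The thresholds in question are the τ parameters of Andrich's rating scale model. In generic notation (again, not symbols taken from the source studies), the probability that person n chooses category k of item i is:

```latex
% Andrich rating scale model: K + 1 ordered categories, thresholds tau_j
P(X_{ni} = k)
  = \frac{\exp\left[\sum_{j=0}^{k}(\theta_n - b_i - \tau_j)\right]}
         {\sum_{m=0}^{K}\exp\left[\sum_{j=0}^{m}(\theta_n - b_i - \tau_j)\right]},
  \qquad \tau_0 \equiv 0
```

Properly ordered category functioning requires τ₁ < τ₂ < ⋯ < τ_K; thresholds that violate this ordering are the "disordered steps" described above.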
#### 2.3.2. Do the SF-36 Items Have a Consistent Hierarchy and Good Distribution across All Waves? Person and Item Fit Statistics
Misfitting items and the pattern of responses for each survey respondent were identified using fit statistics. These are used to determine whether an instrument is a valid measure of the construct it claims to measure. Fit statistics, reported in log odds units (logits), were examined to determine whether the items contribute to the measurement of a single construct, and to assess the reliability of any one person's responses. The item constructs reviewed in this study are health-related quality of life as a whole, as well as quality of life related to physical health and to mental health. Two statistics, MNSQ and Z-Standard (Z-STD), were used to measure item and person infit and outfit. MNSQ values for infit and outfit should be close to 1.0 to fit the model for rating scales, but values within the range 0.7-1.4 are considered acceptable [15]. The model is degraded by underfit (i.e., values > 1.0), indicating the possibility of other sources of variance in the model; further investigation is then required to determine the reason for the underfit. Conversely, overfit (values < 1.0) does not always degrade the model, but could lead to the misinterpretation that the model worked better than expected [15]. Z-STD values for outfit are expected to be close to 0; if a value exceeds ±2, it is deemed to fall outside the predicted model [15].

The person reliability statistic is equivalent to Cronbach's alpha used in CTT and indicates a measure's internal consistency (the relatedness amongst items) [15]. When person reliability values are low (i.e., < 0.8), the implications are twofold: (1) the instrument may not be sensitive enough to distinguish between high and low performers, and more items are required; or (2) there were not enough persons in the sample with both high and low extreme values (a narrow range of person measures).

Person separation (used if the outlying measures are accidental) and the person separation index (PSI)/strata, calculated as (4 × person separation + 1)/3 (used if the outlying measures represent true performances), are used to classify people. Person separation reports whether the test separates the sample into enough levels: a reliability of 0.5 separates the sample into only one or two levels, 0.8 indicates separation into two or three levels, and 0.9 indicates separation into three or four levels [29]; low person separation suggests the instrument is not sensitive enough to separate high and low performers. A PSI/strata value of 3 is needed to consistently identify three different levels of performance (i.e., the minimum level required to attain a reliability of 0.9). Item reliability verifies the item hierarchy across three levels (high, medium, and low), with item reliability < 0.9 indicating that the sample is too small to confirm the construct validity (item difficulty) of the instrument.
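The fit and separation quantities above reduce to short formulas. Here is a minimal sketch, assuming observed scores X, model-expected scores E, and model variances W are available from a fitted Rasch model (the function names are ours):

```python
import numpy as np

def fit_mean_squares(X, E, W):
    """Item infit/outfit MNSQ from observed scores X, model-expected scores E,
    and model variances W (all persons-by-items arrays). Outfit is the plain
    mean of squared standardised residuals; infit weights each squared
    residual by its model variance, damping the influence of off-target
    persons."""
    sq = (X - E) ** 2
    outfit = (sq / W).mean(axis=0)
    infit = sq.sum(axis=0) / W.sum(axis=0)
    return infit, outfit

def separation_and_strata(reliability):
    """Separation G = sqrt(R / (1 - R)); strata H = (4G + 1) / 3."""
    g = (reliability / (1.0 - reliability)) ** 0.5
    return g, (4.0 * g + 1.0) / 3.0

# For a dichotomous Rasch fit, E is the matrix of modelled probabilities p
# and W = p * (1 - p). A person reliability of 0.35 gives:
print(separation_and_strata(0.35))   # approximately (0.73, 1.31)
```

As a consistency check, a person reliability of 0.35 corresponds to a separation of roughly 0.73, the pairing that appears in the results reported below.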
#### 2.3.3. Does the SF-36 Measure One or More Constructs? Dimensionality of the Scale
Dimensionality is tested by the following: (a) finding potentially problematic items by checking for negative point-biserial correlations; (b) identifying misfitting persons or items using Rasch fit statistics; and (c) conducting Rasch factor analysis using principal components analysis (PCA) of the standardised residuals [31]. PCA of residuals checks that there are no further principal components (dimensions) after the intended, or Rasch, dimension is removed. No further dimensions are indicated if the residuals for pairs of items are uncorrelated and normally distributed. The criteria for determining the presence of further dimensions in the residuals were as follows: (1) >60% of the variance is explained by the Rasch factor; (2) an eigenvalue of <3 on the first contrast; and (3) variance explained by the first contrast is <10% [32].

The person-item dimensionality map provides a schematic representation of how person abilities and item difficulties are distributed along a logit scale. Items of similar difficulty occupy the same place on the logit scale; if a person is represented on the logit scale with no corresponding item, there are gaps in the item difficulty continuum. Another indicator of overall distribution is the person measure score. If people in the sample are more able than the most difficult item on a scale, the person measure score location will be lower than the centralised item mean measure score (i.e., <50); if people in the sample are less able than the items on a scale, the mean person location will be higher (i.e., >50).
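A minimal sketch of step (c), again in generic notation: standardise the residuals from a fitted Rasch model and inspect the eigenvalues of their item-by-item correlation matrix. Winsteps' exact decomposition differs in detail, so this is illustrative only:

```python
import numpy as np

def residual_contrast_eigenvalues(X, E, W):
    """PCA of standardised residuals from a fitted Rasch model.
    Z holds the standardised residuals; the eigenvalues of their
    item-by-item correlation matrix sum to the number of items and are
    returned in descending order, so the leading value is the eigenvalue
    of the 'first contrast'."""
    Z = (X - E) / np.sqrt(W)
    R = np.corrcoef(Z, rowvar=False)
    return np.linalg.eigvalsh(R)[::-1]

# Criterion (2) above: a first-contrast eigenvalue below 3 is taken as
# evidence that no substantive second dimension remains in the residuals.
```

If the Rasch dimension has absorbed all the systematic variance, the residual eigenvalues should be close to 1, which is what the first-contrast criterion operationalises.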
#### 2.3.4. Were All Items in the SF-36 Instrument Used by All Groups in the Same Way? Differential Item Analysis
A differential item functioning (DIF) analysis was performed to investigate whether items in the instrument were used by all groups in the same way. DIF is noticeable when a response to an item is influenced by a characteristic of the respondent other than their ability on the underlying trait. For DIF analysis, the sample was categorised by marital status (single, widowed, divorced, married, de facto, and other) and location (urban vs. regional). In determining DIF when comparing two groups (i.e., urban and regional), the hypothesis "this item has the same difficulty for two groups" is used; the difference in item difficulty between the two groups, indicated by the DIF contrast, should be at least 0.5 logits, with a p-value < 0.05, for DIF to be noticeable. In determining DIF when comparing more than two groups (i.e., marital status), the hypothesis "this item has no overall DIF across all groups" is used, and DIF is then determined using the chi-square statistic and a p-value < 0.05 [29].
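A minimal two-group DIF screen along these lines, reusing the rasch_jml sketch from Section 2.3 (the 0.5-logit and p < 0.05 cut-offs are those stated above; everything else is our illustrative scaffolding, not the Winsteps procedure):

```python
import numpy as np

def item_se(theta, b):
    """Approximate model standard errors of item difficulties:
    1 / sqrt(total Fisher information contributed by all persons)."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return 1.0 / np.sqrt((p * (1.0 - p)).sum(axis=0))

def dif_two_groups(X, in_group_a):
    """Calibrate item difficulties separately in each group (both sets are
    centred at 0 logits, giving a rough common origin) and flag items whose
    DIF contrast is >= 0.5 logits with |z| > 1.96 (two-sided p < .05)."""
    theta_a, b_a = rasch_jml(X[in_group_a])      # from the earlier sketch
    theta_b, b_b = rasch_jml(X[~in_group_a])
    contrast = b_a - b_b                          # DIF contrast in logits
    se = np.sqrt(item_se(theta_a, b_a) ** 2 + item_se(theta_b, b_b) ** 2)
    z = contrast / se
    flagged = (np.abs(contrast) >= 0.5) & (np.abs(z) > 1.96)
    return contrast, z, flagged
```

The joint size-and-significance rule matters: with samples of this study's size, trivially small contrasts can reach statistical significance, so the 0.5-logit floor guards against flagging substantively meaningless differences.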
## 3. Results
SF-36 data were gathered over six waves: Wave 1, N = 12,077; Wave 2, N = 10,411; Wave 3, N = 8,577; Wave 4, N = 7,112; Wave 5, N = 5,534; and Wave 6, N = 4,032. The sample size decreased with each subsequent phase of data collection as participants died or were lost to follow-up.
### 3.1. SF-36 Total Scale Rasch Analysis for Six Waves of Data Collection
Total Rasch scale item statistics for the six waves of data collection are shown in Table 1. When all 36 SF-36 items were calibrated using the RMM for the six waves of data collection, MNSQ infit statistics ranged from 0.13 to 2.43 and outfit statistics from 0.22 to 2.64 (see Table 2). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being -3.01 and the highest +2.31. This resulted in an average item separation index of 77.98 and an average item reliability of 1.00 over the six waves (see Table 3).

Table 1
SF-36 total scale Rasch analysis item statistics for six waves of data collection.
| SF-36 item | W1 measure | W1 S.E. | W1 PTMEA | W2 measure | W2 S.E. | W2 PTMEA | W3 measure | W3 S.E. | W3 PTMEA | W4 measure | W4 S.E. | W4 PTMEA | W5 measure | W5 S.E. | W5 PTMEA | W6 measure | W6 S.E. | W6 PTMEA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1: Q1 | -0.36 | 0.01 | -0.16 | -0.37 | 0.01 | -0.14 | -0.37 | 0.01 | -0.09 | -0.52 | 0.01 | -0.06 | -0.60 | 0.01 | -0.05 | -0.68 | 0.01 | 0.01 |
| 2: Q2 | -0.39 | 0.01 | 0.04 | -0.44 | 0.01 | 0.03 | -0.51 | 0.01 | 0.03 | -0.56 | 0.01 | 0.08 | -0.66 | 0.01 | 0.06 | -0.73 | 0.01 | 0.11 |
| 3: Q3A | 1.94 | 0.02 | 0.24 | 1.96 | 0.02 | 0.26 | 2.05 | 0.02 | 0.29 | 2.08 | 0.02 | 0.30 | 2.09 | 0.03 | 0.29 | 2.31 | 0.04 | 0.27 |
| 4: Q3B | 0.36 | 0.01 | 0.46 | 0.39 | 0.01 | 0.44 | 0.53 | 0.01 | 0.44 | 0.67 | 0.01 | 0.45 | 0.77 | 0.02 | 0.45 | 0.95 | 0.02 | 0.41 |
| 5: Q3C | 0.30 | 0.01 | 0.47 | 0.32 | 0.01 | 0.46 | 0.34 | 0.01 | 0.44 | 0.44 | 0.01 | 0.46 | 0.49 | 0.02 | 0.44 | 0.57 | 0.02 | 0.41 |
| 6: Q3D | 0.82 | 0.01 | 0.45 | 0.84 | 0.01 | 0.44 | 0.93 | 0.01 | 0.46 | 1.00 | 0.02 | 0.47 | 1.11 | 0.02 | 0.46 | 1.25 | 0.02 | 0.44 |
| 7: Q3E | 0.13 | 0.01 | 0.51 | 0.15 | 0.01 | 0.50 | 0.24 | 0.01 | 0.50 | 0.28 | 0.01 | 0.49 | 0.36 | 0.02 | 0.49 | 0.46 | 0.02 | 0.47 |
| 8: Q3F | 0.54 | 0.01 | 0.44 | 0.63 | 0.01 | 0.41 | 0.72 | 0.01 | 0.41 | 0.73 | 0.02 | 0.43 | 0.73 | 0.02 | 0.43 | 0.81 | 0.02 | 0.40 |
| 9: Q3G | 0.44 | 0.01 | 0.49 | 0.53 | 0.01 | 0.46 | 0.73 | 0.01 | 0.46 | 0.87 | 0.02 | 0.47 | 1.05 | 0.02 | 0.44 | 1.24 | 0.02 | 0.44 |
| 10: Q3H | 0.05 | 0.01 | 0.52 | 0.10 | 0.01 | 0.49 | 0.23 | 0.01 | 0.49 | 0.36 | 0.01 | 0.50 | 0.48 | 0.02 | 0.48 | 0.65 | 0.02 | 0.48 |
| 11: Q3I | -0.14 | 0.01 | 0.48 | -0.13 | 0.01 | 0.45 | -0.07 | 0.01 | 0.46 | -0.02 | 0.01 | 0.44 | 0.05 | 0.01 | 0.44 | 0.11 | 0.02 | 0.45 |
| 12: Q3J | -0.28 | 0.01 | 0.36 | -0.31 | 0.01 | 0.35 | -0.29 | 0.01 | 0.32 | -0.24 | 0.01 | 0.32 | -0.23 | 0.01 | 0.32 | -0.23 | 0.02 | 0.32 |
| 13: Q4A | 1.26 | 0.01 | 0.35 | 1.31 | 0.01 | 0.33 | 1.41 | 0.02 | 0.29 | 1.41 | 0.02 | 0.28 | 1.46 | 0.02 | 0.26 | 1.47 | 0.03 | 0.27 |
| 14: Q4B | 1.63 | 0.01 | 0.35 | 1.74 | 0.02 | 0.32 | 1.87 | 0.02 | 0.29 | 1.89 | 0.02 | 0.28 | 1.92 | 0.03 | 0.27 | 1.94 | 0.03 | 0.26 |
| 15: Q4C | 1.47 | 0.01 | 0.36 | 1.53 | 0.02 | 0.36 | 1.60 | 0.02 | 0.31 | 1.69 | 0.02 | 0.30 | 1.74 | 0.02 | 0.28 | 1.78 | 0.03 | 0.24 |
| 16: Q4D | 1.50 | 0.01 | 0.36 | 1.58 | 0.02 | 0.35 | 1.67 | 0.02 | 0.31 | 1.73 | 0.02 | 0.30 | 1.77 | 0.02 | 0.28 | 1.80 | 0.03 | 0.28 |
| 17: Q5A | 1.02 | 0.01 | 0.37 | 1.01 | 0.01 | 0.35 | 1.00 | 0.01 | 0.31 | 0.96 | 0.02 | 0.30 | 0.92 | 0.02 | 0.30 | 0.89 | 0.02 | 0.27 |
| 18: Q5B | 1.22 | 0.01 | 0.36 | 1.22 | 0.01 | 0.35 | 1.23 | 0.02 | 0.31 | 1.21 | 0.02 | 0.30 | 1.18 | 0.02 | 0.29 | 1.15 | 0.02 | 0.27 |
| 19: Q5C | 1.05 | 0.01 | 0.35 | 1.03 | 0.01 | 0.33 | 1.01 | 0.01 | 0.31 | 0.99 | 0.02 | 0.29 | 0.95 | 0.02 | 0.29 | 0.91 | 0.02 | 0.26 |
| 20: Q6 | 1.17 | 0.01 | -0.22 | 1.35 | 0.01 | -0.20 | 0.93 | 0.01 | -0.16 | 0.69 | 0.01 | -0.16 | 0.48 | 0.02 | -0.12 | 0.37 | 0.02 | -0.08 |
| 21: Q7 | -0.28 | 0.01 | -0.06 | -0.26 | 0.01 | -0.04 | -0.40 | 0.01 | 0.01 | -0.50 | 0.01 | 0.01 | -0.57 | 0.01 | 0.03 | -0.67 | 0.01 | 0.07 |
| 22: Q8 | 0.68 | 0.01 | -0.18 | 0.73 | 0.01 | -0.14 | 0.44 | 0.01 | -0.11 | 0.26 | 0.01 | -0.09 | 0.12 | 0.01 | -0.04 | -0.01 | 0.02 | -0.02 |
| 23: Q9A | -0.59 | 0.01 | -0.05 | -0.67 | 0.01 | -0.06 | -0.79 | 0.01 | 0.00 | -0.82 | 0.01 | -0.02 | -0.93 | 0.01 | 0.01 | -1.06 | 0.01 | 0.05 |
| 24: Q9B | -2.04 | 0.01 | 0.39 | -2.30 | 0.01 | 0.36 | -2.39 | 0.01 | 0.33 | -2.40 | 0.01 | 0.31 | -2.38 | 0.02 | 0.31 | -2.42 | 0.02 | 0.27 |
| 25: Q9C | -2.64 | 0.01 | 0.40 | -2.92 | 0.02 | 0.35 | -2.98 | 0.02 | 0.34 | -3.01 | 0.02 | 0.30 | -2.86 | 0.02 | 0.31 | -2.89 | 0.02 | 0.27 |
| 26: Q9D | -0.30 | 0.01 | 0.06 | -0.20 | 0.01 | 0.01 | -0.28 | 0.01 | 0.09 | -0.29 | 0.01 | 0.07 | -0.30 | 0.01 | 0.08 | -0.37 | 0.01 | 0.12 |
| 27: Q9E | -0.77 | 0.01 | -0.05 | -0.89 | 0.01 | -0.09 | -1.00 | 0.01 | -0.04 | -1.06 | 0.01 | -0.05 | -1.17 | 0.01 | -0.02 | -1.35 | 0.01 | 0.00 |
| 28: Q9F | -2.01 | 0.01 | 0.40 | -2.15 | 0.01 | 0.34 | -2.15 | 0.01 | 0.35 | -2.16 | 0.01 | 0.34 | -2.13 | 0.02 | 0.33 | -2.13 | 0.02 | 0.31 |
| 29: Q9G | -1.63 | 0.01 | 0.44 | -1.70 | 0.01 | 0.41 | -1.67 | 0.01 | 0.40 | -1.60 | 0.01 | 0.40 | -1.56 | 0.01 | 0.40 | -1.57 | 0.01 | 0.37 |
| 30: Q9H | 0.24 | 0.01 | 0.14 | 0.33 | 0.01 | 0.09 | 0.26 | 0.01 | 0.13 | 0.23 | 0.01 | 0.14 | 0.14 | 0.01 | 0.12 | 0.08 | 0.02 | 0.15 |
| 31: Q9I | -1.23 | 0.01 | 0.39 | -1.22 | 0.01 | 0.37 | -1.15 | 0.01 | 0.34 | -1.07 | 0.01 | 0.36 | -1.03 | 0.01 | 0.34 | -1.04 | 0.01 | 0.31 |
| 32: Q10 | -1.34 | 0.01 | 0.35 | -1.39 | 0.01 | 0.31 | -1.30 | 0.01 | 0.28 | -1.23 | 0.01 | 0.26 | -1.20 | 0.01 | 0.24 | -1.18 | 0.01 | 0.24 |
| 33: Q11A | -1.39 | 0.01 | 0.33 | -1.48 | 0.01 | 0.31 | -1.49 | 0.01 | 0.28 | -1.50 | 0.01 | 0.25 | -1.49 | 0.01 | 0.26 | -1.52 | 0.01 | 0.20 |
| 34: Q11B | 0.28 | 0.01 | 0.03 | 0.43 | 0.01 | -0.01 | 0.32 | 0.01 | 0.08 | 0.25 | 0.01 | 0.07 | 0.10 | 0.01 | 0.10 | -0.01 | 0.02 | 0.14 |
| 35: Q11C | -0.68 | 0.01 | 0.29 | -0.75 | 0.01 | 0.27 | -0.65 | 0.01 | 0.29 | -0.59 | 0.01 | 0.25 | -0.50 | 0.01 | 0.25 | -0.48 | 0.01 | 0.24 |
| 36: Q11D | -0.02 | 0.01 | -0.06 | 0.01 | 0.01 | -0.09 | -0.03 | 0.01 | 0.00 | -0.17 | 0.01 | -0.01 | -0.31 | 0.01 | 0.04 | -0.40 | 0.01 | 0.09 |

Note. W1-W6 = waves 1-6; measure = logit measure; S.E. = model standard error; PTMEA = point measure correlation.

Table 2
SF-36 total scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Waves 1-3:

| SF-36 item | W1 infit MNSQ | W1 infit ZSTD | W1 outfit MNSQ | W1 outfit ZSTD | W2 infit MNSQ | W2 infit ZSTD | W2 outfit MNSQ | W2 outfit ZSTD | W3 infit MNSQ | W3 infit ZSTD | W3 outfit MNSQ | W3 outfit ZSTD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1: Q1 | 0.92 | -6.7 | 0.96 | -2.9 | 0.86 | -9.9 | 0.90 | -7.3 | 0.83 | -9.9 | 0.86 | -9.8 |
| 2: Q2 | 0.51 | -9.9 | 0.55 | -9.9 | 0.53 | -9.9 | 0.57 | -9.9 | 0.54 | -9.9 | 0.56 | -9.9 |
| 3: Q3A | 1.03 | 2.1 | 1.02 | 1.4 | 1.12 | 7.0 | 1.09 | 5.5 | 1.06 | 3.0 | 1.02 | 1.2 |
| 4: Q3B | 0.59 | -9.9 | 0.61 | -9.9 | 0.64 | -9.9 | 0.65 | -9.9 | 0.72 | -9.9 | 0.73 | -9.9 |
| 5: Q3C | 0.54 | -9.9 | 0.56 | -9.9 | 0.56 | -9.9 | 0.57 | -9.9 | 0.59 | -9.9 | 0.60 | -9.9 |
| 6: Q3D | 0.82 | -9.9 | 0.82 | -9.9 | 0.84 | -9.9 | 0.84 | -9.9 | 0.86 | -8.6 | 0.86 | -8.7 |
| 7: Q3E | 0.46 | -9.9 | 0.48 | -9.9 | 0.48 | -9.9 | 0.50 | -9.9 | 0.54 | -9.9 | 0.56 | -9.9 |
| 8: Q3F | 0.66 | -9.9 | 0.67 | -9.9 | 0.72 | -9.9 | 0.72 | -9.9 | 0.75 | -9.9 | 0.75 | -9.9 |
| 9: Q3G | 0.76 | -9.9 | 0.78 | -9.9 | 0.83 | -9.9 | 0.85 | -9.9 | 0.94 | -3.7 | 0.94 | -3.4 |
| 10: Q3H | 0.46 | -9.9 | 0.50 | -9.9 | 0.53 | -9.9 | 0.56 | -9.9 | 0.65 | -9.9 | 0.68 | -9.9 |
| 11: Q3I | 0.27 | -9.9 | 0.30 | -9.9 | 0.29 | -9.9 | 0.31 | -9.9 | 0.37 | -9.9 | 0.39 | -9.9 |
| 12: Q3J | 0.17 | -9.9 | 0.19 | -9.9 | 0.13 | -9.9 | 0.15 | -9.9 | 0.17 | -9.9 | 0.18 | -9.9 |
| 13: Q4A | 0.40 | -9.9 | 0.41 | -9.9 | 0.41 | -9.9 | 0.42 | -9.9 | 0.49 | -9.9 | 0.50 | -9.9 |
| 14: Q4B | 0.57 | -9.9 | 0.58 | -9.9 | 0.61 | -9.9 | 0.61 | -9.9 | 0.66 | -9.9 | 0.66 | -9.9 |
| 15: Q4C | 0.50 | -9.9 | 0.51 | -9.9 | 0.51 | -9.9 | 0.52 | -9.9 | 0.57 | -9.9 | 0.58 | -9.9 |
| 16: Q4D | 0.51 | -9.9 | 0.53 | -9.9 | 0.54 | -9.9 | 0.55 | -9.9 | 0.59 | -9.9 | 0.60 | -9.9 |
| 17: Q5A | 0.25 | -9.9 | 0.27 | -9.9 | 0.22 | -9.9 | 0.23 | -9.9 | 0.25 | -9.9 | 0.26 | -9.9 |
| 18: Q5B | 0.37 | -9.9 | 0.39 | -9.9 | 0.36 | -9.9 | 0.37 | -9.9 | 0.39 | -9.9 | 0.41 | -9.9 |
| 19: Q5C | 0.27 | -9.9 | 0.29 | -9.9 | 0.24 | -9.9 | 0.26 | -9.9 | 0.26 | -9.9 | 0.28 | -9.9 |
| 20: Q6 | 2.43 | 9.9 | 2.59 | 9.9 | 2.48 | 9.9 | 2.64 | 9.9 | 2.36 | 9.9 | 2.48 | 9.9 |
| 21: Q7 | 1.84 | 9.9 | 1.90 | 9.9 | 1.96 | 9.9 | 2.01 | 9.9 | 1.70 | 9.9 | 1.73 | 9.9 |
| 22: Q8 | 2.09 | 9.9 | 2.17 | 9.9 | 2.14 | 9.9 | 2.21 | 9.9 | 1.93 | 9.9 | 1.99 | 9.9 |
| 23: Q9A | 1.48 | 9.9 | 1.52 | 9.9 | 1.48 | 9.9 | 1.52 | 9.9 | 1.44 | 9.9 | 1.46 | 9.9 |
| 24: Q9B | 1.47 | 9.9 | 1.40 | 9.9 | 1.64 | 9.9 | 1.55 | 9.9 | 1.64 | 9.9 | 1.55 | 9.9 |
| 25: Q9C | 1.67 | 9.9 | 1.53 | 9.9 | 1.82 | 9.9 | 1.66 | 9.9 | 1.74 | 9.9 | 1.57 | 9.9 |
| 26: Q9D | 1.69 | 9.9 | 1.73 | 9.9 | 1.60 | 9.9 | 1.64 | 9.9 | 1.50 | 9.9 | 1.52 | 9.9 |
| 27: Q9E | 1.55 | 9.9 | 1.58 | 9.9 | 1.55 | 9.9 | 1.58 | 9.9 | 1.49 | 9.9 | 1.50 | 9.9 |
| 28: Q9F | 1.07 | 5.1 | 1.03 | 2.4 | 1.19 | 9.9 | 1.15 | 9.0 | 1.13 | 6.9 | 1.09 | 4.9 |
| 29: Q9G | 1.02 | 2.0 | 1.00 | 0.2 | 1.04 | 3.0 | 1.02 | 1.4 | 1.02 | 1.1 | 1.00 | -0.1 |
| 30: Q9H | 1.63 | 9.9 | 1.62 | 9.9 | 1.49 | 9.9 | 1.50 | 9.9 | 1.35 | 9.9 | 1.36 | 9.9 |
| 31: Q9I | 0.84 | -9.9 | 0.84 | -9.9 | 0.88 | -9.9 | 0.88 | -9.9 | 0.84 | -9.9 | 0.84 | -9.9 |
| 32: Q10 | 0.79 | -9.9 | 0.79 | -9.9 | 0.84 | -9.9 | 0.83 | -9.9 | 0.92 | -6.6 | 0.92 | -6.6 |
| 33: Q11A | 0.77 | -9.9 | 0.76 | -9.9 | 0.64 | -9.9 | 0.63 | -9.9 | 0.66 | -9.9 | 0.65 | -9.9 |
| 34: Q11B | 1.88 | 9.9 | 1.90 | 9.9 | 1.77 | 9.9 | 1.80 | 9.9 | 1.62 | 9.9 | 1.62 | 9.9 |
| 35: Q11C | 1.04 | 3.3 | 1.05 | 4.1 | 0.95 | -4.5 | 0.95 | -3.8 | 0.99 | -0.9 | 0.99 | -0.5 |
| 36: Q11D | 1.78 | 9.9 | 1.83 | 9.9 | 1.81 | 9.9 | 1.86 | 9.9 | 1.68 | 9.9 | 1.70 | 9.9 |

Waves 4-6:

| SF-36 item | W4 infit MNSQ | W4 infit ZSTD | W4 outfit MNSQ | W4 outfit ZSTD | W5 infit MNSQ | W5 infit ZSTD | W5 outfit MNSQ | W5 outfit ZSTD | W6 infit MNSQ | W6 infit ZSTD | W6 outfit MNSQ | W6 outfit ZSTD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1: Q1 | 0.72 | -9.9 | 0.73 | -9.9 | 0.69 | -9.9 | 0.70 | -9.9 | 0.65 | -9.9 | 0.65 | -9.9 |
| 2: Q2 | 0.50 | -9.9 | 0.52 | -9.9 | 0.49 | -9.9 | 0.50 | -9.9 | 0.46 | -9.9 | 0.47 | -9.9 |
| 3: Q3A | 1.11 | 4.4 | 1.06 | 2.3 | 1.18 | 6.0 | 1.11 | 3.9 | 1.26 | 6.4 | 1.17 | 4.3 |
| 4: Q3B | 0.78 | -9.9 | 0.78 | -9.9 | 0.81 | -9.1 | 0.81 | -9.4 | 0.91 | -3.6 | 0.90 | -4.1 |
| 5: Q3C | 0.62 | -9.9 | 0.64 | -9.9 | 0.65 | -9.9 | 0.66 | -9.9 | 0.72 | -9.9 | 0.72 | -9.9 |
| 6: Q3D | 0.87 | -7.1 | 0.86 | -7.7 | 0.92 | -3.6 | 0.90 | -4.6 | 0.97 | -0.9 | 0.93 | -2.4 |
| 7: Q3E | 0.56 | -9.9 | 0.57 | -9.9 | 0.61 | -9.9 | 0.63 | -9.9 | 0.70 | -9.9 | 0.70 | -9.9 |
| 8: Q3F | 0.73 | -9.9 | 0.73 | -9.9 | 0.71 | -9.9 | 0.72 | -9.9 | 0.76 | -9.9 | 0.76 | -9.9 |
| 9: Q3G | 0.98 | -1.0 | 0.97 | -1.3 | 1.03 | 1.5 | 1.01 | 0.6 | 1.09 | 3.3 | 1.04 | 1.6 |
| 10: Q3H | 0.74 | -9.9 | 0.76 | -9.9 | 0.83 | -8.0 | 0.85 | -7.4 | 0.94 | -2.4 | 0.93 | -2.6 |
| 11: Q3I | 0.41 | -9.9 | 0.43 | -9.9 | 0.48 | -9.9 | 0.51 | -9.9 | 0.55 | -9.9 | 0.57 | -9.9 |
| 12: Q3J | 0.24 | -9.9 | 0.26 | -9.9 | 0.29 | -9.9 | 0.31 | -9.9 | 0.33 | -9.9 | 0.34 | -9.9 |
| 13: Q4A | 0.52 | -9.9 | 0.54 | -9.9 | 0.58 | -9.9 | 0.60 | -9.9 | 0.60 | -9.9 | 0.61 | -9.9 |
| 14: Q4B | 0.69 | -9.9 | 0.69 | -9.9 | 0.73 | -9.9 | 0.72 | -9.9 | 0.74 | -8.7 | 0.74 | -8.9 |
| 15: Q4C | 0.62 | -9.9 | 0.63 | -9.9 | 0.67 | -9.9 | 0.68 | -9.9 | 0.71 | -9.9 | 0.71 | -9.9 |
| 16: Q4D | 0.64 | -9.9 | 0.64 | -9.9 | 0.68 | -9.9 | 0.68 | -9.9 | 0.71 | -9.9 | 0.71 | -9.9 |
| 17: Q5A | 0.27 | -9.9 | 0.29 | -9.9 | 0.30 | -9.9 | 0.32 | -9.9 | 0.32 | -9.9 | 0.34 | -9.9 |
| 18: Q5B | 0.42 | -9.9 | 0.43 | -9.9 | 0.45 | -9.9 | 0.47 | -9.9 | 0.47 | -9.9 | 0.49 | -9.9 |
| 19: Q5C | 0.29 | -9.9 | 0.31 | -9.9 | 0.32 | -9.9 | 0.34 | -9.9 | 0.33 | -9.9 | 0.35 | -9.9 |
| 20: Q6 | 2.17 | 9.9 | 2.27 | 9.9 | 2.06 | 9.9 | 2.14 | 9.9 | 2.00 | 9.9 | 2.06 | 9.9 |
| 21: Q7 | 1.61 | 9.9 | 1.63 | 9.9 | 1.52 | 9.9 | 1.53 | 9.9 | 1.40 | 9.9 | 1.41 | 9.9 |
| 22: Q8 | 1.75 | 9.9 | 1.80 | 9.9 | 1.65 | 9.9 | 1.68 | 9.9 | 1.56 | 9.9 | 1.59 | 9.9 |
| 23: Q9A | 1.40 | 9.9 | 1.41 | 9.9 | 1.36 | 9.9 | 1.36 | 9.9 | 1.34 | 9.9 | 1.35 | 9.9 |
| 24: Q9B | 1.62 | 9.9 | 1.53 | 9.9 | 1.61 | 9.9 | 1.53 | 9.9 | 1.58 | 9.9 | 1.51 | 9.9 |
| 25: Q9C | 1.73 | 9.9 | 1.60 | 9.9 | 1.75 | 9.9 | 1.62 | 9.9 | 1.65 | 9.9 | 1.57 | 9.9 |
| 26: Q9D | 1.51 | 9.9 | 1.53 | 9.9 | 1.47 | 9.9 | 1.49 | 9.9 | 1.41 | 9.9 | 1.42 | 9.9 |
| 27: Q9E | 1.51 | 9.9 | 1.52 | 9.9 | 1.44 | 9.9 | 1.45 | 9.9 | 1.46 | 9.9 | 1.47 | 9.9 |
| 28: Q9F | 1.15 | 7.5 | 1.11 | 5.5 | 1.16 | 7.0 | 1.12 | 5.3 | 1.12 | 4.6 | 1.09 | 3.6 |
| 29: Q9G | 1.02 | 1.6 | 1.01 | 0.5 | 1.03 | 1.6 | 1.01 | 0.8 | 1.06 | 2.7 | 1.04 | 2.1 |
| 30: Q9H | 1.36 | 9.9 | 1.36 | 9.9 | 1.35 | 9.9 | 1.37 | 9.9 | 1.29 | 9.9 | 1.29 | 9.9 |
| 31: Q9I | 0.89 | -8.2 | 0.89 | -7.9 | 0.92 | -5.2 | 0.92 | -4.9 | 0.91 | -5.1 | 0.91 | -4.8 |
| 32: Q10 | 0.93 | -5.1 | 0.94 | -4.7 | 1.00 | -0.2 | 1.00 | 0.0 | 1.06 | 3.0 | 1.06 | 3.1 |
| 33: Q11A | 0.67 | -9.9 | 0.66 | -9.9 | 0.67 | -9.9 | 0.66 | -9.9 | 0.70 | -9.9 | 0.69 | -9.9 |
| 34: Q11B | 1.58 | 9.9 | 1.59 | 9.9 | 1.44 | 9.9 | 1.45 | 9.9 | 1.38 | 9.9 | 1.38 | 9.9 |
| 35: Q11C | 1.01 | 0.5 | 1.01 | 0.9 | 0.99 | -0.6 | 1.00 | -0.2 | 0.98 | -1.0 | 0.98 | -0.9 |
| 36: Q11D | 1.61 | 9.9 | 1.64 | 9.9 | 1.51 | 9.9 | 1.53 | 9.9 | 1.43 | 9.9 | 1.43 | 9.9 |

Notes. MNSQ = mean square residual fit statistic; ZSTD = standardized mean square residual fit statistic (Z-STD ≤ -2.0 or ≥ 2.0); values in italic for infit or outfit MNSQ > 1.34; values underlined for infit or outfit MNSQ < 0.64.

Table 3
SF-36 total scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| Statistic | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Persons | | | | | | |
| Mean | -.69 | -.68 | -.72 | -.75 | -.80 | -.85 |
| S.D. | .24 | .22 | .24 | .23 | .23 | .24 |
| Max | .82 | .29 | .64 | .67 | .03 | .15 |
| Min | -4.33 | -2.70 | -2.60 | -2.59 | -2.54 | -2.76 |
| Infit MNSQ | 1.03 | 1.02 | 1.03 | 1.04 | 1.04 | 1.03 |
| Infit ZSTD | -.30 | -.40 | -.30 | -.20 | -.20 | -.10 |
| Outfit MNSQ | 1.01 | 1.01 | 1.00 | 1.00 | .99 | .99 |
| Outfit ZSTD | -.40 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Person separation | .81 (c) | .60 (c) | .72 (c) | .71 (c) | .75 (c) | .78 (c) |
| Person reliability | .40 (a) | .26 (a) | .34 (a) | .33 (a) | .36 (a) | .38 (a) |
| Items | | | | | | |
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.11 | 1.19 | 1.20 | 1.21 | 1.22 | 1.26 |
| Max | 1.94 | 1.96 | 2.05 | 2.08 | 2.09 | 2.31 |
| Min | -2.64 | -2.92 | -2.98 | -3.01 | -2.86 | -2.89 |
| Infit MNSQ | .98 | .99 | .98 | .98 | .98 | .99 |
| Infit ZSTD | -2.30 | -2.30 | -2.20 | -1.90 | -1.40 | -.90 |
| Outfit MNSQ | .99 | 1.00 | .98 | .98 | .98 | .98 |
| Outfit ZSTD | -2.30 | -2.30 | -2.30 | -2.00 | -1.50 | -1.10 |
| Item separation | 93.40 | 89.72 | 82.81 | 76.45 | 67.43 | 58.09 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. (a) Person or item reliability < 0.8; (b) item separation < 3.0; (c) person separation < 2.0; values in italic for infit or outfit MNSQ > 1.34; values underlined for infit or outfit MNSQ < 0.64.

The SF-36 total scale person-item map in Supplemental Figure 1 shows evidence of consistent hierarchical ordering of the SF-36 total scale items. Items which were less difficult are located at the bottom of the person-item map, while more difficult items are located at the top. The figure also shows that, while each of the waves had a reasonable distribution of items in relation to item difficulty, several of the SF-36 total scale items have the same level of difficulty.

The Rasch analysis showed that the category calibrations for the six-category rating scale increased monotonically: -3.15, -1.36, -.25, .48, 1.31, and 2.82 for wave one, and -2.96, -1.30, -.31, .42, 1.29, and 2.78 for wave six.

The average person measure was 0.75 logits (SD = 0.23) over the six waves of data collection (see Table 3). The mean person separation was 0.73, with a mean reliability of 0.35 (see Table 3). When examining the overall RMM output of the SF-36 total scale, the average person measure (0.75 logits) was higher than the average item measure (0.00 logits). The range of logit values for items was from +1 to -3 logits. The person reliability was 0.35 and the item reliability was 1.00, placing the item reliability for the SF-36 total scale in the acceptable range and the person reliability in the unacceptable range. The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured. However, the separation index for persons was less than 2.0, indicating inadequate separation of participants on the construct.

Item fit to the unidimensionality requirement of the RMM was also examined. Eleven of the 36 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the +2 to -2 range. Specifically, items CH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, MH04:Q9F, VT03:Q9G, VT04:Q9I, SF02:Q10, CH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 2); in other words, only 30.6% (i.e., 11 of 36) of the SF-36 total scale items met the RMM requirements. The following items had an infit MNSQ statistic less than 0.70: HT:Q2, PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, RE01:Q5A, RE02:Q5B, and RE03:Q5C.
The following items had an infit MNSQ statistic greater than 1.30: FO01:Q6, BP01:Q7, BP02:Q8, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH05:Q9H, GH02:Q11B, and GH05:Q11D.

The Winsteps RMM program determines the dimensionality of a scale by using a Rasch-residual principal components analysis. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 4). This indicated that the unidimensionality requirement of the SF-36 total scale was met. The raw variance explained by the SF-36 total scale over the six waves of data collection ranged from 58.5% to 62.1%, and the unexplained variance in the first contrast ranged from 11.9% to 14.5%. The residual analysis indicated that no second dimension or factor existed. Linacre [32] suggests that a first single factor accounting for 60% or more of the variance is considered a reasonable unidimensional construct: "A second factor or residual factor should not indicate a substantial amount of variance if unidimensionality is tenable" [33, p. 192].

Table 4
Table 4. SF-36 total scale Rasch analysis of standardised residual variance in eigenvalue units for six waves of data collection.
| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 86.71 | 100.00% | 100.00% | 92.22 | 100.00% | 100.00% | 92.42 | 100.00% | 100.00% |
| Raw variance explained by measures | 50.71 | 58.50% | 58.70% | 56.22 | 61.00% | 61.10% | 56.42 | 61.00% | 61.30% |
| Raw variance explained by persons | 3.47 | 4.00% | 4.00% | 1.93 | 2.10% | 2.10% | 2.04 | 2.20% | 2.20% |
| Raw variance explained by items | 47.24 | 54.50% | 54.70% | 54.30 | 58.90% | 59.00% | 54.38 | 58.80% | 59.10% |
| Raw unexplained variance (total) | 36.00 | 41.50% | 41.30% | 36.00 | 39.00% | 38.90% | 36.00 | 39.00% | 38.70% |
| Unexplained variance in 1st contrast | 12.60 | 14.50% | 35.00% | 12.57 | 13.60% | 34.90% | 12.26 | 13.30% | 34.10% |
| Unexplained variance in 2nd contrast | 3.02 | 3.50% | 8.40% | 3.05 | 3.30% | 8.50% | 3.03 | 3.30% | 8.40% |
| Unexplained variance in 3rd contrast | 1.89 | 2.20% | 5.20% | 1.78 | 1.90% | 4.90% | 1.84 | 2.00% | 5.10% |
| Unexplained variance in 4th contrast | 1.59 | 1.80% | 4.40% | 1.54 | 1.70% | 4.30% | 1.50 | 1.60% | 4.20% |
| Unexplained variance in 5th contrast | 1.24 | 1.40% | 3.40% | 1.27 | 1.40% | 3.50% | 1.26 | 1.40% | 3.50% |

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 92.10 | 100.00% | 100.00% | 91.96 | 100.00% | 100.00% | 94.92 | 100.00% | 100.00% |
| Raw variance explained by measures | 56.10 | 60.90% | 61.50% | 55.96 | 60.90% | 61.70% | 58.92 | 62.10% | 63.00% |
| Raw variance explained by persons | 3.59 | 3.90% | 3.90% | 4.05 | 4.40% | 4.50% | 4.57 | 4.80% | 4.90% |
| Raw variance explained by items | 52.51 | 57.00% | 57.60% | 51.91 | 56.40% | 57.20% | 54.35 | 57.30% | 58.10% |
| Raw unexplained variance (total) | 36.00 | 39.10% | 38.50% | 36.00 | 39.10% | 38.30% | 36.00 | 37.90% | 37.00% |
| Unexplained variance in 1st contrast | 12.41 | 13.50% | 34.50% | 12.08 | 13.10% | 33.60% | 11.33 | 11.90% | 31.50% |
| Unexplained variance in 2nd contrast | 3.06 | 3.30% | 8.50% | 3.20 | 3.50% | 8.90% | 3.22 | 3.40% | 8.90% |
| Unexplained variance in 3rd contrast | 1.88 | 2.00% | 5.20% | 1.95 | 2.10% | 5.40% | 2.17 | 2.30% | 6.00% |
| Unexplained variance in 4th contrast | 1.50 | 1.60% | 4.20% | 1.53 | 1.70% | 4.30% | 1.55 | 1.60% | 4.30% |
| Unexplained variance in 5th contrast | 1.27 | 1.40% | 3.50% | 1.25 | 1.40% | 3.50% | 1.30 | 1.40% | 3.60% |

Notes. a > 60% unexplained variance in the Rasch factor; b Eigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

The point-measure correlation (PTMEA) ranges from +1 to -1, "with negative items suggesting improper scoring or not functioning as expected" [33, p. 192]. An inspection of the PTMEAs for the SF-36 total scale indicated that items GH01:Q1, SF01:Q6, BP01:Q7, and VT02:Q9E had consistent negative PTMEAs over the six waves of data collection. The remaining SF-36 total scale items had positive PTMEAs with acceptable values, supporting item-level polarity.

The functioning of the six rating scale categories was examined for the SF-36 total scale. Rating scale frequencies and percentages indicated that all categories were used by the participants. The category use statistics are presented in Table 5. The category logit measures ranged from -3.19 to 2.86 (see Table 5). None of the infit MNSQ scores fell outside the 0.70-1.30 range or had a z-score outside the +2 to -2 range. The results indicated that the six-level rating scale used in the SF-36 total scale fits the predictive RMM appropriately (see Supplemental Figure 2), and the full range of ratings was used by the participants who completed the SF-36 total scale.
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that each response category was the most probable category for some part of the continuum.
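These category probability curves follow directly from the Andrich rating scale model. Below is a minimal sketch of that computation, using the wave-one Andrich thresholds reported in Table 5 as illustrative inputs (the item difficulty is set to zero for simplicity; this is not the Winsteps output itself):

```python
import numpy as np

def rsm_category_probs(theta, delta, taus):
    """Andrich rating scale model: P(category k) for one item.

    theta : person measure (logits)
    delta : item difficulty (logits)
    taus  : Andrich thresholds for categories 2..m+1
    """
    taus = np.concatenate(([0.0], np.asarray(taus)))  # tau_0 = 0 by convention
    k = np.arange(len(taus))
    # Log-numerator for category k: k*(theta - delta) - sum of thresholds up to k.
    logits = k * (theta - delta) - np.cumsum(taus)
    p = np.exp(logits - logits.max())                 # stabilised softmax
    return p / p.sum()

# Wave-one Andrich thresholds from Table 5 (categories 2..6).
taus = [-1.88, -0.54, 0.61, 0.28, 1.53]
for theta in (-2.0, 0.0, 0.45, 2.0):
    print(theta, np.round(rsm_category_probs(theta, 0.0, taus), 3))
```

Note that this threshold set contains a reversal (.61 followed by .28, flagged with "b" in Table 5); in this sketch the reversed pair produces a middle category that is never the single most probable response anywhere on the continuum, which is exactly the kind of behaviour the Andrich threshold flags are designed to surface.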
Table 5. SF-36 total scale Rasch analysis summary of category structure for six waves of data collection.
Wave 1 Wave 2 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 84119 19 (-3.15) 1.30 1.17 NONE 72530 19 (-3.19) 1.28 1.17 NONE 2 133566 30 -1.36 .99 1.01 -1.88 114964 31 -1.40 1.02 1.06 -1.93 3 96735 22 -.25 .66 .61 -.54 82817 22 -.26 .67 .63 -.57 4 40204 9 .48 .98 1.04 .61 34325 9 .50 .97 1.03 .61 5 44040 10 1.31 1.06 1.17 .28b 37154 10 1.34 1.07 1.20 .34b 6 25211 6 (2.82) .93 1.01 1.53 23593 6 (2.86) .92 .99 1.56 Wave 3 Wave 4 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 61757 20 (-3.14) 1.25 1.15 NONE 55809 22 (-3.06) 1.24 1.15 NONE 2 88701 29 -1.39 1.04 1.08 -1.87 72286 28 -1.35 1.05 1.10 -1.79 3 63285 20 -.28 .73 .67 -.57 50125 19 -.30 .78 .71 -.54 4 29486 9 .48 .94 .96 .48 25807 10 .45 .90 .88 .38 5 28991 9 1.34 1.09 1.16 .41b 24560 10 1.32 1.10 1.13 .41 6 18470 6 (2.86) .87 .93 1.55 15297 6 (2.85) .86 .91 1.54 Wave 5 Wave 6 Cat. Label N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 47041 24 (-3.00) 1.26 1.16 NONE 36377 25 (-2.96) 1.23 1.14 NONE 2 54905 27 -1.31 1.06 1.09 -1.72 37987 26 -1.30 1.06 1.07 -1.67 3 36216 18 -.30 .83 .76 -.49 24952 17 -.31 .88 .83 -.49 4 21172 11 .43 .87 .81 .27 15554 11 .42 .87 .79 .23 5 18847 9 1.29 1.13 1.15 .46 13560 9 1.29 1.15 1.15 .50 6 11583 6 (2.80) .83 .88 1.48 8495 6 (2.78) .83 .88 1.44 Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.To investigate the possibility of item bias, differential item functioning (DIF) analysis was conducted to determine whether different groups of participants based on marital status and area of residence (urban versus regional; see Table6) responded differently on the SF-36 total scale items, despite having the same level of the latent trait being measured [34]. Three of the SF-36 items exhibited a consistent pattern of DIF over the six waves of data collection for both marital status and area of residence, those being MH01:Q9B, MH02:Q9C, and MH05:Q9H. It should be noted that these three items also exhibited MNSQ infit scores outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range.Table 6
Differential Item Functioning (DIF) for SF-36 total scale Rasch analysis for six waves of data collection based on marital status and area of residence.
WAVE 1 WAVE 2 Wave 3 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 8.68 . 01 3 ∗ ∗ 0.00 .924 1.78 .407 0.19 .452 0.25 .882 0.00 .861 2 3.17 .202 0.00 .295 0.55 .760 0.00 .791 0.14 .936 0.00 .134 3 4.95 .083 0.00 .811 0.21 .903 0.20 .326 5.47 .064 0.00 .907 4 0.87 .647 0.00 .492 0.44 .804 0.00 . 01 0 ∗ 0.34 .847 0.00 .438 5 0.89 .640 0.05 . 00 1 ∗ ∗ ∗ 0.09 .959 -0.14 .983 0.40 .818 0.02 . 00 2 ∗ ∗ 6 0.47 .792 0.00 .142 4.67 .095 0.00 .288 0.57 .750 0.00 .619 7 0.06 .971 0.00 .687 2.94 .227 -0.01 .800 1.39 .496 -0.02 .054 8 0.32 .855 0.00 .362 0.27 .875 0.02 . 03 3 ∗ 0.06 .974 0.00 .243 9 13.66 . 00 1 ∗ ∗ ∗ -0.06 . 00 3 ∗ ∗ 2.06 .354 -0.14 .603 0.70 .704 -0.06 .072 10 11.27 .004∗∗ 0.00 . 03 0 ∗ 7.04 . 02 9 ∗ 0.06 . 00 1 ∗ ∗ ∗ 0.00 1.000 -0.06 . 00 6 ∗ ∗ 11 3.41 .179 0.00 .071 6.25 . 04 3 ∗ 0.04 .503 0.00 1.000 0.00 .473 12 0.16 .926 0.00 .906 0.10 .952 0.00 .981 0.69 .706 0.00 .722 13 2.93 .227 0.00 .845 0.09 .959 -0.19 .474 0.04 .982 0.00 .159 14 0.96 .618 0.00 .327 3.10 .210 0.00 .822 6.13 . 04 6 ∗ -0.05 .126 15 2.37 .303 0.00 .366 0.06 .970 0.07 .660 0.38 .828 0.00 .815 16 0.00 1.000 0.00 .591 0.05 .976 0.00 .358 4.14 .124 0.00 .317 17 1.80 .404 0.00 .581 0.10 .952 -0.02 .956 0.03 .987 0.00 .475 18 3.78 .149 0.00 .704 0.52 .770 -0.02 .238 0.62 .731 0.00 .571 19 1.54 .460 0.00 .892 0.23 .893 -0.10 .836 0.06 .971 0.00 .882 20 55.71 .001∗∗∗ -0.07 .036 7.62 . 02 2 ∗ 0.00 .526 0.06 .970 0.00 .088 21 3.24 .195 -0.06 .011∗ 1.17 .554 0.15 .087 0.00 1.000 0.00 .784 22 33.92 . 00 1 ∗ ∗ ∗ 0.00 .239 4.09 .127 0.00 .661 1.52 .465 0.03 .649 23 7.12 . 02 8 ∗ 0.00 .100 2.90 .231 -0.06 .436 0.18 .916 0.00 .498 24 23.59 . 00 1 ∗ ∗ ∗ 0.11 . 00 1 ∗ ∗ ∗ 0.12 .942 0.00 .993 13.40 . 00 1 ∗ ∗ ∗ 0.00 .106 25 64.84 . 00 1 ∗ ∗ ∗ 0.15 . 00 1 ∗ ∗ ∗ 30.23 . 00 1 ∗ ∗ ∗ -0.38 .099 0.01 .997 0.07 . 02 0 ∗ 26 10.47 . 00 5 ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 2.13 .341 0.00 .512 28.34 . 00 1 ∗ ∗ ∗ -0.07 . 00 1 ∗ ∗ ∗ 27 13.71 . 00 1 ∗ ∗ ∗ 0.00 .778 9.09 . 01 0 ∗ ∗ -0.07 .924 0.85 .651 0.00 .914 28 18.73 . 00 1 ∗ ∗ ∗ 0.10 . 00 1 ∗ ∗ ∗ 18.70 . 00 1 ∗ ∗ ∗ 0.00 .590 0.79 .671 0.05 . 00 3 ∗ ∗ 29 9.31 . 00 9 ∗ ∗ 0.00 . 04 7 ∗ 10.32 . 00 6 ∗ ∗ -0.43 .214 13.34 . 00 1 ∗ ∗ ∗ 0.00 .720 30 14.58 . 00 1 ∗ ∗ -0.06 . 00 8 ∗ ∗ 14.57 . 00 1 ∗ ∗ ∗ 0.00 .403 3.83 .145 -0.07 . 00 1 ∗ ∗ ∗ 31 7.38 . 02 4 ∗ 0.02 . 01 0 ∗ ∗ 1.40 .493 -0.23 .687 2.57 .273 0.00 .108 32 18.09 . 00 1 ∗ ∗ ∗ 0.00 . 02 8 ∗ 0.65 .720 0.00 .908 6.31 . 04 2 ∗ 0.00 .422 33 15.01 . 00 1 ∗ ∗ ∗ 0.02 . 00 5 ∗ ∗ 0.40 .820 -0.14 .541 0.00 1.000 0.02 . 00 6 ∗ ∗ 34 30.39 . 00 1 ∗ ∗ ∗ 0.00 .963 1.61 .443 0.00 .937 6.57 . 03 7 ∗ 0.00 .823 35 9.62 . 00 8 ∗ ∗ 0.00 .606 0.37 .833 -0.26 .284 0.31 .859 0.02 . 02 0 ∗ 36 29.49 . 00 1 ∗ ∗ ∗ -0.06 . 01 6 ∗ 1.11 .571 0.00 .357 5.88 .052 -0.02 . 04 2 ∗ Wave 4 Wave 5 Wave 6 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional ITEM SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF PROB. DIF CONTRAST Mantel- Haenszel Probability No. CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) CHI-SQUARE (DIF=2) 1 0.22 .898 0.00 .261 4.26 .117 0.14 .769 1.14 .560 0.11 .198 2 0.35 .841 0.00 . 
00 9 ∗ ∗ 2.92 .229 0.00 .976 0.15 .930 0.00 .275 3 0.25 .882 0.00 .078 2.24 .323 -0.02 .403 5.55 .060 0.00 .613 4 0.03 .987 0.00 .145 1.97 .369 0.00 .756 4.67 .100 0.00 .027 5 0.66 .716 0.06 . 00 1 ∗ ∗ ∗ 0.46 .795 0.27 .618 1.40 .490 0.43 .083 6 6.42 . 03 9 ∗ 0.00 .555 5.78 .054 0.00 .271 11.47 . 0 1 ∗ ∗ ∗ -0.14 .427 7 2.74 .251 0.00 .705 4.04 .130 -0.20 .574 8.64 . 0 1 ∗ ∗ 0.04 .948 8 1.19 .549 0.00 .165 0.87 .645 0.05 . 03 9 ∗ 1.17 .560 0.00 .117 9 4.04 .130 -0.04 .998 1.92 .379 -0.20 .371 3.26 .190 -0.04 .752 10 1.85 .392 0.00 .894 2.52 .280 0.11 . 00 1 ∗ ∗ ∗ 2.21 .330 0.10 . 00 1 ∗ ∗ ∗ 11 3.41 .179 0.00 .186 2.23 .324 -0.31 .823 1.92 .380 -0.08 .821 12 0.08 .965 0.00 .394 0.00 1.000 -0.02 .598 1.52 .460 0.00 .357 13 0.16 .927 0.00 .214 0.01 .998 0.01 .649 1.11 .570 -0.04 .916 14 0.03 .986 0.00 .368 1.12 .569 -0.06 . 03 3 ∗ 1.21 .540 -0.07 .274 15 3.06 .214 0.00 .611 0.00 1.000 -0.06 .860 0.86 .650 -0.09 .225 16 2.99 .221 0.00 .578 1.01 .602 0.00 .833 1.66 .430 0.00 .499 17 0.57 .753 0.00 .475 1.03 .594 -0.10 .754 0.64 .730 0.05 .290 18 0.08 .961 0.00 .671 4.27 .116 -0.07 .210 0.13 .940 -0.08 .987 19 0.11 .947 0.00 .420 2.78 .246 -0.08 .828 0.19 .910 -0.08 .986 20 0.36 .837 0.00 .089 5.27 .070 -0.05 .120 4.98 .080 -0.07 .758 21 1.67 .430 0.00 .169 1.16 .556 0.10 .439 0.21 .900 0.10 . 04 6 ∗ 22 0.54 .762 0.00 . 04 9 ∗ 2.89 .233 0.00 .446 1.95 .370 0.00 .874 23 21.23 .001∗∗∗ 0.07 . 00 2 ∗ ∗ 0.50 .777 -0.07 .442 1.02 .600 0.00 .409 24 0.63 .730 0.00 .143 22.80 . 00 1 ∗ ∗ ∗ 0.00 .897 6.77 .030∗ 0.02 .084 25 13.68 . 00 1 ∗ ∗ ∗ 0.00 .098 11.59 . 00 3 ∗ ∗ 0.00 .638 1.33 .510 -0.06 .169 26 0.41 .817 0.00 . 02 1 ∗ 8.24 . 01 6 ∗ 0.02 .274 1.17 .550 -0.03 .566 27 0.48 .787 0.00 .163 1.40 .494 0.22 .323 4.61 .100 0.18 .521 28 9.62 . 00 8 ∗ ∗ 0.00 .890 3.16 .203 -0.07 .109 0.04 .980 -0.11 .169 29 0.05 .979 -0.06 . 00 8 ∗ ∗ 4.42 .108 0.30 .161 2.07 .350 0.30 .104 30 2.04 .357 0.00 . 03 5 ∗ 6.85 . 03 2 ∗ 0.00 .859 1.79 .400 0.00 .517 31 0.47 .789 0.00 .068 6.88 . 03 1 ∗ 0.33 .165 0.00 1.000 0.12 .985 32 0.16 .923 0.00 .477 2.37 .302 0.00 .478 0.07 .970 -0.05 .889 33 3.33 .186 0.00 .180 5.40 .066 0.05 .851 0.45 .800 -0.20 . 00 1 ∗ ∗ ∗ 34 0.00 1.000 -0.03 . 01 0 ∗ 1.00 .605 0.00 .477 0.49 .780 0.00 .999 35 0.00 1.000 -0.04 .808 0.54 .764 0.27 .217 2.08 .350 -0.21 . 00 6 ∗ ∗ 36 2.74 .251 0.00 .065 5.82 .054 0.00 .495 1.44 .480 -0.03 .508 Notes. PROB. = probability; p∗≤.05; p∗∗≤.01; p∗∗∗≤.001.
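The Mantel-Haenszel probabilities in Table 6 come from comparing groups (married versus not married; urban versus regional) after matching respondents on ability, typically by stratifying on total score. A minimal sketch of the underlying Mantel-Haenszel chi-square over stratified 2x2 tables follows; this is an illustrative hand computation with toy data, not the exact Winsteps DIF procedure (which also reports a model-based DIF contrast):

```python
import numpy as np

def mantel_haenszel_chi2(tables):
    """MH chi-square (no continuity correction) for K stratified 2x2 tables.

    tables : iterable of 2x2 arrays [[a, b], [c, d]];
             rows = reference/focal group, cols = endorsed/not endorsed,
             one table per matched ability stratum.
    """
    a = np.array([t[0][0] for t in tables], float)
    row1 = np.array([t[0][0] + t[0][1] for t in tables], float)
    col1 = np.array([t[0][0] + t[1][0] for t in tables], float)
    n = np.array([np.sum(t) for t in tables], float)
    row2, col2 = n - row1, n - col1
    expected = row1 * col1 / n
    variance = row1 * row2 * col1 * col2 / (n**2 * (n - 1.0))
    # Large values relative to chi-square(1 df) indicate DIF.
    return (a.sum() - expected.sum()) ** 2 / variance.sum()

# Toy example: three ability strata with no DIF built in.
strata = [[[30, 10], [28, 12]], [[20, 20], [22, 18]], [[8, 25], [10, 23]]]
print(round(mantel_haenszel_chi2(strata), 3))
```

The resulting statistic is referred to a chi-square distribution with one degree of freedom, which is where the probabilities tabulated in Table 6 come from.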
### 3.2. SF36 Physical Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 physical health items were included in the initial analysis using the RMM: GH01:Q1, PF01:Q3A, PF02:Q3B, PF03:Q3C, PF04:Q3D, PF05:Q3E, PF06:Q3F, PF07:Q3G, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, BP01:Q7, BP02:Q8, GH02:Q11A, GH03:Q11B, GH04:Q11C, and GH05:Q11D (see Table 7). When the 21 SF-36 items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.18 to 2.66 and outfit statistics ranging from 0.19 to 2.77 (see Table 8). The mean item measure was 0.00 logits (SD = 0.99). With respect to logit measures, there was a broad range, the lowest value being -2.49 and the highest being +1.79 (see Table 9). This resulted in an average item separation index of 60.32 and an average reliability of 1.00 over the six waves of data collection (see Table 9). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
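The separation and reliability indices cited throughout this section can be reproduced from a set of measures and their standard errors. A minimal sketch under the usual definitions (true variance = observed variance minus mean squared error; separation = true SD / RMSE), using illustrative inputs rather than the study data:

```python
import numpy as np

def separation_and_reliability(measures, standard_errors):
    """Rasch separation index G and reliability from measures and SEs.

    reliability = true variance / observed variance, equivalently G^2/(1+G^2).
    """
    measures = np.asarray(measures, float)
    se = np.asarray(standard_errors, float)
    observed_var = measures.var(ddof=1)
    mse = np.mean(se**2)
    true_var = max(observed_var - mse, 0.0)
    g = np.sqrt(true_var) / np.sqrt(mse)   # separation index
    reliability = true_var / observed_var if observed_var > 0 else 0.0
    return g, reliability

# Tightly clustered persons with sizeable SEs -> low separation/reliability;
# well-spread item difficulties with tiny SEs -> separation far above 2.0.
rng = np.random.default_rng(1)
g_p, r_p = separation_and_reliability(rng.normal(-1.9, 0.39, 500), np.full(500, 0.28))
g_i, r_i = separation_and_reliability(np.linspace(-2.5, 2.0, 21), np.full(21, 0.01))
print(round(g_p, 2), round(r_p, 2), round(g_i, 1), round(r_i, 2))
```

This is why the item statistics (wide logit spread, precise estimates) yield reliabilities of 1.00 while the person statistics (narrow spread relative to measurement error) fall well below the 0.80 benchmark.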
Table 7. SF-36 physical health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -0.84 0.01 -0.21 -0.92 0.01 -0.24 -0.91 0.01 -0.18 3:Q3A 1.62 0.02 0.37 1.58 0.02 0.44 1.63 0.02 0.43 4:Q3B 0.02 0.01 0.59 0.00 0.01 0.61 0.13 0.01 0.60 5:Q3C -0.05 0.01 0.59 -0.08 0.01 0.60 -0.08 0.01 0.59 6:Q3D 0.52 0.01 0.60 0.48 0.01 0.63 0.55 0.01 0.62 7:Q3E -0.24 0.01 0.65 -0.27 0.01 0.67 -0.19 0.01 0.67 8:Q3F 0.21 0.01 0.57 0.26 0.01 0.58 0.33 0.01 0.54 9:Q3G 0.11 0.01 0.64 0.15 0.01 0.66 0.34 0.01 0.63 10:Q3H -0.34 0.01 0.67 -0.34 0.01 0.68 -0.19 0.01 0.67 11:Q3I -0.57 0.01 0.59 -0.62 0.01 0.59 -0.54 0.01 0.61 12:Q3J -0.74 0.01 0.43 -0.84 0.01 0.40 -0.80 0.01 0.40 13:Q4A 0.96 0.01 0.42 0.95 0.01 0.41 1.02 0.02 0.38 14:Q4B 1.32 0.01 0.42 1.37 0.02 0.43 1.46 0.02 0.39 15:Q4C 1.16 0.01 0.46 1.17 0.01 0.50 1.20 0.02 0.42 16:Q4D 1.20 0.01 0.44 1.22 0.02 0.47 1.28 0.02 0.42 21:Q7 -0.74 0.01 -0.05 0.99 0.01 -0.19 -0.95 0.01 -0.02 22:Q8 0.36 0.01 -0.18 -0.78 0.01 -0.15 0.04 0.01 -0.14 33:Q11A -2.22 0.01 0.34 -2.49 0.01 0.34 -2.46 0.01 0.32 34:Q11B -0.07 0.01 0.02 0.04 0.01 -0.07 -0.09 0.01 0.02 35:Q11C -1.24 0.01 0.38 -1.42 0.01 0.38 -1.27 0.01 0.37 36:Q11D -0.42 0.01 -0.09 -0.45 0.01 -0.18 -0.50 0.01 -0.08 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -1.10 0.01 -0.14 -1.21 0.01 -0.13 -1.32 0.02 -0.06 3:Q3A 1.64 0.02 0.42 1.63 0.03 0.42 1.79 0.04 0.39 4:Q3B 0.26 0.02 0.60 0.33 0.02 0.60 0.47 0.02 0.57 5:Q3C 0.02 0.01 0.60 0.05 0.02 0.58 0.09 0.02 0.56 6:Q3D 0.59 0.02 0.62 0.67 0.02 0.62 0.77 0.02 0.59 7:Q3E -0.16 0.01 0.65 -0.09 0.02 0.65 -0.02 0.02 0.64 8:Q3F 0.32 0.02 0.57 0.30 0.02 0.57 0.34 0.02 0.53 9:Q3G 0.46 0.02 0.63 0.62 0.02 0.61 0.76 0.02 0.60 10:Q3H -0.07 0.01 0.66 0.04 0.02 0.65 0.17 0.02 0.63 11:Q3I -0.50 0.01 0.58 -0.43 0.02 0.58 -0.40 0.02 0.58 12:Q3J -0.76 0.01 0.42 -0.75 0.01 0.41 -0.79 0.02 0.39 13:Q4A 1.01 0.02 0.37 1.03 0.02 0.34 0.99 0.03 0.34 14:Q4B 1.47 0.02 0.38 1.47 0.03 0.35 1.45 0.03 0.33 15:Q4C 1.27 0.02 0.43 1.29 0.02 0.40 1.29 0.03 0.35 16:Q4D 1.31 0.02 0.41 1.33 0.02 0.39 1.32 0.03 0.36 21:Q7 -1.08 0.01 -0.01 -1.17 0.01 0.01 -1.31 0.02 0.08 22:Q8 -0.19 0.01 -0.13 -0.35 0.01 -0.06 -0.53 0.02 -0.02 33:Q11A -2.45 0.01 0.28 -2.43 0.02 0.26 -2.49 0.02 0.21 34:Q11B -0.20 0.01 0.01 -0.38 0.02 0.06 -0.53 0.02 0.11 35:Q11C -1.19 0.01 0.34 -1.08 0.01 0.33 -1.08 0.02 0.31 36:Q11D -0.68 0.01 -0.09 -0.85 0.01 -0.04 -0.98 0.02 0.02 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 8
SF-36 Physical health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.24 9.9 1.30 9.9 1.24 9.9 1.29 9.9 1.16 9.9 1.20 9.9 3:Q3A 0.93 -4.6 0.90 -6.3 0.97 -1.8 0.90 -6.2 0.93 -3.7 0.85 -7.6 4:Q3B 0.57 -9.9 0.59 -9.9 0.59 -9.9 0.60 -9.9 0.64 -9.9 0.65 -9.9 5:Q3C 0.53 -9.9 0.54 -9.9 0.53 -9.9 0.54 -9.9 0.54 -9.9 0.56 -9.9 6:Q3D 0.72 -9.9 0.73 -9.9 0.71 -9.9 0.71 -9.9 0.72 -9.9 0.71 -9.9 7:Q3E 0.44 -9.9 0.46 -9.9 0.45 -9.9 0.47 -9.9 0.48 -9.9 0.50 -9.9 8:Q3F 0.62 -9.9 0.63 -9.9 0.64 -9.9 0.64 -9.9 0.67 -9.9 0.67 -9.9 9:Q3G 0.71 -9.9 0.72 -9.9 0.73 -9.9 0.75 -9.9 0.81 -9.9 0.81 -9.9 10:Q3H 0.45 -9.9 0.49 -9.9 0.50 -9.9 0.53 -9.9 0.59 -9.9 0.62 -9.9 11:Q3I 0.28 -9.9 0.32 -9.9 0.30 -9.9 0.33 -9.9 0.36 -9.9 0.39 -9.9 12:Q3J 0.21 -9.9 0.23 -9.9 0.18 -9.9 0.19 -9.9 0.21 -9.9 0.23 -9.9 13:Q4A 0.36 -9.9 0.40 -9.9 0.37 -9.9 0.40 -9.9 0.44 -9.9 0.48 -9.9 14:Q4B 0.51 -9.9 0.53 -9.9 0.53 -9.9 0.54 -9.9 0.59 -9.9 0.60 -9.9 15:Q4C 0.44 -9.9 0.47 -9.9 0.43 -9.9 0.45 -9.9 0.49 -9.9 0.52 -9.9 16:Q4D 0.46 -9.9 0.49 -9.9 0.46 -9.9 0.48 -9.9 0.52 -9.9 0.55 -9.9 21:Q7 2.33 9.9 2.40 9.9 2.51 9.9 2.77 9.9 2.20 9.9 2.23 9.9 22:Q8 2.29 9.9 2.39 9.9 2.66 9.9 2.72 9.9 2.23 9.9 2.29 9.9 33:Q11A 1.24 9.9 1.20 9.9 1.10 7.1 1.06 3.9 1.12 7.2 1.08 4.4 34:Q11B 2.18 9.9 2.20 9.9 2.07 9.9 2.09 9.9 1.88 9.9 1.89 9.9 35:Q11C 1.25 9.9 1.26 9.9 1.16 9.9 1.17 9.9 1.17 9.9 1.18 9.9 36:Q11D 2.21 9.9 2.28 9.9 2.36 9.9 2.41 9.9 2.12 9.9 2.15 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.02 1.4 1.04 2.9 0.99 -0.8 1.00 0.0 0.92 -4.1 0.93 -3.4 3:Q3A 0.98 -0.9 0.87 -5.2 1.04 1.5 0.92 -2.8 1.12 3.1 0.95 -1.1 4:Q3B 0.67 -9.9 0.67 -9.9 0.69 -9.9 0.69 -9.9 0.76 -9.9 0.74 -9.9 5:Q3C 0.56 -9.9 0.57 -9.9 0.57 -9.9 0.58 -9.9 0.62 -9.9 0.63 -9.9 6:Q3D 0.72 -9.9 0.71 -9.9 0.76 -9.9 0.73 -9.9 0.80 -8.2 0.74 -9.9 7:Q3E 0.49 -9.9 0.51 -9.9 0.53 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 8:Q3F 0.63 -9.9 0.64 -9.9 0.62 -9.9 0.62 -9.9 0.65 -9.9 0.66 -9.9 9:Q3G 0.83 -9.9 0.81 -9.9 0.86 -7.0 0.83 -8.8 0.90 -3.9 0.83 -6.7 10:Q3H 0.66 -9.9 0.68 -9.9 0.72 -9.9 0.73 -9.9 0.79 -9.4 0.79 -9.5 11:Q3I 0.39 -9.9 0.42 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 0.53 -9.9 12:Q3J 0.28 -9.9 0.30 -9.9 0.33 -9.9 0.35 -9.9 0.37 -9.9 0.40 -9.9 13:Q4A 0.47 -9.9 0.51 -9.9 0.53 -9.9 0.58 -9.9 0.55 -9.9 0.59 -9.9 14:Q4B 0.62 -9.9 0.63 -9.9 0.65 -9.9 0.66 -9.9 0.68 -9.9 0.69 -9.9 15:Q4C 0.54 -9.9 0.57 -9.9 0.59 -9.9 0.61 -9.9 0.63 -9.9 0.65 -9.9 16:Q4D 0.56 -9.9 0.59 -9.9 0.60 -9.9 0.62 -9.9 0.63 -9.9 0.65 -9.9 21:Q7 2.08 9.9 2.09 9.9 1.96 9.9 1.96 9.9 1.79 9.9 1.79 9.9 22:Q8 2.07 9.9 2.11 9.9 1.95 9.9 1.97 9.9 1.85 9.9 1.86 9.9 33:Q11A 1.13 7.2 1.11 5.6 1.13 6.3 1.11 5.1 1.18 7.2 1.20 7.5 34:Q11B 1.85 9.9 1.86 9.9 1.69 9.9 1.69 9.9 1.62 9.9 1.59 9.9 35:Q11C 1.18 9.9 1.19 9.9 1.15 8.0 1.16 8.4 1.13 5.9 1.13 6.1 36:Q11D 2.05 9.9 2.09 9.9 1.95 9.9 1.96 9.9 1.83 9.9 1.82 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 9
SF-36 physical health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| MEAN | -1.77 | -1.85 | -1.90 | -1.92 | -1.95 | -2.07 |
| S.D. | .38 | .37 | .40 | .39 | .39 | .40 |
| MAX | 1.54 | -.37 | .40 | .92 | -.52 | .40 |
| MIN | -5.13 | -4.11 | -.09 | -5.08 | -4.52 | -.79 |
| Infit-MNSQ | 1.05 | 1.04 | 1.05 | 1.05 | 1.05 | 1.05 |
| Infit-ZSTD | -.10 | -.20 | -.10 | .00 | .00 | .00 |
| Outfit-MNSQ | 1.00 | 1.01 | .98 | .97 | .96 | .96 |
| Outfit-ZSTD | -.30 | -.30 | -.30 | -.20 | -.20 | -.20 |
| Person separation | .86c | .88c | .97c | .96c | .96c | .96c |
| Person reliability | .43a | .43a | .48a | .48a | .48a | .48a |
| **Items** | | | | | | |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | .91 | .99 | .98 | 1.00 | 1.02 | 1.08 |
| MAX | 1.62 | 1.58 | 1.63 | 1.64 | 1.63 | 1.79 |
| MIN | -2.22 | -2.49 | -2.46 | -2.45 | -2.43 | -2.49 |
| Infit-MNSQ | .95 | .98 | .95 | .94 | .94 | .95 |
| Infit-ZSTD | -3.00 | -3.00 | -3.10 | -3.40 | -3.40 | -3.30 |
| Outfit-MNSQ | .98 | 1.00 | .96 | .95 | .94 | .94 |
| Outfit-ZSTD | -3.10 | -3.40 | -3.50 | -3.60 | -3.70 | -3.60 |
| Item separation | 71.24 | 69.37 | 63.25 | 59.41 | 52.87 | 45.77 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 physical health scale person-item map is located in Supplemental Figure 3 and reports evidence of the hierarchical ordering of the SF-36 physical health scale items. Easier items are located at the bottom of the SF-36 physical health person-item map, while more difficult items are located at the top. The patterns of more and less difficult items on the person-item map for each of the six waves of data collection appear to be fairly consistent. It should also be noted that several of the SF-36 physical health scale items have the same level of difficulty.

The average person measure was -1.91 logits (SD = 0.39) over the six waves of data collection (see Table 9). The mean person separation was 0.93, with a mean reliability of 0.46 (see Table 9). Since the mean person separation is less than 2.0, this indicates inadequate separation of participants on the SF-36 physical health construct. When examining the overall RMM output of the SF-36 physical health scale, the average person measure (-1.91 logits) was lower than the average item measure (0.00 logits). The range of logit values for items was from +1.79 to -2.49 logits. The person reliability was 0.46 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 physical health scale in the acceptable range and the person reliability in the less than desired range.

The SF-36 physical health scale has a six-category rating scale which generates five thresholds. Rasch analysis reports that the category calibrations increase monotonically: -3.86, -2.13, -.83, .10, 1.96, and 5.32 for wave one and -3.64, -2.02, -.91, .01, 2.00, and 5.24 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Seven of the 21 items had MNSQ infit and outfit statistics within the 0.70-1.30 range and/or a z-score within the +2 to -2 range. Therefore items GH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 8). In other words, only 7 of the 21 SF-36 physical health scale items (33.3%) met the RMM requirements. The following items had an Infit MNSQ statistic less than 0.70: PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, and RP04:Q4D.
The following items had an Infit MNSQ statistic greater than 1.30: BP01:Q7, BP02:Q8, GH03:Q11B, and GH05:Q11D.

An inspection of the PTMEAs for the SF-36 physical health scale indicated that items GH01:Q1, BP01:Q7, BP02:Q8, and GH05:Q11D had consistent negative PTMEAs over the six waves of data collection. For all other items, the PTMEA correlations had acceptable values.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 10). This indicated that the unidimensionality requirement of the SF-36 physical health scale was met. The raw variance explained by the SF-36 physical health scale over the six waves of data collection ranged from 41.6% to 48.9%, and the unexplained variance in the first contrast ranged from 17.4% to 22.4%. The residual analysis indicated that no second dimension or factor existed.
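The PTMEA values themselves are simply correlations between observed item responses and the person measures. A minimal sketch, with simulated data standing in for the study responses; the reverse-scored toy item shows how a negative PTMEA arises:

```python
import numpy as np

def point_measure_correlations(X, theta):
    """Point-measure (PTMEA) correlation per item: Pearson correlation
    between observed item responses and the person measures.

    X     : (n_persons, n_items) response matrix
    theta : (n_persons,) person measures in logits
    """
    Xc = X - X.mean(axis=0)
    tc = theta - theta.mean()
    num = Xc.T @ tc
    den = np.sqrt((Xc**2).sum(axis=0) * (tc**2).sum())
    return num / den

# An item keyed against the rest of the scale yields a negative PTMEA,
# the pattern reported here for GH01:Q1, BP01:Q7, BP02:Q8, and GH05:Q11D.
rng = np.random.default_rng(2)
theta = rng.normal(0, 1, 300)
good = (rng.random(300) < 1 / (1 + np.exp(-theta))).astype(float)
reversed_item = 1.0 - good
print(np.round(point_measure_correlations(np.c_[good, reversed_item], theta), 2))
```

Consistently negative PTMEAs therefore point to items that may need reverse-scoring or that are otherwise functioning contrary to the rest of the scale.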
Table 10. SF-36 physical health scale Rasch analysis of standardised residual variance in eigenvalue units for six waves of data collection.
| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 36.07 | 100.00% | 100.00% | 36.07 | 100.00% | 100.00% | 37.89 | 100.00% | 100.00% |
| Raw variance explained by measures | 15.07 | 41.80% | 42.50% | 15.07 | 41.80% | 42.50% | 16.89 | 44.60% | 45.70% |
| Raw variance explained by persons | 1.90 | 5.30% | 5.40% | 1.90 | 5.30% | 5.40% | 0.96 | 2.50% | 2.60% |
| Raw variance explained by items | 13.16 | 36.50% | 37.10% | 13.16 | 36.50% | 37.10% | 15.94 | 42.10% | 43.10% |
| Raw unexplained variance (total) | 21.00 | 58.20% | 57.50% | 21.00 | 58.20% | 57.50% | 21.00 | 55.40% | 54.30% |
| Unexplained variance in 1st contrast | 8.00 | 22.20% | 38.10% | 8.00 | 22.20% | 38.10% | 7.70 | 20.30% | 36.70% |
| Unexplained variance in 2nd contrast | 2.02 | 5.60% | 9.60% | 2.02 | 5.60% | 9.60% | 1.96 | 5.20% | 9.40% |
| Unexplained variance in 3rd contrast | 1.51 | 4.20% | 7.20% | 1.51 | 4.20% | 7.20% | 1.44 | 3.80% | 6.90% |
| Unexplained variance in 4th contrast | 1.31 | 3.60% | 6.20% | 1.31 | 3.60% | 6.20% | 1.23 | 3.20% | 5.80% |
| Unexplained variance in 5th contrast | 0.99 | 2.80% | 4.70% | 0.99 | 2.80% | 4.70% | 0.99 | 2.60% | 4.70% |

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 37.07 | 100.00% | 100.00% | 39.34 | 100.00% | 100.00% | 41.08 | 100.00% | 100.00% |
| Raw variance explained by measures | 17.07 | 46.10% | 48.00% | 18.34 | 46.60% | 48.60% | 20.08 | 48.90% | 51.10% |
| Raw variance explained by persons | 2.45 | 6.60% | 6.90% | 2.42 | 6.10% | 6.40% | 2.68 | 6.50% | 6.80% |
| Raw variance explained by items | 14.62 | 39.40% | 41.10% | 15.92 | 40.50% | 42.20% | 17.40 | 42.30% | 44.20% |
| Raw unexplained variance (total) | 20.00 | 53.90% | 52.00% | 21.00 | 53.40% | 51.40% | 21.00 | 51.10% | 48.90% |
| Unexplained variance in 1st contrast | 6.64 | 17.90% | 33.20% | 7.50 | 19.10% | 35.70% | 7.14 | 17.40% | 34.00% |
| Unexplained variance in 2nd contrast | 2.10 | 5.70% | 10.50% | 2.06 | 5.20% | 9.80% | 2.24 | 5.50% | 10.70% |
| Unexplained variance in 3rd contrast | 1.54 | 4.20% | 7.70% | 1.58 | 4.00% | 7.50% | 1.56 | 3.80% | 7.40% |
| Unexplained variance in 4th contrast | 1.26 | 3.40% | 6.30% | 1.21 | 3.10% | 5.80% | 1.20 | 2.90% | 5.70% |
| Unexplained variance in 5th contrast | 1.07 | 2.90% | 5.30% | 1.03 | 2.60% | 4.90% | 1.04 | 2.50% | 4.90% |

Notes. a > 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

The functioning of the six rating scale categories was examined for the SF-36 physical health scale. The category logit measures ranged from -3.86 to 5.43 (see Table 11). Of the six rating scale categories, only one (category six) had infit MNSQ scores that fell outside the 0.70-1.30 range or a z-score outside the +2 to -2 range over the six waves of data collection. The infit MNSQ scores for this rating category ranged from 2.03 to 3.18 (see Table 11). The results indicated that the six-level rating scale used in the SF-36 physical health scale might not be the most robust to use (see Supplemental Figure 4); however, the full range of ratings was used by the participants who completed the SF-36 physical health scale. The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the first five response categories were the most probable category for some part of the continuum. Rating category six was problematic.
Table 11. SF-36 physical health scale Rasch analysis summary of category structure for six waves of data collection.
WAVE 1 WAVE 2 WAVE 3 CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds 1 60721 23 (-3.86) 1.18 1.11 NONE 55350 25 ( -3.92) 1.12 1.07 NONE 46995 28 ( -3.86) 1.14 1.08 NONE 2 83039 32 -2.13 .93 .94 -2.56 70454 32 -2.19 .93 .92 -2.60 54692 32 -2.17 1.00 .99 -2.54 3 73299 28 -.83 .66 .59 -1.54 62780 29 -.85 .66 .60 -1.61 46905 28 -.90 .72 .62 -1.61 4 12957 5 .10 1.19 1.26 .59 11389 5 .14 1.21 1.38 .53 9720 6 .07 1.11 1.11 .38 5 15144 6 1.96 1.05 1.15 -.71b 12634 6 2.03 1.13 1.33 -.55b 9942 6 2.05 1.08 1.17 -.56b 6 238 0 (5.32) 2.67 2.61 4.21 233 0 (5.34) 3.18 2.98 4.23 155 0 (5.43) 2.77 2.30 4.33 WAVE 4 WAVE 5 WAVE 6 CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds N % Average Measures Infit MnSq Outfit MnSq Andrich Thresholds 1 42924 29 ( -3.75) 1.15 1.08 NONE 36502 31 ( -3.66) 1.15 1.08 NONE 28787 36 ( -3.64) 1.15 1.08 NONE 2 44603 30 -2.09 1.05 1.02 -2.42 33751 29 -2.01 1.08 1.01 -2.33 23233 29 -2.02 1.10 1.01 -2.30 3 36071 24 -.89 .76 .64 -1.52 25389 22 -.87 .82 .68 -1.44 16930 21 -.91 .86 .73 -1.46 4 8958 6 .06 1.00 .96 .24 7310 6 .06 .94 .86 .13 5353 7 .01 .91 .80 .03 5 8592 6 2.01 1.10 1.16 -.48b 6464 6 1.96 1.09 1.12 -.38b 4694 6 2.00 1.09 1.10 -.39b 6 150 0 (5.29) 2.54 2.02 4.18 129 0 (5.12) 2.25 1.75 4.02 80 0 (5.24) 2.03 1.52 4.13 Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.The Rasch output logit performance scores for the participants were compared to determine if any of the SF-36 physical scale items exhibited differential item functioning (DIF), based on marital status and area of residence (urban versus regional) (see Table12). Four of the SF-36 physical health items exhibited a consistent pattern of DIF over the six waves of data collection. Item PF03:Q3C demonstrated DIF based on marital status alone while items GH02:Q11A, GH04:Q11C, and GH05:Q11D exhibited DIF based on both marital status and area of residence (see Table 12). It should be noted that items GH02:Q11A and GH04:Q11C had infit MNSQ statistics that fell within the 0.70-1.30 range while items PF03:Q3C and GH05:Q11D also had MNSQ infit scores outside the 0.7-1.30 range and/or a z-score that fell inside the +2 to -2 range. SF-36 physical health items PF03:Q3C and GH05:Q11D appear to be particularly problematic items based on the RMM analysis findings.Table 12
Differential Item Functioning (DIF) for SF-36 physical health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
Wave 1 Wave 2 Wave 3 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional SF36 ITEM No. SUM DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability SUM DIF CHI-SQUARE (DIF = 2) PROB DIF CONTRAST Mantel- Haenszel Probability SUM DIF CHI-SQUARE (DIF = 2) PROB DIF CONTRAST Mantel- Haenszel Probability 1:Q1 13.25 . 00 1 ∗ ∗ ∗ 0.00 .639 1.62 .442 0.14 .713 0.41 .816 0.00 .185 3:Q3A 5.79 .054 0.00 .725 1.27 .527 0.00 .330 5.69 .057 0.00 .073 4:Q3B 1.94 .376 0.00 .069 0.56 .754 0.15 .835 0.50 .779 0.06 . 00 1 ∗ ∗ ∗ 5:Q3C 1.84 .394 0.06 . 00 1 ∗ ∗ ∗ 0.03 .984 0.00 . 00 9 ∗ ∗ 0.56 .756 0.00 .442 6:Q3D 0.97 .614 0.00 .947 2.13 .342 -0.20 .778 0.44 .804 0.00 .700 7:Q3E 0.41 .816 0.00 .287 2.75 .250 0.00 .153 1.26 .529 0.00 .143 8:Q3F 0.06 .970 0.00 .684 0.18 .917 -0.08 .599 0.03 .988 -0.04 .964 9:Q3G 13.16 . 00 1 ∗ ∗ ∗ -0.02 .076 1.33 .512 0.03 . 00 6 ∗ ∗ 0.57 .750 0.00 .847 10:Q3H 12.78 . 00 2 ∗ ∗ 0.00 .324 5.72 .056 -0.22 .320 0.02 .990 0.00 .225 11:Q3I 7.45 . 02 4 ∗ 0.00 .357 5.55 .061 0.07 . 00 1 ∗ ∗ ∗ 0.00 1.000 0.00 .631 12:Q3J 0.95 .620 0.00 .306 0.03 .988 -0.03 .836 0.73 .693 0.00 .251 13:Q4A 2.34 .306 0.00 .519 0.08 .962 0.00 .461 0.17 .919 0.00 .360 14:Q4B 1.45 .481 0.00 .782 4.22 .119 -0.30 .206 6.61 . 03 6 ∗ 0.00 .520 15:Q4C 2.47 .288 0.00 .982 0.08 .961 0.00 .240 0.37 .831 0.00 .524 16:Q4D 0.08 .965 0.00 .845 0.06 .973 0.00 .873 4.54 .101 0.00 .053 21:Q7 2.54 .277 -0.05 . 00 5 ∗ ∗ 9.34 . 00 9 ∗ ∗ 0.00 .131 0.13 .941 0.00 .145 22:Q8 37.29 . 00 1 ∗ ∗ ∗ 0.00 .114 1.00 .605 -0.09 .651 1.85 .394 0.02 .081 33:Q11A 27.3 . 00 1 ∗ ∗ ∗ 0.07 . 00 1 ∗ ∗ ∗ 1.11 .572 0.00 .521 0.02 .990 0.00 .275 34:Q11B 36.38 . 00 1 ∗ ∗ ∗ 0.00 .905 1.41 .490 -0.20 .309 6.66 . 03 5 ∗ 0.00 .170 35:Q11C 12.2 . 00 2 ∗ ∗ 0.00 .204 0.68 .710 0.00 .963 0.58 .749 -0.05 . 00 2 ∗ ∗ 36:Q11D 35.29 . 00 1 ∗ ∗ ∗ -0.05 . 00 6 ∗ ∗ 1.38 .500 0.12 .444 7.03 . 02 9 ∗ 0.00 .724 Wave 4 Wave 5 Wave 6 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional SF36 ITEM No. SUM DIF CHI-SQUARE (DIF = 2) PROB DIF CONTRAST Mantel- Haenszel Probability SUM DIF CHI-SQUARE (DIF = 2) PROB DIF CONTRAST Mantel- Haenszel Probability SUM DIF CHI-SQUARE (DIF = 2) PROB DIF CONTRAST Mantel- Haenszel Probability 1:Q1 0.67 .714 0.00 .185 4.44 .107 0.23 .707 2.87 .235 0.09 .418 3:Q3A 0.22 .897 0.00 .073 1.32 .515 0.00 .375 3.73 .153 0.00 .176 4:Q3B 0.26 .878 0.06 . 00 1 ∗ ∗ ∗ 0.59 .744 0.34 .229 2.71 .254 0.38 .098 5:Q3C 3.48 .173 0.00 .442 0.65 .720 0.00 .342 1.81 .400 -0.13 .270 6:Q3D 1.39 .496 0.00 .700 2.83 .240 -0.16 .573 7.85 . 01 9 ∗ 0.00 .761 7:Q3E 0.10 .953 0.00 .143 1.73 .418 0.03 .025∗ 6.17 . 04 5 ∗ 0.00 .278 8:Q3F 2.31 .311 -0.04 .964 0.00 1.000 -0.17 .456 0.43 .808 -0.08 .248 9:Q3G 0.95 .621 0.00 .847 0.39 .824 0.12 . 00 1 ∗ ∗ ∗ 1.20 .547 0.11 . 00 1 ∗ ∗ ∗ 10:Q3H 1.89 .384 0.00 .225 0.73 .695 -0.26 .443 0.68 .712 -0.12 .739 11:Q3I 0.00 1.000 0.00 .631 0.42 .809 0.00 .961 0.55 .761 0.00 .387 12:Q3J 0.05 .975 0.00 .251 0.10 .953 0.06 .252 0.65 .722 -0.03 .664 13:Q4A 0.45 .798 0.00 .360 0.00 1.000 -0.06 . 04 2 ∗ 0.17 .922 -0.07 .282 14:Q4B 1.60 .447 0.00 .520 1.98 .367 -0.05 .861 0.82 .663 -0.12 .138 15:Q4C 2.50 .283 0.00 .524 0.01 .996 0.00 .453 0.20 .908 0.00 .255 16:Q4D 0.68 .711 0.00 .053 0.24 .889 -0.06 .733 0.73 .692 0.02 .431 21:Q7 3.61 .162 0.00 .145 0.00 1.000 -0.06 .413 1.75 .413 -0.07 .650 22:Q8 1.03 .595 0.02 .081 3.03 .217 0.00 .310 4.23 .119 -0.10 .644 33:Q11A 0.14 .934 0.00 .275 13.77 . 
00 1 ∗ ∗ ∗ -0.05 .170 1.34 .509 -0.07 .729 34:Q11B 4.03 .131 0.00 .170 1.80 .403 0.20 . 04 8 ∗ 2.28 .317 0.07 .252 35:Q11C 0.00 1.000 -0.05 . 00 2 ∗ ∗ 0.49 .783 0.00 .280 3.68 .156 0.00 .681 36:Q11D 0.37 .831 0.00 .724 7.48 .023∗ 0.00 .941 1.55 .457 -0.03 .897 Notes. PROB. = probability; ∗p≤.05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
### 3.3. SF36 Mental Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 mental health items were included in the initial analysis using the RMM: RE01:Q5A, RE02:Q5B, RE03:Q5C, SF01:Q6, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10. When the 14 SF-36 mental health items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.13 to 2.43 and outfit statistics ranging from 0.22 to 2.64 (see Table 14). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being -3.01 and the highest being +2.31 (see Table 13). This resulted in an average item separation index of 79.17 and an average reliability of 1.00 over the six waves (see Table 15). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
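The infit and outfit MNSQ statistics referenced throughout these analyses are functions of standardised residuals. A minimal sketch under a dichotomous Rasch simplification (the SF-36 items are polytomous, and Winsteps' exact computation may differ), with simulated inputs:

```python
import numpy as np

def mean_square_fit(X, theta, b):
    """Infit and outfit MNSQ per item under a dichotomous Rasch model.

    Outfit: unweighted mean of squared standardised residuals.
    Infit : information-weighted (variance-weighted) counterpart.
    """
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    W = P * (1.0 - P)                       # model variance per response
    R2 = (X - P) ** 2                       # squared raw residuals
    outfit = (R2 / W).mean(axis=0)          # mean squared standardised residual
    infit = R2.sum(axis=0) / W.sum(axis=0)  # weighted by model information
    return infit, outfit

# Values near 1.0 indicate model-consistent items; the 0.70-1.30 band used
# in this study flags over-fitting (low) and under-fitting (high) items.
rng = np.random.default_rng(3)
theta, b = rng.normal(0, 1, 400), np.linspace(-1.5, 1.5, 10)
X = (rng.random((400, 10)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(float)
infit, outfit = mean_square_fit(X, theta, b)
print(np.round(infit, 2), np.round(outfit, 2))
```

Because outfit is unweighted, it is more sensitive to unexpected responses by persons far from an item's difficulty, while infit emphasises misfit near the item's target range.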
Table 13. SF-36 mental health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR 17:Q5A 1.35 0.01 0.31 1.38 0.01 0.30 1.49 0.02 0.27 18:Q5B 1.57 0.01 0.31 1.62 0.02 0.29 1.75 0.02 0.27 19:Q5C 1.38 0.01 0.30 1.41 0.01 0.28 1.50 0.02 0.26 20:Q6 1.51 0.01 -0.09 1.78 0.02 -0.02 1.41 0.01 -0.02 23:Q9A -0.03 0.01 0.17 -0.06 0.01 0.22 -0.12 0.01 0.27 24:Q9B -1.28 0.01 0.46 -1.47 0.01 0.43 -1.54 0.01 0.41 25:Q9C -1.84 0.01 0.45 -2.04 0.01 0.40 -2.08 0.02 0.40 26:Q9D 0.21 0.01 0.20 0.30 0.01 0.18 0.30 0.01 0.26 27:Q9E -0.16 0.01 0.22 -0.24 0.01 0.26 -0.29 0.01 0.30 28:Q9F -1.25 0.01 0.46 -1.33 0.01 0.39 -1.32 0.01 0.40 29:Q9G -0.90 0.01 0.44 -0.93 0.01 0.39 -0.88 0.01 0.39 30:Q9H 0.63 0.01 0.27 0.72 0.01 0.25 0.74 0.01 0.28 31:Q9I -0.55 0.01 0.37 -0.50 0.01 0.31 -0.42 0.01 0.31 32:Q10 -0.65 0.01 0.28 -0.65 0.01 0.22 -0.55 0.01 0.19 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR MEASURE MODEL S.E. PTMEA CORR MEASURE MODEL S.E. PTMEA CORR. 17:Q5A 1.47 0.02 0.28 1.48 0.02 0.30 1.51 0.02 0.29 18:Q5B 1.76 0.02 0.28 1.77 0.02 0.30 1.81 0.03 0.32 19:Q5C 1.51 0.02 0.27 1.51 0.02 0.28 1.53 0.02 0.28 20:Q6 1.19 0.01 0.04 1.01 0.02 0.03 0.96 0.02 0.05 23:Q9A -0.14 0.01 0.23 -0.21 0.01 0.24 -0.29 0.01 0.26 24:Q9B -1.52 0.01 0.40 -1.49 0.02 0.40 -1.47 0.02 0.37 25:Q9C -2.07 0.02 0.35 -1.92 0.02 0.39 -1.91 0.02 0.35 26:Q9D 0.30 0.01 0.23 0.31 0.01 0.19 0.29 0.01 0.22 27:Q9E -0.34 0.01 0.27 -0.41 0.01 0.27 -0.53 0.01 0.29 28:Q9F -1.30 0.01 0.40 -1.25 0.01 0.42 -1.22 0.02 0.41 29:Q9G -0.80 0.01 0.39 -0.75 0.01 0.44 -0.72 0.01 0.43 30:Q9H 0.75 0.01 0.29 0.69 0.01 0.23 0.69 0.02 0.27 31:Q9I -0.34 0.01 0.33 -0.30 0.01 0.35 -0.27 0.01 0.32 32:Q10 -0.48 0.01 0.17 -0.44 0.01 0.17 -0.39 0.01 0.17 Note: MODEL S.E. = Model Standard Error; PTMEA CORR = Point Measure Correlation.Table 14
SF-36 mental health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 17:Q5A 0.29 -9.9 0.32 -9.9 0.26 -9.9 0.28 -9.9 0.30 -9.9 0.32 -9.9 18:Q5B 0.43 -9.9 0.46 -9.9 0.43 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 19:Q5C 0.31 -9.9 0.34 -9.9 0.29 -9.9 0.31 -9.9 0.32 -9.9 0.34 -9.9 20:Q6 2.64 9.9 2.87 9.9 2.91 9.9 3.06 9.9 2.54 9.9 2.74 9.9 23:Q9A 1.08 7.7 1.15 9.9 1.03 3.2 1.12 9.9 1.04 3.5 1.09 6.7 24:Q9B 1.25 9.9 1.17 9.9 1.40 9.9 1.29 9.9 1.41 9.9 1.31 9.9 25:Q9C 1.44 9.9 1.30 9.9 1.59 9.9 1.39 9.9 1.51 9.9 1.33 9.9 26:Q9D 1.22 9.9 1.33 9.9 1.14 9.9 1.28 9.9 1.10 7.1 1.19 9.9 27:Q9E 1.12 9.9 1.17 9.9 1.08 7.8 1.12 9.9 1.07 5.3 1.08 6.3 28:Q9F 0.90 -6.6 0.88 -8.3 1.02 1.0 0.98 -0.9 0.96 -2.0 0.93 -3.6 29:Q9G 0.88 -9.9 0.87 -9.9 0.90 -7.4 0.89 -7.8 0.88 -7.8 0.87 -8.2 30:Q9H 1.22 9.9 1.29 9.9 1.15 8.5 1.24 9.9 1.09 4.6 1.16 7.9 31:Q9I 0.72 -9.9 0.73 -9.9 0.76 -9.9 0.77 -9.9 0.73 -9.9 0.74 -9.9 32:Q10 0.72 -9.9 0.77 -9.9 0.78 -9.9 0.81 -9.9 0.87 -9.9 0.91 -6.8 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 17:Q5A 0.31 -9.9 0.34 -9.9 0.34 -9.9 0.37 -9.9 0.36 -9.9 0.39 -9.9 18:Q5B 0.50 -9.9 0.52 -9.9 0.52 -9.9 0.54 -9.9 0.53 -9.9 0.55 -9.9 19:Q5C 0.34 -9.9 0.36 -9.9 0.37 -9.9 0.39 -9.9 0.38 -9.9 0.41 -9.9 20:Q6 2.16 9.9 2.33 9.9 1.96 9.9 2.15 9.9 1.91 9.9 2.07 9.9 23:Q9A 1.04 2.7 1.07 5.3 1.02 1.7 1.05 3.4 1.03 1.8 1.04 2.2 24:Q9B 1.37 9.9 1.25 9.9 1.36 9.9 1.25 9.5 1.33 9.9 1.25 8.3 25:Q9C 1.50 9.9 1.36 9.9 1.51 9.9 1.36 9.9 1.42 9.9 1.32 9.1 26:Q9D 1.15 9.3 1.23 9.9 1.16 8.7 1.26 9.9 1.13 6.2 1.20 8.8 27:Q9E 1.11 7.9 1.11 8.0 1.08 5.1 1.08 4.8 1.10 5.2 1.09 4.4 28:Q9F 0.97 -1.6 0.94 -3.2 0.95 -2.2 0.91 -4.2 0.91 -3.5 0.89 -4.4 29:Q9G 0.87 -8.1 0.86 -8.5 0.84 -9.2 0.83 -9.7 0.85 -7.3 0.84 -7.8 30:Q9H 1.12 5.6 1.16 7.3 1.13 5.8 1.22 9.3 1.09 3.6 1.15 5.3 31:Q9I 0.76 -9.9 0.77 -9.9 0.76 -9.9 0.77 -9.9 0.76 -9.9 0.78 -9.9 32:Q10 0.86 -9.9 0.91 -6.9 0.90 -6.3 0.94 -3.9 0.96 -2.4 1.00 0.2 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 15
SF-36 mental health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| MEAN | -.08 | -.04 | -.02 | -.03 | -.06 | -.06 |
| S.D. | .30 | .28 | .31 | .30 | .29 | .30 |
| MAX | .30 | 1.64 | 1.38 | .30 | .29 | .30 |
| MIN | 1.87 | -3.54 | -2.99 | 2.37 | 2.16 | 2.12 |
| Infit-MNSQ | 1.01 | 1.01 | 1.02 | 1.02 | 1.02 | 1.02 |
| Infit-ZSTD | -.30 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Outfit-MNSQ | 1.06 | 1.08 | 1.06 | 1.03 | 1.02 | 1.01 |
| Outfit-ZSTD | -.20 | -.20 | -.20 | -.20 | -.20 | -.20 |
| Person separation | .53 | .33 | .45 | .36 | .36 | .41 |
| Person reliability | .22a | .10a | .17a | .11a | .12a | .14a |
| **Items** | | | | | | |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.17 | 1.20 | 1.19 | 1.13 | 1.31 | 1.07 |
| MAX | 1.59 | 1.78 | 1.75 | 1.75 | 2.13 | 1.67 |
| MIN | -1.88 | -2.04 | -2.08 | -2.04 | -1.94 | -1.95 |
| Infit-MNSQ | 1.02 | 1.05 | 1.02 | 1.00 | .99 | .98 |
| Infit-ZSTD | .10 | .20 | -.60 | -.30 | -.50 | -.50 |
| Outfit-MNSQ | 1.05 | 1.07 | 1.04 | 1.02 | 1.01 | 1.00 |
| Outfit-ZSTD | .10 | .80 | .20 | .10 | -.10 | -.30 |
| Item separation | 95.77 | 89.12 | 83.98 | 77.85 | 68.89 | 59.38 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 mental health scale person-item map is shown in Supplemental Figure 5 and reports evidence of the hierarchical ordering of the SF-36 mental health scale items. It should also be noted that several of the SF-36 mental health scale items have the same level of difficulty. The average person measure was -0.05 logits (SD = 0.30) over the six waves of data collection (see Table 15). The mean person separation was 0.41, with a mean reliability of 0.14 (see Table 15). Since the mean person separation is less than 2.0, this indicates inadequate separation of participants on the SF-36 mental health construct.

When examining the overall RMM output of the SF-36 mental health scale, the average person measure (-0.05 logits) was slightly lower than the average item measure (0.00 logits). The range of logit values for items was from +2.13 to -2.08 logits. The person reliability was 0.14 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 mental health scale in the acceptable range and the person reliability in the less than desired range.

The SF-36 mental health scale has a six-category rating scale which generates five thresholds. Rasch analysis reports that the category calibrations increase monotonically: -3.07, -1.06, -.17, .40, 1.14, and 2.54 for wave one and -2.98, -1.09, -.19, .41, 1.15, and 2.51 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Nine of the 14 items had MNSQ infit and outfit statistics within the 0.70-1.30 range and/or a z-score within the +2 to -2 range; thus, items VT01:Q9A, MH01:Q9B, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10 met the RMM requirements (see Table 14). In other words, only 9 of the 14 SF-36 mental health scale items (64.3%) met the RMM requirements. The following items had an Infit MNSQ statistic less than 0.70: RE01:Q5A, RE02:Q5B, and RE03:Q5C. Items SF01:Q6 and MH02:Q9C had Infit MNSQ statistics greater than 1.30.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 16). This indicated that the unidimensionality requirement of the SF-36 mental health scale was met.
The raw variance explained by the SF-36 mental health scale over the six waves of data collection ranged from 62.5% to 66.1%, and the unexplained variance in the first contrast ranged from 15.1% to 16.5%.
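The monotonic advance of the category calibrations reported above can be checked mechanically. A trivial sketch, using the wave-one and wave-six values quoted for the mental health scale:

```python
def thresholds_advance(calibrations):
    """True if each successive category calibration is strictly larger."""
    return all(b > a for a, b in zip(calibrations, calibrations[1:]))

# Category calibrations quoted above for the mental health scale.
wave_one = [-3.07, -1.06, -0.17, 0.40, 1.14, 2.54]
wave_six = [-2.98, -1.09, -0.19, 0.41, 1.15, 2.51]
print(thresholds_advance(wave_one), thresholds_advance(wave_six))  # True True
```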
Table 16. SF-36 mental health scale Rasch analysis of standardised residual variance in eigenvalue units for six waves of data collection.
| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 38.55 | 100.00% | 100.00% | 41.29 | 100.00% | 100.00% | 42.62 | 100.00% | 100.00% |
| Raw variance explained by measures | 24.55 | 63.70% | 63.70% | 27.29 | 66.10% | 66.20% | 26.62 | 62.50% | 62.50% |
| Raw variance explained by persons | 2.85 | 7.40% | 7.40% | 2.06 | 5.00% | 5.00% | 2.68 | 6.30% | 6.30% |
| Raw variance explained by items | 21.70 | 56.30% | 56.30% | 25.23 | 61.10% | 61.20% | 23.94 | 56.20% | 56.20% |
| Raw unexplained variance (total) | 14.00 | 36.30% | 36.30% | 14.00 | 33.90% | 33.80% | 16.00 | 37.50% | 37.50% |
| Unexplained variance in 1st contrast | 6.22 | 16.10% | 44.50% | 6.22 | 15.10% | 44.40% | 7.02 | 16.50% | 43.90% |
| Unexplained variance in 2nd contrast | 1.49 | 3.90% | 10.60% | 1.47 | 3.60% | 10.50% | 1.62 | 3.80% | 10.10% |
| Unexplained variance in 3rd contrast | 1.29 | 3.30% | 9.20% | 1.32 | 3.20% | 9.40% | 1.29 | 3.00% | 8.10% |
| Unexplained variance in 4th contrast | 0.81 | 2.10% | 5.80% | 0.85 | 2.00% | 6.00% | 1.05 | 2.50% | 6.60% |
| Unexplained variance in 5th contrast | 0.68 | 1.80% | 4.90% | 0.71 | 1.70% | 5.00% | 0.71 | 1.70% | 4.40% |

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 39.19 | 100.00% | 100.00% | 37.79 | 100.00% | 100.00% | 37.65 | 100.00% | 100.00% |
| Raw variance explained by measures | 25.19 | 64.30% | 64.50% | 23.79 | 62.90% | 63.30% | 23.65 | 62.80% | 63.20% |
| Raw variance explained by persons | 2.43 | 6.20% | 6.20% | 1.73 | 4.60% | 4.60% | 2.44 | 6.50% | 6.50% |
| Raw variance explained by items | 22.76 | 58.10% | 58.30% | 22.06 | 58.40% | 58.70% | 21.21 | 56.30% | 56.60% |
| Raw unexplained variance (total) | 14.00 | 35.70% | 35.50% | 14.00 | 37.10% | 36.70% | 14.00 | 37.20% | 36.80% |
| Unexplained variance in 1st contrast | 6.16 | 15.70% | 44.00% | 6.10 | 16.10% | 43.60% | 5.75 | 15.30% | 41.10% |
| Unexplained variance in 2nd contrast | 1.52 | 3.90% | 10.90% | 1.61 | 4.20% | 11.50% | 1.67 | 4.40% | 11.90% |
| Unexplained variance in 3rd contrast | 1.32 | 3.40% | 9.40% | 1.31 | 3.50% | 9.30% | 1.35 | 3.60% | 9.60% |
| Unexplained variance in 4th contrast | 0.80 | 2.00% | 5.70% | 0.79 | 2.10% | 5.60% | 0.85 | 2.30% | 6.10% |
| Unexplained variance in 5th contrast | 0.68 | 1.70% | 4.90% | 0.69 | 1.80% | 4.90% | 0.68 | 1.80% | 4.80% |

Notes. a > 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

An inspection of the PTMEAs for the SF-36 mental health scale indicated that all of the items had positive PTMEAs with acceptable values, supporting item-level polarity.

The functioning of the six rating scale categories was examined for the SF-36 mental health scale. Easier items are located at the bottom of the SF-36 mental health person-item map, while more difficult items are located at the top; the patterns of more and less difficult items for each of the six waves of data collection appear to be fairly consistent. The category logit measures ranged from -3.18 to 2.60 (see Table 17). Of the six rating scale categories, only one (category one) had infit MNSQ scores that fell outside the 0.70-1.30 range or a z-score outside the +2 to -2 range over the six waves of data collection. The infit MNSQ scores for this rating category ranged from 1.37 to 1.41 (see Table 17). The results indicated that the six-level rating scale used in the SF-36 mental health scale might not be the most robust to use (see Supplemental Figure 6); however, the full range of ratings was used by the participants who completed the SF-36 mental health scale.
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the latter five response categories were the most probable category for some part of the continuum. Rating category one was problematic.
Table 17. SF-36 mental health scale Rasch analysis summary of category structure for six waves of data collection.
WAVE 1 WAVE 2 (p. 120) WAVE 3 (p. 125) CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 22667 14 ( -3.07) 1.38 1.20 NONE 18463 13 ( -3.18) 1.41 1.22 NONE 14323 12 ( -3.18) 1.38 1.22 NONE 2 49420 30 -1.06 .75 .78 -1.91 43019 30 -1.08 .76 .81 -2.03 33416 28 -1.12 .78 .85 -2.02 3 15086 9 -.17 .96 .86 .66 12291 8 -.15 .97 .89 .71 10845 9 -.17 .98 .85 .57 4 25646 15 .40 1.02 1.11 -.41b 20753 14 .43 1.00 1.12 -.38b 18002 15 .44 1.00 1.06 -.38b 5 28636 17 1.14 1.06 1.31 .51b 24231 17 1.16 1.08 1.38 .53b 18787 16 1.20 1.13 1.28 .63 6 24973 15 (2.54) 1.00 1.07 1.15 23360 16 (2.56) 1.00 1.08 1.17 18313 15 (2.60) .95 1.02 1.19 WAVE 4 WAVE 5 WAVE 6 CAT. LABEL N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold N % Average Measures Infit MnSq Outfit MnSq Andrich Threshold 1 12561 13 ( -3.08) 1.37 1.21 NONE 10333 14 ( -3.00) 1.38 1.23 NONE 7471 14 ( -2.98) 1.37 1.21 NONE 2 27233 29 -1.11 .80 .88 -1.91 20854 28 -1.08 .82 .89 -1.82 14529 27 -1.09 .83 .91 -1.80 3 9548 10 -.18 .98 .82 .51 7515 10 -.19 .94 .76 .50 5675 11 -.19 .94 .78 .43 4 15240 16 .42 1.00 1.00 -.36b 12348 17 .40 .97 .94 -.40b 9024 17 .41 .98 .94 -.35b 5 15741 16 1.17 1.14 1.22 .60 12183 16 1.15 1.19 1.27 .61 8698 16 1.15 1.19 1.24 .64 6 15147 16 (2.57) .93 .99 1.16 11454 15 (2.53) .90 .99 1.11 8415 16 (2.51) .90 .96 1.07 Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.The Rasch output logit performance scores for the participants were compared to determine if any of the SF-36 mental scale items exhibited differential item functioning (DIF), based on marital status and area of residence (urban versus regional) (see Table18). Six of the SF-36 mental health items exhibited a consistent pattern of DIF over the six waves of data collection. Items SF01:Q6, MH01:Q9B, MH02:Q9C, MH03:Q9D, MH04:Q9F, and MH05:Q9H exhibited DIF based on both marital status and area of residence (see Table 18). It should be noted that items MH01:Q9B and MH03:Q9D had infit MNSQ statistics that fell outside the 0.7-1.30 range. SF-36 physical health items MH01:Q9B and MH03:Q9D appear to be particularly problematic items based on the RMM analysis findings.Table 18
Differential Item Functioning (DIF) for SF-36 mental health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
Wave 1 Wave 2 Wave 3 Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional SF36 ITEM No. SUMMARY DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability 17:Q5A 3.13 .206 0.00 .720 0.05 .975 -0.17 .122 0.07 .969 0.00 .978 18:Q5B 6.16 . 04 5 ∗ 0.00 .799 0.41 .814 0.00 .347 0.59 .745 0.00 .165 19:Q5C 4.23 .119 0.00 .505 0.17 .922 -0.30 .066 0.79 .673 0.00 .484 20:Q6 62.55 . 00 1 ∗ ∗ ∗ -0.09 .058 6.62 . 03 6 ∗ 0.00 .056 0.00 1.000 0.00 .415 23:Q9A 8.45 . 01 4 ∗ 0.00 .101 0.00 1.000 -0.05 .498 0.05 .979 0.00 .725 24:Q9B 14.83 . 00 1 ∗ ∗ ∗ 0.09 . 00 1 ∗ ∗ ∗ 0.41 .813 0.00 .553 11.22 . 00 4 ∗ ∗ 0.02 .093 25:Q9C 62.48 . 00 1 ∗ ∗ ∗ 0.12 . 00 1 ∗ ∗ ∗ 29.94 . 00 1 ∗ ∗ ∗ 0.48 .087 0.01 .996 0.07 . 00 9 ∗ ∗ 26:Q9D 9.01 . 01 1 ∗ -0.07 . 00 1 ∗ ∗ ∗ 0.16 .925 -0.07 .476 22.89 . 00 1 ∗ ∗ ∗ -0.06 . 00 1 ∗ ∗ ∗ 27:Q9E 8.72 . 01 3 ∗ 0.00 .741 0.49 .782 0.37 . 01 0 ∗ 0.57 .750 0.00 .207 28:Q9F 17.18 . 00 1 ∗ ∗ ∗ 0.08 . 00 1 ∗ ∗ ∗ 20.01 . 00 1 ∗ ∗ ∗ 0.00 .401 0.73 .694 0.05 .004 29:Q9G 5.04 .079 0.00 .719 3.76 .150 0.52 . 00 3 ∗ ∗ 11.42 . 00 3 ∗ ∗ 0.00 .815 30:Q9H 13.46 . 00 1 ∗ ∗ ∗ -0.06 . 00 1 ∗ ∗ 8.62 . 01 3 ∗ 0.00 .176 3.70 .155 -0.07 . 00 2 ∗ ∗ 31:Q9I 1.75 .414 0.00 .224 0.51 .773 -0.18 .308 0.00 1.000 0.00 .299 32:Q10 14.70 . 00 1 ∗ ∗ ∗ 0.00 .207 0.11 .947 0.00 .165 2.75 .250 0.00 .978 Wave 4 Wave 5 Wave 6 SF36 ITEM No. Marital status Urban and Regional Marital status Urban and Regional Marital status Urban and Regional SUMMARY DIF CHI-SQUARE (DIF = 1) PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability SUMMARY DIF CHI-SQUARE (DIF = 2) PROB. DIF CONTRAST Mantel- Haenszel Probability 17:Q5A 0.00 1.000 0.00 .618 0.07 .966 -0.03 .986 0.94 .623 -0.18 . 00 1 ∗ ∗ ∗ 18:Q5B 0.00 1.000 0.00 .639 2.43 .294 0.00 .395 0.39 .824 0.00 .543 19:Q5C 0.00 1.000 0.00 .497 0.71 .701 0.20 .271 0.59 .744 -0.19 . 00 3 ∗ ∗ 20:Q6 0.00 1.000 0.00 .779 6.37 . 04 0 ∗ -0.02 .337 2.26 .320 -0.03 .162 23:Q9A 0.00 1.000 0.00 .900 1.95 .373 0.19 .254 1.18 .551 -0.04 .176 24:Q9B 6.95 . 00 8 ∗ ∗ 0.00 .384 13.76 . 00 1 ∗ ∗ ∗ 0.00 .784 3.06 .213 0.00 .580 25:Q9C 0.00 1.000 0.06 . 03 0 ∗ 6.84 . 03 2 ∗ -0.68 .078 0.77 .678 0.08 .371 26:Q9D 0.00 1.000 0.00 .544 13.70 . 00 1 ∗ ∗ ∗ -0.02 .118 2.06 .354 0.00 .923 27:Q9E 0.00 1.000 0.00 .537 3.30 .189 -0.08 .720 1.67 .430 0.06 .215 28:Q9F 0.00 1.000 0.00 .687 0.87 .644 0.00 .819 0.43 .806 0.00 .408 29:Q9G 0.00 1.000 0.00 .694 0.20 .908 0.27 .570 0.63 .729 0.03 .278 30:Q9H 6.10 . 01 4 ∗ 0.00 .297 4.86 .086 0.05 .065 0.08 .962 0.00 .419 31:Q9I 0.00 1.000 -0.05 .112 1.10 .574 0.48 .170 0.04 .981 -0.04 .664 32:Q10 0.00 1.000 0.00 .414 1.56 .456 0.08 . 01 9 ∗ 0.05 .979 0.00 .434 Notes. PROB. = probability; p∗≤.05; p∗∗≤.01; p∗∗∗≤.001.
### 3.1. SF36 Total Scale Rasch Analysis for Six Waves of Data Collection
Total scale Rasch item statistics for the six waves of data collection are shown in Table 1. When all 36 SF-36 items were calibrated using the RMM for the six waves of data collection, MNSQ infit statistics ranged from 0.13 to 2.43 and outfit statistics from 0.22 to 2.64 (see Table 2). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being -3.01 and the highest being +2.31. This resulted in an average item separation index of 77.98 and an average item reliability of 1.00 over the six waves (see Table 3).
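The logit measures reported below are the quantities produced by this calibration. As a concrete illustration of what a person measure is, the following is a minimal sketch of maximum-likelihood estimation of one person's measure given fixed item difficulties, under a dichotomous Rasch simplification (the helper name and toy difficulties are illustrative, not from the study):

```python
import numpy as np

def estimate_person_measure(responses, difficulties, iters=20):
    """Newton-Raphson MLE of a person measure for a dichotomous Rasch model,
    given fixed item difficulties. (Perfect raw scores have no finite MLE.)"""
    x = np.asarray(responses, float)
    b = np.asarray(difficulties, float)
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))
        info = np.sum(p * (1.0 - p))          # test information at theta
        theta += (x.sum() - p.sum()) / info   # score residual / information
    return theta

# A raw score of 5 on these ten symmetric items yields a measure near 0 logits;
# in the Rasch model the person measure depends only on the raw score.
b = np.linspace(-2.0, 2.0, 10)
print(round(estimate_person_measure([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], b), 2))
```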
Table 1. SF-36 total scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 Wave 4 Wave 5 Wave 6 SF36 ITEM LOGIT MEASURE MODEL S.E PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR LOGIT MEASURE MODEL S.E. PTMEA CORR 1: Q1 -0.36 0.01 -0.16 -0.37 0.01 -0.14 -0.37 0.01 -0.09 -0.52 0.01 -0.06 -0.60 0.01 -0.05 -0.68 0.01 0.01 2: Q2 -0.39 0.01 0.04 -0.44 0.01 0.03 -0.51 0.01 0.03 -0.56 0.01 0.08 -0.66 0.01 0.06 -0.73 0.01 0.11 3: Q3A 1.94 0.02 0.24 1.96 0.02 0.26 2.05 0.02 0.29 2.08 0.02 0.30 2.09 0.03 0.29 2.31 0.04 0.27 4: Q3B 0.36 0.01 0.46 0.39 0.01 0.44 0.53 0.01 0.44 0.67 0.01 0.45 0.77 0.02 0.45 0.95 0.02 0.41 5: Q3C 0.30 0.01 0.47 0.32 0.01 0.46 0.34 0.01 0.44 0.44 0.01 0.46 0.49 0.02 0.44 0.57 0.02 0.41 6: Q3D 0.82 0.01 0.45 0.84 0.01 0.44 0.93 0.01 0.46 1.00 0.02 0.47 1.11 0.02 0.46 1.25 0.02 0.44 7: Q3E 0.13 0.01 0.51 0.15 0.01 0.50 0.24 0.01 0.50 0.28 0.01 0.49 0.36 0.02 0.49 0.46 0.02 0.47 8: Q3F 0.54 0.01 0.44 0.63 0.01 0.41 0.72 0.01 0.41 0.73 0.02 0.43 0.73 0.02 0.43 0.81 0.02 0.40 9: Q3G 0.44 0.01 0.49 0.53 0.01 0.46 0.73 0.01 0.46 0.87 0.02 0.47 1.05 0.02 0.44 1.24 0.02 0.44 10: Q3H 0.05 0.01 0.52 0.10 0.01 0.49 0.23 0.01 0.49 0.36 0.01 0.50 0.48 0.02 0.48 0.65 0.02 0.48 11: Q3I -0.14 0.01 0.48 -0.13 0.01 0.45 -0.07 0.01 0.46 -0.02 0.01 0.44 0.05 0.01 0.44 0.11 0.02 0.45 12: Q3J -0.28 0.01 0.36 -0.31 0.01 0.35 -0.29 0.01 0.32 -0.24 0.01 0.32 -0.23 0.01 0.32 -0.23 0.02 0.32 13: Q4A 1.26 0.01 0.35 1.31 0.01 0.33 1.41 0.02 0.29 1.41 0.02 0.28 1.46 0.02 0.26 1.47 0.03 0.27 14: Q4B 1.63 0.01 0.35 1.74 0.02 0.32 1.87 0.02 0.29 1.89 0.02 0.28 1.92 0.03 0.27 1.94 0.03 0.26 15: Q4C 1.47 0.01 0.36 1.53 0.02 0.36 1.60 0.02 0.31 1.69 0.02 0.30 1.74 0.02 0.28 1.78 0.03 0.24 16: Q4D 1.50 0.01 0.36 1.58 0.02 0.35 1.67 0.02 0.31 1.73 0.02 0.30 1.77 0.02 0.28 1.80 0.03 0.28 17: Q5A 1.02 0.01 0.37 1.01 0.01 0.35 1.00 0.01 0.31 0.96 0.02 0.30 0.92 0.02 0.30 0.89 0.02 0.27 18: Q5B 1.22 0.01 0.36 1.22 0.01 0.35 1.23 0.02 0.31 1.21 0.02 0.30 1.18 0.02 0.29 1.15 0.02 0.27 19: Q5C 1.05 0.01 0.35 1.03 0.01 0.33 1.01 0.01 0.31 0.99 0.02 0.29 0.95 0.02 0.29 0.91 0.02 0.26 20: Q6 1.17 0.01 -0.22 1.35 0.01 -0.20 0.93 0.01 -0.16 0.69 0.01 -0.16 0.48 0.02 -0.12 0.37 0.02 -0.08 21: Q7 -0.28 0.01 -0.06 -0.26 0.01 -0.04 -0.40 0.01 0.01 -0.50 0.01 0.01 -0.57 0.01 0.03 -0.67 0.01 0.07 22: Q8 0.68 0.01 -0.18 0.73 0.01 -0.14 0.44 0.01 -0.11 0.26 0.01 -0.09 0.12 0.01 -0.04 -0.01 0.02 -0.02 23: Q9A -0.59 0.01 -0.05 -0.67 0.01 -0.06 -0.79 0.01 0.00 -0.82 0.01 -0.02 -0.93 0.01 0.01 -1.06 0.01 0.05 24: Q9B -2.04 0.01 0.39 -2.30 0.01 0.36 -2.39 0.01 0.33 -2.40 0.01 0.31 -2.38 0.02 0.31 -2.42 0.02 0.27 25: Q9C -2.64 0.01 0.40 -2.92 0.02 0.35 -2.98 0.02 0.34 -3.01 0.02 0.30 -2.86 0.02 0.31 -2.89 0.02 0.27 26: Q9D -0.30 0.01 0.06 -0.20 0.01 0.01 -0.28 0.01 0.09 -0.29 0.01 0.07 -0.30 0.01 0.08 -0.37 0.01 0.12 27: Q9E -0.77 0.01 -0.05 -0.89 0.01 -0.09 -1.00 0.01 -0.04 -1.06 0.01 -0.05 -1.17 0.01 -0.02 -1.35 0.01 0.00 28: Q9F -2.01 0.01 0.40 -2.15 0.01 0.34 -2.15 0.01 0.35 -2.16 0.01 0.34 -2.13 0.02 0.33 -2.13 0.02 0.31 29: Q9G -1.63 0.01 0.44 -1.70 0.01 0.41 -1.67 0.01 0.40 -1.60 0.01 0.40 -1.56 0.01 0.40 -1.57 0.01 0.37 30: Q9H 0.24 0.01 0.14 0.33 0.01 0.09 0.26 0.01 0.13 0.23 0.01 0.14 0.14 0.01 0.12 0.08 0.02 0.15 31: Q9I -1.23 0.01 0.39 -1.22 0.01 0.37 -1.15 0.01 0.34 -1.07 0.01 0.36 -1.03 0.01 0.34 -1.04 0.01 0.31 32: Q10 -1.34 0.01 0.35 -1.39 0.01 0.31 -1.30 0.01 0.28 -1.23 0.01 0.26 -1.20 0.01 0.24 -1.18 0.01 0.24 33: Q11A -1.39 0.01 0.33 -1.48 0.01 0.31 -1.49 0.01 
0.28 -1.50 0.01 0.25 -1.49 0.01 0.26 -1.52 0.01 0.20 34: Q11B 0.28 0.01 0.03 0.43 0.01 -0.01 0.32 0.01 0.08 0.25 0.01 0.07 0.10 0.01 0.10 -0.01 0.02 0.14 35: Q11C -0.68 0.01 0.29 -0.75 0.01 0.27 -0.65 0.01 0.29 -0.59 0.01 0.25 -0.50 0.01 0.25 -0.48 0.01 0.24 36: Q11D -0.02 0.01 -0.06 0.01 0.01 -0.09 -0.03 0.01 0.00 -0.17 0.01 -0.01 -0.31 0.01 0.04 -0.40 0.01 0.09 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 2
SF-36 total scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM INFIT Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.92 -6.7 0.96 -2.9 0.86 -9.9 0.90 -7.3 0.83 -9.9 0.86 -9.8 2: Q2 0.51 -9.9 0.55 -9.9 0.53 -9.9 0.57 -9.9 0.54 -9.9 0.56 -9.9 3: Q3A 1.03 2.1 1.02 1.4 1.12 7.0 1.09 5.5 1.06 3.0 1.02 1.2 4: Q3B 0.59 -9.9 0.61 -9.9 0.64 -9.9 0.65 -9.9 0.72 -9.9 0.73 -9.9 5: Q3C 0.54 -9.9 0.56 -9.9 0.56 -9.9 0.57 -9.9 0.59 -9.9 0.60 -9.9 6: Q3D 0.82 -9.9 0.82 -9.9 0.84 -9.9 0.84 -9.9 0.86 -8.6 0.86 -8.7 7: Q3E 0.46 -9.9 0.48 -9.9 0.48 -9.9 0.50 -9.9 0.54 -9.9 0.56 -9.9 8: Q3F 0.66 -9.9 0.67 -9.9 0.72 -9.9 0.72 -9.9 0.75 -9.9 0.75 -9.9 9: Q3G 0.76 -9.9 0.78 -9.9 0.83 -9.9 0.85 -9.9 0.94 -3.7 0.94 -3.4 10: Q3H 0.46 -9.9 0.50 -9.9 0.53 -9.9 0.56 -9.9 0.65 -9.9 0.68 -9.9 11: Q3I 0.27 -9.9 0.30 -9.9 0.29 -9.9 0.31 -9.9 0.37 -9.9 0.39 -9.9 12: Q3J 0.17 -9.9 0.19 -9.9 0.13 -9.9 0.15 -9.9 0.17 -9.9 0.18 -9.9 13: Q4A 0.40 -9.9 0.41 -9.9 0.41 -9.9 0.42 -9.9 0.49 -9.9 0.50 -9.9 14: Q4B 0.57 -9.9 0.58 -9.9 0.61 -9.9 0.61 -9.9 0.66 -9.9 0.66 -9.9 15: Q4C 0.50 -9.9 0.51 -9.9 0.51 -9.9 0.52 -9.9 0.57 -9.9 0.58 -9.9 16: Q4D 0.51 -9.9 0.53 -9.9 0.54 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 17: Q5A 0.25 -9.9 0.27 -9.9 0.22 -9.9 0.23 -9.9 0.25 -9.9 0.26 -9.9 18: Q5B 0.37 -9.9 0.39 -9.9 0.36 -9.9 0.37 -9.9 0.39 -9.9 0.41 -9.9 19: Q5C 0.27 -9.9 0.29 -9.9 0.24 -9.9 0.26 -9.9 0.26 -9.9 0.28 -9.9 20: Q6 2.43 9.9 2.59 9.9 2.48 9.9 2.64 9.9 2.36 9.9 2.48 9.9 21: Q7 1.84 9.9 1.90 9.9 1.96 9.9 2.01 9.9 1.70 9.9 1.73 9.9 22: Q8 2.09 9.9 2.17 9.9 2.14 9.9 2.21 9.9 1.93 9.9 1.99 9.9 23: Q9A 1.48 9.9 1.52 9.9 1.48 9.9 1.52 9.9 1.44 9.9 1.46 9.9 24: Q9B 1.47 9.9 1.40 9.9 1.64 9.9 1.55 9.9 1.64 9.9 1.55 9.9 25: Q9C 1.67 9.9 1.53 9.9 1.82 9.9 1.66 9.9 1.74 9.9 1.57 9.9 26: Q9D 1.69 9.9 1.73 9.9 1.60 9.9 1.64 9.9 1.50 9.9 1.52 9.9 27: Q9E 1.55 9.9 1.58 9.9 1.55 9.9 1.58 9.9 1.49 9.9 1.50 9.9 28: Q9F 1.07 5.1 1.03 2.4 1.19 9.9 1.15 9.0 1.13 6.9 1.09 4.9 29: Q9G 1.02 2.0 1.00 0.2 1.04 3.0 1.02 1.4 1.02 1.1 1.00 -0.1 30: Q9H 1.63 9.9 1.62 9.9 1.49 9.9 1.50 9.9 1.35 9.9 1.36 9.9 31: Q9I 0.84 -9.9 0.84 -9.9 0.88 -9.9 0.88 -9.9 0.84 -9.9 0.84 -9.9 32: Q10 0.79 -9.9 0.79 -9.9 0.84 -9.9 0.83 -9.9 0.92 -6.6 0.92 -6.6 33: Q11A 0.77 -9.9 0.76 -9.9 0.64 -9.9 0.63 -9.9 0.66 -9.9 0.65 -9.9 34: Q11B 1.88 9.9 1.90 9.9 1.77 9.9 1.80 9.9 1.62 9.9 1.62 9.9 35: Q11C 1.04 3.3 1.05 4.1 0.95 -4.5 0.95 -3.8 0.99 -0.9 0.99 -0.5 36: Q11D 1.78 9.9 1.83 9.9 1.81 9.9 1.86 9.9 1.68 9.9 1.70 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1: Q1 0.72 -9.9 0.73 -9.9 0.69 -9.9 0.70 -9.9 0.65 -9.9 0.65 -9.9 2: Q2 0.50 -9.9 0.52 -9.9 0.49 -9.9 0.50 -9.9 0.46 -9.9 0.47 -9.9 3: Q3A 1.11 4.4 1.06 2.3 1.18 6.0 1.11 3.9 1.26 6.4 1.17 4.3 4: Q3B 0.78 -9.9 0.78 -9.9 0.81 -9.1 0.81 -9.4 0.91 -3.6 0.90 -4.1 5: Q3C 0.62 -9.9 0.64 -9.9 0.65 -9.9 0.66 -9.9 0.72 -9.9 0.72 -9.9 6: Q3D 0.87 -7.1 0.86 -7.7 0.92 -3.6 0.90 -4.6 0.97 -0.9 0.93 -2.4 7: Q3E 0.56 -9.9 0.57 -9.9 0.61 -9.9 0.63 -9.9 0.70 -9.9 0.70 -9.9 8: Q3F 0.73 -9.9 0.73 -9.9 0.71 -9.9 0.72 -9.9 0.76 -9.9 0.76 -9.9 9: Q3G 0.98 -1.0 0.97 -1.3 1.03 1.5 1.01 0.6 1.09 3.3 1.04 1.6 10: Q3H 0.74 -9.9 0.76 -9.9 0.83 -8.0 0.85 -7.4 0.94 -2.4 0.93 -2.6 11: Q3I 0.41 -9.9 0.43 -9.9 0.48 -9.9 0.51 -9.9 0.55 -9.9 0.57 -9.9 12: Q3J 0.24 -9.9 0.26 -9.9 0.29 -9.9 0.31 -9.9 0.33 -9.9 0.34 -9.9 13: Q4A 0.52 -9.9 0.54 -9.9 0.58 -9.9 0.60 -9.9 0.60 -9.9 0.61 -9.9 14: Q4B 0.69 -9.9 0.69 -9.9 0.73 -9.9 0.72 -9.9 0.74 -8.7 0.74 -8.9 15: Q4C 
0.62 -9.9 0.63 -9.9 0.67 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 16: Q4D 0.64 -9.9 0.64 -9.9 0.68 -9.9 0.68 -9.9 0.71 -9.9 0.71 -9.9 17: Q5A 0.27 -9.9 0.29 -9.9 0.30 -9.9 0.32 -9.9 0.32 -9.9 0.34 -9.9 18: Q5B 0.42 -9.9 0.43 -9.9 0.45 -9.9 0.47 -9.9 0.47 -9.9 0.49 -9.9 19: Q5C 0.29 -9.9 0.31 -9.9 0.32 -9.9 0.34 -9.9 0.33 -9.9 0.35 -9.9 20: Q6 2.17 9.9 2.27 9.9 2.06 9.9 2.14 9.9 2.00 9.9 2.06 9.9 21: Q7 1.61 9.9 1.63 9.9 1.52 9.9 1.53 9.9 1.40 9.9 1.41 9.9 22: Q8 1.75 9.9 1.80 9.9 1.65 9.9 1.68 9.9 1.56 9.9 1.59 9.9 23: Q9A 1.40 9.9 1.41 9.9 1.36 9.9 1.36 9.9 1.34 9.9 1.35 9.9 24: Q9B 1.62 9.9 1.53 9.9 1.61 9.9 1.53 9.9 1.58 9.9 1.51 9.9 25: Q9C 1.73 9.9 1.60 9.9 1.75 9.9 1.62 9.9 1.65 9.9 1.57 9.9 26: Q9D 1.51 9.9 1.53 9.9 1.47 9.9 1.49 9.9 1.41 9.9 1.42 9.9 27: Q9E 1.51 9.9 1.52 9.9 1.44 9.9 1.45 9.9 1.46 9.9 1.47 9.9 28: Q9F 1.15 7.5 1.11 5.5 1.16 7.0 1.12 5.3 1.12 4.6 1.09 3.6 29: Q9G 1.02 1.6 1.01 0.5 1.03 1.6 1.01 0.8 1.06 2.7 1.04 2.1 30: Q9H 1.36 9.9 1.36 9.9 1.35 9.9 1.37 9.9 1.29 9.9 1.29 9.9 31: Q9I 0.89 -8.2 0.89 -7.9 0.92 -5.2 0.92 -4.9 0.91 -5.1 0.91 -4.8 32: Q10 0.93 -5.1 0.94 -4.7 1.00 -0.2 1.00 0.0 1.06 3.0 1.06 3.1 33: Q11A 0.67 -9.9 0.66 -9.9 0.67 -9.9 0.66 -9.9 0.70 -9.9 0.69 -9.9 34: Q11B 1.58 9.9 1.59 9.9 1.44 9.9 1.45 9.9 1.38 9.9 1.38 9.9 35: Q11C 1.01 0.5 1.01 0.9 0.99 -0.6 1.00 -0.2 0.98 -1.0 0.98 -0.9 36: Q11D 1.61 9.9 1.64 9.9 1.51 9.9 1.53 9.9 1.43 9.9 1.43 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; Z-STD ≤-2.0 or ≥2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 3
SF-36 total scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| Mean | -.69 | -.68 | -.72 | -.75 | -.80 | -.85 |
| S.D. | .24 | .22 | .24 | .23 | .23 | .24 |
| MAX | .82 | .29 | .64 | .67 | .03 | .15 |
| MIN | -4.33 | -2.70 | -2.60 | -2.59 | -2.54 | -2.76 |
| Infit-MNSQ | 1.03 | 1.02 | 1.03 | 1.04 | 1.04 | 1.03 |
| Infit-ZSTD | -.30 | -.40 | -.30 | -.20 | -.20 | -.10 |
| Outfit-MNSQ | 1.01 | 1.01 | 1.00 | 1.00 | .99 | .99 |
| Outfit-ZSTD | -.40 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Person separation | .81c | .60c | .72c | .71c | .75c | .78c |
| Person reliability | .40a | .26a | .34a | .33a | .36a | .38a |
| **Items** | | | | | | |
| Mean | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.11 | 1.19 | 1.20 | 1.21 | 1.22 | 1.26 |
| MAX | 1.94 | 1.96 | 2.05 | 2.08 | 2.09 | 2.31 |
| MIN | -2.64 | -2.92 | -2.98 | -3.01 | -2.86 | -2.89 |
| Infit-MNSQ | .98 | .99 | .98 | .98 | .98 | .99 |
| Infit-ZSTD | -2.30 | -2.30 | -2.20 | -1.90 | -1.40 | -.90 |
| Outfit-MNSQ | .99 | 1.00 | .98 | .98 | .98 | .98 |
| Outfit-ZSTD | -2.30 | -2.30 | -2.30 | -2.00 | -1.50 | -1.10 |
| Item separation | 93.40 | 89.72 | 82.81 | 76.45 | 67.43 | 58.09 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 total scale person-item map in Supplemental Figure 1 shows evidence of consistent hierarchical ordering of the SF-36 total scale items. Less difficult items are located at the bottom of the person-item map, while more difficult items are located at the top. The figure also shows that, while each wave had a reasonable distribution of items in relation to item difficulty, several of the SF-36 total scale items have the same level of difficulty.

Rasch analysis shows that the average measures of the six rating scale categories increase monotonically: from -3.15, -1.36, -.25, .48, and 1.31 to 2.82 for wave one, and from -2.96, -1.30, -.31, .42, and 1.29 to 2.78 for wave six.

The average person measure was -0.75 logits (SD = 0.23) over the six waves of data collection (see Table 3). The mean person separation was 0.73, with a mean reliability of 0.35 (see Table 3). When examining the overall RMM output of the SF-36 total scale, the average person measure (-0.75 logits) was therefore below the average item measure (0.00 logits). The range of logit values for items was from +2.31 to -3.01. The person reliability was 0.35 and the item reliability was 1.00, which places the item reliability for the SF-36 total scale in the acceptable range and the person reliability in the unacceptable range. The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured. However, the separation index for persons was less than 2.0, indicating inadequate separation of participants on the construct.

Item fit to the unidimensionality requirement of the RMM was also examined. Eleven of the 36 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the -2 to +2 range. Specifically, items GH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, MH04:Q9F, VT03:Q9G, VT04:Q9I, SF02:Q10, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 2). In other words, only 30.6% (11 of 36) of the SF-36 total scale items met the RMM requirements. The following items had an Infit MNSQ statistic of less than 0.70: HT:Q2, PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, RE01:Q5A, RE02:Q5B, and RE03:Q5C.
The following items had an Infit MNSQ statistic of greater than 1.30: SF01:Q6, BP01:Q7, BP02:Q8, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH05:Q9H, GH03:Q11B, and GH05:Q11D.

The Winsteps RMM program determines the dimensionality of a scale using a Rasch-residual principal components analysis. When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 4). This indicated that the unidimensionality requirement of the SF-36 total scale was met. The raw variance explained by the SF-36 total scale over the six waves of data collection ranged from 58.5% to 62.1%, and the unexplained variance in the first contrast ranged from 11.9% to 14.5%. The residual analysis therefore indicated that no second dimension or factor existed. Linacre [32] suggests that a scale whose first single factor accounts for 60% or more of the variance can be considered a reasonable unidimensional construct, and “a second factor or residual factor should not indicate a substantial amount of variance if unidimensionality is tenable” [33, p. 192].
Table 4
SF-36 total scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
Waves 1-3:

| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 86.71 | 100.00% | 100.00% | 92.22 | 100.00% | 100.00% | 92.42 | 100.00% | 100.00% |
| Raw variance explained by measures | 50.71 | 58.50% | 58.70% | 56.22 | 61.00% | 61.10% | 56.42 | 61.00% | 61.30% |
| Raw variance explained by persons | 3.47 | 4.00% | 4.00% | 1.93 | 2.10% | 2.10% | 2.04 | 2.20% | 2.20% |
| Raw variance explained by items | 47.24 | 54.50% | 54.70% | 54.30 | 58.90% | 59.00% | 54.38 | 58.80% | 59.10% |
| Raw unexplained variance (total) | 36.00 | 41.50% | 41.30% | 36.00 | 39.00% | 38.90% | 36.00 | 39.00% | 38.70% |
| Unexplained variance in 1st contrast | 12.60 | 14.50% | 35.00% | 12.57 | 13.60% | 34.90% | 12.26 | 13.30% | 34.10% |
| Unexplained variance in 2nd contrast | 3.02 | 3.50% | 8.40% | 3.05 | 3.30% | 8.50% | 3.03 | 3.30% | 8.40% |
| Unexplained variance in 3rd contrast | 1.89 | 2.20% | 5.20% | 1.78 | 1.90% | 4.90% | 1.84 | 2.00% | 5.10% |
| Unexplained variance in 4th contrast | 1.59 | 1.80% | 4.40% | 1.54 | 1.70% | 4.30% | 1.50 | 1.60% | 4.20% |
| Unexplained variance in 5th contrast | 1.24 | 1.40% | 3.40% | 1.27 | 1.40% | 3.50% | 1.26 | 1.40% | 3.50% |

Waves 4-6:

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 92.10 | 100.00% | 100.00% | 91.96 | 100.00% | 100.00% | 94.92 | 100.00% | 100.00% |
| Raw variance explained by measures | 56.10 | 60.90% | 61.50% | 55.96 | 60.90% | 61.70% | 58.92 | 62.10% | 63.00% |
| Raw variance explained by persons | 3.59 | 3.90% | 3.90% | 4.05 | 4.40% | 4.50% | 4.57 | 4.80% | 4.90% |
| Raw variance explained by items | 52.51 | 57.00% | 57.60% | 51.91 | 56.40% | 57.20% | 54.35 | 57.30% | 58.10% |
| Raw unexplained variance (total) | 36.00 | 39.10% | 38.50% | 36.00 | 39.10% | 38.30% | 36.00 | 37.90% | 37.00% |
| Unexplained variance in 1st contrast | 12.41 | 13.50% | 34.50% | 12.08 | 13.10% | 33.60% | 11.33 | 11.90% | 31.50% |
| Unexplained variance in 2nd contrast | 3.06 | 3.30% | 8.50% | 3.20 | 3.50% | 8.90% | 3.22 | 3.40% | 8.90% |
| Unexplained variance in 3rd contrast | 1.88 | 2.00% | 5.20% | 1.95 | 2.10% | 5.40% | 2.17 | 2.30% | 6.00% |
| Unexplained variance in 4th contrast | 1.50 | 1.60% | 4.20% | 1.53 | 1.70% | 4.30% | 1.55 | 1.60% | 4.30% |
| Unexplained variance in 5th contrast | 1.27 | 1.40% | 3.50% | 1.25 | 1.40% | 3.50% | 1.30 | 1.40% | 3.60% |

Notes. a > 60% unexplained variance in the Rasch factor; b Eigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

The point-measure correlation (PTMEA) ranges from +1 to -1, “with negative items suggesting improper scoring or not functioning as expected” [33, p. 192]. An inspection of the PTMEAs for the SF-36 total scale indicated that items GH01:Q1, SF01:Q6, BP01:Q7, and VT02:Q9E had consistently negative PTMEAs over the six waves of data collection. The remaining SF-36 total scale items had positive PTMEAs with acceptable correlation values, supporting item-level polarity.

The functioning of the six rating scale categories was examined for the SF-36 total scale. Rating scale frequency and percent indicated that all categories were used by the participants. The category use statistics are presented in Table 5. The category logit measures ranged from -3.19 to 2.86 (see Table 5). None of the infit MNSQ scores fell outside the 0.7-1.30 range, and none of the z-scores fell outside the -2 to +2 range. The results indicated that the six-level rating scale used in the SF-36 total scale fits the predictive RMM appropriately (see Supplemental Figure 2), and the full range of ratings was used by the participants who completed the SF-36 total scale.
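The category structure summarised in Table 5 follows the Andrich rating scale model, in which the probability of a response in category k depends on the person measure, the item difficulty, and a set of thresholds shared by all items. The following minimal sketch is not the Winsteps implementation: the threshold values are the wave-one Andrich thresholds from Table 5, while the person and item values are illustrative only.

```python
import numpy as np

def category_probabilities(theta, delta, thresholds):
    """Andrich rating scale model: P(X = k) for k = 0..m.

    theta      : person measure (logits)
    delta      : item difficulty (logits)
    thresholds : Andrich thresholds tau_1..tau_m
    P(X = k) is proportional to exp(sum_{j<=k} (theta - delta - tau_j)),
    with the empty sum for k = 0 taken as zero.
    """
    taus = np.asarray(thresholds, dtype=float)
    psi = np.concatenate(([0.0], np.cumsum(theta - delta - taus)))
    p = np.exp(psi - psi.max())  # subtract the max for numerical stability
    return p / p.sum()

# Wave-one Andrich thresholds for categories 2-6 (Table 5)
taus_wave1 = [-1.88, -0.54, 0.61, 0.28, 1.53]
print(category_probabilities(theta=-0.75, delta=0.0, thresholds=taus_wave1))
```

Evaluating these probabilities over a grid of person measures produces the category probability curves discussed next; when a threshold pair is disordered (such as the .61/.28 pair flagged "b" in Table 5), the category between them is never the single most probable response.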
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that each response category was the most probable category for some part of the continuum.
Table 5
SF-36 total scale Rasch analysis of summary of category structure for six waves of data collection.
| Wave | Cat. | N | % | Average measure | Infit MnSq | Outfit MnSq | Andrich threshold |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 84119 | 19 | (-3.15) | 1.30 | 1.17 | NONE |
| 1 | 2 | 133566 | 30 | -1.36 | .99 | 1.01 | -1.88 |
| 1 | 3 | 96735 | 22 | -.25 | .66 | .61 | -.54 |
| 1 | 4 | 40204 | 9 | .48 | .98 | 1.04 | .61 |
| 1 | 5 | 44040 | 10 | 1.31 | 1.06 | 1.17 | .28b |
| 1 | 6 | 25211 | 6 | (2.82) | .93 | 1.01 | 1.53 |
| 2 | 1 | 72530 | 19 | (-3.19) | 1.28 | 1.17 | NONE |
| 2 | 2 | 114964 | 31 | -1.40 | 1.02 | 1.06 | -1.93 |
| 2 | 3 | 82817 | 22 | -.26 | .67 | .63 | -.57 |
| 2 | 4 | 34325 | 9 | .50 | .97 | 1.03 | .61 |
| 2 | 5 | 37154 | 10 | 1.34 | 1.07 | 1.20 | .34b |
| 2 | 6 | 23593 | 6 | (2.86) | .92 | .99 | 1.56 |
| 3 | 1 | 61757 | 20 | (-3.14) | 1.25 | 1.15 | NONE |
| 3 | 2 | 88701 | 29 | -1.39 | 1.04 | 1.08 | -1.87 |
| 3 | 3 | 63285 | 20 | -.28 | .73 | .67 | -.57 |
| 3 | 4 | 29486 | 9 | .48 | .94 | .96 | .48 |
| 3 | 5 | 28991 | 9 | 1.34 | 1.09 | 1.16 | .41b |
| 3 | 6 | 18470 | 6 | (2.86) | .87 | .93 | 1.55 |
| 4 | 1 | 55809 | 22 | (-3.06) | 1.24 | 1.15 | NONE |
| 4 | 2 | 72286 | 28 | -1.35 | 1.05 | 1.10 | -1.79 |
| 4 | 3 | 50125 | 19 | -.30 | .78 | .71 | -.54 |
| 4 | 4 | 25807 | 10 | .45 | .90 | .88 | .38 |
| 4 | 5 | 24560 | 10 | 1.32 | 1.10 | 1.13 | .41 |
| 4 | 6 | 15297 | 6 | (2.85) | .86 | .91 | 1.54 |
| 5 | 1 | 47041 | 24 | (-3.00) | 1.26 | 1.16 | NONE |
| 5 | 2 | 54905 | 27 | -1.31 | 1.06 | 1.09 | -1.72 |
| 5 | 3 | 36216 | 18 | -.30 | .83 | .76 | -.49 |
| 5 | 4 | 21172 | 11 | .43 | .87 | .81 | .27 |
| 5 | 5 | 18847 | 9 | 1.29 | 1.13 | 1.15 | .46 |
| 5 | 6 | 11583 | 6 | (2.80) | .83 | .88 | 1.48 |
| 6 | 1 | 36377 | 25 | (-2.96) | 1.23 | 1.14 | NONE |
| 6 | 2 | 37987 | 26 | -1.30 | 1.06 | 1.07 | -1.67 |
| 6 | 3 | 24952 | 17 | -.31 | .88 | .83 | -.49 |
| 6 | 4 | 15554 | 11 | .42 | .87 | .79 | .23 |
| 6 | 5 | 13560 | 9 | 1.29 | 1.15 | 1.15 | .50 |
| 6 | 6 | 8495 | 6 | (2.78) | .83 | .88 | 1.44 |

Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

To investigate the possibility of item bias, differential item functioning (DIF) analysis was conducted to determine whether different groups of participants, based on marital status and area of residence (urban versus regional; see Table 6), responded differently to the SF-36 total scale items despite having the same level of the latent trait being measured [34]. Three of the SF-36 items exhibited a consistent pattern of DIF over the six waves of data collection for both marital status and area of residence, those being MH01:Q9B, MH02:Q9C, and MH05:Q9H. It should be noted that these three items also exhibited MNSQ infit scores outside the 0.7-1.30 range and/or z-scores outside the -2 to +2 range.
Table 6
Differential Item Functioning (DIF) for SF-36 total scale Rasch analysis for six waves of data collection based on marital status and area of residence.
Waves 1-3 (per wave: marital status summary DIF chi-square (DIF = 2) with its probability; urban versus regional DIF contrast with its Mantel-Haenszel probability):

| Item | W1 chi-sq | W1 prob. | W1 contrast | W1 M-H prob. | W2 chi-sq | W2 prob. | W2 contrast | W2 M-H prob. | W3 chi-sq | W3 prob. | W3 contrast | W3 M-H prob. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 8.68 | .013∗∗ | 0.00 | .924 | 1.78 | .407 | 0.19 | .452 | 0.25 | .882 | 0.00 | .861 |
| 2 | 3.17 | .202 | 0.00 | .295 | 0.55 | .760 | 0.00 | .791 | 0.14 | .936 | 0.00 | .134 |
| 3 | 4.95 | .083 | 0.00 | .811 | 0.21 | .903 | 0.20 | .326 | 5.47 | .064 | 0.00 | .907 |
| 4 | 0.87 | .647 | 0.00 | .492 | 0.44 | .804 | 0.00 | .010∗ | 0.34 | .847 | 0.00 | .438 |
| 5 | 0.89 | .640 | 0.05 | .001∗∗∗ | 0.09 | .959 | -0.14 | .983 | 0.40 | .818 | 0.02 | .002∗∗ |
| 6 | 0.47 | .792 | 0.00 | .142 | 4.67 | .095 | 0.00 | .288 | 0.57 | .750 | 0.00 | .619 |
| 7 | 0.06 | .971 | 0.00 | .687 | 2.94 | .227 | -0.01 | .800 | 1.39 | .496 | -0.02 | .054 |
| 8 | 0.32 | .855 | 0.00 | .362 | 0.27 | .875 | 0.02 | .033∗ | 0.06 | .974 | 0.00 | .243 |
| 9 | 13.66 | .001∗∗∗ | -0.06 | .003∗∗ | 2.06 | .354 | -0.14 | .603 | 0.70 | .704 | -0.06 | .072 |
| 10 | 11.27 | .004∗∗ | 0.00 | .030∗ | 7.04 | .029∗ | 0.06 | .001∗∗∗ | 0.00 | 1.000 | -0.06 | .006∗∗ |
| 11 | 3.41 | .179 | 0.00 | .071 | 6.25 | .043∗ | 0.04 | .503 | 0.00 | 1.000 | 0.00 | .473 |
| 12 | 0.16 | .926 | 0.00 | .906 | 0.10 | .952 | 0.00 | .981 | 0.69 | .706 | 0.00 | .722 |
| 13 | 2.93 | .227 | 0.00 | .845 | 0.09 | .959 | -0.19 | .474 | 0.04 | .982 | 0.00 | .159 |
| 14 | 0.96 | .618 | 0.00 | .327 | 3.10 | .210 | 0.00 | .822 | 6.13 | .046∗ | -0.05 | .126 |
| 15 | 2.37 | .303 | 0.00 | .366 | 0.06 | .970 | 0.07 | .660 | 0.38 | .828 | 0.00 | .815 |
| 16 | 0.00 | 1.000 | 0.00 | .591 | 0.05 | .976 | 0.00 | .358 | 4.14 | .124 | 0.00 | .317 |
| 17 | 1.80 | .404 | 0.00 | .581 | 0.10 | .952 | -0.02 | .956 | 0.03 | .987 | 0.00 | .475 |
| 18 | 3.78 | .149 | 0.00 | .704 | 0.52 | .770 | -0.02 | .238 | 0.62 | .731 | 0.00 | .571 |
| 19 | 1.54 | .460 | 0.00 | .892 | 0.23 | .893 | -0.10 | .836 | 0.06 | .971 | 0.00 | .882 |
| 20 | 55.71 | .001∗∗∗ | -0.07 | .036 | 7.62 | .022∗ | 0.00 | .526 | 0.06 | .970 | 0.00 | .088 |
| 21 | 3.24 | .195 | -0.06 | .011∗ | 1.17 | .554 | 0.15 | .087 | 0.00 | 1.000 | 0.00 | .784 |
| 22 | 33.92 | .001∗∗∗ | 0.00 | .239 | 4.09 | .127 | 0.00 | .661 | 1.52 | .465 | 0.03 | .649 |
| 23 | 7.12 | .028∗ | 0.00 | .100 | 2.90 | .231 | -0.06 | .436 | 0.18 | .916 | 0.00 | .498 |
| 24 | 23.59 | .001∗∗∗ | 0.11 | .001∗∗∗ | 0.12 | .942 | 0.00 | .993 | 13.40 | .001∗∗∗ | 0.00 | .106 |
| 25 | 64.84 | .001∗∗∗ | 0.15 | .001∗∗∗ | 30.23 | .001∗∗∗ | -0.38 | .099 | 0.01 | .997 | 0.07 | .020∗ |
| 26 | 10.47 | .005∗∗ | -0.07 | .001∗∗∗ | 2.13 | .341 | 0.00 | .512 | 28.34 | .001∗∗∗ | -0.07 | .001∗∗∗ |
| 27 | 13.71 | .001∗∗∗ | 0.00 | .778 | 9.09 | .010∗∗ | -0.07 | .924 | 0.85 | .651 | 0.00 | .914 |
| 28 | 18.73 | .001∗∗∗ | 0.10 | .001∗∗∗ | 18.70 | .001∗∗∗ | 0.00 | .590 | 0.79 | .671 | 0.05 | .003∗∗ |
| 29 | 9.31 | .009∗∗ | 0.00 | .047∗ | 10.32 | .006∗∗ | -0.43 | .214 | 13.34 | .001∗∗∗ | 0.00 | .720 |
| 30 | 14.58 | .001∗∗ | -0.06 | .008∗∗ | 14.57 | .001∗∗∗ | 0.00 | .403 | 3.83 | .145 | -0.07 | .001∗∗∗ |
| 31 | 7.38 | .024∗ | 0.02 | .010∗∗ | 1.40 | .493 | -0.23 | .687 | 2.57 | .273 | 0.00 | .108 |
| 32 | 18.09 | .001∗∗∗ | 0.00 | .028∗ | 0.65 | .720 | 0.00 | .908 | 6.31 | .042∗ | 0.00 | .422 |
| 33 | 15.01 | .001∗∗∗ | 0.02 | .005∗∗ | 0.40 | .820 | -0.14 | .541 | 0.00 | 1.000 | 0.02 | .006∗∗ |
| 34 | 30.39 | .001∗∗∗ | 0.00 | .963 | 1.61 | .443 | 0.00 | .937 | 6.57 | .037∗ | 0.00 | .823 |
| 35 | 9.62 | .008∗∗ | 0.00 | .606 | 0.37 | .833 | -0.26 | .284 | 0.31 | .859 | 0.02 | .020∗ |
| 36 | 29.49 | .001∗∗∗ | -0.06 | .016∗ | 1.11 | .571 | 0.00 | .357 | 5.88 | .052 | -0.02 | .042∗ |

Waves 4-6:

| Item | W4 chi-sq | W4 prob. | W4 contrast | W4 M-H prob. | W5 chi-sq | W5 prob. | W5 contrast | W5 M-H prob. | W6 chi-sq | W6 prob. | W6 contrast | W6 M-H prob. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.22 | .898 | 0.00 | .261 | 4.26 | .117 | 0.14 | .769 | 1.14 | .560 | 0.11 | .198 |
| 2 | 0.35 | .841 | 0.00 | .009∗∗ | 2.92 | .229 | 0.00 | .976 | 0.15 | .930 | 0.00 | .275 |
| 3 | 0.25 | .882 | 0.00 | .078 | 2.24 | .323 | -0.02 | .403 | 5.55 | .060 | 0.00 | .613 |
| 4 | 0.03 | .987 | 0.00 | .145 | 1.97 | .369 | 0.00 | .756 | 4.67 | .100 | 0.00 | .027 |
| 5 | 0.66 | .716 | 0.06 | .001∗∗∗ | 0.46 | .795 | 0.27 | .618 | 1.40 | .490 | 0.43 | .083 |
| 6 | 6.42 | .039∗ | 0.00 | .555 | 5.78 | .054 | 0.00 | .271 | 11.47 | .01∗∗∗ | -0.14 | .427 |
| 7 | 2.74 | .251 | 0.00 | .705 | 4.04 | .130 | -0.20 | .574 | 8.64 | .01∗∗ | 0.04 | .948 |
| 8 | 1.19 | .549 | 0.00 | .165 | 0.87 | .645 | 0.05 | .039∗ | 1.17 | .560 | 0.00 | .117 |
| 9 | 4.04 | .130 | -0.04 | .998 | 1.92 | .379 | -0.20 | .371 | 3.26 | .190 | -0.04 | .752 |
| 10 | 1.85 | .392 | 0.00 | .894 | 2.52 | .280 | 0.11 | .001∗∗∗ | 2.21 | .330 | 0.10 | .001∗∗∗ |
| 11 | 3.41 | .179 | 0.00 | .186 | 2.23 | .324 | -0.31 | .823 | 1.92 | .380 | -0.08 | .821 |
| 12 | 0.08 | .965 | 0.00 | .394 | 0.00 | 1.000 | -0.02 | .598 | 1.52 | .460 | 0.00 | .357 |
| 13 | 0.16 | .927 | 0.00 | .214 | 0.01 | .998 | 0.01 | .649 | 1.11 | .570 | -0.04 | .916 |
| 14 | 0.03 | .986 | 0.00 | .368 | 1.12 | .569 | -0.06 | .033∗ | 1.21 | .540 | -0.07 | .274 |
| 15 | 3.06 | .214 | 0.00 | .611 | 0.00 | 1.000 | -0.06 | .860 | 0.86 | .650 | -0.09 | .225 |
| 16 | 2.99 | .221 | 0.00 | .578 | 1.01 | .602 | 0.00 | .833 | 1.66 | .430 | 0.00 | .499 |
| 17 | 0.57 | .753 | 0.00 | .475 | 1.03 | .594 | -0.10 | .754 | 0.64 | .730 | 0.05 | .290 |
| 18 | 0.08 | .961 | 0.00 | .671 | 4.27 | .116 | -0.07 | .210 | 0.13 | .940 | -0.08 | .987 |
| 19 | 0.11 | .947 | 0.00 | .420 | 2.78 | .246 | -0.08 | .828 | 0.19 | .910 | -0.08 | .986 |
| 20 | 0.36 | .837 | 0.00 | .089 | 5.27 | .070 | -0.05 | .120 | 4.98 | .080 | -0.07 | .758 |
| 21 | 1.67 | .430 | 0.00 | .169 | 1.16 | .556 | 0.10 | .439 | 0.21 | .900 | 0.10 | .046∗ |
| 22 | 0.54 | .762 | 0.00 | .049∗ | 2.89 | .233 | 0.00 | .446 | 1.95 | .370 | 0.00 | .874 |
| 23 | 21.23 | .001∗∗∗ | 0.07 | .002∗∗ | 0.50 | .777 | -0.07 | .442 | 1.02 | .600 | 0.00 | .409 |
| 24 | 0.63 | .730 | 0.00 | .143 | 22.80 | .001∗∗∗ | 0.00 | .897 | 6.77 | .030∗ | 0.02 | .084 |
| 25 | 13.68 | .001∗∗∗ | 0.00 | .098 | 11.59 | .003∗∗ | 0.00 | .638 | 1.33 | .510 | -0.06 | .169 |
| 26 | 0.41 | .817 | 0.00 | .021∗ | 8.24 | .016∗ | 0.02 | .274 | 1.17 | .550 | -0.03 | .566 |
| 27 | 0.48 | .787 | 0.00 | .163 | 1.40 | .494 | 0.22 | .323 | 4.61 | .100 | 0.18 | .521 |
| 28 | 9.62 | .008∗∗ | 0.00 | .890 | 3.16 | .203 | -0.07 | .109 | 0.04 | .980 | -0.11 | .169 |
| 29 | 0.05 | .979 | -0.06 | .008∗∗ | 4.42 | .108 | 0.30 | .161 | 2.07 | .350 | 0.30 | .104 |
| 30 | 2.04 | .357 | 0.00 | .035∗ | 6.85 | .032∗ | 0.00 | .859 | 1.79 | .400 | 0.00 | .517 |
| 31 | 0.47 | .789 | 0.00 | .068 | 6.88 | .031∗ | 0.33 | .165 | 0.00 | 1.000 | 0.12 | .985 |
| 32 | 0.16 | .923 | 0.00 | .477 | 2.37 | .302 | 0.00 | .478 | 0.07 | .970 | -0.05 | .889 |
| 33 | 3.33 | .186 | 0.00 | .180 | 5.40 | .066 | 0.05 | .851 | 0.45 | .800 | -0.20 | .001∗∗∗ |
| 34 | 0.00 | 1.000 | -0.03 | .010∗ | 1.00 | .605 | 0.00 | .477 | 0.49 | .780 | 0.00 | .999 |
| 35 | 0.00 | 1.000 | -0.04 | .808 | 0.54 | .764 | 0.27 | .217 | 2.08 | .350 | -0.21 | .006∗∗ |
| 36 | 2.74 | .251 | 0.00 | .065 | 5.82 | .054 | 0.00 | .495 | 1.44 | .480 | -0.03 | .508 |

Notes. PROB. = probability; ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
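The person and item separation and reliability criteria applied throughout this section (separation of at least 2.0, reliability of at least 0.8) follow the usual Winsteps-style definitions. Below is a minimal sketch, assuming each person's logit measure and standard error are available; the input numbers are illustrative only, chosen to mimic the order of magnitude of the wave-one person statistics in Table 3.

```python
import numpy as np

def separation_and_reliability(measures, standard_errors):
    """Separation (G) and reliability (R) from logit measures and their SEs.

    observed variance = variance of the estimated measures
    error variance    = mean squared standard error (RMSE squared)
    "true" variance   = observed variance minus error variance
    G = true SD / RMSE;  R = true variance / observed variance
    """
    m = np.asarray(measures, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    observed_var = m.var()
    error_var = np.mean(se ** 2)
    true_var = max(observed_var - error_var, 0.0)
    g = np.sqrt(true_var) / np.sqrt(error_var)
    r = true_var / observed_var
    return g, r

# Illustrative only: a person spread and SE of roughly the size behind Table 3
rng = np.random.default_rng(0)
persons = rng.normal(-0.75, 0.24, size=5000)  # person measures (logits)
ses = np.full(5000, 0.19)                     # per-person standard errors
g, r = separation_and_reliability(persons, ses)
print(f"separation = {g:.2f}, reliability = {r:.2f}")  # well below 2.0 and 0.8
```

With a person spread this narrow relative to the measurement error, separation falls well below 2.0 and reliability below 0.8, which is exactly the pattern reported for persons in Tables 3, 9, and 15.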
## 3.2. SF-36 Physical Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 physical health items were included in the initial analysis using the RMM: GH01:Q1, PF01:Q3A, PF02:Q3B, PF03:Q3C, PF04:Q3D, PF05:Q3E, PF06:Q3F, PF07:Q3G, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, RP04:Q4D, BP01:Q7, BP02:Q8, GH02:Q11A, GH03:Q11B, GH04:Q11C, and GH05:Q11D (see Table 7). When the 21 SF-36 items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.18 to 2.66 and outfit statistics ranging from 0.19 to 2.77 (see Table 8). The mean item measure was 0.00 logits (SD = 0.99). With respect to logit measures, there was a broad range, the lowest value being -2.49 and the highest value being +1.79 (see Table 9). This resulted in an average item separation index of 60.32 and an average reliability of 1.00 over the six waves of data collection (see Table 9). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
Table 7
SF-36 Physical health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -0.84 0.01 -0.21 -0.92 0.01 -0.24 -0.91 0.01 -0.18 3:Q3A 1.62 0.02 0.37 1.58 0.02 0.44 1.63 0.02 0.43 4:Q3B 0.02 0.01 0.59 0.00 0.01 0.61 0.13 0.01 0.60 5:Q3C -0.05 0.01 0.59 -0.08 0.01 0.60 -0.08 0.01 0.59 6:Q3D 0.52 0.01 0.60 0.48 0.01 0.63 0.55 0.01 0.62 7:Q3E -0.24 0.01 0.65 -0.27 0.01 0.67 -0.19 0.01 0.67 8:Q3F 0.21 0.01 0.57 0.26 0.01 0.58 0.33 0.01 0.54 9:Q3G 0.11 0.01 0.64 0.15 0.01 0.66 0.34 0.01 0.63 10:Q3H -0.34 0.01 0.67 -0.34 0.01 0.68 -0.19 0.01 0.67 11:Q3I -0.57 0.01 0.59 -0.62 0.01 0.59 -0.54 0.01 0.61 12:Q3J -0.74 0.01 0.43 -0.84 0.01 0.40 -0.80 0.01 0.40 13:Q4A 0.96 0.01 0.42 0.95 0.01 0.41 1.02 0.02 0.38 14:Q4B 1.32 0.01 0.42 1.37 0.02 0.43 1.46 0.02 0.39 15:Q4C 1.16 0.01 0.46 1.17 0.01 0.50 1.20 0.02 0.42 16:Q4D 1.20 0.01 0.44 1.22 0.02 0.47 1.28 0.02 0.42 21:Q7 -0.74 0.01 -0.05 0.99 0.01 -0.19 -0.95 0.01 -0.02 22:Q8 0.36 0.01 -0.18 -0.78 0.01 -0.15 0.04 0.01 -0.14 33:Q11A -2.22 0.01 0.34 -2.49 0.01 0.34 -2.46 0.01 0.32 34:Q11B -0.07 0.01 0.02 0.04 0.01 -0.07 -0.09 0.01 0.02 35:Q11C -1.24 0.01 0.38 -1.42 0.01 0.38 -1.27 0.01 0.37 36:Q11D -0.42 0.01 -0.09 -0.45 0.01 -0.18 -0.50 0.01 -0.08 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. 1:Q1 -1.10 0.01 -0.14 -1.21 0.01 -0.13 -1.32 0.02 -0.06 3:Q3A 1.64 0.02 0.42 1.63 0.03 0.42 1.79 0.04 0.39 4:Q3B 0.26 0.02 0.60 0.33 0.02 0.60 0.47 0.02 0.57 5:Q3C 0.02 0.01 0.60 0.05 0.02 0.58 0.09 0.02 0.56 6:Q3D 0.59 0.02 0.62 0.67 0.02 0.62 0.77 0.02 0.59 7:Q3E -0.16 0.01 0.65 -0.09 0.02 0.65 -0.02 0.02 0.64 8:Q3F 0.32 0.02 0.57 0.30 0.02 0.57 0.34 0.02 0.53 9:Q3G 0.46 0.02 0.63 0.62 0.02 0.61 0.76 0.02 0.60 10:Q3H -0.07 0.01 0.66 0.04 0.02 0.65 0.17 0.02 0.63 11:Q3I -0.50 0.01 0.58 -0.43 0.02 0.58 -0.40 0.02 0.58 12:Q3J -0.76 0.01 0.42 -0.75 0.01 0.41 -0.79 0.02 0.39 13:Q4A 1.01 0.02 0.37 1.03 0.02 0.34 0.99 0.03 0.34 14:Q4B 1.47 0.02 0.38 1.47 0.03 0.35 1.45 0.03 0.33 15:Q4C 1.27 0.02 0.43 1.29 0.02 0.40 1.29 0.03 0.35 16:Q4D 1.31 0.02 0.41 1.33 0.02 0.39 1.32 0.03 0.36 21:Q7 -1.08 0.01 -0.01 -1.17 0.01 0.01 -1.31 0.02 0.08 22:Q8 -0.19 0.01 -0.13 -0.35 0.01 -0.06 -0.53 0.02 -0.02 33:Q11A -2.45 0.01 0.28 -2.43 0.02 0.26 -2.49 0.02 0.21 34:Q11B -0.20 0.01 0.01 -0.38 0.02 0.06 -0.53 0.02 0.11 35:Q11C -1.19 0.01 0.34 -1.08 0.01 0.33 -1.08 0.02 0.31 36:Q11D -0.68 0.01 -0.09 -0.85 0.01 -0.04 -0.98 0.02 0.02 Note. MODEL S.E. = model standard error; PTMEA CORR = point measure correlation.Table 8
SF-36 Physical health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.24 9.9 1.30 9.9 1.24 9.9 1.29 9.9 1.16 9.9 1.20 9.9 3:Q3A 0.93 -4.6 0.90 -6.3 0.97 -1.8 0.90 -6.2 0.93 -3.7 0.85 -7.6 4:Q3B 0.57 -9.9 0.59 -9.9 0.59 -9.9 0.60 -9.9 0.64 -9.9 0.65 -9.9 5:Q3C 0.53 -9.9 0.54 -9.9 0.53 -9.9 0.54 -9.9 0.54 -9.9 0.56 -9.9 6:Q3D 0.72 -9.9 0.73 -9.9 0.71 -9.9 0.71 -9.9 0.72 -9.9 0.71 -9.9 7:Q3E 0.44 -9.9 0.46 -9.9 0.45 -9.9 0.47 -9.9 0.48 -9.9 0.50 -9.9 8:Q3F 0.62 -9.9 0.63 -9.9 0.64 -9.9 0.64 -9.9 0.67 -9.9 0.67 -9.9 9:Q3G 0.71 -9.9 0.72 -9.9 0.73 -9.9 0.75 -9.9 0.81 -9.9 0.81 -9.9 10:Q3H 0.45 -9.9 0.49 -9.9 0.50 -9.9 0.53 -9.9 0.59 -9.9 0.62 -9.9 11:Q3I 0.28 -9.9 0.32 -9.9 0.30 -9.9 0.33 -9.9 0.36 -9.9 0.39 -9.9 12:Q3J 0.21 -9.9 0.23 -9.9 0.18 -9.9 0.19 -9.9 0.21 -9.9 0.23 -9.9 13:Q4A 0.36 -9.9 0.40 -9.9 0.37 -9.9 0.40 -9.9 0.44 -9.9 0.48 -9.9 14:Q4B 0.51 -9.9 0.53 -9.9 0.53 -9.9 0.54 -9.9 0.59 -9.9 0.60 -9.9 15:Q4C 0.44 -9.9 0.47 -9.9 0.43 -9.9 0.45 -9.9 0.49 -9.9 0.52 -9.9 16:Q4D 0.46 -9.9 0.49 -9.9 0.46 -9.9 0.48 -9.9 0.52 -9.9 0.55 -9.9 21:Q7 2.33 9.9 2.40 9.9 2.51 9.9 2.77 9.9 2.20 9.9 2.23 9.9 22:Q8 2.29 9.9 2.39 9.9 2.66 9.9 2.72 9.9 2.23 9.9 2.29 9.9 33:Q11A 1.24 9.9 1.20 9.9 1.10 7.1 1.06 3.9 1.12 7.2 1.08 4.4 34:Q11B 2.18 9.9 2.20 9.9 2.07 9.9 2.09 9.9 1.88 9.9 1.89 9.9 35:Q11C 1.25 9.9 1.26 9.9 1.16 9.9 1.17 9.9 1.17 9.9 1.18 9.9 36:Q11D 2.21 9.9 2.28 9.9 2.36 9.9 2.41 9.9 2.12 9.9 2.15 9.9 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 1:Q1 1.02 1.4 1.04 2.9 0.99 -0.8 1.00 0.0 0.92 -4.1 0.93 -3.4 3:Q3A 0.98 -0.9 0.87 -5.2 1.04 1.5 0.92 -2.8 1.12 3.1 0.95 -1.1 4:Q3B 0.67 -9.9 0.67 -9.9 0.69 -9.9 0.69 -9.9 0.76 -9.9 0.74 -9.9 5:Q3C 0.56 -9.9 0.57 -9.9 0.57 -9.9 0.58 -9.9 0.62 -9.9 0.63 -9.9 6:Q3D 0.72 -9.9 0.71 -9.9 0.76 -9.9 0.73 -9.9 0.80 -8.2 0.74 -9.9 7:Q3E 0.49 -9.9 0.51 -9.9 0.53 -9.9 0.55 -9.9 0.59 -9.9 0.60 -9.9 8:Q3F 0.63 -9.9 0.64 -9.9 0.62 -9.9 0.62 -9.9 0.65 -9.9 0.66 -9.9 9:Q3G 0.83 -9.9 0.81 -9.9 0.86 -7.0 0.83 -8.8 0.90 -3.9 0.83 -6.7 10:Q3H 0.66 -9.9 0.68 -9.9 0.72 -9.9 0.73 -9.9 0.79 -9.4 0.79 -9.5 11:Q3I 0.39 -9.9 0.42 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 0.53 -9.9 12:Q3J 0.28 -9.9 0.30 -9.9 0.33 -9.9 0.35 -9.9 0.37 -9.9 0.40 -9.9 13:Q4A 0.47 -9.9 0.51 -9.9 0.53 -9.9 0.58 -9.9 0.55 -9.9 0.59 -9.9 14:Q4B 0.62 -9.9 0.63 -9.9 0.65 -9.9 0.66 -9.9 0.68 -9.9 0.69 -9.9 15:Q4C 0.54 -9.9 0.57 -9.9 0.59 -9.9 0.61 -9.9 0.63 -9.9 0.65 -9.9 16:Q4D 0.56 -9.9 0.59 -9.9 0.60 -9.9 0.62 -9.9 0.63 -9.9 0.65 -9.9 21:Q7 2.08 9.9 2.09 9.9 1.96 9.9 1.96 9.9 1.79 9.9 1.79 9.9 22:Q8 2.07 9.9 2.11 9.9 1.95 9.9 1.97 9.9 1.85 9.9 1.86 9.9 33:Q11A 1.13 7.2 1.11 5.6 1.13 6.3 1.11 5.1 1.18 7.2 1.20 7.5 34:Q11B 1.85 9.9 1.86 9.9 1.69 9.9 1.69 9.9 1.62 9.9 1.59 9.9 35:Q11C 1.18 9.9 1.19 9.9 1.15 8.0 1.16 8.4 1.13 5.9 1.13 6.1 36:Q11D 2.05 9.9 2.09 9.9 1.95 9.9 1.96 9.9 1.83 9.9 1.82 9.9 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 9
SF-36 physical health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| MEAN | -1.77 | -1.85 | -1.90 | -1.92 | -1.95 | -2.07 |
| S.D. | .38 | .37 | .40 | .39 | .39 | .40 |
| MAX | 1.54 | -.37 | .40 | .92 | -.52 | .40 |
| MIN | -5.13 | -4.11 | -.09 | -5.08 | -4.52 | -.79 |
| Infit-MNSQ | 1.05 | 1.04 | 1.05 | 1.05 | 1.05 | 1.05 |
| Infit-ZSTD | -.10 | -.20 | -.10 | .00 | .00 | .00 |
| Outfit-MNSQ | 1.00 | 1.01 | .98 | .97 | .96 | .96 |
| Outfit-ZSTD | -.30 | -.30 | -.30 | -.20 | -.20 | -.20 |
| Person separation | .86c | .88c | .97c | .96c | .96c | .96c |
| Person reliability | .43a | .43a | .48a | .48a | .48a | .48a |
| **Items** | | | | | | |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | .91 | .99 | .98 | 1.00 | 1.02 | 1.08 |
| MAX | 1.62 | 1.58 | 1.63 | 1.64 | 1.63 | 1.79 |
| MIN | -2.22 | -2.49 | -2.46 | -2.45 | -2.43 | -2.49 |
| Infit-MNSQ | .95 | .98 | .95 | .94 | .94 | .95 |
| Infit-ZSTD | -3.00 | -3.00 | -3.10 | -3.40 | -3.40 | -3.30 |
| Outfit-MNSQ | .98 | 1.00 | .96 | .95 | .94 | .94 |
| Outfit-ZSTD | -3.10 | -3.40 | -3.50 | -3.60 | -3.70 | -3.60 |
| Item separation | 71.24 | 69.37 | 63.25 | 59.41 | 52.87 | 45.77 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 physical health scale person-item map is located in Supplemental Figure 3 and reports evidence of the hierarchical ordering of the SF-36 physical health scale items. Easier items are located at the bottom of the SF-36 physical health person-item map, while more difficult items are located at the top. The patterns of more challenging and less difficult items on the person-item map for each of the six waves of data collection appear to be fairly consistent. It should also be noted that several of the SF-36 physical health scale items have the same level of difficulty.

The average person measure was -1.91 logits (SD = 0.39) over the six waves of data collection (see Table 9). The mean person separation was 0.93, with a mean reliability of 0.46 (see Table 9). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 physical health construct. When examining the overall RMM output of the SF-36 physical health scale, the average person measure (-1.91 logits) was below the average item measure (0.00 logits). The range of logit values for items was from +1.79 to -2.49. The person reliability was 0.46 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 physical health scale in the acceptable range and the person reliability in the less than desired range.

The SF-36 physical health scale has a six-category rating scale, which generates five thresholds. Rasch analysis shows that the average measures of the six rating categories increase monotonically: from -3.86, -2.13, -.83, .10, and 1.96 to 5.32 for wave one, and from -3.64, -2.02, -.91, .01, and 2.00 to 5.24 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Seven of the 21 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the -2 to +2 range. Therefore, items GH01:Q1, PF01:Q3A, PF04:Q3D, PF06:Q3F, PF07:Q3G, GH02:Q11A, and GH04:Q11C met the RMM requirements (see Table 8). In other words, only 7 of 21, or 33.3%, of the SF-36 physical health scale items met the RMM requirements. The following items had an Infit MNSQ statistic of less than 0.70: PF02:Q3B, PF03:Q3C, PF05:Q3E, PF08:Q3H, PF09:Q3I, PF10:Q3J, RP01:Q4A, RP02:Q4B, RP03:Q4C, and RP04:Q4D.
The following items had an Infit MNSQ statistic of greater than 1.30: BP01:Q7, BP02:Q8, GH03:Q11B, and GH05:Q11D.

An inspection of the PTMEAs for the SF-36 physical health scale indicated that items GH01:Q1, BP01:Q7, BP02:Q8, and GH05:Q11D had consistently negative PTMEAs over the six waves of data collection. For all other items, the PTMEA correlations had acceptable values.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 10). This indicated that the unidimensionality requirement of the SF-36 physical health scale was met. The raw variance explained by the SF-36 physical health scale over the six waves of data collection ranged from 41.8% to 48.9%, and the unexplained variance in the first contrast ranged from 17.4% to 22.2%. The residual analysis therefore indicated that no second dimension or factor existed.
Table 10
SF-36 physical health scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
Waves 1-3:

| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 36.07 | 100.00% | 100.00% | 36.07 | 100.00% | 100.00% | 37.89 | 100.00% | 100.00% |
| Raw variance explained by measures | 15.07 | 41.80% | 42.50% | 15.07 | 41.80% | 42.50% | 16.89 | 44.60% | 45.70% |
| Raw variance explained by persons | 1.90 | 5.30% | 5.40% | 1.90 | 5.30% | 5.40% | 0.96 | 2.50% | 2.60% |
| Raw variance explained by items | 13.16 | 36.50% | 37.10% | 13.16 | 36.50% | 37.10% | 15.94 | 42.10% | 43.10% |
| Raw unexplained variance (total) | 21.00 | 58.20% | 57.50% | 21.00 | 58.20% | 57.50% | 21.00 | 55.40% | 54.30% |
| Unexplained variance in 1st contrast | 8.00 | 22.20% | 38.10% | 8.00 | 22.20% | 38.10% | 7.70 | 20.30% | 36.70% |
| Unexplained variance in 2nd contrast | 2.02 | 5.60% | 9.60% | 2.02 | 5.60% | 9.60% | 1.96 | 5.20% | 9.40% |
| Unexplained variance in 3rd contrast | 1.51 | 4.20% | 7.20% | 1.51 | 4.20% | 7.20% | 1.44 | 3.80% | 6.90% |
| Unexplained variance in 4th contrast | 1.31 | 3.60% | 6.20% | 1.31 | 3.60% | 6.20% | 1.23 | 3.20% | 5.80% |
| Unexplained variance in 5th contrast | 0.99 | 2.80% | 4.70% | 0.99 | 2.80% | 4.70% | 0.99 | 2.60% | 4.70% |

Waves 4-6:

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 37.07 | 100.00% | 100.00% | 39.34 | 100.00% | 100.00% | 41.08 | 100.00% | 100.00% |
| Raw variance explained by measures | 17.07 | 46.10% | 48.00% | 18.34 | 46.60% | 48.60% | 20.08 | 48.90% | 51.10% |
| Raw variance explained by persons | 2.45 | 6.60% | 6.90% | 2.42 | 6.10% | 6.40% | 2.68 | 6.50% | 6.80% |
| Raw variance explained by items | 14.62 | 39.40% | 41.10% | 15.92 | 40.50% | 42.20% | 17.40 | 42.30% | 44.20% |
| Raw unexplained variance (total) | 20.00 | 53.90% | 52.00% | 21.00 | 53.40% | 51.40% | 21.00 | 51.10% | 48.90% |
| Unexplained variance in 1st contrast | 6.64 | 17.90% | 33.20% | 7.50 | 19.10% | 35.70% | 7.14 | 17.40% | 34.00% |
| Unexplained variance in 2nd contrast | 2.10 | 5.70% | 10.50% | 2.06 | 5.20% | 9.80% | 2.24 | 5.50% | 10.70% |
| Unexplained variance in 3rd contrast | 1.54 | 4.20% | 7.70% | 1.58 | 4.00% | 7.50% | 1.56 | 3.80% | 7.40% |
| Unexplained variance in 4th contrast | 1.26 | 3.40% | 6.30% | 1.21 | 3.10% | 5.80% | 1.20 | 2.90% | 5.70% |
| Unexplained variance in 5th contrast | 1.07 | 2.90% | 5.30% | 1.03 | 2.60% | 4.90% | 1.04 | 2.50% | 4.90% |

Notes. a > 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

The functioning of the six rating scale categories was examined for the SF-36 physical health scale. The category logit measures ranged from -3.92 to 5.43 (see Table 11). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7-1.30 range and/or z-scores outside the -2 to +2 range over the six waves of data collection, this being category six. The infit MNSQ scores for this rating category ranged from 2.03 to 3.18 (see Table 11). The results indicated that the six-level rating scale used in the SF-36 physical health scale might not be the most robust to use (see Supplemental Figure 4); however, the full range of ratings was used by the participants who completed the SF-36 physical health scale. The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that the first five response categories were the most probable category for some part of the continuum. Rating category six was problematic.
Table 11
SF-36 physical health scale Rasch analysis of summary of category structure for six waves of data collection.
| Wave | Cat. | N | % | Average measure | Infit MnSq | Outfit MnSq | Andrich threshold |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 60721 | 23 | (-3.86) | 1.18 | 1.11 | NONE |
| 1 | 2 | 83039 | 32 | -2.13 | .93 | .94 | -2.56 |
| 1 | 3 | 73299 | 28 | -.83 | .66 | .59 | -1.54 |
| 1 | 4 | 12957 | 5 | .10 | 1.19 | 1.26 | .59 |
| 1 | 5 | 15144 | 6 | 1.96 | 1.05 | 1.15 | -.71b |
| 1 | 6 | 238 | 0 | (5.32) | 2.67 | 2.61 | 4.21 |
| 2 | 1 | 55350 | 25 | (-3.92) | 1.12 | 1.07 | NONE |
| 2 | 2 | 70454 | 32 | -2.19 | .93 | .92 | -2.60 |
| 2 | 3 | 62780 | 29 | -.85 | .66 | .60 | -1.61 |
| 2 | 4 | 11389 | 5 | .14 | 1.21 | 1.38 | .53 |
| 2 | 5 | 12634 | 6 | 2.03 | 1.13 | 1.33 | -.55b |
| 2 | 6 | 233 | 0 | (5.34) | 3.18 | 2.98 | 4.23 |
| 3 | 1 | 46995 | 28 | (-3.86) | 1.14 | 1.08 | NONE |
| 3 | 2 | 54692 | 32 | -2.17 | 1.00 | .99 | -2.54 |
| 3 | 3 | 46905 | 28 | -.90 | .72 | .62 | -1.61 |
| 3 | 4 | 9720 | 6 | .07 | 1.11 | 1.11 | .38 |
| 3 | 5 | 9942 | 6 | 2.05 | 1.08 | 1.17 | -.56b |
| 3 | 6 | 155 | 0 | (5.43) | 2.77 | 2.30 | 4.33 |
| 4 | 1 | 42924 | 29 | (-3.75) | 1.15 | 1.08 | NONE |
| 4 | 2 | 44603 | 30 | -2.09 | 1.05 | 1.02 | -2.42 |
| 4 | 3 | 36071 | 24 | -.89 | .76 | .64 | -1.52 |
| 4 | 4 | 8958 | 6 | .06 | 1.00 | .96 | .24 |
| 4 | 5 | 8592 | 6 | 2.01 | 1.10 | 1.16 | -.48b |
| 4 | 6 | 150 | 0 | (5.29) | 2.54 | 2.02 | 4.18 |
| 5 | 1 | 36502 | 31 | (-3.66) | 1.15 | 1.08 | NONE |
| 5 | 2 | 33751 | 29 | -2.01 | 1.08 | 1.01 | -2.33 |
| 5 | 3 | 25389 | 22 | -.87 | .82 | .68 | -1.44 |
| 5 | 4 | 7310 | 6 | .06 | .94 | .86 | .13 |
| 5 | 5 | 6464 | 6 | 1.96 | 1.09 | 1.12 | -.38b |
| 5 | 6 | 129 | 0 | (5.12) | 2.25 | 1.75 | 4.02 |
| 6 | 1 | 28787 | 36 | (-3.64) | 1.15 | 1.08 | NONE |
| 6 | 2 | 23233 | 29 | -2.02 | 1.10 | 1.01 | -2.30 |
| 6 | 3 | 16930 | 21 | -.91 | .86 | .73 | -1.46 |
| 6 | 4 | 5353 | 7 | .01 | .91 | .80 | .03 |
| 6 | 5 | 4694 | 6 | 2.00 | 1.09 | 1.10 | -.39b |
| 6 | 6 | 80 | 0 | (5.24) | 2.03 | 1.52 | 4.13 |

Notes. aAndrich threshold category increase of >5; bAndrich threshold category decrease where an increase is expected; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The Rasch output logit performance scores for the participants were compared to determine whether any of the SF-36 physical health scale items exhibited differential item functioning (DIF) based on marital status and area of residence (urban versus regional) (see Table 12). Four of the SF-36 physical health items exhibited a consistent pattern of DIF over the six waves of data collection. Item PF03:Q3C demonstrated DIF based on marital status alone, while items GH02:Q11A, GH04:Q11C, and GH05:Q11D exhibited DIF based on both marital status and area of residence (see Table 12). It should be noted that items GH02:Q11A and GH04:Q11C had infit MNSQ statistics that fell within the 0.70-1.30 range, while items PF03:Q3C and GH05:Q11D had MNSQ infit scores outside the 0.7-1.30 range and/or z-scores outside the -2 to +2 range. SF-36 physical health items PF03:Q3C and GH05:Q11D appear to be particularly problematic items based on the RMM analysis findings.
Table 12
Differential Item Functioning (DIF) for SF-36 physical health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
Waves 1-3 (per wave: marital status summary DIF chi-square (DIF = 2) with its probability; urban versus regional DIF contrast with its Mantel-Haenszel probability):

| SF36 item | W1 chi-sq | W1 prob. | W1 contrast | W1 M-H prob. | W2 chi-sq | W2 prob. | W2 contrast | W2 M-H prob. | W3 chi-sq | W3 prob. | W3 contrast | W3 M-H prob. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1:Q1 | 13.25 | .001∗∗∗ | 0.00 | .639 | 1.62 | .442 | 0.14 | .713 | 0.41 | .816 | 0.00 | .185 |
| 3:Q3A | 5.79 | .054 | 0.00 | .725 | 1.27 | .527 | 0.00 | .330 | 5.69 | .057 | 0.00 | .073 |
| 4:Q3B | 1.94 | .376 | 0.00 | .069 | 0.56 | .754 | 0.15 | .835 | 0.50 | .779 | 0.06 | .001∗∗∗ |
| 5:Q3C | 1.84 | .394 | 0.06 | .001∗∗∗ | 0.03 | .984 | 0.00 | .009∗∗ | 0.56 | .756 | 0.00 | .442 |
| 6:Q3D | 0.97 | .614 | 0.00 | .947 | 2.13 | .342 | -0.20 | .778 | 0.44 | .804 | 0.00 | .700 |
| 7:Q3E | 0.41 | .816 | 0.00 | .287 | 2.75 | .250 | 0.00 | .153 | 1.26 | .529 | 0.00 | .143 |
| 8:Q3F | 0.06 | .970 | 0.00 | .684 | 0.18 | .917 | -0.08 | .599 | 0.03 | .988 | -0.04 | .964 |
| 9:Q3G | 13.16 | .001∗∗∗ | -0.02 | .076 | 1.33 | .512 | 0.03 | .006∗∗ | 0.57 | .750 | 0.00 | .847 |
| 10:Q3H | 12.78 | .002∗∗ | 0.00 | .324 | 5.72 | .056 | -0.22 | .320 | 0.02 | .990 | 0.00 | .225 |
| 11:Q3I | 7.45 | .024∗ | 0.00 | .357 | 5.55 | .061 | 0.07 | .001∗∗∗ | 0.00 | 1.000 | 0.00 | .631 |
| 12:Q3J | 0.95 | .620 | 0.00 | .306 | 0.03 | .988 | -0.03 | .836 | 0.73 | .693 | 0.00 | .251 |
| 13:Q4A | 2.34 | .306 | 0.00 | .519 | 0.08 | .962 | 0.00 | .461 | 0.17 | .919 | 0.00 | .360 |
| 14:Q4B | 1.45 | .481 | 0.00 | .782 | 4.22 | .119 | -0.30 | .206 | 6.61 | .036∗ | 0.00 | .520 |
| 15:Q4C | 2.47 | .288 | 0.00 | .982 | 0.08 | .961 | 0.00 | .240 | 0.37 | .831 | 0.00 | .524 |
| 16:Q4D | 0.08 | .965 | 0.00 | .845 | 0.06 | .973 | 0.00 | .873 | 4.54 | .101 | 0.00 | .053 |
| 21:Q7 | 2.54 | .277 | -0.05 | .005∗∗ | 9.34 | .009∗∗ | 0.00 | .131 | 0.13 | .941 | 0.00 | .145 |
| 22:Q8 | 37.29 | .001∗∗∗ | 0.00 | .114 | 1.00 | .605 | -0.09 | .651 | 1.85 | .394 | 0.02 | .081 |
| 33:Q11A | 27.3 | .001∗∗∗ | 0.07 | .001∗∗∗ | 1.11 | .572 | 0.00 | .521 | 0.02 | .990 | 0.00 | .275 |
| 34:Q11B | 36.38 | .001∗∗∗ | 0.00 | .905 | 1.41 | .490 | -0.20 | .309 | 6.66 | .035∗ | 0.00 | .170 |
| 35:Q11C | 12.2 | .002∗∗ | 0.00 | .204 | 0.68 | .710 | 0.00 | .963 | 0.58 | .749 | -0.05 | .002∗∗ |
| 36:Q11D | 35.29 | .001∗∗∗ | -0.05 | .006∗∗ | 1.38 | .500 | 0.12 | .444 | 7.03 | .029∗ | 0.00 | .724 |

Waves 4-6:

| SF36 item | W4 chi-sq | W4 prob. | W4 contrast | W4 M-H prob. | W5 chi-sq | W5 prob. | W5 contrast | W5 M-H prob. | W6 chi-sq | W6 prob. | W6 contrast | W6 M-H prob. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1:Q1 | 0.67 | .714 | 0.00 | .185 | 4.44 | .107 | 0.23 | .707 | 2.87 | .235 | 0.09 | .418 |
| 3:Q3A | 0.22 | .897 | 0.00 | .073 | 1.32 | .515 | 0.00 | .375 | 3.73 | .153 | 0.00 | .176 |
| 4:Q3B | 0.26 | .878 | 0.06 | .001∗∗∗ | 0.59 | .744 | 0.34 | .229 | 2.71 | .254 | 0.38 | .098 |
| 5:Q3C | 3.48 | .173 | 0.00 | .442 | 0.65 | .720 | 0.00 | .342 | 1.81 | .400 | -0.13 | .270 |
| 6:Q3D | 1.39 | .496 | 0.00 | .700 | 2.83 | .240 | -0.16 | .573 | 7.85 | .019∗ | 0.00 | .761 |
| 7:Q3E | 0.10 | .953 | 0.00 | .143 | 1.73 | .418 | 0.03 | .025∗ | 6.17 | .045∗ | 0.00 | .278 |
| 8:Q3F | 2.31 | .311 | -0.04 | .964 | 0.00 | 1.000 | -0.17 | .456 | 0.43 | .808 | -0.08 | .248 |
| 9:Q3G | 0.95 | .621 | 0.00 | .847 | 0.39 | .824 | 0.12 | .001∗∗∗ | 1.20 | .547 | 0.11 | .001∗∗∗ |
| 10:Q3H | 1.89 | .384 | 0.00 | .225 | 0.73 | .695 | -0.26 | .443 | 0.68 | .712 | -0.12 | .739 |
| 11:Q3I | 0.00 | 1.000 | 0.00 | .631 | 0.42 | .809 | 0.00 | .961 | 0.55 | .761 | 0.00 | .387 |
| 12:Q3J | 0.05 | .975 | 0.00 | .251 | 0.10 | .953 | 0.06 | .252 | 0.65 | .722 | -0.03 | .664 |
| 13:Q4A | 0.45 | .798 | 0.00 | .360 | 0.00 | 1.000 | -0.06 | .042∗ | 0.17 | .922 | -0.07 | .282 |
| 14:Q4B | 1.60 | .447 | 0.00 | .520 | 1.98 | .367 | -0.05 | .861 | 0.82 | .663 | -0.12 | .138 |
| 15:Q4C | 2.50 | .283 | 0.00 | .524 | 0.01 | .996 | 0.00 | .453 | 0.20 | .908 | 0.00 | .255 |
| 16:Q4D | 0.68 | .711 | 0.00 | .053 | 0.24 | .889 | -0.06 | .733 | 0.73 | .692 | 0.02 | .431 |
| 21:Q7 | 3.61 | .162 | 0.00 | .145 | 0.00 | 1.000 | -0.06 | .413 | 1.75 | .413 | -0.07 | .650 |
| 22:Q8 | 1.03 | .595 | 0.02 | .081 | 3.03 | .217 | 0.00 | .310 | 4.23 | .119 | -0.10 | .644 |
| 33:Q11A | 0.14 | .934 | 0.00 | .275 | 13.77 | .001∗∗∗ | -0.05 | .170 | 1.34 | .509 | -0.07 | .729 |
| 34:Q11B | 4.03 | .131 | 0.00 | .170 | 1.80 | .403 | 0.20 | .048∗ | 2.28 | .317 | 0.07 | .252 |
| 35:Q11C | 0.00 | 1.000 | -0.05 | .002∗∗ | 0.49 | .783 | 0.00 | .280 | 3.68 | .156 | 0.00 | .681 |
| 36:Q11D | 0.37 | .831 | 0.00 | .724 | 7.48 | .023∗ | 0.00 | .941 | 1.55 | .457 | -0.03 | .897 |

Notes. PROB. = probability; ∗p ≤ .05; ∗∗p ≤ .01; ∗∗∗p ≤ .001.
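Before turning to the mental health scale, it is worth making explicit how the infit and outfit mean squares reported in Tables 2, 8, and 14 are defined. The following is a minimal sketch (not the Winsteps code), assuming the model expectation and model variance of every response are available:

```python
import numpy as np

def infit_outfit(observed, expected, variance):
    """Mean-square fit statistics for one item over N persons.

    observed : x_ni, the scored responses
    expected : E_ni, the model expectation of each response
    variance : W_ni, the model variance of each response
    outfit = mean of squared standardized residuals (outlier-sensitive)
    infit  = information-weighted mean square (sensitive to inlying misfit)
    Values near 1.0 indicate data that fit the model; the criterion used in
    this study flags items outside roughly 0.70-1.30.
    """
    x = np.asarray(observed, dtype=float)
    e = np.asarray(expected, dtype=float)
    w = np.asarray(variance, dtype=float)
    z2 = (x - e) ** 2 / w                 # squared standardized residuals
    outfit = z2.mean()                    # unweighted mean square
    infit = ((x - e) ** 2).sum() / w.sum()  # variance-weighted mean square
    return infit, outfit
```

Infit values well below 0.70 (for example, the 0.18 reported for PF10:Q3J) indicate responses that are more deterministic than the model expects, typically because items are redundant, while values above 1.30 (such as those for BP01:Q7 and BP02:Q8) indicate noise or a secondary dimension.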
## 3.3. SF-36 Mental Health Scale Rasch Analysis for Six Waves of Data Collection
The following SF-36 mental health items were included in the initial analysis using the RMM: RE01:Q5A, RE02:Q5B, RE03:Q5C, SF01:Q6, VT01:Q9A, MH01:Q9B, MH02:Q9C, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10. When the 14 SF-36 mental health items were calibrated using the RMM for the six waves of data collection, the items were found to have MNSQ infit statistics ranging from 0.13 to 2.43 and outfit statistics ranging from 0.22 to 2.64 (see Table 14). The mean item measure was 0.00 logits (SD = 1.12). With respect to logit measures, there was a broad range, the lowest value being -2.08 and the highest value being +2.13 (see Table 15). This resulted in an average item separation index of 79.17 and an average reliability of 1.00 over the six waves (see Table 15). The separation index for items was greater than 2.0, indicating adequate separation of the items on the construct being measured.
Table 13
SF-36 mental health scale Rasch analysis item statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR. MEASURE MODEL S.E. PTMEA CORR 17:Q5A 1.35 0.01 0.31 1.38 0.01 0.30 1.49 0.02 0.27 18:Q5B 1.57 0.01 0.31 1.62 0.02 0.29 1.75 0.02 0.27 19:Q5C 1.38 0.01 0.30 1.41 0.01 0.28 1.50 0.02 0.26 20:Q6 1.51 0.01 -0.09 1.78 0.02 -0.02 1.41 0.01 -0.02 23:Q9A -0.03 0.01 0.17 -0.06 0.01 0.22 -0.12 0.01 0.27 24:Q9B -1.28 0.01 0.46 -1.47 0.01 0.43 -1.54 0.01 0.41 25:Q9C -1.84 0.01 0.45 -2.04 0.01 0.40 -2.08 0.02 0.40 26:Q9D 0.21 0.01 0.20 0.30 0.01 0.18 0.30 0.01 0.26 27:Q9E -0.16 0.01 0.22 -0.24 0.01 0.26 -0.29 0.01 0.30 28:Q9F -1.25 0.01 0.46 -1.33 0.01 0.39 -1.32 0.01 0.40 29:Q9G -0.90 0.01 0.44 -0.93 0.01 0.39 -0.88 0.01 0.39 30:Q9H 0.63 0.01 0.27 0.72 0.01 0.25 0.74 0.01 0.28 31:Q9I -0.55 0.01 0.37 -0.50 0.01 0.31 -0.42 0.01 0.31 32:Q10 -0.65 0.01 0.28 -0.65 0.01 0.22 -0.55 0.01 0.19 Wave 4 Wave 5 Wave 6 SF36 ITEM MEASURE MODEL S.E. PTMEA CORR MEASURE MODEL S.E. PTMEA CORR MEASURE MODEL S.E. PTMEA CORR. 17:Q5A 1.47 0.02 0.28 1.48 0.02 0.30 1.51 0.02 0.29 18:Q5B 1.76 0.02 0.28 1.77 0.02 0.30 1.81 0.03 0.32 19:Q5C 1.51 0.02 0.27 1.51 0.02 0.28 1.53 0.02 0.28 20:Q6 1.19 0.01 0.04 1.01 0.02 0.03 0.96 0.02 0.05 23:Q9A -0.14 0.01 0.23 -0.21 0.01 0.24 -0.29 0.01 0.26 24:Q9B -1.52 0.01 0.40 -1.49 0.02 0.40 -1.47 0.02 0.37 25:Q9C -2.07 0.02 0.35 -1.92 0.02 0.39 -1.91 0.02 0.35 26:Q9D 0.30 0.01 0.23 0.31 0.01 0.19 0.29 0.01 0.22 27:Q9E -0.34 0.01 0.27 -0.41 0.01 0.27 -0.53 0.01 0.29 28:Q9F -1.30 0.01 0.40 -1.25 0.01 0.42 -1.22 0.02 0.41 29:Q9G -0.80 0.01 0.39 -0.75 0.01 0.44 -0.72 0.01 0.43 30:Q9H 0.75 0.01 0.29 0.69 0.01 0.23 0.69 0.02 0.27 31:Q9I -0.34 0.01 0.33 -0.30 0.01 0.35 -0.27 0.01 0.32 32:Q10 -0.48 0.01 0.17 -0.44 0.01 0.17 -0.39 0.01 0.17 Note: MODEL S.E. = Model Standard Error; PTMEA CORR = Point Measure Correlation.Table 14
SF-36 mental health scale Rasch analysis Infit and Outfit statistics for six waves of data collection.
Wave 1 Wave 2 Wave 3 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 17:Q5A 0.29 -9.9 0.32 -9.9 0.26 -9.9 0.28 -9.9 0.30 -9.9 0.32 -9.9 18:Q5B 0.43 -9.9 0.46 -9.9 0.43 -9.9 0.46 -9.9 0.48 -9.9 0.50 -9.9 19:Q5C 0.31 -9.9 0.34 -9.9 0.29 -9.9 0.31 -9.9 0.32 -9.9 0.34 -9.9 20:Q6 2.64 9.9 2.87 9.9 2.91 9.9 3.06 9.9 2.54 9.9 2.74 9.9 23:Q9A 1.08 7.7 1.15 9.9 1.03 3.2 1.12 9.9 1.04 3.5 1.09 6.7 24:Q9B 1.25 9.9 1.17 9.9 1.40 9.9 1.29 9.9 1.41 9.9 1.31 9.9 25:Q9C 1.44 9.9 1.30 9.9 1.59 9.9 1.39 9.9 1.51 9.9 1.33 9.9 26:Q9D 1.22 9.9 1.33 9.9 1.14 9.9 1.28 9.9 1.10 7.1 1.19 9.9 27:Q9E 1.12 9.9 1.17 9.9 1.08 7.8 1.12 9.9 1.07 5.3 1.08 6.3 28:Q9F 0.90 -6.6 0.88 -8.3 1.02 1.0 0.98 -0.9 0.96 -2.0 0.93 -3.6 29:Q9G 0.88 -9.9 0.87 -9.9 0.90 -7.4 0.89 -7.8 0.88 -7.8 0.87 -8.2 30:Q9H 1.22 9.9 1.29 9.9 1.15 8.5 1.24 9.9 1.09 4.6 1.16 7.9 31:Q9I 0.72 -9.9 0.73 -9.9 0.76 -9.9 0.77 -9.9 0.73 -9.9 0.74 -9.9 32:Q10 0.72 -9.9 0.77 -9.9 0.78 -9.9 0.81 -9.9 0.87 -9.9 0.91 -6.8 Wave 4 Wave 5 Wave 6 SF36 ITEM Infit Outfit Infit Outfit Infit Outfit MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD MNSQ ZSTD 17:Q5A 0.31 -9.9 0.34 -9.9 0.34 -9.9 0.37 -9.9 0.36 -9.9 0.39 -9.9 18:Q5B 0.50 -9.9 0.52 -9.9 0.52 -9.9 0.54 -9.9 0.53 -9.9 0.55 -9.9 19:Q5C 0.34 -9.9 0.36 -9.9 0.37 -9.9 0.39 -9.9 0.38 -9.9 0.41 -9.9 20:Q6 2.16 9.9 2.33 9.9 1.96 9.9 2.15 9.9 1.91 9.9 2.07 9.9 23:Q9A 1.04 2.7 1.07 5.3 1.02 1.7 1.05 3.4 1.03 1.8 1.04 2.2 24:Q9B 1.37 9.9 1.25 9.9 1.36 9.9 1.25 9.5 1.33 9.9 1.25 8.3 25:Q9C 1.50 9.9 1.36 9.9 1.51 9.9 1.36 9.9 1.42 9.9 1.32 9.1 26:Q9D 1.15 9.3 1.23 9.9 1.16 8.7 1.26 9.9 1.13 6.2 1.20 8.8 27:Q9E 1.11 7.9 1.11 8.0 1.08 5.1 1.08 4.8 1.10 5.2 1.09 4.4 28:Q9F 0.97 -1.6 0.94 -3.2 0.95 -2.2 0.91 -4.2 0.91 -3.5 0.89 -4.4 29:Q9G 0.87 -8.1 0.86 -8.5 0.84 -9.2 0.83 -9.7 0.85 -7.3 0.84 -7.8 30:Q9H 1.12 5.6 1.16 7.3 1.13 5.8 1.22 9.3 1.09 3.6 1.15 5.3 31:Q9I 0.76 -9.9 0.77 -9.9 0.76 -9.9 0.77 -9.9 0.76 -9.9 0.78 -9.9 32:Q10 0.86 -9.9 0.91 -6.9 0.90 -6.3 0.94 -3.9 0.96 -2.4 1.00 0.2 Notes. MNSQ = mean square residual fit statistic; ZSTD: standardized mean square residual fit statistic; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.Table 15
SF-36 mental health scale Rasch analysis summary Item and Person Infit and Outfit statistics for six waves of data collection.
| | Wave 1 | Wave 2 | Wave 3 | Wave 4 | Wave 5 | Wave 6 |
| --- | --- | --- | --- | --- | --- | --- |
| **Persons** | | | | | | |
| MEAN | -.08 | -.04 | -.02 | -.03 | -.06 | -.06 |
| S.D. | .30 | .28 | .31 | .30 | .29 | .30 |
| MAX | .30 | 1.64 | 1.38 | .30 | .29 | .30 |
| MIN | 1.87 | -3.54 | -2.99 | 2.37 | 2.16 | 2.12 |
| Infit-MNSQ | 1.01 | 1.01 | 1.02 | 1.02 | 1.02 | 1.02 |
| Infit-ZSTD | -.30 | -.40 | -.30 | -.30 | -.30 | -.20 |
| Outfit-MNSQ | 1.06 | 1.08 | 1.06 | 1.03 | 1.02 | 1.01 |
| Outfit-ZSTD | -.20 | -.20 | -.20 | -.20 | -.20 | -.20 |
| Person separation | .53 | .33 | .45 | .36 | .36 | .41 |
| Person reliability | .22a | .10a | .17a | .11a | .12a | .14a |
| **Items** | | | | | | |
| MEAN | .00 | .00 | .00 | .00 | .00 | .00 |
| S.D. | 1.17 | 1.20 | 1.19 | 1.13 | 1.31 | 1.07 |
| MAX | 1.59 | 1.78 | 1.75 | 1.75 | 2.13 | 1.67 |
| MIN | -1.88 | -2.04 | -2.08 | -2.04 | -1.94 | -1.95 |
| Infit-MNSQ | 1.02 | 1.05 | 1.02 | 1.00 | .99 | .98 |
| Infit-ZSTD | .10 | .20 | -.60 | -.30 | -.50 | -.50 |
| Outfit-MNSQ | 1.05 | 1.07 | 1.04 | 1.02 | 1.01 | 1.00 |
| Outfit-ZSTD | .10 | .80 | .20 | .10 | -.10 | -.30 |
| Item separation | 95.77 | 89.12 | 83.98 | 77.85 | 68.89 | 59.38 |
| Item reliability | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

Notes. aPerson or item reliability <0.8; bItem separation <3.0; cPerson separation <2.0; values in italic for Infit or Outfit MnSq > 1.34; values underlined for Infit or Outfit MnSq < 0.64.

The SF-36 mental health scale person-item map is shown in Supplemental Figure 5 and reports evidence of the hierarchical ordering of the SF-36 mental health scale items. It should also be noted that several of the SF-36 mental health scale items have the same level of difficulty. The average person measure was -0.05 logits (SD = 0.30) over the six waves of data collection (see Table 15). The mean person separation was 0.41, with a mean reliability of 0.14 (see Table 15). With a mean person separation of less than 2.0, this indicates inadequate separation of participants on the SF-36 mental health construct.

When examining the overall RMM output of the SF-36 mental health scale, the average person measure (-0.05 logits) was slightly below the average item measure (0.00 logits). The range of logit values for items was from +2.13 to -2.08. The person reliability was 0.14 and the item reliability was 1.00. Reliability values of .80 or greater are generally considered desirable [35]. This places the item reliability for the SF-36 mental health scale in the acceptable range and the person reliability in the less than desired range.

The SF-36 mental health scale has a six-category rating scale, which generates five thresholds. Rasch analysis shows that the average measures of the six rating categories increase monotonically: from -3.07, -1.06, -.17, .40, and 1.14 to 2.54 for wave one, and from -2.98, -1.09, -.19, .41, and 1.15 to 2.51 for wave six.

Item fit to the unidimensionality requirement of the RMM was also examined. Nine of the 14 items were found to have MNSQ infit and outfit statistics inside the 0.70 to 1.30 range and/or a z-score inside the -2 to +2 range; thus, items VT01:Q9A, MH01:Q9B, MH03:Q9D, VT02:Q9E, MH04:Q9F, VT03:Q9G, MH05:Q9H, VT04:Q9I, and SF02:Q10 met the RMM requirements (see Table 14). In other words, only 9 of 14, or 64.3%, of the SF-36 mental health scale items met the RMM requirements. The following items had an Infit MNSQ statistic of less than 0.70: RE01:Q5A, RE02:Q5B, and RE03:Q5C. Items SF01:Q6 and MH02:Q9C had Infit MNSQ statistics of greater than 1.30.

When the item residuals from the RMM output were factor analysed, no significant factor loadings were present (see Table 16). This indicated that the unidimensionality requirement of the SF-36 mental health scale was met.
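The dimensionality evidence cited here and in Tables 4, 10, and 16 comes from a principal components analysis of the standardized Rasch residuals. Below is a minimal sketch of the idea (not the Winsteps procedure itself, which differs in detail), assuming persons-by-items arrays of observed responses and their model expectations and variances:

```python
import numpy as np

def residual_contrasts(observed, expected, variance):
    """Eigenvalues (in item units) of the correlations among standardized
    Rasch residuals; the largest is the 'first contrast'.

    A small first contrast, together with most raw variance explained by
    the Rasch measures, supports treating the scale as unidimensional.
    """
    z = (observed - expected) / np.sqrt(variance)  # standardized residuals
    r = np.corrcoef(z, rowvar=False)               # item-by-item correlations
    return np.sort(np.linalg.eigvalsh(r))[::-1]    # descending eigenvalues
```

The largest of these eigenvalues corresponds to the "unexplained variance in the first contrast" reported in eigenvalue units in the variance tables.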
The raw variance explained by the SF-36 mental health scale over the six waves of data collection ranged from 62.5% to 66.1%, and the unexplained variance in the first contrast ranged from 15.1% to 16.5%.
Table 16
SF-36 mental health scale Rasch analysis of standardised residual variance in Eigenvalue units for six waves of data collection.
Waves 1-3:

| | Wave 1 Eigenvalue | Observed | Expected | Wave 2 Eigenvalue | Observed | Expected | Wave 3 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 38.55 | 100.00% | 100.00% | 41.29 | 100.00% | 100.00% | 42.62 | 100.00% | 100.00% |
| Raw variance explained by measures | 24.55 | 63.70% | 63.70% | 27.29 | 66.10% | 66.20% | 26.62 | 62.50% | 62.50% |
| Raw variance explained by persons | 2.85 | 7.40% | 7.40% | 2.06 | 5.00% | 5.00% | 2.68 | 6.30% | 6.30% |
| Raw variance explained by items | 21.70 | 56.30% | 56.30% | 25.23 | 61.10% | 61.20% | 23.94 | 56.20% | 56.20% |
| Raw unexplained variance (total) | 14.00 | 36.30% | 36.30% | 14.00 | 33.90% | 33.80% | 16.00 | 37.50% | 37.50% |
| Unexplained variance in 1st contrast | 6.22 | 16.10% | 44.50% | 6.22 | 15.10% | 44.40% | 7.02 | 16.50% | 43.90% |
| Unexplained variance in 2nd contrast | 1.49 | 3.90% | 10.60% | 1.47 | 3.60% | 10.50% | 1.62 | 3.80% | 10.10% |
| Unexplained variance in 3rd contrast | 1.29 | 3.30% | 9.20% | 1.32 | 3.20% | 9.40% | 1.29 | 3.00% | 8.10% |
| Unexplained variance in 4th contrast | 0.81 | 2.10% | 5.80% | 0.85 | 2.00% | 6.00% | 1.05 | 2.50% | 6.60% |
| Unexplained variance in 5th contrast | 0.68 | 1.80% | 4.90% | 0.71 | 1.70% | 5.00% | 0.71 | 1.70% | 4.40% |

Waves 4-6:

| | Wave 4 Eigenvalue | Observed | Expected | Wave 5 Eigenvalue | Observed | Expected | Wave 6 Eigenvalue | Observed | Expected |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total raw variance in observations | 39.19 | 100.00% | 100.00% | 37.79 | 100.00% | 100.00% | 37.65 | 100.00% | 100.00% |
| Raw variance explained by measures | 25.19 | 64.30% | 64.50% | 23.79 | 62.90% | 63.30% | 23.65 | 62.80% | 63.20% |
| Raw variance explained by persons | 2.43 | 6.20% | 6.20% | 1.73 | 4.60% | 4.60% | 2.44 | 6.50% | 6.50% |
| Raw variance explained by items | 22.76 | 58.10% | 58.30% | 22.06 | 58.40% | 58.70% | 21.21 | 56.30% | 56.60% |
| Raw unexplained variance (total) | 14.00 | 35.70% | 35.50% | 14.00 | 37.10% | 36.70% | 14.00 | 37.20% | 36.80% |
| Unexplained variance in 1st contrast | 6.16 | 15.70% | 44.00% | 6.10 | 16.10% | 43.60% | 5.75 | 15.30% | 41.10% |
| Unexplained variance in 2nd contrast | 1.52 | 3.90% | 10.90% | 1.61 | 4.20% | 11.50% | 1.67 | 4.40% | 11.90% |
| Unexplained variance in 3rd contrast | 1.32 | 3.40% | 9.40% | 1.31 | 3.50% | 9.30% | 1.35 | 3.60% | 9.60% |
| Unexplained variance in 4th contrast | 0.80 | 2.00% | 5.70% | 0.79 | 2.10% | 5.60% | 0.85 | 2.30% | 6.10% |
| Unexplained variance in 5th contrast | 0.68 | 1.70% | 4.90% | 0.69 | 1.80% | 4.90% | 0.68 | 1.80% | 4.80% |

Notes. a > 60% unexplained variance in the Rasch factor; bEigenvalue in the first contrast <3.0; c < 10% unexplained variance in the first contrast.

An inspection of the PTMEAs for the SF-36 mental health scale indicated that all of the items had positive PTMEAs with acceptable correlation values, supporting item-level polarity.

The functioning of the six rating scale categories was examined for the SF-36 mental health scale. On the person-item map, easier items are located at the bottom and more difficult items at the top, and the patterns of more challenging and less difficult items for each of the six waves of data collection appear to be fairly consistent. The category logit measures ranged from -3.18 to 2.60 (see Table 17). Of the six rating scale categories, only one had infit MNSQ scores that fell outside the 0.7-1.30 range and/or z-scores outside the -2 to +2 range over the six waves of data collection, this being category one. The infit MNSQ scores for this rating category ranged from 1.37 to 1.41 (see Table 17). The results indicated that the six-level rating scale used in the SF-36 mental health scale might not be the most robust to use (see Supplemental Figure 6); however, the full range of ratings was used by the participants who completed the SF-36 mental health scale.
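The person logit measures compared in the DIF analysis below are maximum likelihood estimates under this same rating scale model. Below is a minimal Newton-Raphson sketch with hypothetical inputs (extreme response strings with all-lowest or all-highest categories have no finite maximum likelihood estimate and are handled separately by Winsteps):

```python
import numpy as np

def person_measure(responses, difficulties, thresholds, theta=0.0, iters=25):
    """MLE of a person's measure (logits) under the Andrich rating scale model.

    responses    : observed category (0..m) on each answered item
    difficulties : item difficulties delta_i (logits)
    thresholds   : shared Andrich thresholds tau_1..tau_m
    Solves sum_i (x_i - E_i(theta)) = 0 by Newton-Raphson, where E_i and
    Var_i are the model mean and variance of item i's response.
    """
    taus = np.asarray(thresholds, dtype=float)
    cats = np.arange(len(taus) + 1)
    for _ in range(iters):
        residual, info = 0.0, 0.0
        for x, delta in zip(responses, difficulties):
            psi = np.concatenate(([0.0], np.cumsum(theta - delta - taus)))
            p = np.exp(psi - psi.max())
            p /= p.sum()                          # category probabilities
            mean = (cats * p).sum()               # expected category
            info += ((cats - mean) ** 2 * p).sum()  # model variance
            residual += x - mean
        theta += residual / info                  # Newton-Raphson step
    return theta
```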
The probability curves for the rating scales of the six waves of data collection illustrated that each threshold estimate represented a separate point on the measure variable and that each of the latter five response categories was the most probable category for some part of the continuum. Rating category one was problematic.

Table 17
SF-36 mental health scale Rasch analysis of summary of category structure for six waves of data collection.
Waves 1–3:

| Cat. | N (W1) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W2) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W3) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 22667 | 14 | (-3.07) | 1.38 | 1.20 | NONE | 18463 | 13 | (-3.18) | 1.41 | 1.22 | NONE | 14323 | 12 | (-3.18) | 1.38 | 1.22 | NONE |
| 2 | 49420 | 30 | -1.06 | .75 | .78 | -1.91 | 43019 | 30 | -1.08 | .76 | .81 | -2.03 | 33416 | 28 | -1.12 | .78 | .85 | -2.02 |
| 3 | 15086 | 9 | -.17 | .96 | .86 | .66 | 12291 | 8 | -.15 | .97 | .89 | .71 | 10845 | 9 | -.17 | .98 | .85 | .57 |
| 4 | 25646 | 15 | .40 | 1.02 | 1.11 | -.41b | 20753 | 14 | .43 | 1.00 | 1.12 | -.38b | 18002 | 15 | .44 | 1.00 | 1.06 | -.38b |
| 5 | 28636 | 17 | 1.14 | 1.06 | 1.31 | .51b | 24231 | 17 | 1.16 | 1.08 | 1.38 | .53b | 18787 | 16 | 1.20 | 1.13 | 1.28 | .63 |
| 6 | 24973 | 15 | (2.54) | 1.00 | 1.07 | 1.15 | 23360 | 16 | (2.56) | 1.00 | 1.08 | 1.17 | 18313 | 15 | (2.60) | .95 | 1.02 | 1.19 |

Waves 4–6:

| Cat. | N (W4) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W5) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold | N (W6) | % | Avg. measure | Infit MnSq | Outfit MnSq | Andrich threshold |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 12561 | 13 | (-3.08) | 1.37 | 1.21 | NONE | 10333 | 14 | (-3.00) | 1.38 | 1.23 | NONE | 7471 | 14 | (-2.98) | 1.37 | 1.21 | NONE |
| 2 | 27233 | 29 | -1.11 | .80 | .88 | -1.91 | 20854 | 28 | -1.08 | .82 | .89 | -1.82 | 14529 | 27 | -1.09 | .83 | .91 | -1.80 |
| 3 | 9548 | 10 | -.18 | .98 | .82 | .51 | 7515 | 10 | -.19 | .94 | .76 | .50 | 5675 | 11 | -.19 | .94 | .78 | .43 |
| 4 | 15240 | 16 | .42 | 1.00 | 1.00 | -.36b | 12348 | 17 | .40 | .97 | .94 | -.40b | 9024 | 17 | .41 | .98 | .94 | -.35b |
| 5 | 15741 | 16 | 1.17 | 1.14 | 1.22 | .60 | 12183 | 16 | 1.15 | 1.19 | 1.27 | .61 | 8698 | 16 | 1.15 | 1.19 | 1.24 | .64 |
| 6 | 15147 | 16 | (2.57) | .93 | .99 | 1.16 | 11454 | 15 | (2.53) | .90 | .99 | 1.11 | 8415 | 16 | (2.51) | .90 | .96 | 1.07 |

Notes. a: Andrich threshold category increase of >5; b: Andrich threshold category decrease where an increase is expected. In the original output, values with Infit or Outfit MnSq > 1.34 were italicised and values with Infit or Outfit MnSq < 0.64 were underlined.

The Rasch output logit performance scores for the participants were compared to determine whether any of the SF-36 mental health scale items exhibited differential item functioning (DIF) based on marital status and area of residence (urban versus regional) (see Table 18). Six of the SF-36 mental health items exhibited a consistent pattern of DIF over the six waves of data collection. Items SF01:Q6, MH01:Q9B, MH02:Q9C, MH03:Q9D, MH04:Q9F, and MH05:Q9H exhibited DIF based on both marital status and area of residence (see Table 18). It should be noted that items MH01:Q9B and MH03:Q9D had infit MNSQ statistics that fell outside the 0.7-1.30 range. SF-36 mental health items MH01:Q9B and MH03:Q9D therefore appear to be particularly problematic items based on the RMM analysis findings.

Table 18
Differential Item Functioning (DIF) for SF-36 mental health scale Rasch analysis for six waves of data collection based on marital status and area of residence.
For each wave, the marital-status columns give the summary DIF chi-square (DIF = 2 classes, except wave 4, which was reported with DIF = 1) and its probability; the urban-versus-regional columns give the DIF contrast and the Mantel-Haenszel probability.

Waves 1–3:

| SF-36 item | χ² (W1) | p (W1) | Contrast (W1) | M-H p (W1) | χ² (W2) | p (W2) | Contrast (W2) | M-H p (W2) | χ² (W3) | p (W3) | Contrast (W3) | M-H p (W3) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 3.13 | .206 | 0.00 | .720 | 0.05 | .975 | -0.17 | .122 | 0.07 | .969 | 0.00 | .978 |
| 18:Q5B | 6.16 | .045* | 0.00 | .799 | 0.41 | .814 | 0.00 | .347 | 0.59 | .745 | 0.00 | .165 |
| 19:Q5C | 4.23 | .119 | 0.00 | .505 | 0.17 | .922 | -0.30 | .066 | 0.79 | .673 | 0.00 | .484 |
| 20:Q6 | 62.55 | .001*** | -0.09 | .058 | 6.62 | .036* | 0.00 | .056 | 0.00 | 1.000 | 0.00 | .415 |
| 23:Q9A | 8.45 | .014* | 0.00 | .101 | 0.00 | 1.000 | -0.05 | .498 | 0.05 | .979 | 0.00 | .725 |
| 24:Q9B | 14.83 | .001*** | 0.09 | .001*** | 0.41 | .813 | 0.00 | .553 | 11.22 | .004** | 0.02 | .093 |
| 25:Q9C | 62.48 | .001*** | 0.12 | .001*** | 29.94 | .001*** | 0.48 | .087 | 0.01 | .996 | 0.07 | .009** |
| 26:Q9D | 9.01 | .011* | -0.07 | .001*** | 0.16 | .925 | -0.07 | .476 | 22.89 | .001*** | -0.06 | .001*** |
| 27:Q9E | 8.72 | .013* | 0.00 | .741 | 0.49 | .782 | 0.37 | .010* | 0.57 | .750 | 0.00 | .207 |
| 28:Q9F | 17.18 | .001*** | 0.08 | .001*** | 20.01 | .001*** | 0.00 | .401 | 0.73 | .694 | 0.05 | .004 |
| 29:Q9G | 5.04 | .079 | 0.00 | .719 | 3.76 | .150 | 0.52 | .003** | 11.42 | .003** | 0.00 | .815 |
| 30:Q9H | 13.46 | .001*** | -0.06 | .001** | 8.62 | .013* | 0.00 | .176 | 3.70 | .155 | -0.07 | .002** |
| 31:Q9I | 1.75 | .414 | 0.00 | .224 | 0.51 | .773 | -0.18 | .308 | 0.00 | 1.000 | 0.00 | .299 |
| 32:Q10 | 14.70 | .001*** | 0.00 | .207 | 0.11 | .947 | 0.00 | .165 | 2.75 | .250 | 0.00 | .978 |

Waves 4–6:

| SF-36 item | χ² (W4) | p (W4) | Contrast (W4) | M-H p (W4) | χ² (W5) | p (W5) | Contrast (W5) | M-H p (W5) | χ² (W6) | p (W6) | Contrast (W6) | M-H p (W6) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17:Q5A | 0.00 | 1.000 | 0.00 | .618 | 0.07 | .966 | -0.03 | .986 | 0.94 | .623 | -0.18 | .001*** |
| 18:Q5B | 0.00 | 1.000 | 0.00 | .639 | 2.43 | .294 | 0.00 | .395 | 0.39 | .824 | 0.00 | .543 |
| 19:Q5C | 0.00 | 1.000 | 0.00 | .497 | 0.71 | .701 | 0.20 | .271 | 0.59 | .744 | -0.19 | .003** |
| 20:Q6 | 0.00 | 1.000 | 0.00 | .779 | 6.37 | .040* | -0.02 | .337 | 2.26 | .320 | -0.03 | .162 |
| 23:Q9A | 0.00 | 1.000 | 0.00 | .900 | 1.95 | .373 | 0.19 | .254 | 1.18 | .551 | -0.04 | .176 |
| 24:Q9B | 6.95 | .008** | 0.00 | .384 | 13.76 | .001*** | 0.00 | .784 | 3.06 | .213 | 0.00 | .580 |
| 25:Q9C | 0.00 | 1.000 | 0.06 | .030* | 6.84 | .032* | -0.68 | .078 | 0.77 | .678 | 0.08 | .371 |
| 26:Q9D | 0.00 | 1.000 | 0.00 | .544 | 13.70 | .001*** | -0.02 | .118 | 2.06 | .354 | 0.00 | .923 |
| 27:Q9E | 0.00 | 1.000 | 0.00 | .537 | 3.30 | .189 | -0.08 | .720 | 1.67 | .430 | 0.06 | .215 |
| 28:Q9F | 0.00 | 1.000 | 0.00 | .687 | 0.87 | .644 | 0.00 | .819 | 0.43 | .806 | 0.00 | .408 |
| 29:Q9G | 0.00 | 1.000 | 0.00 | .694 | 0.20 | .908 | 0.27 | .570 | 0.63 | .729 | 0.03 | .278 |
| 30:Q9H | 6.10 | .014* | 0.00 | .297 | 4.86 | .086 | 0.05 | .065 | 0.08 | .962 | 0.00 | .419 |
| 31:Q9I | 0.00 | 1.000 | -0.05 | .112 | 1.10 | .574 | 0.48 | .170 | 0.04 | .981 | -0.04 | .664 |
| 32:Q10 | 0.00 | 1.000 | 0.00 | .414 | 1.56 | .456 | 0.08 | .019* | 0.05 | .979 | 0.00 | .434 |

Notes. PROB. = probability; *p ≤ .05; **p ≤ .01; ***p ≤ .001.
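The urban-versus-regional comparison above rests on the Mantel-Haenszel procedure, which stratifies respondents by total score and pools the group-by-response association across strata. A minimal sketch of the dichotomous core of that procedure is given below; SF-36 items are polytomous, for which the analysis software applies a polytomous generalisation, and the function name is illustrative.

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_dif(item, group, total_score):
    """Continuity-corrected Mantel-Haenszel DIF test (dichotomous item).

    item: 0/1 responses; group: 0/1 group membership (e.g. urban vs
    regional); total_score: matching variable defining score strata.
    Returns the MH chi-square statistic (1 df) and its p-value.
    """
    item, group, total_score = map(np.asarray, (item, group, total_score))
    num, var = 0.0, 0.0
    for s in np.unique(total_score):
        m = total_score == s
        a = np.sum(m & (group == 0) & (item == 1))
        b = np.sum(m & (group == 0) & (item == 0))
        c = np.sum(m & (group == 1) & (item == 1))
        d = np.sum(m & (group == 1) & (item == 0))
        n = a + b + c + d
        if n < 2:
            continue
        num += a - (a + b) * (a + c) / n              # observed - expected
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    stat = (abs(num) - 0.5) ** 2 / var
    return stat, chi2.sf(stat, df=1)
```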
## 4. Discussion
### 4.1. Is There Disordering or Dysfunction within the SF-36 Items against the Construct Being Measured?
For the SF-36 as a total measure, the rating scale categories increased monotonically, indicating that the rating response scales were being used as expected and are appropriate for measurement across all waves. Previous longitudinal evaluation of the measure using CTT methods found poor test-retest reliability between two time points two weeks apart [36]. Previous research using IRT methods has been largely cross-sectional, providing little longitudinal evaluation of the measure using this method [5, 6, 10, 17]. In this sample, the pattern of more and less difficult items is consistent, indicating that item difficulty remained stable across each wave. Despite this consistency across time, redundancy emerged as an issue, with several total scale items displaying the same level of difficulty across all waves of data. This was seen again in both the SF-36 mental and physical health summary scores. Redundant items appear across all uses of the measure, suggesting that item descriptors need to be more specific to avoid overlap across similar items.

Category six of the SF-36 physical health summary scale and category one of the SF-36 mental health scale had scores outside the acceptable range, which may indicate that these rating categories are not robust for use in longitudinal studies. Disordered categories have been seen in a previous evaluation of the SF-36, with the authors suggesting collapsing some category response options [5]. The present findings support this concern about the SF-36. Further investigation into the category disordering in the SF-36 mental and physical health response scales is warranted, and collapsing the response option categories may improve this, as suggested in previous literature [5, 17].

When examining summary statistics for total SF-36 items, the mean person reliability fell in the unacceptable range. Inadequate person separation reliability was also seen across all waves of data in both summary scales. The person separation index indicates that the instrument, used as a whole and as summary scales, is not sensitive enough to separate high and low performances in the sample [29]. This presents an issue with internal consistency across all presentations of the measure. Comparatively, using classical methods, the measure was seen to discriminate between patients pre- and postoperation [37]. Results using IRT suggest that the measure is unable to discriminate between high and low performances.

While the IRT results raise doubts about the measure’s internal consistency, results from classical testing methods report strong internal consistency, reflected in high Cronbach’s alpha scores. When validating the measure in patients with endometriosis, Cronbach’s alpha for the total scale was above acceptable cut-offs [38]. Internal consistency scores have also been seen to be above .9 for the full scale and above .7 for each subscale [39]. In addition to internal consistency, the measure displayed acceptable content validity, correlating strongly with similar measures [38]. IRT assesses reliability at the item level rather than at the instrument level and also considers the importance of individual participant responses.

The contrast between results from IRT and CTT could be due to the closer focus at the item level that is characteristic of IRT. It is possible that the overlapping items identified in the person-item map are contributing to the lack of sensitivity in the scale.
Adding items, or altering current items to improve sensitivity, may improve person reliability. Further investigation into the similarity and specificity of these items is warranted to ensure that the items capture the full range of the variable being measured.
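The person separation reliability and separation index referred to above can be recovered from the estimated person measures and their standard errors. A minimal sketch, assuming these estimates have been exported from the Rasch software (the function name is illustrative):

```python
import numpy as np

def person_separation(measures, standard_errors):
    """Rasch person separation reliability and separation index.

    measures: person measures (logits); standard_errors: their SEs.
    Reliability is the ratio of 'true' variance (observed variance
    minus mean squared error) to observed variance; the separation
    index is the true SD expressed in units of the average error.
    """
    measures = np.asarray(measures, dtype=float)
    errors = np.asarray(standard_errors, dtype=float)
    obs_var = np.var(measures, ddof=1)
    err_var = np.mean(errors ** 2)
    true_var = max(obs_var - err_var, 0.0)
    reliability = true_var / obs_var
    separation = np.sqrt(true_var / err_var)
    return reliability, separation
```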
### 4.2. Do the SF-36 Items Have a Consistent Hierarchy and Good Distribution across All Waves?
Several items on the total scale and on both summary scales were found to have infit statistics outside of the acceptable range. Many of the items remained problematic regardless of whether they were investigated in the whole measure or in a summary scale. The number of misfitting items was slightly lower when the summary scales were used; however, this may simply reflect the smaller number of items included in the summary scale analyses. These underfitting items create concerns about degradation of the model and about the validity of the instrument as a measure of health-related quality of life [15]. Further investigation into such items is required to determine the reason for underfit. While overfitting items do not degrade the model, they can result in the model being misinterpreted as working better than expected, and they also warrant further investigation [15].
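Infit and outfit mean squares are both averages of squared residuals between observed responses and Rasch model expectations; outfit weights all responses equally, while infit weights by the model variance and so downplays responses far from the item's level. A minimal sketch, assuming the expected values and model variances for one item are available from the fitted model:

```python
import numpy as np

def infit_outfit(observed, expected, variance):
    """Item infit and outfit mean-square fit statistics.

    observed: each person's response to the item; expected: the model
    expected responses; variance: the model variance of each response.
    Values near 1.0 indicate good fit; roughly 0.7-1.3 is the
    acceptable range used in the text.
    """
    sq_resid = (np.asarray(observed) - np.asarray(expected)) ** 2
    variance = np.asarray(variance)
    outfit = np.mean(sq_resid / variance)          # unweighted
    infit = np.sum(sq_resid) / np.sum(variance)    # information-weighted
    return infit, outfit
```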
### 4.3. Does the SF-36 Measure One or More Constructs?
The measure proved to be unidimensional across the total scale and summary score analyses, indicating that responses to each scale are likely to be determined by a single trait. As a total scale, the first single factor accounted for close to 60% of the variance across all six waves, and the factor was considered unidimensional [32]. Residual analysis also indicated that no second dimension or factor existed, further confirming the unidimensionality of the total scale [33]. Analysis of all eight subscales revealed that each scale measured a single latent trait [6]. Principal components analysis of the physical and mental health summary scores has previously confirmed a two-factor model, and the results of the current study, which support separate mental and physical health scales, corroborate this [12].

Results suggest that responses to the measure are determined by a single factor. While the responses may be determined by a single factor, the previously identified misfitting and overlapping items may degrade the model and its validity, suggesting that it may not be health-related quality of life that is determining the responses to these items. Further research should aim to correct misfitting items and reassess unidimensionality.
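The residual analysis referred to above is a principal components decomposition of the standardised residuals that remain after the Rasch measures are extracted; a small first-contrast eigenvalue supports unidimensionality. A minimal sketch, assuming a complete persons-by-items matrix of standardised residuals is available (Rasch software computes this in a slightly more elaborate way):

```python
import numpy as np

def residual_contrast_eigenvalues(std_residuals, n_contrasts=5):
    """Eigenvalues of the principal contrasts of Rasch residuals.

    std_residuals: persons x items matrix of standardised residuals
    (observed minus expected, divided by the model SD). Eigenvalues
    are in item units; a first contrast below ~3.0 is the criterion
    used in the tables above to support unidimensionality.
    """
    corr = np.corrcoef(np.asarray(std_residuals), rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigenvalues[:n_contrasts]
```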
### 4.4. Were All Items in the SF-36 Instrument Used by All Groups in the Same Way?
It appears that marital status and area of residence influence responses to both total and summary scale items. Differential item functioning has been identified in the SF-36 previously, with health issues such as hypertension, respiratory issues, and diabetes influencing responses on five items in the measure [10]. Previously, the presence of DIF has been considered negligible, as it was only present for a small number of items [10]. As the SF-36 is a health-related quality of life measure, it is plausible that marital status or area of residence would have an impact in this domain, as these factors can influence healthcare use and quality of life. However, the presence of DIF limits the comparability of scores across different populations.

While several items on each summary scale and on the total scale exhibited DIF, only item 24:Q9B demonstrated DIF in both the total-scale and the summary-scale analyses. This particular item also demonstrated infit statistics outside the acceptable range, proving to be particularly problematic in every presentation of the measure. Several other items demonstrated both DIF and misfit. Given the number of items exhibiting DIF and misfit across all presentations of the measure, further investigation of these specific items is needed.
### 4.5. Limitations and Future Research
While the current study revealed differences between IRT and CTT evaluations of the SF-36, it did not compare the two methods in the same sample. Future research could apply both methods to the same sample in order to explain the differences between methods and the advantages of applying different frameworks when developing and evaluating measures. It may also be beneficial to compare the methods longitudinally. A further limitation is the rate of attrition in the sample. While attrition is to be expected in a longitudinal study, results between waves should be interpreted in light of this.

The results indicate that the SF-36 is not as sound as previously suggested. It can be delivered as eight subscales, and future research may apply the RMM to each subscale to evaluate the efficacy of the measure in this form. Based on the RMM findings in the current study, future research should further evaluate this measure using IRT methods. The results suggest that multiple items need to be reassessed to avoid degrading the model and to improve the performance of the SF-36 as a reliable measure of health-related quality of life.
## 5. Conclusions
Previous evaluations of the SF-36 have relied on cross-sectional data; the findings of the current study therefore speak to the longitudinal performance of the measure. While use of the measure remained consistent across time, for both the whole measure and the summary scales, several issues were identified. Previous studies evaluating the SF-36 using CTT methods describe the measure as reliable and valid. However, evaluating the measure by application of the RMM indicated issues with internal consistency, generalisability, and sensitivity when the measure was evaluated as a whole and as both the physical and mental health summary scales.
---
*Source: 1013453-2018-11-04.xml* | 2018 |
# Retracted: Small Molecule Kaempferol Promotes Insulin Sensitivity and Preserved Pancreaticβ-Cell Mass in Middle-Aged Obese Diabetic Mice
**Journal:** Journal of Diabetes Research
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1013482
---
## Body
---
*Source: 1013482-2020-09-29.xml* | 2020 |
# DNA Methylation Is Involved in the Expression of miR-142-3p in Fibroblasts and Induced Pluripotent Stem Cells
**Authors:** Siti Razila Abdul Razak; Yukihiro Baba; Hiromitsu Nakauchi; Makoto Otsu; Sumiko Watanabe
**Journal:** Stem Cells International
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101349
---
## Abstract
MicroRNAs are differentially expressed in cells and regulate multiple biological processes. We have been analyzing comprehensive expression patterns of microRNAs in human and mouse embryonic stem and induced pluripotent stem cells. We determined microRNAs specifically expressed in these pluripotent stem cells, and miR-142-3p is one such microRNA. miR-142-3p is expressed at higher levels in induced pluripotent stem cells relative to fibroblasts in mice. The level of expression of miR-142-3p decreased during embryoid body formation from induced pluripotent stem cells. Loss-of-function analyses suggested that miR-142-3p plays roles in the proliferation and differentiation of induced pluripotent stem cells. CpG motifs were found in the 5′ genomic region of the miR-142-3p gene; they were highly methylated in fibroblasts, but not in undifferentiated induced pluripotent stem cells. Treating fibroblasts with 5-aza-2′-deoxycytidine increased the expression of miR-142-3p significantly and reduced methylation at the CpG sites, suggesting that the expression of miR-142-3p is suppressed by DNA methylation in fibroblasts. Luciferase analysis using various lengths of the 5′ genomic region of miR-142-3p indicated that CpGs in the proximal enhancer region may play roles in suppressing the expression of miR-142-3p in fibroblasts.
---
## Body
## 1. Introduction
The self-renewal and differentiation of pluripotent stem cells are regulated by various factors, including growth factors, cytokines, intracellular signaling molecules, the extracellular matrix, and transcription factors. In addition, the roles of microRNAs (miRNAs) and of epigenetic regulation such as DNA methylation and histone modification have received increasing attention in recent years [1]. The complex regulatory networks involving these mechanisms have been studied extensively in embryonic stem (ES) and induced pluripotent stem (iPS) cells and have revealed that the regulatory activity, in combination with transcription factors, is associated with pluripotency [2].

We previously assessed the expression pattern of miRNAs in human and mouse ES and iPS cells [3]. We found that several miRNAs were highly expressed in undifferentiated iPS cells [3]. Among these, we focused on miRNA- (miR-) 142-3p in the current study. miR-142 was first identified in hematopoietic cells [4], where it plays various roles in differentiation and functions during hemopoiesis [5–7]. miR-142 is highly conserved among vertebrates [8] and has been implicated in cardiac cell fate determination [9], osteoblast differentiation [10], and vascular development [11]. In cancer, miR-142-3p was identified at the breakpoint of a MYC translocation in B-cell leukemia [12] and was mutated in 20% of diffuse large B-cell lymphomas [13]. It is also critically involved in T-cell leukemogenesis [14] and the migration of hepatocellular carcinoma cells [15].

miRNAs are transcribed by RNA polymerase II [16], which involves various transcription factors. In hematopoietic cells, specifically, Spi1, Cebpb, Runx1, and LMO2 have all been reported to regulate miR-142 expression [17, 18]. However, these transcription factors are mostly hematopoietic cell-specific, suggesting that the expression of miR-142 in undifferentiated iPS cells involves regulation by other factors. In this study, we examined the roles of miR-142-3p in iPS cells and found that miR-142-3p might be involved in the proliferation of iPS cells and in maintaining their immaturity. Furthermore, miR-142-3p might also play roles in the mesodermal differentiation of iPS cells. Our data suggest roles for the methylation of CpG motifs in the 5′ genomic region of miR-142-3p in suppressing its expression in fibroblasts. Luciferase analysis of the isolated genomic region of miR-142-3p supports the idea that the expression of miR-142-3p in cells, including fibroblasts and iPS cells, is regulated, at least partially, by DNA methylation.
## 2. Materials and Methods
### 2.1. Cell Lines, 5-Aza-2′-deoxycytidine (5-Aza-dC) Treatment, and Transfection
3T3 cells were cultured in DMEM (Nacalai Tesque) supplemented with 10% fetal bovine serum (GIBCO) and 0.5% penicillin/streptomycin (Nacalai Tesque). The preparation and culture of mouse embryonic fibroblasts (MEF) and tail-tip fibroblasts (TTF) are described previously [3]. ICR mice were purchased from local dealers, and all experiments with animals were approved by the Animal Care Committee of the Institute of Medical Science at the University of Tokyo. The mouse iPS cell line SP-iPS was derived from B6 mouse MEF by retroviral infection with four factors (Sox2, Oct3/4, Klf4, and c-Myc) [19]. Culture of the iPS cells and formation of embryoid bodies (EB) are described previously [3]. For 5-aza-dC treatment, cells were treated with 5-aza-dC (SIGMA) at a final concentration of 5 or 10 μM, or with dimethyl sulfoxide (DMSO) for control samples, 6 hours after the cells were plated, and the cells were cultured for 3 days before analysis unless otherwise noted. For plasmid transfection, 3T3 cells were plated in a 24-well culture plate 1 day before transfection. Transfection of luciferase plasmid was done using Gene Juice Transfection Reagent (Novagen). Briefly, Gene Juice Reagent (1.5 μL), plasmid (0.25 μg in 0.25 μL for each plasmid), and Opti-MEM (Gibco-Life Technologies) were mixed and added to 3T3 cells. For plasmid transfection into iPS cells, electroporation was employed. iPS cells were dissociated into single cells with 0.05% trypsin-EDTA, washed with PBS, and resuspended in Opti-MEM. For each transfection, 1 × 10^6 cells/30 μL were gently mixed with 15 μg of plasmid and placed in a 2 mm gap electroporation cuvette (Nepa Gene Co., Ltd.). The cells were electroporated twice at 175 V with 2 ms pulses at a 50 ms interval (CUY21 EDIT, Nepa Gene Co., Ltd.). Immediately after electroporation, 1 mL of iPS culture medium was gently added to the cuvette, and the cells were transferred and cultured on feeder cells in iPS medium. On the following day, the cells were dissociated and stained for the SSEA-1 marker. Subsequently, the GFP+ SSEA-1+ double-positive cells from the study or control group were sorted by FACS (MoFlo, DakoCytomation) and used for cell proliferation and colony formation assays.
### 2.2. RNA Extraction and Real-Time PCR for Quantification of miRNAs and mRNA
Total RNA was extracted using Sepasol (Nacalai Tesque), and the levels of mature miRNAs were detected with TaqMan MicroRNA systems (Applied Biosystems), using a primer specific for each mature miRNA (Applied Biosystems), on a Light Cycler 1.5 (ROCHE). Briefly, a total of 500 ng of RNA was reverse-transcribed with the Taqman Reverse-Transcription PCR Kit with a specific primer for miR-142-3p. The cDNA was then mixed with TaqMan Universal Master Mix (Applied Biosystems) and subjected to real-time PCR. Ct values were analyzed with SDS 2.4 and RQmanager 1.2.1 and quantified using the 2^(-ΔΔCt) method (Livak, 2001). All data were normalized to the endogenous control, U6 snRNA. The sequences of the primers are T/brachyury 5′-cacaccactgacgcacacggt-3′ and 5′-atgaggaggctttgggccgt-3′; Gata4 5′-agccggtgggtgatccgaag-3′ and 5′-agaaatcgtgcgggagggcg-3′; and Fgf5 5′-gcagtccgagcaaccggaact-3′ and 5′-ggacttctgcgaggctgcga-3′. For quantification of mRNA, total RNA (1 μg) from each sample was used to generate cDNA using the ReverTra Ace qRT-PCR RT Kit (Toyobo). The cDNA was then mixed with Sybr Green Master Mix (ROCHE) and subjected to real-time PCR using the Light Cycler 1.5 (ROCHE). Expression levels of mRNA were compared to known standard samples and normalized to GAPDH.
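The relative quantification above follows the standard Livak comparative Ct scheme: expression is first normalized to the endogenous control within each sample and then expressed relative to a calibrator sample. A minimal sketch, with hypothetical Ct values (the function name is illustrative):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^(-ddCt) relative quantification.

    ct_target/ct_ref: Ct of the gene of interest and the endogenous
    control (U6 snRNA for miRNAs, GAPDH for mRNAs) in the sample;
    *_cal: the same two Ct values in the calibrator (control) sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalise within sample
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical example: miR-142-3p Ct 24.1 vs U6 Ct 18.0 in iPS cells,
# and Ct 28.5 vs U6 Ct 18.2 in fibroblasts used as the calibrator.
print(relative_expression(24.1, 18.0, 28.5, 18.2))  # ~18-fold higher
```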
### 2.3. Isolation and Bisulfite Treatment of Genomic DNA
Genomic DNA was isolated from ~5 × 10^6 cells using the QIAamp DNA Mini and Blood Mini kit (Qiagen). Genomic DNA (1 μg) was subjected to bisulfite conversion using EpiTect Bisulfite (Qiagen). The converted DNA was further subjected to a PCR A-tailing procedure with HotStarTaq DNA Polymerase (Qiagen). Regions covering up to 700 bp upstream of the miR-142 seed sequence were amplified and cloned into the pGEM-T Easy Vector (Invitrogen). All positive clones were sequenced, and the methylation results were analyzed with the Quantification Tool for Methylation Analysis (QUMA, http://quma.cdb.riken.jp), which was used for detection of CpG island methylation [20].
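Bisulfite conversion deaminates unmethylated cytosines to uracil (read as T after PCR), while methylated CpG cytosines remain C, so the methylation level at each CpG is the fraction of sequenced clones retaining a C at that position. A minimal sketch of this call, assuming the clones are already aligned to the reference (the sequences shown are toy examples):

```python
def cpg_methylation(reference, clones):
    """Per-CpG methylation fractions from bisulfite-sequenced clones.

    reference: genomic (pre-conversion) sequence; clones: aligned,
    equal-length bisulfite clone sequences. A 'C' in a clone at a CpG
    position marks a methylated cytosine; a 'T' marks an unmethylated one.
    """
    sites = [i for i in range(len(reference) - 1)
             if reference[i:i + 2].upper() == "CG"]
    fractions = {}
    for i in sites:
        calls = [clone[i].upper() for clone in clones]
        informative = [b for b in calls if b in "CT"]
        if informative:
            fractions[i] = informative.count("C") / len(informative)
    return fractions

# Toy example: the first CpG is methylated in 2 of 3 clones,
# the second in none.
reference = "ACGTACGT"
clones = ["ACGTATGT", "ACGTATGT", "ATGTATGT"]
print(cpg_methylation(reference, clones))  # site 1: 2/3; site 5: 0/3
```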
#### 2.3.1. DNA Construction
Plasmids containing antisense sequences of mature miR-142-3p or miR-17 were constructed as follows: double-stranded DNA encoding the antisense of mature miR-142-3p or miR-17 was inserted downstream of the U6 promoter using the BamHI and EcoRI sites of the pMX retrovirus vector containing EGFP after the 5′ LTR (Figure 1(b)). Expression plasmids for mouse Oct4, Sox2, Klf4, and Myc were purchased from AddGene.

Figure 1

Differential expression level of miR-142-3p in fibroblasts and iPS cells. (a) Expression of miR-142-3p was examined by RT-qPCR in various cells. Total RNA was extracted from the indicated cells, and RT-qPCR was done using TaqMan MicroRNA systems. U6 shRNA was used as a control. Experiments were done three times using independently prepared cells, and average values with standard deviation are shown. (b) Schematic representation of the antisense- (as-) miR-142-3p and EGFP expression plasmid. The LTR was used to drive EGFP, and the U6 promoter was used to drive as-miR-142-3p. (c) Effect of overexpressed as-miR-142-3p on the expression level of endogenous miR-142-3p in iPS cells. as-miR-142-3p/EGFP or control vector was transfected into iPS cells, and, after 24 hours, the level of miR-142-3p in the iPS cells was examined by RT-qPCR. Data are expressed as the expression level of miR-142-3p in as-miR-142-3p/EGFP-expressing cells relative to that in control vector-expressing cells. Experiments were performed three times, and average values with standard deviation are shown. (d, e, and f) Effects of expression of as-miR-142-3p on proliferation and alkaline phosphatase (ALP) expression of iPS cells. as-miR-142-3p/EGFP plasmid was transfected into undifferentiated iPS cells, and EGFP-positive cells were purified by a cell sorter. The EGFP-positive cells were then cultured for 2 days for Ki67 immunostaining and for 5 days for the ALP assay. Immunostaining with anti-Ki67 antibody or ALP staining was done, and positive cells were counted under a microscope. Experiments were performed three times, and average values with standard deviation are shown. In (f), the morphology of representative colonies of as-miR-142-3p or control vector transfected iPS cells is shown. (g–i) Expression of lineage marker genes in embryoid bodies (EB). iPS cells were transfected with as-miR-142-3p/EGFP or as-miR-17/EGFP as a control, purified according to their expression of EGFP, and then subjected to EB formation. After 6 days of culturing under EB formation conditions, the differentiation of cells into the ectodermal (g), endodermal (h), and mesodermal (i) lineages was assessed using RT-qPCR with primers against Fgf5, Gata4, and T brachyury, respectively. P value, * < 0.05 and n.s. > 0.05, was calculated by Student’s t-test.
#### 2.3.2. Cell Sorting, Cell Staining with Alkaline Phosphatase (ALP), and Immunostaining
Cell sorting was done using a MoFlo (DakoCytomation). ALP staining was done using the BCIP-NBT solution kit for alkaline phosphatase staining (Nacalai Tesque) according to the manufacturer’s instructions. Immunostaining was done using an antibody against the Ki67 proliferation antigen (BD Biosciences), and the primary antibody was visualized using an appropriate secondary antibody conjugated with Alexa 488 (Molecular Probes).
#### 2.3.3. Luciferase Analysis
3T3 cells were plated in a 24-well culture plate 1 day before transfection and transfected with luciferase plasmid (0.25 μg) using Gene Juice Transfection Reagent (Novagen). Six hours after transfection, cells were treated with 5-aza-dC at a final concentration of 10 μM and cultured for 3 days. Cells were harvested using Cell Culture Lysis Reagent 5X (Promega). Luciferase activity toward a luciferase assay substrate (Promega) was measured with a luminometer (Lumat LB9507, Berthold Technologies).
## 3. Results
### 3.1. Characterization of miR-142-3p Expression in iPS Cells, Embryoid Bodies, and Fibroblasts
We previously characterized the expression pattern of miRNAs in mouse and human iPS and ES cells using miRNA arrays and found that miR-142-3p, but not miR-142-5p, was expressed at high levels in iPS cells (see Supplementary Figure 1 available online at http://dx.doi.org/10.1155/2014/101349) [3]. We first confirmed the expression pattern of miR-142-3p using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). miR-142-3p was expressed at a high level in undifferentiated iPS cells, whereas fibroblasts such as 3T3 cells, mouse embryonic fibroblasts (MEFs), and tail-tip fibroblasts (TTF) expressed only very low levels (Figure 1(a)). When iPS cells were differentiated by formation of embryoid bodies (EBs), the expression of miR-142-3p fell to very low levels on day 2 but then increased on the following days (Figure 1(a)).
#### 3.1.1. Functional Analyses of miR-142-3p in iPS Cell Physiology
We next constructed an expression plasmid encoding antisense miR-142-3p (as-miR-142-3p) and enhanced green fluorescent protein (EGFP; Figure 1(b)). A plasmid without the antisense miR-142-3p insertion was used as a control for all experiments. The effect of expressing as-miR-142-3p on endogenous miR-142-3p was then examined and confirmed in mouse iPS cells (Figure 1(c)). Specifically, as-miR-142-3p/EGFP was transfected into undifferentiated iPS cells to analyze the role of miR-142-3p in the proliferation and maintenance of immaturity of iPS cells. Twenty-four hours after transfection, EGFP-positive cells were purified using a cell sorter and cultured for 3 days. Cell proliferation was then assessed by immunostaining for Ki67, a proliferation marker (Figure 1(d)). The population of Ki67-positive cells was slightly, but significantly, lower in as-miR-142-3p-expressing iPS cells (Figure 1(d)). We then counted the number of alkaline phosphatase- (ALP-) positive iPS colonies, and significantly fewer ALP-positive cells were found within the as-miR-142-3p-expressing iPS colonies (Figure 1(e)). The morphology of iPS cell colonies was indistinguishable between control and as-miR-142-3p-expressing samples (Figure 1(f)).

We then analyzed the roles of miR-142-3p in the ability of iPS cells to differentiate. iPS cells were transfected with as-miR-142-3p/EGFP, purified according to their expression of EGFP, and then subjected to an EB formation assay. An expression plasmid containing an antisense sequence against miR-17, which is expressed at very high levels in undifferentiated iPS cells [3, 21], was used as a control. After 6 days, the differentiation of cells into the ectodermal, endodermal, and mesodermal lineages was assessed using real-time quantitative PCR (qPCR) with primers against Fgf5, Gata4, and T brachyury, respectively (Figures 1(g), 1(h), and 1(i)). The data revealed that as-miR-142-3p, but not as-miR-17, suppressed the expression of T brachyury, which is expressed specifically in cells of the mesodermal lineage [22] (Figure 1(i)). The expression of as-miR-142-3p did not affect the expression of Fgf5 or Gata4, although as-miR-17 enhanced the expression of Fgf5, as expected (Figures 1(g) and 1(h)).
### 3.2. 5-Aza-2′-deoxycytidine Treatment Upregulates miR-142-3p in Fibroblasts
To assess the transcriptional regulation of miR-142-3p expression, we examined its 5′ genomic sequence and identified 25 CpG motifs in a region covering ~1000 base pairs (bp) upstream of the miR-142-5p core sequence (Supplementary Figure 2). We hypothesized that miR-142-3p expression is regulated epigenetically by DNA methylation in iPS cells and fibroblasts. MEFs and 3T3 cells were treated for 3 days with 5 or 10 μM 5-aza-2′-deoxycytidine (5-aza-dC), a DNA methyltransferase (Dnmt) inhibitor, and the levels of miR-142-3p were assessed using real-time qPCR. The expression of miR-142-3p was upregulated by 5-aza-dC treatment (Figures 2(a) and 2(b)). In contrast, the levels of miR-17 were somewhat reduced, though not significantly, by 5-aza-dC (Figure 2(c)), whereas the expression of neither miR-142-3p nor miR-17 was changed significantly by 5-aza-dC in undifferentiated iPS cells (Figures 2(d) and 2(e)). We also examined the effects of 5-aza-dC on miR-142-3p in EBs and found that 10 μM 5-aza-dC instead suppressed its expression (Figure 2(f)). We also examined the effects of 5-aza-dC on miR-142-3p expression in thymocytes. Levels of miR-142-3p were upregulated slightly by 10 μM 5-aza-dC, but to a much lesser extent than observed in fibroblasts (Figure 2(g)). Taken together, these results suggest that miR-142-3p is suppressed by DNA methylation in fibroblasts but that the downregulation of miR-142-3p during EB formation might be regulated by a different mechanism.

Figure 2

5-Aza-2′-deoxycytidine (5-aza-dC) treatment upregulates miR-142-3p in fibroblasts. (a–g) 3T3 cells (a), MEF (b, c), iPS cells (d, e), embryoid bodies (EB) formed from mouse iPS cells (f), or mouse thymocytes (g) were treated with 5-aza-dC at the indicated final concentration (5 or 10 μM). Cells were cultured for 3 days in the presence of 5-aza-dC, except for EB, which were treated with 5-aza-dC for two days. Control cells were treated with DMSO. Cells were then harvested, and total RNA was extracted. The level of miR-142-3p or miR-17 was examined by RT-qPCR. The value of U6 was used as a control. Values are expressed relative to those of control samples of each cell type and are averages of 3 or 4 experiments with standard deviation. P value, ∗∗ < 0.01, 0.01 < ∗ < 0.05, and n.s. > 0.05, was calculated by Student’s t-test.
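The statistical readout in Figure 2 is a fold change of treated over control samples with a two-sample Student's t-test across independent experiments. A minimal sketch, with hypothetical replicate values assumed to be already normalized to U6:

```python
import numpy as np
from scipy import stats

def fold_change_vs_control(treated, control):
    """Mean fold change and Student's t-test across replicates.

    treated/control: per-experiment expression values already
    normalised to the U6 control (e.g. 2^(-dCt) values).
    """
    fold = np.mean(treated) / np.mean(control)
    t_stat, p_value = stats.ttest_ind(treated, control)
    return fold, p_value

# Hypothetical 5-aza-dC-treated vs DMSO-control replicates.
treated = [3.1, 2.7, 3.6, 2.9]
control = [1.0, 0.9, 1.2, 1.1]
print(fold_change_vs_control(treated, control))
```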
### 3.3. Proximal CpGs in the miR-142-3p Genomic Region Regulate Transcriptional Activity
We next performed promoter analyses of different fragments of the 5′ upstream region of miR-142-3p using luciferase assays. Previous reports indicated that transiently transfected plasmids can be CpG-methylated in cells de novo [23, 24]. Luciferase constructs were transfected into 3T3 cells, which were cultured in the presence or absence of 5-aza-dC for 3 days. Luciferase assays were then performed. In the absence of 5-aza-dC, the −274, −540, and −860 Luc constructs showed significant luciferase activity, which increased gradually as longer promoters were used (Figure 3(a)). In contrast, −1130 Luc had very low luciferase activity, suggesting the presence of a region between −860 and −1130 nucleotides (nt) that inhibited luciferase activity. When cells were cultured in the presence of 5-aza-dC, the luciferase activity of −274 Luc was upregulated significantly (Figure 3(a)). Since there are six CpGs in the region covering −274 to the ATG, we speculated that the methylation status of these proximal six CpGs might play a role in the upregulation of luciferase activity.

Figure 3
Expression of miR-142-3p was regulated by DNA methylation. (a) The left panel shows a schematic representation of the luciferase constructs. Luciferase analysis using plasmids containing the indicated length fragments of the 5′ upstream region of miR-142-3p fused to luciferase was done. Plasmid was transfected into 3T3 cells, and, after 6 hours, samples were treated with DMSO or 5-aza-dC (10 μM) and cultured for an additional 3 days. Cells were then harvested, and luciferase activities were examined. Values are averages of 3 independent experiments with standard deviation. P value, ∗∗ < 0.01 and n.s. > 0.05, was calculated by Student’s t-test. (b–g) CpG methylation of the 5′ upstream region of miR-142-3p was examined by bisulfite conversion. Genomic DNAs extracted from 3T3 cells, MEF in the presence or absence of 5-aza-dC, iPS cells, or EB prepared from iPS cells were subjected to bisulfite sequencing. 5-Aza-dC was present in the culture medium of the 3T3 cells or MEF for 72 hours before the cells were harvested for genomic DNA extraction (e, f). (h) 3T3 cells were transfected with an expression plasmid for Oct4, Sox2, Klf4, or Myc together with −540 Luc. For the control sample, empty expression plasmid and −540 Luc were transfected. Cells were harvested after 3 days of culture, and luciferase analysis was conducted. (i) 3T3 cells were transfected with the indicated expression plasmid, and, after 3 days, cells were harvested, and total RNA was extracted. The expression level of endogenous miR-142-3p was examined by RT-qPCR. (h, i) Values are relative to control vector-transfected samples and are averages of 4 independent samples with SD.
### 3.4. CpG Methylation in the 5′ Genomic Region of miR-142-3p
To further elucidate the role of CpG sites and DNA methylation in regulating the expression of miR-142-3p, we analyzed the methylation status of the CpG sites identified in the region up to 700 bp upstream of the pre-miR-142-5p core region (Supplementary Figure 2) using bisulfite conversion. Analyses performed in 3T3 cells and MEFs revealed that these CpG sites were hypermethylated (Figures 3(b) and 3(c)). In contrast, those in undifferentiated iPS cells were hypomethylated (Figure 3(d)). We then analyzed the effects of 5-aza-dC on the methylation status in 3T3 cells and MEFs. Treatment with 5-aza-dC lowered methylation levels significantly, particularly at the proximal eight CpGs (Figures 3(e) and 3(f)). CpGs were also hypomethylated in day 5 EBs (Figure 3(g)), even though the expression of miR-142-3p there was much lower than in undifferentiated iPS cells (Figure 1(a)).
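Calling methylation from bisulfite data, as the QUMA analysis does here, rests on one rule: bisulfite converts unmethylated cytosines to uracil (read as T after PCR), while methylated cytosines in a CpG context stay C. A minimal per-CpG tally under that assumption, on toy aligned sequences rather than the actual clones, might look like this:

```python
def methylation_per_cpg(reference: str, clones: list[str]) -> list[float]:
    """Fraction of clones methylated at each CpG of the reference.

    After bisulfite conversion, an unmethylated C reads as T, while a
    methylated C in a CpG context is protected and still reads as C.
    Clones are assumed to be aligned base-for-base to the reference.
    """
    ref = reference.upper()
    sites = [i for i in range(len(ref) - 1) if ref[i:i + 2] == "CG"]
    fractions = []
    for i in sites:
        methylated = sum(1 for clone in clones if clone.upper()[i] == "C")
        fractions.append(methylated / len(clones))
    return fractions

# Toy example: reference with two CpGs; three sequenced clones.
ref = "ACGTTACGA"
clones = [
    "ACGTTATGA",  # first CpG methylated, second converted (C -> T)
    "ATGTTACGA",  # first converted, second methylated
    "ACGTTACGA",  # both methylated
]
print(methylation_per_cpg(ref, clones))  # [0.667, 0.667]
```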
### 3.5. Roles of Pluripotency-Related Transcription Factors in miR-142-3p Gene Activation
We next investigated the possible involvement of the pluripotency-associated transcription factors Oct4, Sox2, Klf4, and c-Myc in the regulation of miR-142-3p transcription. The miR-142-3p promoter-luciferase construct (−540 Luc) was transfected into 3T3 cells together with one of the four transcription factors, and luciferase assays were performed 3 days later. Luciferase activity was strongly upregulated by Klf4, whereas the other three transcription factors suppressed it (Figure 3(h)). In addition, cotransfection of Klf4 with any one of Oct4, Sox2, or c-Myc lowered luciferase activity compared with Klf4 alone (Figure 3(h)). We then analyzed the effects of overexpressing these transcription factors on the expression of endogenous miR-142-3p in 3T3 cells, but no effects were observed (Figure 3(i)).
## 4. Discussion
This study revealed that miR-142-3p is expressed in undifferentiated iPS cells, but not in fibroblasts, and that DNA methylation might play a pivotal role in suppressing miR-142-3p expression in fibroblasts. Previous studies revealed that the transcription of miRNAs can be regulated by DNA methylation [25, 26]. miR-142-3p was reported to be upregulated in the human melanoma cell line WM1552C after treatment with 5-aza-dC [27], suggesting that the expression of miR-142-3p is attenuated by DNA methylation not only in fibroblasts but also in cells of the melanocyte lineage. In the current study, 5-aza-dC did not substantially enhance the expression of miR-142-3p in mouse P1 thymocytes, supporting the hypothesis that DNA methylation is not a major mechanism regulating the expression of miR-142-3p in hematopoietic cells.

The expression of miR-142-3p in hematopoietic cells is regulated by various transcription factors that also play important roles in hematopoiesis [17, 18]. The sequence of pre-miR-142 is highly conserved among vertebrates [8]. In addition, the expression of human miR-142 was recently reported to be regulated by the methylation of a CpG in its enhancer region in mesenchymal cells [8]. Although no similarity was found between the mouse and human upstream genomic regions (~2000 nt) of miR-142-3p, miR-142 expression is regulated by CpG methylation in both species.

Methylation changes occur predominantly at the end of reprogramming. The genomic region harboring pluripotency-associated genes including Nanog, Oct4, and Zfp42 is demethylated very late during reprogramming [28]. When 5-aza-dC is present during this period, an increased number of embryonic stem cell-like colonies is observed [29]. Furthermore, 5-aza-dC enhances the generation of iPS cells by inhibiting Dnmt1 activity [30]. The expression of miR-142-3p might thus be desilenced by DNA demethylation and stimulated by other genes that act in the late phase of reprogramming. We observed that Klf4 upregulated luciferase activity but did not enhance the expression of endogenous miR-142-3p in 3T3 cells. Therefore, we hypothesize that a molecular environment related to reprogramming, which 3T3 cells lack, might be required for miR-142-3p expression. We identified several potential binding sites for c-Myc and Sox2 in the genomic region up to 1 kb from the miR-142 mature sequence using the Genomatix Software Suite (http://www.genomatix.de/solutions/genomatix-software-suite.html). These transcription factors, acting on a wider genomic region, might therefore cooperate for the full induction of miR-142-3p expression.

TGF-βR1 and TGF-βR2 were both predicted to be targets of miR-142-3p [31], and TGF-βR1 was identified as a direct target in non-small-cell lung cancer [32]. TGF-β1 is involved in the reprogramming process, in which the inhibition of TGF-β signaling enhances the efficiency of reprogramming [33]. More recently, a report indicated that the miR-142-3p-mediated regulation of Wnt signaling could modulate the proliferation of mesenchymal progenitors [34]. The identification of miR-142-3p target genes in the TGF-β and Wnt signaling pathways further supports the hypothesis that miR-142-3p is involved in the regulation of iPS cell physiology.
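The binding-site search mentioned above relied on the Genomatix matrix library; purely as an illustration of the idea, a naive IUPAC consensus scan is sketched below. The c-Myc E-box (CACGTG) is the standard consensus, but the Sox2 pattern and the sequence are rough placeholders of ours, not the position weight matrices Genomatix actually applies.

```python
import re

# IUPAC ambiguity codes used by the consensus patterns below.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "W": "[AT]", "N": "[ACGT]"}

def scan(seq: str, consensus: str) -> list[int]:
    """Return start positions where the IUPAC consensus matches.

    Note: a real scan would also check the reverse complement;
    the E-box happens to be palindromic, so it is unaffected.
    """
    pattern = "".join(IUPAC[base] for base in consensus.upper())
    return [m.start() for m in re.finditer(pattern, seq.upper())]

# Hypothetical 5' upstream sequence (placeholder only).
upstream = "TTCACGTGAACATTGTTGGCACGTGTT"

motifs = {
    "c-Myc E-box": "CACGTG",      # standard E-box consensus
    "Sox2 (approx.)": "CWTTGTT",  # rough assumed approximation
}
for name, consensus in motifs.items():
    print(name, scan(upstream, consensus))
```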
## 5. Conclusions
miR-142-3p, which is highly expressed in iPS cells but not in fibroblasts, plays roles in the proliferation and differentiation of iPS cells. In fibroblasts, its expression is suppressed by DNA methylation of the CpG motifs in its 5′ genomic region.
---
*Source: 101349-2014-12-02.xml* | 101349-2014-12-02_101349-2014-12-02.md | 51,091 | DNA Methylation Is Involved in the Expression of miR-142-3p in Fibroblasts and Induced Pluripotent Stem Cells | Siti Razila Abdul Razak; Yukihiro Baba; Hiromitsu Nakauchi; Makoto Otsu; Sumiko Watanabe | Stem Cells International
(2014) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2014/101349 | 101349-2014-12-02.xml | ---
## Abstract
MicroRNAs are differentially expressed in cells and regulate multiple biological processes. We have been analyzing comprehensive expression patterns of microRNA in human and mouse embryonic stem and induced pluripotent stem cells. We determined microRNAs specifically expressed in these pluripotent stem cells, and miR-142-3p is one of such microRNAs. miR-142-3p is expressed at higher levels in induced pluripotent stem cells relative to fibroblasts in mice. Level of expression of miR142-3p decreased during embryoid body formation from induced pluripotent stem cells. Loss-of-function analyses of miR-142-3p suggested that miR-142-3p plays roles in the proliferation and differentiation of induced pluripotent stem cells. CpG motifs were found in the 5′ genomic region of themiR-142-3p; they were highly methylated in fibroblasts, but not in undifferentiated induced pluripotent stem cells. Treating fibroblasts with 5-aza-2′-deoxycytidine increased the expression of miR-142-3p significantly and reduced methylation at the CpG sites, suggesting that the expression of miR-142-3p is suppressed by DNA methylation in fibroblasts. Luciferase analysis using various lengths of the 5′ genomic region of miR142-3p indicated that CpGs in the proximal enhancer region may play roles in suppressing the expression of miR-142-3p in fibroblasts.
---
## Body
## 1. Introduction
The self-renewal and differentiation of pluripotent stem cells are regulated by various factors including growth factors, cytokines, intracellular signaling molecules, the extracellular matrix, and transcription factors. In addition, the roles of microRNAs (miRNAs) and epigenetic regulation such as DNA methylation and histone modification have received increasing attention in recent years [1]. The complex regulatory networks involving these mechanisms have been studied extensively in embryonic stem (ES) and induced pluripotent stem (iPS) cells and have revealed that the regulatory activity, in combination with transcription factors, is associated with pluripotency [2].We previously assessed the expression pattern of miRNAs in human and mouse ES and iPS cells [3]. We found that several miRNAs were highly expressed in undifferentiated iPS cells [3]. Among these, we focused on miRNA- (miR-) 142-3p in the current study. miR-142 was first identified in hematopoietic cells [4], where it plays various roles in differentiation and functions during hemopoiesis [5–7]. miR-142 is highly conserved among vertebrates [8] and has been implicated in cardiac cell fate determination [9], osteoblast differentiation [10], and vascular development [11]. In cancer,miR-142-3p was identified at the breakpoint of aMYC translocation in B-cell leukemia [12] and was mutated in 20% of diffuse large B-cell lymphomas [13]. It is also critically involved in T-cell leukemogenesis [14] and the migration of hepatocellular carcinoma cells [15].miRNAs are transcribed by RNA polymerase II [16], which involves various transcription factors. In hematopoietic cells, specifically, Spi1, Cebpb, Runx1, and LMO2 have all been reported to regulate miR-142 expression [17, 18]. However, these transcription factors are mostly hematopoietic cell-specific, suggesting that the expression of miR-142 in undifferentiated iPS cells involves regulation of other factors. In this study, we examined the roles of miR-142-3p in iPS cells and found that miR-142-3p might be involved in the proliferation of iPS cells and in maintaining their immaturity. Furthermore, miR-142-3p might also play roles in the mesodermal differentiation of iPS cells. Our data suggest roles for the methylation of CpG motifs in the 5′ genomic region of miR-142-3p in suppressing its expression in fibroblasts. Luciferase analysis of the isolated genomic region of miR-142-3p supports the idea that the expression of miR-142-3p in cells including fibroblasts and iPS is regulated, at least partially, by DNA methylation.
## 2. Materials and Methods
### 2.1. Cell Lines, 5-Aza-2′-deoxycytidine (5-Aza-dC) Treatment, and Transfection
3T3 cells were cultured in the DMEM (Nacalai Tesque) supplemented with 10% fetal bovine serum (GIBCO) and 0.5% penicillin/streptomycin (Nacalai Tesque). Preparation and culture of mouse embryonic fibroblast (MEF) and tail-tip fibroblasts (TTF) are described previously [3]. ICR mice were purchased from local dealers, and all experiments with animals were approved by the Animal Care Committee of the Institute of Medical Science at the University of Tokyo. Mouse iPS cell line, SP-iPS, was from B6 mouse MEF with infection of 4 factors (Sox2, Oct3/4, Klf4, and c-myc) by using retrovirus [19]. Culture of the iPS cells and formation of embryoid body (EB) is described previously [3]. For treatment of 5-aza-dC, cells were treated with final concentration of 5 or 10 μM 5-aza-dC (SIGMA) or dimethyl sulfoxide (DMSO) for control samples 6 hours after the cells were plated, and cells were cultured for 3 days before analysis unless otherwise noted. For plasmid transfection, 3T3 cells were plated in a 24-well culture plate 1 day before transfection. Transfection of luciferase plasmid was done by using Gene Juice Transfection Reagent (Novagen). Briefly, Gene Juice Reagent (1.5 μL), plasmid (0.25 μg in 0.25 μL for each plasmid), and Opti-MEM (Gibco-Life Technologies) were mixed and added to 3T3 cells. For plasmid transfection to iPS, electroporation was employed. iPS cells were dissociated into single cells by 0.05% trypsin-EDTA, washed with PBS, and resuspended in Opti-MEM. For each transfection, 1
×
10
6 cells/30 μL were gently mixed with 15 μg of plasmid and placed in 2 mm gap electroporation cuvette (Nepa Gene Co., Ltd.). The cells were electroporated for two times at 175 V, 2 ms at 50 ms interval (CUY21 EDIT, Nepa Gene Co., Ltd). Immediately after electroporation, 1 mL of iPS culture medium was gently added to the cuvette, and cells were transferred and cultured on feeder cells in iPS medium. On the following day, the cells were dissociated and stained with SSEA-1 marker. Subsequently, the GFP + SSEA-1 + double positive cells from study or control group were sorted by FACS (MoFlo, DakoCytomation) and used for cell proliferation and colony formation assay.
### 2.2. RNA Extraction and Real-Time PCR for Quantification of miRNAs and mRNA
Total RNA was extracted using the Sepasol (Nacalai Tesque), and level of mature miRNAs was detected using TaqMan MicroRNA systems (Applied Biosystems) using primer specific for each mature miRNA supplied by Applied Biosystems using Light Cycler 1.5 (ROCHE). Briefly, a total of 500 ng RNA were reverse-transcribed with Taqman Reverse-Transcription PCR Kit with specific primer for miR-142-3p. Then, cDNA was mixed with TaqMan Universal Master Mix (Applied Biosystems) and was subjected for real-time PCR. Ct value was analyzed with SDS 2.4 and RQmanager 1.2.1 and quantitated using2
-
Δ
Δ
Ct method (Livak, 2001). All data were normalized to endogenous control, the U6 snRNA. Sequences of the primers are T/brachyury 5′-cacaccactgacgcacacggt-3′, 5′-atgaggaggctttgggccgt-3′, Gata4 5′-agccggtgggtgatccgaag-3′, 5′-agaaatcgtgcgggagggcg-3′, Fgf5 5′-gcagtccgagcaaccggaact-3′, and 5′-ggacttctgcgaggctgcga-3′. For quantification of mRNA, total RNA (1 μg) from each sample was used to generate cDNA using ReverTra Ace qRT-PCR RT Kit (Toyobo). Then, cDNA was mixed with Sybr Green Master Mix (ROCHE) and was subjected for real-time PCR using Light Cycler 1.5 (ROCHE). Expression levels of mRNA were compared to known standard samples and normalized to GAPDH.
### 2.3. Isolation and Bisulfite Treatment of Genomic DNA
Genomic DNA was isolated from ~5 × 106 cells using the QIAamp DNA Mini and Blood Mini kit (Qiagen). Genomic DNA (1 μg) was subjected for bisulfite conversion using EpiTect Bisulfite (Qiagen). The converted DNA was further subjected to PCR for A-tailing procedure with HotStarTaq DNA Polymerase (Qiagen). Regions covering up to 700 bp upstream of the miR-142 seed sequence were amplified and were cloned into pGEM-T Easy Vector (Invitrogen). All positive clones were sequence and methylation results obtained were analyzed by Quantification Tool for Methylation Analysis (QUMA, http://quma.cdb.riken.jp) which was used for detection of CpG island methylation [20].
#### 2.3.1. DNA Construction
Plasmids containing antisense sequences of mature miR-142-3p or miR-17 expression plasmid were constructed as follows: double strand DNA, which encode antisense of mature miR-142-3p or miR-17, was inserted downstream of U6 promoter usingBamHI andEcoRI sites of pMX retrovirus vector containing EGFP after 5′ LTR (Figure 1(b)). Expression plasmids for mouse Oct4, Sox2, Klf, and Myc were purchased from AddGene.Differential expression level of miR-142-3p in fibroblasts and iPS. (a) Expression of miR-142-3p was examined by RT-qPCR in various cells. Total RNA was extracted from indicated cells, and RT-qPCR was done using TaqMan MicroRNA systems. U6 shRNA was used as a control. Experiments were done three times using independently prepared cells, and average values with standard deviation are shown. (b) Schematic representation of antisense- (as-) miR-142-3p and EGFP expression plasmid. LTR was used to drive EGFP, and U6 promoter was used to drive as-miR-142-3p. (c) Effect of overexpressed as-miR-142-3p for expression level of endogenous miR-142-3p in iPS cells. as-miR-142-3p/EGFP or control vector was transfected into iPS, and, after 24 hours, level of miR-142-3p in iPS was examined by RT-qPCR. Data were expressed as relative expression level of miR-142-3p in as-miR-142-3p/EGFP expressing cells to that in control vector expressing cells. Experiments were performed three times, and average values with standard deviation are shown. (d, e, and f) Effects of expression of as-miR-142-3p for proliferation and alkaline phosphatase (ALP) expression of iPS. as-miR-142-3p/EGFP plasmid was transfected into undifferentiated iPS, and EGFP positive cells were purified by a cell sorter. Then EGFP positive cells were cultured for 2 days for Ki67 immunostaining and for 5 days for ALP assay. Immunostaining with anti-Ki67 antibody or ALP staining was done, and positive cells were counted under a microscope. Experiments were performed three times, and average values with standard deviation are shown. In (f), morphology of representative colonies of as-miR-142-3p or control vector transfected iPS is shown. (g–i) Expression of lineage marker genes in embryoid body (EB). iPS cells were transfected with as-miR-142-3p/EGFP or as-miR-17/EGFP as a control, purified according to their expression of EGFP, and then subjected to an EB formation. After 6 days of culturing in EB formation condition, the differentiation of cells into the ectodermal (g), endodermal (h), and mesodermal (i) lineages was assessed using RT-qPCR with primers againstFgf5,Gata4, andT brachyury, respectively. P value, * < 0.05 and n.s. > 0.05, was calculated by Student’s t-test.
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
(i)
#### 2.3.2. Cell Sorting, Cell Staining with Alkaline Phosphatase (ALP), and Immunostaining
Cells’ sorting was done using a MoFlo (DakoCytomation). ALP staining was done using BCIP-NBT solution kit for alkaline phosphatase stain (Nacalai Tesque) according to the manufacturer’s instructions. Immunostaining was done using antibody anti-Ki67 proliferation antigen (BD Biosciences), and the primary antibody was visualized using appropriate secondary antibody conjugated with Alexa 488 (Molecular Probes).
#### 2.3.3. Luciferase Analysis
3T3 cells were plated in a 24-well culture plate 1 day before transfection and transfected with luciferase plasmid (0.25μg) by using Gene Juice Transfection Reagent (Novagen). Six hours after transfection, cells were treated with final concentration of 10 μM of 5-azacytidine and were cultured for 3 days. Cells were harvested using Cell Culture Lysis Reagent 5X (Promega). Luciferase activity toward a luciferase assay substrate (Promega) was measured with a luminometer (Lumat LB9507, Berthold Technologies).
## 2.1. Cell Lines, 5-Aza-2′-deoxycytidine (5-Aza-dC) Treatment, and Transfection
3T3 cells were cultured in the DMEM (Nacalai Tesque) supplemented with 10% fetal bovine serum (GIBCO) and 0.5% penicillin/streptomycin (Nacalai Tesque). Preparation and culture of mouse embryonic fibroblast (MEF) and tail-tip fibroblasts (TTF) are described previously [3]. ICR mice were purchased from local dealers, and all experiments with animals were approved by the Animal Care Committee of the Institute of Medical Science at the University of Tokyo. Mouse iPS cell line, SP-iPS, was from B6 mouse MEF with infection of 4 factors (Sox2, Oct3/4, Klf4, and c-myc) by using retrovirus [19]. Culture of the iPS cells and formation of embryoid body (EB) is described previously [3]. For treatment of 5-aza-dC, cells were treated with final concentration of 5 or 10 μM 5-aza-dC (SIGMA) or dimethyl sulfoxide (DMSO) for control samples 6 hours after the cells were plated, and cells were cultured for 3 days before analysis unless otherwise noted. For plasmid transfection, 3T3 cells were plated in a 24-well culture plate 1 day before transfection. Transfection of luciferase plasmid was done by using Gene Juice Transfection Reagent (Novagen). Briefly, Gene Juice Reagent (1.5 μL), plasmid (0.25 μg in 0.25 μL for each plasmid), and Opti-MEM (Gibco-Life Technologies) were mixed and added to 3T3 cells. For plasmid transfection to iPS, electroporation was employed. iPS cells were dissociated into single cells by 0.05% trypsin-EDTA, washed with PBS, and resuspended in Opti-MEM. For each transfection, 1
×
10
6 cells/30 μL were gently mixed with 15 μg of plasmid and placed in 2 mm gap electroporation cuvette (Nepa Gene Co., Ltd.). The cells were electroporated for two times at 175 V, 2 ms at 50 ms interval (CUY21 EDIT, Nepa Gene Co., Ltd). Immediately after electroporation, 1 mL of iPS culture medium was gently added to the cuvette, and cells were transferred and cultured on feeder cells in iPS medium. On the following day, the cells were dissociated and stained with SSEA-1 marker. Subsequently, the GFP + SSEA-1 + double positive cells from study or control group were sorted by FACS (MoFlo, DakoCytomation) and used for cell proliferation and colony formation assay.
## 2.2. RNA Extraction and Real-Time PCR for Quantification of miRNAs and mRNA
Total RNA was extracted using the Sepasol (Nacalai Tesque), and level of mature miRNAs was detected using TaqMan MicroRNA systems (Applied Biosystems) using primer specific for each mature miRNA supplied by Applied Biosystems using Light Cycler 1.5 (ROCHE). Briefly, a total of 500 ng RNA were reverse-transcribed with Taqman Reverse-Transcription PCR Kit with specific primer for miR-142-3p. Then, cDNA was mixed with TaqMan Universal Master Mix (Applied Biosystems) and was subjected for real-time PCR. Ct value was analyzed with SDS 2.4 and RQmanager 1.2.1 and quantitated using2
-
Δ
Δ
Ct method (Livak, 2001). All data were normalized to endogenous control, the U6 snRNA. Sequences of the primers are T/brachyury 5′-cacaccactgacgcacacggt-3′, 5′-atgaggaggctttgggccgt-3′, Gata4 5′-agccggtgggtgatccgaag-3′, 5′-agaaatcgtgcgggagggcg-3′, Fgf5 5′-gcagtccgagcaaccggaact-3′, and 5′-ggacttctgcgaggctgcga-3′. For quantification of mRNA, total RNA (1 μg) from each sample was used to generate cDNA using ReverTra Ace qRT-PCR RT Kit (Toyobo). Then, cDNA was mixed with Sybr Green Master Mix (ROCHE) and was subjected for real-time PCR using Light Cycler 1.5 (ROCHE). Expression levels of mRNA were compared to known standard samples and normalized to GAPDH.
## 2.3. Isolation and Bisulfite Treatment of Genomic DNA
Genomic DNA was isolated from ~5 × 106 cells using the QIAamp DNA Mini and Blood Mini kit (Qiagen). Genomic DNA (1 μg) was subjected for bisulfite conversion using EpiTect Bisulfite (Qiagen). The converted DNA was further subjected to PCR for A-tailing procedure with HotStarTaq DNA Polymerase (Qiagen). Regions covering up to 700 bp upstream of the miR-142 seed sequence were amplified and were cloned into pGEM-T Easy Vector (Invitrogen). All positive clones were sequence and methylation results obtained were analyzed by Quantification Tool for Methylation Analysis (QUMA, http://quma.cdb.riken.jp) which was used for detection of CpG island methylation [20].
### 2.3.1. DNA Construction
Plasmids containing antisense sequences of mature miR-142-3p or miR-17 expression plasmid were constructed as follows: double strand DNA, which encode antisense of mature miR-142-3p or miR-17, was inserted downstream of U6 promoter usingBamHI andEcoRI sites of pMX retrovirus vector containing EGFP after 5′ LTR (Figure 1(b)). Expression plasmids for mouse Oct4, Sox2, Klf, and Myc were purchased from AddGene.Differential expression level of miR-142-3p in fibroblasts and iPS. (a) Expression of miR-142-3p was examined by RT-qPCR in various cells. Total RNA was extracted from indicated cells, and RT-qPCR was done using TaqMan MicroRNA systems. U6 shRNA was used as a control. Experiments were done three times using independently prepared cells, and average values with standard deviation are shown. (b) Schematic representation of antisense- (as-) miR-142-3p and EGFP expression plasmid. LTR was used to drive EGFP, and U6 promoter was used to drive as-miR-142-3p. (c) Effect of overexpressed as-miR-142-3p for expression level of endogenous miR-142-3p in iPS cells. as-miR-142-3p/EGFP or control vector was transfected into iPS, and, after 24 hours, level of miR-142-3p in iPS was examined by RT-qPCR. Data were expressed as relative expression level of miR-142-3p in as-miR-142-3p/EGFP expressing cells to that in control vector expressing cells. Experiments were performed three times, and average values with standard deviation are shown. (d, e, and f) Effects of expression of as-miR-142-3p for proliferation and alkaline phosphatase (ALP) expression of iPS. as-miR-142-3p/EGFP plasmid was transfected into undifferentiated iPS, and EGFP positive cells were purified by a cell sorter. Then EGFP positive cells were cultured for 2 days for Ki67 immunostaining and for 5 days for ALP assay. Immunostaining with anti-Ki67 antibody or ALP staining was done, and positive cells were counted under a microscope. Experiments were performed three times, and average values with standard deviation are shown. In (f), morphology of representative colonies of as-miR-142-3p or control vector transfected iPS is shown. (g–i) Expression of lineage marker genes in embryoid body (EB). iPS cells were transfected with as-miR-142-3p/EGFP or as-miR-17/EGFP as a control, purified according to their expression of EGFP, and then subjected to an EB formation. After 6 days of culturing in EB formation condition, the differentiation of cells into the ectodermal (g), endodermal (h), and mesodermal (i) lineages was assessed using RT-qPCR with primers againstFgf5,Gata4, andT brachyury, respectively. P value, * < 0.05 and n.s. > 0.05, was calculated by Student’s t-test.
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
(i)
### 2.3.2. Cell Sorting, Cell Staining with Alkaline Phosphatase (ALP), and Immunostaining
Cells’ sorting was done using a MoFlo (DakoCytomation). ALP staining was done using BCIP-NBT solution kit for alkaline phosphatase stain (Nacalai Tesque) according to the manufacturer’s instructions. Immunostaining was done using antibody anti-Ki67 proliferation antigen (BD Biosciences), and the primary antibody was visualized using appropriate secondary antibody conjugated with Alexa 488 (Molecular Probes).
### 2.3.3. Luciferase Analysis
3T3 cells were plated in a 24-well culture plate 1 day before transfection and transfected with luciferase plasmid (0.25μg) by using Gene Juice Transfection Reagent (Novagen). Six hours after transfection, cells were treated with final concentration of 10 μM of 5-azacytidine and were cultured for 3 days. Cells were harvested using Cell Culture Lysis Reagent 5X (Promega). Luciferase activity toward a luciferase assay substrate (Promega) was measured with a luminometer (Lumat LB9507, Berthold Technologies).
## 2.3.1. DNA Construction
Plasmids containing antisense sequences of mature miR-142-3p or miR-17 expression plasmid were constructed as follows: double strand DNA, which encode antisense of mature miR-142-3p or miR-17, was inserted downstream of U6 promoter usingBamHI andEcoRI sites of pMX retrovirus vector containing EGFP after 5′ LTR (Figure 1(b)). Expression plasmids for mouse Oct4, Sox2, Klf, and Myc were purchased from AddGene.Differential expression level of miR-142-3p in fibroblasts and iPS. (a) Expression of miR-142-3p was examined by RT-qPCR in various cells. Total RNA was extracted from indicated cells, and RT-qPCR was done using TaqMan MicroRNA systems. U6 shRNA was used as a control. Experiments were done three times using independently prepared cells, and average values with standard deviation are shown. (b) Schematic representation of antisense- (as-) miR-142-3p and EGFP expression plasmid. LTR was used to drive EGFP, and U6 promoter was used to drive as-miR-142-3p. (c) Effect of overexpressed as-miR-142-3p for expression level of endogenous miR-142-3p in iPS cells. as-miR-142-3p/EGFP or control vector was transfected into iPS, and, after 24 hours, level of miR-142-3p in iPS was examined by RT-qPCR. Data were expressed as relative expression level of miR-142-3p in as-miR-142-3p/EGFP expressing cells to that in control vector expressing cells. Experiments were performed three times, and average values with standard deviation are shown. (d, e, and f) Effects of expression of as-miR-142-3p for proliferation and alkaline phosphatase (ALP) expression of iPS. as-miR-142-3p/EGFP plasmid was transfected into undifferentiated iPS, and EGFP positive cells were purified by a cell sorter. Then EGFP positive cells were cultured for 2 days for Ki67 immunostaining and for 5 days for ALP assay. Immunostaining with anti-Ki67 antibody or ALP staining was done, and positive cells were counted under a microscope. Experiments were performed three times, and average values with standard deviation are shown. In (f), morphology of representative colonies of as-miR-142-3p or control vector transfected iPS is shown. (g–i) Expression of lineage marker genes in embryoid body (EB). iPS cells were transfected with as-miR-142-3p/EGFP or as-miR-17/EGFP as a control, purified according to their expression of EGFP, and then subjected to an EB formation. After 6 days of culturing in EB formation condition, the differentiation of cells into the ectodermal (g), endodermal (h), and mesodermal (i) lineages was assessed using RT-qPCR with primers againstFgf5,Gata4, andT brachyury, respectively. P value, * < 0.05 and n.s. > 0.05, was calculated by Student’s t-test.
(a)
(b)
(c)
(d)
(e)
(f)
(g)
(h)
(i)
## 2.3.2. Cell Sorting, Cell Staining with Alkaline Phosphatase (ALP), and Immunostaining
Cells’ sorting was done using a MoFlo (DakoCytomation). ALP staining was done using BCIP-NBT solution kit for alkaline phosphatase stain (Nacalai Tesque) according to the manufacturer’s instructions. Immunostaining was done using antibody anti-Ki67 proliferation antigen (BD Biosciences), and the primary antibody was visualized using appropriate secondary antibody conjugated with Alexa 488 (Molecular Probes).
## 2.3.3. Luciferase Analysis
3T3 cells were plated in a 24-well culture plate 1 day before transfection and transfected with luciferase plasmid (0.25μg) by using Gene Juice Transfection Reagent (Novagen). Six hours after transfection, cells were treated with final concentration of 10 μM of 5-azacytidine and were cultured for 3 days. Cells were harvested using Cell Culture Lysis Reagent 5X (Promega). Luciferase activity toward a luciferase assay substrate (Promega) was measured with a luminometer (Lumat LB9507, Berthold Technologies).
## 3. Results
### 3.1. Characterization of miR-142-3p Expression in iPS Cells, Embryoid Bodies, and Fibroblasts
We previously characterized the expression pattern of miRNAs in mouse and human iPS and ES cells using miRNA arrays and found that miR-142-3p, but not miR-142-5p, was expressed at high levels in iPS cells (see Supplementary Figure 1 available online athttp://dx.doi.org/10.1155/2014/101349) [3]. We first confirmed the expression pattern of miR-142-3p using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). miR-142-3p was expressed at a high level in undifferentiated iPS cells, whereas fibroblasts such as 3T3, mouse embryonic fibroblasts (MEFs), and tail-tip fibroblasts (TTF) expressed only very low levels (Figure 1(a)). When iPS cells were differentiated by formation of embryoid bodies (EBs), the expression of miR-142-3p fell to very low levels on day 2 but then increased on the following days (Figure 1(a)).
#### 3.1.1. Functional Analyses of miR-142-3p in iPS Cell Physiology
We next constructed an expression plasmid encoding antisense miR-142-3p (as-miR-142-3p) and enhanced green fluorescent protein (EGFP; Figure1(b)). A plasmid without insertion of antisense miR-142-3p was used as a control for all experiments. The effect of expressing as-miR-142-3p on endogenous miR-142-3p was then examined and confirmed in mouse iPS cells (Figure 1(c)). Specifically, as-miR-142-3p/EGFP was transfected into undifferentiated iPS to analyze the role of miR-142-3p in the proliferation and maintenance of immaturity in iPS cells. Twenty-four hours after transfection, EGFP-positive cells were purified using a cell sorter and cultured for 3 days. Cell proliferation was then assessed by immunostaining for Ki67, a proliferative marker (Figure 1(d)). The population of Ki67-positive cells was slightly, but significantly, lower in as-miR-142-3p-expressing iPS cells (Figure 1(d)). We then counted the number of alkaline phosphatase- (ALP-) positive iPS colonies, and significantly fewer ALP-positive cells were found within the as-miR-142-3p-expressing iPS colonies (Figure 1(e)). Morphology of colonies of iPS cell was indistinguishable between control and as-miR-142-3p expressing samples (Figure 1(f)).We then analyzed the roles of miRNA-142-3p on the ability of iPS cells to differentiate. iPS cells were transfected with as-miR-142-3p/EGFP, purified according to their expression of EGFP, and then subjected to an EB formation assay. An expression plasmid containing antisense sequence against miR-17, which is expressed at very high levels in undifferentiated iPS cells [3, 21], was used as a control. After 6 days, the differentiation of cells into the ectodermal, endodermal, and mesodermal lineages was assessed using real-time quantitative PCR (qPCR) with primers againstFgf5,Gata4, andT brachyury, respectively (Figures 1(g), 1(h), and 1(i)). Data revealed that as-miR-142-3p, but not as-miR-17, suppressed the expression ofT brachyury, which is expressed specifically in cells of the mesodermal lineage [22] (Figure 1(i)). The expression of as-miR-142-3p did not affect the expression ofFgf5 orGata4, although as-miR-17 enhanced expression ofFgf5, as expected (Figures 1(g) and 1(h)).
### 3.2. 5-Aza-2′-deoxycytidine Treatment Upregulates miR-142-3p in Fibroblasts
To assess the transcriptional regulation of miR-142-3p expression, we examined its 5′ genomic sequence and identified 25 CpG motifs in a region covering ~1000 base pairs (bp) upstream of the miR-142-5p core sequence (Supplementary Figure 2). We hypothesized that miR-142-3p expression is regulated epigenetically by DNA methylation in iPS cells and fibroblasts. MEFs and 3T3 cells were treated for 3 days with 5 or 10μM of 5-aza-2′-deoxycytidine (5-aza-dC), a DNA methyltransferase inhibitor (Dnmt), and the levels of miR-142-3p were assessed using real-time qPCR. The expression of miR-142-3p was upregulated by 5-aza-dC treatment (Figures 2(a) and 2(b)). In contrast, the levels of miR-17 were rather reduced but not significantly by 5-aza-dC (Figure 2(c)), whereas the expression of neither miR-142-3p nor miR-17 was changed significantly by 5-aza-dC in undifferentiated iPS cells (Figures 2(d) and 2(e)). We also examined the effects of 5-aza-dC on miR-142-3p in EBs and found that 10 μM 5-aza-dC rather suppressed the expression (Figure 2(f)). We also examined the effects of 5-aza-dC for miR-142-3p expression in thymocytes. Levels of miR-142-3p were upregulated slightly by 10 μM of 5-aza-dC, but to a much lesser extent than observed in fibroblasts (Figure 2(g)). Taken together, these results suggest that miR-142-3p is suppressed by DNA methylation in fibroblasts but that the downregulation of miR-142-3p during EB formation might be regulated by a different mechanism.5-Aza-2′-deoxycytidine (5-aza-dC) treatment upregulates miR-142-3p in fibroblasts. (a–g) 3T3 (a), MEF (b, c), iPS (d, e), embryoid body (EB) formed from mouse iPS (f), or mouse thymocytes (g) were treated with 5-aza-dC at indicated final concentration (5 or 10μM). Cells were cultured for 3 days in the presence of 5-aza-dC, except for EB, which was treated with 5-aza-dC for two days. Control cells were treated with DMSO. Then, cells were harvested, and total RNA was extracted. Level of miR-142-3p or miR-17 was examined by RT-qPCR. Value of U6 was used as a control. Values are expressed as relative to those of control samples of each cell type and are average of 3 or 4 times experiments with standard deviation. P value, ∗∗ < 0.01, 0.01 < ∗ < 0.05, and n.s. > 0.05, was calculated by Student’s t-test.
(a)
(b)
(c)
(d)
(e)
(f)
(g)
### 3.3. Proximal CpGs in the miR-142-3p Genomic Region Regulate Transcriptional Activity
We next performed promoter analyses of different fragments of the 5′ upstream region of miR-142-3p using luciferase assays. Previous reports indicated that transiently transfected plasmids could be CpG-methylated in the cellsde novo [23, 24]. Luciferase constructs were transfected into 3T3 cells, which were cultured in the presence or absence of 5-aza-dC for 3 days. Luciferase assays were then performed. In the absence of 5-aza-dC, the −274, −540, and −860 Luc constructs showed significant luciferase activity, which increased gradually when longer promoters were used (Figure 3(a)). In contrast, −1130 Luc had very low luciferase activity, suggesting the presence of a region between −860 and −1130 nucleotides (nt) that inhibited luciferase activity. When cells were cultured in the presence of 5-aza-dC, the luciferase activity of −274 Luc was upregulated significantly (Figure 3(a)). Since there are six CpGs in the region covering −274 to ATG, we speculated that the methylation status of the proximal six CpGs might play roles in the upregulation of luciferase activity.Figure 3
Expression of miR-142-3p was regulated by DNA methylation. (a) Left panel shows schematic representation of luciferase constructs. Luciferase analysis using plasmids containing indicated length fragments of the 5′ upstream region of miR-142-3p-luciferase was done. Plasmid was transfected into 3T3 cells, and, after 6 hours, samples were treated with DMSO or 5-aza-dC (10μM) and cultured for additional 3 days. Then cells were harvested, and luciferase activities were examined. Values are average of 3 times independent experiments with standard deviation. P value, ∗∗ < 0.01 and n.s. > 0.05, was calculated by Student’s t-test. (b–g) CpG methylation of 5′ upstream region of miR-142-3p was examined by bisulfite conversions. Genomic DNAs extracted from 3T3, MEF in the presence or absence of 5-aza-dC, iPS, or EB prepared from iPS were subjected to bisulfite sequence. 5-Aza-dC was present in the culture medium of 3T3 or MEF 72 hours before harvesting cells for genomic DNA extraction (e, f). (h) 3T3 cells were transfected with expression plasmid of Oct4, Sox2, Klf4, or Myc with −540 Luc. For control sample, empty expression plasmid and −540 Luc were transfected. Cells were harvested after 3 days of culture, and luciferase analysis was conducted. (i) 3T3 cells were transfected with indicated expression plasmid, and, after 3 days, cells were harvested, and total RNA was extracted. Expression level of endogenous miR-142-3p was examined by RT-qPCR. (h, i) Values are relative to control vector transfected samples and average of 4 independent samples with SD.
### 3.4. CpG Methylation in the 5′ Genomic Region of miR-142-3p
To further elucidate the role of CpG sites and DNA methylation in regulating the expression of miR-142-3p, we analyzed the methylation status of the CpG sites identified in the region up to 700 bp upstream of the pre-miR-142-5p core region (Supplementary Figure 2) using bisulfite conversion. Analyses performed in 3T3 cells and MEFs revealed that the CpG sites were hypermethylated (Figures3(b) and 3(c)). In contrast, those in undifferentiated iPS cells were hypomethylated (Figure 3(d)). We then analyzed the effects of 5-aza-dC on the methylation status in 3T3 cells and MEFs. Treatment with 5-aza-dC lowered methylation levels significantly, particularly at the proximal eight CpGs (Figures 3(e) and 3(f)). CpGs were also hypomethylated in day 5 EBs (Figure 3(g)), even though the expression of miR-142-3p was much lower than in undifferentiated iPS cells (Figure 1(a)).
### 3.5. Roles of Pluripotency-Related Transcription Factors in miR-142-3p Gene Activation
We next investigated the possible involvement of the pluripotency-associated transcription factors Oct4, Sox2, Klf4, and c-Myc in the regulation of miR-142-3p transcription. The miR-142-3p promoter-luciferase construct (−540 Luc) was transfected into 3T3 cells with one of the four transcription factors, and luciferase assays were performed 3 days later. Luciferase activity was strongly upregulated by Klf4, whereas the other three transcription factors suppressed luciferase activity (Figure3(h)). In addition, cotransfection with Klf4 and one of Oct4, Sox2, and c-Myc lowered luciferase activity compared with Klf4 alone (Figure 3(h)). We then analyzed the effects of overexpressing these transcription factors on the expression of endogenous miR-142-3p in 3T3 cells, but no effects were observed (Figure 3(i)).
## 3.1. Characterization of miR-142-3p Expression in iPS Cells, Embryoid Bodies, and Fibroblasts
We previously characterized the expression pattern of miRNAs in mouse and human iPS and ES cells using miRNA arrays and found that miR-142-3p, but not miR-142-5p, was expressed at high levels in iPS cells (see Supplementary Figure 1 available online athttp://dx.doi.org/10.1155/2014/101349) [3]. We first confirmed the expression pattern of miR-142-3p using quantitative reverse transcription-polymerase chain reaction (qRT-PCR). miR-142-3p was expressed at a high level in undifferentiated iPS cells, whereas fibroblasts such as 3T3, mouse embryonic fibroblasts (MEFs), and tail-tip fibroblasts (TTF) expressed only very low levels (Figure 1(a)). When iPS cells were differentiated by formation of embryoid bodies (EBs), the expression of miR-142-3p fell to very low levels on day 2 but then increased on the following days (Figure 1(a)).
### 3.1.1. Functional Analyses of miR-142-3p in iPS Cell Physiology
We next constructed an expression plasmid encoding antisense miR-142-3p (as-miR-142-3p) and enhanced green fluorescent protein (EGFP; Figure1(b)). A plasmid without insertion of antisense miR-142-3p was used as a control for all experiments. The effect of expressing as-miR-142-3p on endogenous miR-142-3p was then examined and confirmed in mouse iPS cells (Figure 1(c)). Specifically, as-miR-142-3p/EGFP was transfected into undifferentiated iPS to analyze the role of miR-142-3p in the proliferation and maintenance of immaturity in iPS cells. Twenty-four hours after transfection, EGFP-positive cells were purified using a cell sorter and cultured for 3 days. Cell proliferation was then assessed by immunostaining for Ki67, a proliferative marker (Figure 1(d)). The population of Ki67-positive cells was slightly, but significantly, lower in as-miR-142-3p-expressing iPS cells (Figure 1(d)). We then counted the number of alkaline phosphatase- (ALP-) positive iPS colonies, and significantly fewer ALP-positive cells were found within the as-miR-142-3p-expressing iPS colonies (Figure 1(e)). Morphology of colonies of iPS cell was indistinguishable between control and as-miR-142-3p expressing samples (Figure 1(f)).We then analyzed the roles of miRNA-142-3p on the ability of iPS cells to differentiate. iPS cells were transfected with as-miR-142-3p/EGFP, purified according to their expression of EGFP, and then subjected to an EB formation assay. An expression plasmid containing antisense sequence against miR-17, which is expressed at very high levels in undifferentiated iPS cells [3, 21], was used as a control. After 6 days, the differentiation of cells into the ectodermal, endodermal, and mesodermal lineages was assessed using real-time quantitative PCR (qPCR) with primers againstFgf5,Gata4, andT brachyury, respectively (Figures 1(g), 1(h), and 1(i)). Data revealed that as-miR-142-3p, but not as-miR-17, suppressed the expression ofT brachyury, which is expressed specifically in cells of the mesodermal lineage [22] (Figure 1(i)). The expression of as-miR-142-3p did not affect the expression ofFgf5 orGata4, although as-miR-17 enhanced expression ofFgf5, as expected (Figures 1(g) and 1(h)).
## 3.1.1. Functional Analyses of miR-142-3p in iPS Cell Physiology
We next constructed an expression plasmid encoding antisense miR-142-3p (as-miR-142-3p) and enhanced green fluorescent protein (EGFP; Figure1(b)). A plasmid without insertion of antisense miR-142-3p was used as a control for all experiments. The effect of expressing as-miR-142-3p on endogenous miR-142-3p was then examined and confirmed in mouse iPS cells (Figure 1(c)). Specifically, as-miR-142-3p/EGFP was transfected into undifferentiated iPS to analyze the role of miR-142-3p in the proliferation and maintenance of immaturity in iPS cells. Twenty-four hours after transfection, EGFP-positive cells were purified using a cell sorter and cultured for 3 days. Cell proliferation was then assessed by immunostaining for Ki67, a proliferative marker (Figure 1(d)). The population of Ki67-positive cells was slightly, but significantly, lower in as-miR-142-3p-expressing iPS cells (Figure 1(d)). We then counted the number of alkaline phosphatase- (ALP-) positive iPS colonies, and significantly fewer ALP-positive cells were found within the as-miR-142-3p-expressing iPS colonies (Figure 1(e)). Morphology of colonies of iPS cell was indistinguishable between control and as-miR-142-3p expressing samples (Figure 1(f)).We then analyzed the roles of miRNA-142-3p on the ability of iPS cells to differentiate. iPS cells were transfected with as-miR-142-3p/EGFP, purified according to their expression of EGFP, and then subjected to an EB formation assay. An expression plasmid containing antisense sequence against miR-17, which is expressed at very high levels in undifferentiated iPS cells [3, 21], was used as a control. After 6 days, the differentiation of cells into the ectodermal, endodermal, and mesodermal lineages was assessed using real-time quantitative PCR (qPCR) with primers againstFgf5,Gata4, andT brachyury, respectively (Figures 1(g), 1(h), and 1(i)). Data revealed that as-miR-142-3p, but not as-miR-17, suppressed the expression ofT brachyury, which is expressed specifically in cells of the mesodermal lineage [22] (Figure 1(i)). The expression of as-miR-142-3p did not affect the expression ofFgf5 orGata4, although as-miR-17 enhanced expression ofFgf5, as expected (Figures 1(g) and 1(h)).
## 3.2. 5-Aza-2′-deoxycytidine Treatment Upregulates miR-142-3p in Fibroblasts
To assess the transcriptional regulation of miR-142-3p expression, we examined its 5′ genomic sequence and identified 25 CpG motifs in a region covering ~1000 base pairs (bp) upstream of the miR-142-5p core sequence (Supplementary Figure 2). We hypothesized that miR-142-3p expression is regulated epigenetically by DNA methylation in iPS cells and fibroblasts. MEFs and 3T3 cells were treated for 3 days with 5 or 10 μM 5-aza-2′-deoxycytidine (5-aza-dC), a DNA methyltransferase (Dnmt) inhibitor, and the levels of miR-142-3p were assessed using real-time qPCR. The expression of miR-142-3p was upregulated by 5-aza-dC treatment (Figures 2(a) and 2(b)). In contrast, the levels of miR-17 were somewhat reduced, though not significantly, by 5-aza-dC (Figure 2(c)), whereas the expression of neither miR-142-3p nor miR-17 was changed significantly by 5-aza-dC in undifferentiated iPS cells (Figures 2(d) and 2(e)). We also examined the effects of 5-aza-dC on miR-142-3p in EBs and found that 10 μM 5-aza-dC actually suppressed its expression (Figure 2(f)). We further examined the effects of 5-aza-dC on miR-142-3p expression in thymocytes. Levels of miR-142-3p were upregulated slightly by 10 μM 5-aza-dC, but to a much lesser extent than observed in fibroblasts (Figure 2(g)). Taken together, these results suggest that miR-142-3p is suppressed by DNA methylation in fibroblasts but that the downregulation of miR-142-3p during EB formation might be regulated by a different mechanism.

Figure 2

5-Aza-2′-deoxycytidine (5-aza-dC) treatment upregulates miR-142-3p in fibroblasts. (a–g) 3T3 cells (a), MEFs (b, c), iPS cells (d, e), embryoid bodies (EBs) formed from mouse iPS cells (f), or mouse thymocytes (g) were treated with 5-aza-dC at the indicated final concentration (5 or 10 μM). Cells were cultured for 3 days in the presence of 5-aza-dC, except for EBs, which were treated with 5-aza-dC for two days. Control cells were treated with DMSO. Cells were then harvested, and total RNA was extracted. The level of miR-142-3p or miR-17 was examined by RT-qPCR, with U6 used as the normalization control. Values are expressed relative to those of the control samples of each cell type and are the averages of 3 or 4 independent experiments with standard deviation. P values (∗∗P < 0.01; ∗0.01 ≤ P < 0.05; n.s., P ≥ 0.05) were calculated by Student’s t-test.
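The RT-qPCR quantification used throughout Figures 1 and 2 normalizes miR levels to U6 and reports them relative to control samples, with Student's t-test for significance. Below is a minimal sketch of that computation, assuming the standard 2^−ΔΔCt rule; the specific Ct values are illustrative placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

def rel_expr(ct_mir, ct_u6, mean_dct_ctrl):
    """2^-ddCt: normalize target Ct to U6, then compare to the control-group mean dCt."""
    dct = np.asarray(ct_mir) - np.asarray(ct_u6)
    return 2.0 ** (-(dct - mean_dct_ctrl))

# Hypothetical Ct replicates (n = 4, matching the 3-4 repeats reported in the legends)
ctrl_mir = np.array([26.5, 26.2, 26.7, 26.4]); ctrl_u6 = np.array([18.1, 18.0, 18.2, 17.9])
trt_mir  = np.array([24.1, 23.8, 24.3, 24.0]); trt_u6  = np.array([18.0, 18.1, 17.9, 18.0])

mean_dct_ctrl = (ctrl_mir - ctrl_u6).mean()
control = rel_expr(ctrl_mir, ctrl_u6, mean_dct_ctrl)   # centers near 1.0 by construction
treated = rel_expr(trt_mir, trt_u6, mean_dct_ctrl)
t_stat, p = stats.ttest_ind(treated, control)           # Student's t-test, as in the legends
print(f"fold change = {treated.mean():.2f}, p = {p:.4f}")
```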
## 3.3. Proximal CpGs in the miR-142-3p Genomic Region Regulate Transcriptional Activity
We next performed promoter analyses of different fragments of the 5′ upstream region of miR-142-3p using luciferase assays. Previous reports indicated that transiently transfected plasmids can be CpG-methylated de novo in cells [23, 24]. Luciferase constructs were transfected into 3T3 cells, which were cultured in the presence or absence of 5-aza-dC for 3 days, and luciferase assays were then performed. In the absence of 5-aza-dC, the −274, −540, and −860 Luc constructs showed significant luciferase activity, which increased gradually as longer promoters were used (Figure 3(a)). In contrast, −1130 Luc had very low luciferase activity, suggesting the presence of a region between −860 and −1130 nucleotides (nt) that inhibited luciferase activity. When cells were cultured in the presence of 5-aza-dC, the luciferase activity of −274 Luc was upregulated significantly (Figure 3(a)). Since there are six CpGs in the region covering −274 to the ATG, we speculated that the methylation status of these proximal six CpGs might play a role in the upregulation of luciferase activity.Figure 3
Expression of miR-142-3p was regulated by DNA methylation. (a) The left panel shows a schematic representation of the luciferase constructs. Luciferase analysis was performed using plasmids containing fragments of the indicated lengths of the 5′ upstream region of miR-142-3p fused to luciferase. Each plasmid was transfected into 3T3 cells, and, after 6 hours, samples were treated with DMSO or 5-aza-dC (10 μM) and cultured for an additional 3 days. Cells were then harvested, and luciferase activities were examined. Values are the averages of 3 independent experiments with standard deviation. P values (∗∗P < 0.01; n.s., P > 0.05) were calculated by Student’s t-test. (b–g) CpG methylation of the 5′ upstream region of miR-142-3p was examined by bisulfite conversion. Genomic DNA extracted from 3T3 cells or MEFs in the presence or absence of 5-aza-dC, from iPS cells, or from EBs prepared from iPS cells was subjected to bisulfite sequencing. 5-Aza-dC was present in the culture medium of 3T3 cells or MEFs for 72 hours before the cells were harvested for genomic DNA extraction (e, f). (h) 3T3 cells were transfected with an expression plasmid for Oct4, Sox2, Klf4, or Myc together with −540 Luc. For the control sample, an empty expression plasmid and −540 Luc were transfected. Cells were harvested after 3 days of culture, and luciferase analysis was conducted. (i) 3T3 cells were transfected with the indicated expression plasmid, and, after 3 days, cells were harvested and total RNA was extracted. The expression level of endogenous miR-142-3p was examined by RT-qPCR. (h, i) Values are relative to the control-vector-transfected samples and are the averages of 4 independent samples with SD.
## 3.4. CpG Methylation in the 5′ Genomic Region of miR-142-3p
To further elucidate the role of CpG sites and DNA methylation in regulating the expression of miR-142-3p, we analyzed the methylation status of the CpG sites identified in the region up to 700 bp upstream of the pre-miR-142-5p core region (Supplementary Figure 2) using bisulfite conversion. Analyses performed in 3T3 cells and MEFs revealed that the CpG sites were hypermethylated (Figures 3(b) and 3(c)). In contrast, those in undifferentiated iPS cells were hypomethylated (Figure 3(d)). We then analyzed the effects of 5-aza-dC on the methylation status in 3T3 cells and MEFs. Treatment with 5-aza-dC lowered methylation levels significantly, particularly at the proximal eight CpGs (Figures 3(e) and 3(f)). CpGs were also hypomethylated in day-5 EBs (Figure 3(g)), even though the expression of miR-142-3p was much lower than in undifferentiated iPS cells (Figure 1(a)).
## 3.5. Roles of Pluripotency-Related Transcription Factors in miR-142-3p Gene Activation
We next investigated the possible involvement of the pluripotency-associated transcription factors Oct4, Sox2, Klf4, and c-Myc in the regulation of miR-142-3p transcription. The miR-142-3p promoter-luciferase construct (−540 Luc) was transfected into 3T3 cells together with one of the four transcription factors, and luciferase assays were performed 3 days later. Luciferase activity was strongly upregulated by Klf4, whereas the other three transcription factors suppressed luciferase activity (Figure 3(h)). In addition, cotransfection of Klf4 with one of Oct4, Sox2, and c-Myc lowered luciferase activity compared with Klf4 alone (Figure 3(h)). We then analyzed the effects of overexpressing these transcription factors on the expression of endogenous miR-142-3p in 3T3 cells, but no effects were observed (Figure 3(i)).
## 4. Discussion
This study revealed that miR-142-3p is expressed in undifferentiated iPS cells, but not in fibroblasts, and that DNA methylation might play a pivotal role in suppressing miR-142-3p expression in fibroblasts. Previous studies revealed that the transcription of miRNAs can be regulated by DNA methylation [25, 26]. miR-142-3p was reported to be upregulated in the human melanoma cell line WM1552C after treatment with 5-aza-dC [27], suggesting that the expression of miR-142-3p is attenuated by DNA methylation not only in fibroblasts, but also in melanocyte-lineage cells. In the current study, 5-aza-dC did not enhance the expression of miR-142-3p in mouse P1 thymocytes, supporting the hypothesis that DNA methylation is not a major mechanism regulating the expression of miR-142-3p in hematopoietic cells.

The expression of miR-142-3p in hematopoietic cells is regulated by various transcription factors that also play important roles in hematopoiesis [17, 18]. The sequence of pre-miR-142 is highly conserved among vertebrates [8]. In addition, the expression of human miR-142 was recently reported to be regulated by the methylation of a CpG in its enhancer region in mesenchymal cells [8]. Although no similarity was found between the mouse and human upstream genomic regions (~2000 nt) of miR-142-3p, miR-142 expression is regulated by CpG methylation in both species.

Methylation changes occur predominantly at the end of reprogramming. The genomic region harboring pluripotency-associated genes including Nanog, Oct4, and Zfp42 is demethylated very late during reprogramming [28]. When 5-aza-dC is present during this period, an increased number of embryonic stem cell-like colonies are observed [29]. Furthermore, 5-aza-dC enhances the generation of iPS cells by inhibiting Dnmt1 activity [30]. The expression of miR-142-3p might be desilenced by the suppression of DNA methylation and stimulated by other genes that play roles in the late phase of reprogramming. We observed that Klf4 upregulated luciferase activity but did not enhance the expression of endogenous miR-142-3p in 3T3 cells. Therefore, we hypothesize that a molecular environment related to reprogramming, which 3T3 cells lack, might be required for miR-142-3p expression. We identified several potential binding sites for c-Myc and Sox2 in the genomic region up to 1 kb from the miR-142 mature sequence using the Genomatix Software Suite (http://www.genomatix.de/solutions/genomatix-software-suite.html). Therefore, a combination of these transcription factors acting over a wider genomic region might cooperate for the full induction of miR-142-3p expression.

TGF-βR1 and TGF-βR2 were both predicted to be targets of miR-142-3p [31], and TGF-βR1 was identified as a direct target in non-small-cell lung cancer [32]. TGF-β1 is involved in the reprogramming process, in which the inhibition of TGF-β signaling enhances the efficiency of reprogramming [33]. More recently, a report indicated that the miR-142-3p-mediated regulation of Wnt signaling can modulate the proliferation of mesenchymal progenitors [34]. The identification of miR-142-3p target genes in the TGF-β and Wnt signaling pathways further supports the hypothesis that miR-142-3p is involved in the regulation of iPS cell physiology.
## 5. Conclusions
miR-142-3p, which is highly expressed in iPS cells but not in fibroblasts, plays roles in the proliferation and differentiation of iPS cells. The expression of miR-142-3p is suppressed by DNA methylation of its CpG motifs in the 5′ genomic region in fibroblasts.
---
*Source: 101349-2014-12-02.xml*
# State Estimators for Uncertain Linear Systems with Different Disturbance/Noise Using Quadratic Boundedness
**Authors:** Longge Zhang; Xiangjie Liu; Xiaobing Kong
**Journal:** Journal of Applied Mathematics
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101353
---
## Abstract
This paper designs state estimators for uncertain linear systems with polytopic description and different state disturbance and measurement noise. Necessary and sufficient stability conditions are derived, followed by upper bounding sequences on the estimation error. All the conditions can be expressed in the form of linear matrix inequalities. A numerical example is given to illustrate the effectiveness of the approach.
---
## Body
## 1. Introduction
In many control systems, the state variables are usually not accessible for direct measurement. In this case, it is necessary to design a state estimator, so that its output generates an estimate of those states. Generally speaking, there are two kinds of estimators for dynamic systems: observers and filters. The former assumes perfect knowledge of the system and measurement equations, while the latter can be applied to systems with disturbance. Much of the literature focuses on the design of state estimators for linear systems, for example, a sliding mode and a disturbance detector for a discrete Kalman filter [1], the quantized measurement method [2], least squares estimation for linear singular systems [3], stochastic disturbances and deterministic unknown inputs on linear time-invariant systems [4], and bounded disturbances on a dynamic system [5].

The concept of quadratic boundedness (QB) was first defined for an uncertain nonlinear dynamical system [6], and its necessary and sufficient conditions were then obtained for a class of nominally linear systems [7] and a class of linear systems containing norm-bounded uncertainties [8]. For discrete systems, QB is applied mainly in two areas: receding horizon control (RHC) and estimator design. In RHC research, Ding utilizes QB to characterize the stability properties of the controlled system [9–13]. Alessandri et al. find upper bounds on the norm of the estimation error by means of invariant sets, and these upper bounds can be expressed in terms of linear matrix inequalities [14]. The paper [5] designs a filter by searching for a suitable tradeoff between the transient and asymptotic behaviors of the estimation error. That filter is designed for linear discrete systems with identical state disturbance and measurement noise. For discrete linear systems, however, the disturbance and noise are different in general. Nevertheless, little work has been done on the design of state estimators for uncertain linear systems with different disturbance/noise, so designing state estimators for uncertain linear systems with different state disturbance and measurement noise is an important task.

The existing research on state estimation usually constructs a filter for uncertain systems with bounded disturbance/noise, with no consideration of input or state constraints. Since the disturbance and noise are not assumed to be identical, the stability condition of the estimator differs from that in [5]. For these reasons, the situation becomes more complicated and the extension of the method is not straightforward.

This paper designs state estimators for uncertain linear systems with polytopic description. The problem is formulated in the form of linear matrix inequalities (LMIs). The organization of the paper is as follows. The earlier results are presented in Section 2. The new robust estimator for uncertain linear systems with different disturbance/noise is designed in Section 3. A numerical simulation example follows in Section 4, and some conclusions are given at the end.

Notations. For any vector $x$ and a positive-definite matrix $Q$, $\mathcal{E}_Q$ is the ellipsoid defined as $\{x \mid x'Qx \le 1\}$; $Q'$ is the transpose of the matrix $Q$; $\|x\|$ is the Euclidean norm of the vector $x$. The symbol $*$ induces a symmetric structure in LMIs.
## 2. Earlier Results
In this section, some results presented by Alessandri et al. [5, 14] are briefly introduced.

Consider the discrete-time dynamic system
$$x_{t+1} = A_t x_t + G_t w_t, \quad t = 0,1,\ldots, \tag{2.1}$$
where $x_t \in \mathbb{R}^n$ is the state vector and $w_t \in \mathcal{E}_Q \subset \mathbb{R}^p$ is the noise vector. The notions of strict quadratic boundedness with a common Lyapunov matrix and of a positively invariant set are defined by Alessandri et al. [5, 14], and the following theorem is proved.

Theorem 2.1 (see [14]).
The following facts are equivalent:

(i) System (2.1) is strictly quadratically bounded with a common Lyapunov matrix $P > 0$ for all allowable $w_t \in \mathcal{E}_Q$ and $(A_t, G_t) \in \varphi$, $t = 0,1,\ldots$, where $\varphi$ is a known bounded set.

(ii) The ellipsoid $\mathcal{E}_P$ is a positively invariant set for system (2.1) for all allowable $w_t \in \mathcal{E}_Q$ and $(A_t, G_t) \in \varphi$, $t = 0,1,\ldots$.

(iii) There exists $\alpha_t \in (0,1)$ such that, for any $(A_t, G_t) \in \varphi$,
$$\begin{bmatrix} A_t' P A_t - P + \alpha_t P & A_t' P G_t \\ * & G_t' P G_t - \alpha_t Q \end{bmatrix} \le 0. \tag{2.2}$$
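For a fixed $\alpha$, condition (2.2) is linear in $P$, so its feasibility can be checked numerically with a semidefinite-programming solver. Below is a minimal sketch using CVXPY, assuming a single $(A, G)$ pair (for a polytopic $\varphi$, the same LMI would be imposed at every vertex); the strictness margin `eps` and the solver defaults are our own choices, not part of the theorem.

```python
import numpy as np
import cvxpy as cp

def qb_lyapunov_matrix(A, G, Q, alpha, eps=1e-6):
    """Search a Lyapunov matrix P > 0 satisfying LMI (2.2) for fixed alpha."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([[A.T @ P @ A - P + alpha * P, A.T @ P @ G],
                 [G.T @ P @ A,                G.T @ P @ G - alpha * Q]])
    prob = cp.Problem(cp.Minimize(0), [P >> eps * np.eye(n), M << 0])
    prob.solve()
    return P.value if prob.status in ("optimal", "optimal_inaccurate") else None

# Example: a Schur-stable A and the noise bound |w_t| <= 0.25, i.e., Q = 16
A = np.array([[0.5, 0.2], [0.0, 0.6]])
G = np.array([[0.3], [0.3]])
print(qb_lyapunov_matrix(A, G, Q=np.array([[16.0]]), alpha=0.3))
```

By fact (ii) of the theorem, a returned $P$ also certifies that the ellipsoid $\mathcal{E}_P$ is positively invariant for system (2.1).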
For a discrete-time linear system with the same state disturbance and noise,
$$x_{t+1} = A_t x_t + B_t w_t, \quad y_t = C_t x_t + D_t w_t, \quad z_t = L_t x_t, \tag{2.3}$$
for $t = 0,1,\ldots$, where $x_t \in \mathbb{R}^n$, $y_t \in \mathbb{R}^m$, and $z_t \in \mathbb{R}^r$ are the state vector, the measured output, and the signal to be estimated, respectively, and $w_t \in \mathcal{E}_Q \subset \mathbb{R}^p$ is the disturbance/noise vector. $A_t, B_t, C_t, D_t, L_t$ are the system matrices with proper dimensions. The disturbance/noise is considered unknown, and the system matrices are supposed to be unknown and time varying but belonging to a polytopic set $\mathcal{P}$, that is, $(A_t, B_t, C_t, D_t, L_t) \in \mathcal{P}$, $t = 0,1,\ldots$, where
$$\mathcal{P} \triangleq \left\{ (A,B,C,D,L) = \sum_{i=1}^{N} \lambda_i \left(A^{(i)}, B^{(i)}, C^{(i)}, D^{(i)}, L^{(i)}\right);\ \sum_{i=1}^{N} \lambda_i = 1,\ \lambda_i \ge 0,\ i = 1,2,\ldots,N \right\}. \tag{2.4}$$
Here $(A^{(i)}, B^{(i)}, C^{(i)}, D^{(i)}, L^{(i)})$, $i = 1,2,\ldots,N$, are the vertices of the polytope $\mathcal{P}$.

Definition 2.2 (see [5]).
A sequence of vectors $\xi_t$ is said to be exponentially bounded with constants $\beta \in (0,1)$, $k_1 \ge 0$, and $k_2 > -k_1$ if
$$\|\xi_t\|^2 \le k_1 + k_2 (1-\beta)^t, \quad t = 0,1,\ldots. \tag{2.5}$$
It is easy to see that $\beta$ determines the convergence speed and $k_1^{1/2}$ represents an asymptotic upper bound on the sequence $\xi_t$.

Theorem 2.3 (see [5]).
Consider two scalars $\alpha \in (0,1)$ and $\gamma > 0$. The following facts are equivalent.

(i) There exist $\hat{A}, \hat{B}, \hat{L}$, and $P > 0$ such that the following conditions are satisfied for any $(A,B,C,D,L) \in \mathcal{P}$:
$$\tilde{C} P^{-1} \tilde{C}' - \gamma^2 I < 0, \qquad \begin{bmatrix} A' P A - P + \alpha P & A' P G \\ * & G' P G - \alpha Q \end{bmatrix} < 0. \tag{2.6}$$

(ii) There exist $V$, $W$, $X > 0$, $Y > 0$, and $Z$ such that
$$\begin{bmatrix} \gamma^2 I & L - W & L \\ * & X & X \\ * & * & Y \end{bmatrix} > 0, \qquad
\begin{bmatrix}
(1-\alpha)X & (1-\alpha)X & 0 & A'X & A'Y + C'Z' + V' \\
* & (1-\alpha)Y & 0 & A'X & A'Y + C'Z' \\
* & * & \alpha Q & B'X & B'Y + D'Z \\
* & * & * & X & X \\
* & * & * & * & Y
\end{bmatrix} > 0, \tag{2.7}$$
for $(A,B,C,D,L) = (A^{(i)}, B^{(i)}, C^{(i)}, D^{(i)}, L^{(i)})$, $i = 1,2,\ldots,N$.

Then the cost $J(\gamma, \alpha) \triangleq \mu\gamma - (1-\mu)\alpha$ can be minimized over $V$, $W$, $X > 0$, $Y > 0$, $Z$, $\alpha \in (0,1)$, and $\gamma > 0$ under the constraints (2.7).
## 3. A Robust Estimator for Uncertain Linear Systems with Different Noises
Let us consider the discrete-time linear system with different disturbance/noise:
$$x_{t+1} = A_t x_t + B_t w_t, \quad y_t = C_t x_t + D_t v_t, \quad z_t = L_t x_t, \tag{3.1}$$
where the matrices and vectors are the same as in (2.3) except that $w_t \in \mathcal{E}_{Q_1}$ and $v_t \in \mathcal{E}_{Q_2}$ are the vectors of the state disturbance and measurement noise, respectively.

To estimate the signal $z_t$, a linear filter of the following form is introduced:
$$\hat{x}_{t+1} = \hat{A} \hat{x}_t + \hat{B} y_t, \quad \hat{z}_t = \hat{L} \hat{x}_t, \tag{3.2}$$
for $t = 0,1,\ldots$, where $\hat{x}_t \in \mathbb{R}^n$ is the filter state vector and $\hat{z}_t \in \mathbb{R}^r$ is the estimate of the signal $z_t$.

Define the estimation error $e_t$, the augmented state vector, and the augmented disturbance/noise as
$$e_t \triangleq z_t - \hat{z}_t, \quad \tilde{x}_t \triangleq \begin{bmatrix} x_t \\ \hat{x}_t \end{bmatrix}, \quad \tilde{w}_t \triangleq \begin{bmatrix} w_t \\ v_t \end{bmatrix}, \tag{3.3}$$
and the dynamic system associated with the estimation error
$$\tilde{x}_{t+1} = \underbrace{\begin{bmatrix} A_t & 0 \\ \hat{B} C_t & \hat{A} \end{bmatrix}}_{\tilde{A}} \tilde{x}_t + \underbrace{\begin{bmatrix} B_t & 0 \\ 0 & \hat{B} D_t \end{bmatrix}}_{\tilde{B}} \tilde{w}_t, \tag{3.4}$$
$$e_t = \underbrace{\begin{bmatrix} L_t & -\hat{L} \end{bmatrix}}_{\tilde{C}} \tilde{x}_t, \tag{3.5}$$
for $t = 0,1,\ldots$.

The objective is to find an estimate $\hat{z}_t$ of the signal $z_t$ such that the estimation error $e_t = z_t - \hat{z}_t$ is exponentially bounded for any $x_0 \in \mathbb{R}^n$, $w_t \in \mathcal{E}_{Q_1}$, $v_t \in \mathcal{E}_{Q_2}$, and $(A_t, B_t, C_t, D_t, L_t) \in \mathcal{P}$, $t = 0,1,\ldots$. Then the following problem has to be solved.

Problem 1.
Find matrices $\hat{A}, \hat{B}, \hat{L}$ such that, for any $x_0 \in \mathbb{R}^n$, $w_t \in \mathcal{E}_{Q_1}$, $v_t \in \mathcal{E}_{Q_2}$, and $(A_t, B_t, C_t, D_t, L_t) \in \mathcal{P}$, $t = 0,1,\ldots$, the estimation error $e_t$ is exponentially bounded with constants $\beta \in (0,1)$, $k_1 \ge 0$, and $k_2 > -k_1$.

In order to solve Problem 1, we now exploit the results on quadratic boundedness. More specifically, the following proposition holds.

Proposition 3.1.
Suppose there exist matrices $\hat{A}, \hat{B}, \hat{L}$, a symmetric matrix $P > 0$, and two scalars $\gamma > 0$ and $\alpha \in (0,1)$ such that, for any $(A,B,C,D,L) \in \mathcal{P}$,
$$\tilde{C} P^{-1} \tilde{C}' - \gamma^2 I < 0, \tag{3.6}$$
$$\begin{bmatrix} \tilde{A}' P \tilde{A} - P + \alpha P & \tilde{A}' P \tilde{B} \\ * & \tilde{B}' P \tilde{B} - \alpha R \end{bmatrix} < 0, \tag{3.7}$$
where $R = \operatorname{diag}\{Q_1, Q_2\}$. Then, for any $x_0 \in \mathbb{R}^n$, $w_t \in \mathcal{E}_{Q_1}$, $v_t \in \mathcal{E}_{Q_2}$, and $(A_t, B_t, C_t, D_t, L_t) \in \mathcal{P}$, $t = 0,1,\ldots$, the estimation error is exponentially bounded with constants
$$\beta = \alpha, \quad k_1 = \gamma^2, \quad k_2 = \gamma^2 \left(\tilde{x}_0' P \tilde{x}_0 - 1\right). \tag{3.8}$$
Hence the matrices $\hat{A}, \hat{B}$, and $\hat{L}$ are a solution of Problem 1.

Remark 3.2.
Condition (3.7) ensures that the error system (3.4) is strictly quadratically bounded with a common Lyapunov matrix $P$, and from Corollary 2 of [5] it is clear that $\gamma$ is a bound. However, the feasibility of conditions (3.6) and (3.7) cannot be verified easily. The following theorem translates them into equivalent LMI conditions.

Theorem 3.3.
Consider two scalars $\alpha \in (0,1)$ and $\gamma > 0$. The following facts are equivalent.

(i) There exist $\hat{A}, \hat{B}, \hat{L}$, and $P > 0$ such that conditions (3.6) and (3.7) are satisfied for any $(A,B,C,D,L) \in \mathcal{P}$.

(ii) There exist $V$, $W$, $X > 0$, $Y > 0$, and $Z$ such that
$$\begin{bmatrix} \gamma^2 I & L - W & L \\ * & X & X \\ * & * & Y \end{bmatrix} > 0, \tag{3.9}$$
$$\begin{bmatrix}
(1-\alpha)X & (1-\alpha)X & 0 & 0 & A'X & A'Y + C'Z' + V' \\
* & (1-\alpha)Y & 0 & 0 & A'X & A'Y + C'Z' \\
* & * & \alpha Q_1 & 0 & B'X & B'Y \\
* & * & * & \alpha Q_2 & 0 & D'Z' \\
* & * & * & * & X & X \\
* & * & * & * & * & Y
\end{bmatrix} > 0. \tag{3.10}$$

Proof.
The proof of (3.6) ⇔ (3.9) is the same as that of Theorem 2 in [5] and is omitted for brevity.

(3.7) ⇒ (3.10): suppose condition (3.7) is satisfied. Partition the matrices $P$ and $P^{-1}$ as
$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}, \tag{3.11}$$
with $P_{11} \in \mathbb{R}^{n \times n}$ and $S_{11} \in \mathbb{R}^{n \times n}$. Clearly $PP^{-1} = P^{-1}P = I$, so we have
$$S_{12} P_{12}' = I - S_{11} P_{11}, \qquad S_{11} P_{12} + S_{12} P_{22} = 0. \tag{3.12}$$
Moreover, since (3.7) is a strict inequality, we can assume, without loss of generality, that $I - S_{11} P_{11}$ is invertible [15]. Hence $S_{12}$ and $P_{12}$ are invertible. Using the Schur complement, we can rewrite (3.7) as
$$\begin{bmatrix} (1-\alpha)P & 0 & \tilde{A}'P \\ * & \alpha R & \tilde{B}'P \\ * & * & P \end{bmatrix} > 0. \tag{3.13}$$
Define
$$T \triangleq \begin{bmatrix} H' & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & H' \end{bmatrix}, \tag{3.14}$$
where $H' = \begin{bmatrix} I & I \\ S_{12}' S_{11}^{-1} & 0 \end{bmatrix}$, so that
$$T' \begin{bmatrix} (1-\alpha)P & 0 & \tilde{A}'P \\ * & \alpha R & \tilde{B}'P \\ * & * & P \end{bmatrix} T = \begin{bmatrix} (1-\alpha)HPH' & 0 & H\tilde{A}'PH' \\ * & \alpha R & \tilde{B}'PH' \\ * & * & HPH' \end{bmatrix}, \tag{3.15}$$
with
$$HPH' = \begin{bmatrix} I & S_{11}^{-1} S_{12} \\ I & 0 \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} I & I \\ S_{12}' S_{11}^{-1} & 0 \end{bmatrix}.$$
Using condition (3.12), we get
$$HPH' = \begin{bmatrix} S_{11}^{-1} & S_{11}^{-1} \\ S_{11}^{-1} & P_{11} \end{bmatrix} = \begin{bmatrix} X & X \\ X & Y \end{bmatrix}, \tag{3.16}$$
where $X \triangleq S_{11}^{-1}$ and $Y \triangleq P_{11}$. Similarly,
$$H\tilde{A}'PH' = \begin{bmatrix} A'X & A'Y + C'Z' + V' \\ A'X & A'Y + C'Z' \end{bmatrix}, \tag{3.17}$$
where we define $V \triangleq P_{12} \hat{A} S_{12}' S_{11}^{-1}$ and $Z \triangleq P_{12} \hat{B}$, and
$$\tilde{B}'PH' = \begin{bmatrix} B'X & B'Y \\ 0 & D'Z' \end{bmatrix}. \tag{3.18}$$
So we obtain condition (3.10).

(3.10) ⇒ (3.7): suppose there exist $V$, $X > 0$, $Y > 0$, and $Z$ satisfying condition (3.10). Since condition (3.10) holds at every vertex $(A^{(i)}, B^{(i)}, C^{(i)}, D^{(i)}, L^{(i)})$ of the polytope $\mathcal{P}$, it also holds for every system matrix $(A,B,C,D,L) \in \mathcal{P}$.

We can obtain $\begin{bmatrix} X & X \\ X & Y \end{bmatrix} > 0$ from condition (3.10), and based on the Schur complement, $I - X^{-1}Y < 0$ can be deduced. Then there exist two square invertible matrices $M$ and $N$ such that $M'N' = I - X^{-1}Y$. Choosing $P_{11} = Y$, $S_{11} = X^{-1}$, $S_{12}' = M$, and $P_{12} = N$, condition (3.7) can be obtained by premultiplying and postmultiplying condition (3.10) by $(T')^{-1}$ and $T^{-1}$. Applying the change of variables
$$\hat{A} = N^{-1} V X^{-1} M^{-1}, \quad \hat{B} = N^{-1} Z, \quad \hat{L} = W X^{-1} M^{-1}, \quad P = \begin{bmatrix} Y & N \\ N' & -N' X^{-1} M^{-1} \end{bmatrix}, \tag{3.19}$$
we obtain the linear filter (3.2).

Remark 3.4.
In [5], the estimators for uncertain systems assume that the state disturbance and the measurement noise are identical at all times. In practice they ordinarily differ, so the result in this paper covers the more general case.
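Since condition (ii) of Theorem 3.3 consists of LMIs once $\alpha$ is fixed, the filter can be synthesized numerically by imposing (3.9) and (3.10) at the polytope vertices, minimizing $\gamma^2$, and recovering $(\hat{A}, \hat{B}, \hat{L})$ through (3.19). The following is a minimal sketch with CVXPY for the numerical example of Section 4; the fixed value $\alpha = 0.35$, the choice $M = I$ in the factorization $M'N' = I - X^{-1}Y$, and the small positive margins standing in for strict inequalities are assumptions of this sketch, not prescriptions of the theorem.

```python
import numpy as np
import cvxpy as cp

n, m, r, p1, p2 = 2, 1, 1, 1, 1               # dims of x, y, z, w, v
alpha = 0.35                                   # fixed; an outer grid over alpha is natural
Q1, Q2 = np.array([[16.0]]), np.array([[100.0]])

def vertex(a):                                 # system matrices at a polytope vertex
    A = np.array([[0.385, 0.33], [0.21 + a, 0.59]])
    B = np.array([[0.3], [0.3]])
    C = np.array([[0.2, 0.2 + a]])
    D = np.array([[0.3]])
    L = np.array([[1.0, 0.0]])
    return A, B, C, D, L

X = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, n), symmetric=True)
V = cp.Variable((n, n))
Z = cp.Variable((n, m))
W = cp.Variable((r, n))
g = cp.Variable()                              # g plays the role of gamma^2

O = np.zeros
cons = [X >> 1e-6 * np.eye(n), Y >> 1e-6 * np.eye(n)]
for a in (-0.11, 0.11):                        # vertices for |a_t| <= 0.11
    A, B, C, D, L = vertex(a)
    lmi39 = cp.bmat([[g * np.eye(r), L - W, L],
                     [(L - W).T,     X,     X],
                     [L.T,           X,     Y]])
    lmi310 = cp.bmat([                          # LMI (3.10), lower blocks by symmetry
        [(1-alpha)*X, (1-alpha)*X, O((n,p1)),  O((n,p2)),  A.T@X,     A.T@Y + C.T@Z.T + V.T],
        [(1-alpha)*X, (1-alpha)*Y, O((n,p1)),  O((n,p2)),  A.T@X,     A.T@Y + C.T@Z.T],
        [O((p1,n)),   O((p1,n)),   alpha*Q1,   O((p1,p2)), B.T@X,     B.T@Y],
        [O((p2,n)),   O((p2,n)),   O((p2,p1)), alpha*Q2,   O((p2,n)), D.T@Z.T],
        [X@A,         X@A,         X@B,        O((n,p2)),  X,         X],
        [Y@A + Z@C + V, Y@A + Z@C, Y@B,        Z@D,        X,         Y]])
    cons += [lmi39 >> 1e-8 * np.eye(r + 2*n), lmi310 >> 1e-8 * np.eye(4*n + p1 + p2)]

cp.Problem(cp.Minimize(g), cons).solve()

Xv, Yv = X.value, Y.value
Nmat = (np.eye(n) - np.linalg.inv(Xv) @ Yv).T  # M = I, so N' = I - X^{-1}Y
A_hat = np.linalg.inv(Nmat) @ V.value @ np.linalg.inv(Xv)   # change of variables (3.19)
B_hat = np.linalg.inv(Nmat) @ Z.value
L_hat = W.value @ np.linalg.inv(Xv)
print("gamma =", np.sqrt(g.value))
```

Gridding $\alpha$ over $(0,1)$ and repeating the solve recovers the tradeoff cost $J(\gamma, \alpha) = \mu\gamma - (1-\mu)\alpha$ mentioned after Theorem 2.3.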
## 4. A Numerical Example
Let us consider the system [12] in the form of (3.1) with
$$A_t = \begin{bmatrix} 0.385 & 0.33 \\ 0.21 + a_t & 0.59 \end{bmatrix}, \quad B_t = \begin{bmatrix} 0.3 \\ 0.3 \end{bmatrix}, \quad C_t = \begin{bmatrix} 0.2 & 0.2 + a_t \end{bmatrix}, \quad D_t = 0.3, \quad L_t = \begin{bmatrix} 1 & 0 \end{bmatrix}, \tag{4.1}$$
where $a_t$ is an uncertain parameter satisfying $|a_t| \le 0.11$. Suppose the state disturbance satisfies $|w_t| \le 0.25$ and the measurement noise satisfies $|v_t| \le 0.1$ (i.e., $Q_1 = 16$, $Q_2 = 100$).

As this kind of uncertainty is of the polytopic type described in Section 3, the proposed method is used to obtain a linear filter, referred to in what follows as the “filter with different disturbance/noise” (FDDN). Two sets of initial states are chosen: $\hat{x}_0 \in \{[8\ \ 6]^T, [-11\ \ 7]^T\}$ and $x_0 \in \{[10\ \ 7]^T, [-9\ \ -6]^T\}$. The resulting state trajectories are shown in Figure 1 by the marked solid lines, together with the estimator state trajectories shown by the marked dotted lines. Figure 1 indicates that the designed estimator can track the system states effectively.Figure 1
The state and estimator trajectories.

The performance of the filter can be further studied by using an average measure of the estimation error, such as the expected quadratic estimation error. A comparison was then made between the FDDN and the “filter with identical disturbance/noise” (FIDN) to evaluate the performance achieved when the different disturbance/noise is taken into account in the synthesis of the filter. At each time instant, the uncertain parameter was chosen, with equal probability, at one of its limit values. We assumed $x_0$, $w_t$, and $v_t$, $t = 0,1,\ldots$, to be independent random vectors, with the initial states in the ball of radius 10 (i.e., $\|x_0\| \le 10$). Figure 2 shows the plots of the root mean square error (RMSE), computed over $10^3$ randomly chosen simulations, for the considered filters. The performance of the FDDN turns out to be better from the point of view of the asymptotic behavior when there is a large difference between the disturbance and the noise.Figure 2
Plots of the RMSE for the considered filters.
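The RMSE comparison of Figure 2 can be reproduced in outline by simulating system (3.1) together with filter (3.2) over many random runs. A minimal sketch follows, assuming filter matrices `A_hat`, `B_hat`, `L_hat` such as those produced by the synthesis sketch above, and assuming uniform draws for the disturbance and noise within their stated bounds (the paper specifies only the bounds and that the uncertain parameter sits at a limit value with equal probability).

```python
import numpy as np

def rmse_curve(A_hat, B_hat, L_hat, runs=1000, T=60, seed=0):
    """Monte Carlo RMSE of e_t = z_t - z_hat_t for the example system with
    |a_t| <= 0.11, |w_t| <= 0.25, |v_t| <= 0.1, and ||x_0|| <= 10."""
    rng = np.random.default_rng(seed)
    err2 = np.zeros(T)
    for _ in range(runs):
        x = rng.uniform(-1.0, 1.0, 2)
        x *= 10.0 / max(1.0, np.linalg.norm(x))    # initial state in the radius-10 ball
        xh = np.zeros(2)                           # filter state
        for t in range(T):
            a = rng.choice((-0.11, 0.11))          # limit values, equal probability
            A = np.array([[0.385, 0.33], [0.21 + a, 0.59]])
            C = np.array([[0.2, 0.2 + a]])
            w, v = rng.uniform(-0.25, 0.25), rng.uniform(-0.1, 0.1)
            err2[t] += (x[0] - (L_hat @ xh)[0]) ** 2   # z_t = L_t x_t = first state
            y = C @ x + 0.3 * v                        # D_t = 0.3
            xh = A_hat @ xh + (B_hat @ y).ravel()      # filter update (3.2)
            x = A @ x + np.array([0.3, 0.3]) * w       # plant update (3.1)
    return np.sqrt(err2 / runs)
```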
## 5. Conclusions
The main contribution of this work is a method for constructing an estimator for uncertain systems with different state disturbance and measurement noise. The stability of the estimator is analyzed using quadratic boundedness, and the estimator can be obtained via LMI procedures.
---
*Source: 101353-2012-06-03.xml*
# Denoising Seismic Data via a Threshold Shrink Method in the Non-Subsampled Contourlet Transform Domain
**Authors:** Yu Yang; Qi Ran; Kang Chen; Cheng Lei; Yusheng Zhang; Han Liang; Song Han; Cong Tang
**Journal:** Mathematical Problems in Engineering
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013623
---
## Abstract
In seismic exploration, effective seismic signals can be seriously distorted and interfered with by noise, and the performance of traditional seismic denoising approaches can hardly meet the requirements of high-precision seismic exploration. To remarkably enhance signal-to-noise ratios (SNR) and adapt to high-precision seismic exploration, this work exploits the non-subsampled contourlet transform (NSCT) and a threshold shrink method to design a new approach for suppressing seismic random noise. NSCT is an excellent multiscale, multidirectional, and shift-invariant image decomposition scheme, which can not only calculate exact contourlet transform coefficients through multiresolution analysis but also give an almost optimized approximation. It has a better high-frequency response and a stronger ability to describe curves and surfaces. Specifically, we utilize the superior performance of NSCT to decompose the noisy seismic data into various frequency sub-bands and orientation response sub-bands, obtaining transform high frequencies fine enough to effectively achieve the separation of signals and noise. Besides, we use an adaptive Bayesian threshold shrink method instead of the traditional handcrafted threshold scheme for denoising the high-frequency sub-bands of NSCT coefficients; this pays more attention to the internal characteristics of the signals themselves, improves the robustness of the method, and works better at preserving richer structural details of effective signals. The proposed method can achieve seismic random noise attenuation while retaining effective signals to the maximum degree. Experimental results reveal that the proposed method is superior to wavelet-based and curvelet-based threshold denoising methods, raising synthetic seismic data with lower SNR from −8.2293 dB to 8.6838 dB, which is 11.8084 dB and 9.1072 dB higher than the two classic sparse-transform-based methods, respectively. Furthermore, we also apply the proposed method to process field data, achieving satisfactory results.
---
## Body
## 1. Introduction
In recent years, high-precision seismic exploration has been a key subject in modern seismic exploration. This technique will be hindered if the noise in acquired seismic signals cannot be removed adequately. Traditional seismic data denoising approaches can hardly meet the requirements of high-precision seismic exploration because the level and complexity of the noise accompanying seismic signals have significantly increased with increasingly complex exploration environments and greater exploration depths as the field of seismic exploration has extended. So, it is crucial to design new effective techniques to remarkably enhance the signal-to-noise ratio (SNR).

At present, many seismic denoising approaches have been proposed, including the initial seismic data denoising method [1], traditional transform-domain-based denoising methods [2–4], sparse-transform-based methods [5–8] for solving multiple tasks, learning-based methods [9, 10], and other methods [11–14]. As the most common seismic noise, random noise can penetrate the whole time domain and severely distort and interfere with effective seismic data. Thus, since Canales [1] first developed a random noise reduction approach, many random noise attenuation methods have been presented on this basis, such as sparse-transform-based approaches, empirical mode decomposition (EMD) based approaches, and fast dictionary-learning-based approaches [10]. Chen and Ma [4] removed random noise with predictive filtering of f-x empirical mode decomposition. Chen and Fomel [6] developed an EMD-Seislet-transform-based method to remove seismic random noise. Liu et al. [7] presented variational mode decomposition for suppressing the random noise of seismic data. In practice, some of the most intensively used and most efficient seismic random noise attenuation approaches are based on the sparse transforms of multiscale geometric analysis. Zhang and Lu [2] removed noise and improved the resolution of seismic data by applying the wavelet transform. Neelamani et al. [3] attenuated random noise with the curvelet transform, and subsequently, several variants [8, 12] with good results have been reported. Lin [14] proposed a three-dimensional (3-D) steerable-pyramid-decomposition-based suppression method for seismic random noise. Sang et al. [15] presented an unconventional technique based on a proximal classifier with consistency (PCC) in the transform domain for attenuating seismic random noise; they also proposed another seismic denoising approach [16] via a deep neural network and simultaneously suppressed seismic coherent and incoherent noises [17] based on a deep neural network.

High-SNR data are an important guarantee of high-precision seismic exploration. However, the existing transform-domain-based methods, such as the wavelet and curvelet transforms, struggle to obtain higher-SNR data because their transform high frequencies are not fine enough. Compared with these sparse-transform-based methods, the NSCT provides a multiscale, multidirectional, and shift-invariant decomposition scheme, which has a better high-frequency response and a stronger ability to describe curves and surfaces. Besides, existing methods often conduct rough threshold operations using handcrafted schemes such as hard or soft thresholding, which often lose effective signal.
Therefore, to remarkably enhance SNRs and adapt to high-precision seismic exploration, we develop an effective seismic data denoising method in this paper. The contributions are as follows:

(i) We propose to utilize the new sparse transform technique, the non-subsampled contourlet transform (NSCT), to decompose the noisy seismic data into various frequency sub-bands and orientation response sub-bands, obtaining transform high frequencies fine enough to effectively achieve the separation of signals and noise.

(ii) We use an adaptive Bayesian threshold shrink method instead of the traditional handcrafted threshold scheme for denoising the high-frequency sub-bands of NSCT coefficients; this pays more attention to the internal characteristics of the signals themselves, improves the robustness of the method, and works better at preserving richer structural details of effective signals.

(iii) We conduct experiments on synthetic and field data, which reveal that our approach is superior to the classical wavelet-transform-based and curvelet-transform-based methods, achieving higher signal-to-noise ratio (SNR) values.

The remainder of this paper is organized as follows. We present our method in Section 2. Experiments and performance evaluation are presented in Section 3. The conclusion is drawn in Section 4.
## 2. Method
In this paper, we focus on transform-domain-based thresholding methods owing to their good performance. The wavelet-based thresholding scheme is the most classic method for seismic data denoising. Wavelets can sparsely represent one-dimensional (1-D) data with smoothed point discontinuities and have been successfully used for representing digital signals [18]. However, wavelets cannot efficiently handle higher-dimensional data because of the usual presence of other kinds of singularities. As a matter of fact, curvelets [19], contourlets [20], bandelets [21], and some other image/signal representations can take advantage of the anisotropic regularity of a surface along edges, but these representations all have their own disadvantages, such as the lack of a multiresolution geometry representation for curvelets, extremely limited clear directional features for contourlets, and computationally expensive geometry optimization for bandelets. The non-subsampled contourlet transform (NSCT) [22] is an excellent multiscale, multidirectional, and shift-invariant image decomposition scheme, which can not only calculate exact contourlet coefficients through multiresolution analysis but also give an almost optimized approximation. It has a better high-frequency response and a stronger ability to describe curves and surfaces. Therefore, we utilize the NSCT to denoise seismic data in this paper.
### 2.1. NSCT for Seismic Data
The NSCT [22] primarily consists of a cascade of a non-subsampled pyramid filter bank (NSPFB) and a non-subsampled directional filter bank (NSDFB). First, the NSPFB is utilized to decompose an image, and the sub-bands obtained are used as inputs of the NSDFB to generate decomposition results of the initial image in multiple directions and dimensions. The NSCT conducts a K-level decomposition on an image to produce one low-frequency (LF) and several high-frequency (HF) sub-bands, and the size of all these sub-bands is identical to that of the original image. Full reconstruction of the NSCT is possible since the NSPFB and the NSDFB can both be completely rebuilt.

The shift-invariant filtering structure of the NSCT yields its multiscale feature. By using a bank of non-subsampled 2-D two-channel filters, we obtain a sub-band decomposition like the Laplacian pyramid. Figure 1 shows a 3-stage non-subsampled pyramid (NSP) decomposition, whose expansion is conceptually similar to the 1-D non-subsampled wavelet transform (NSWT) computed with the à trous algorithm [22]. For a J-stage decomposition, the redundancy is J + 1. The region $[-\pi/2^j, \pi/2^j]^2$ and its complement are the ideal passband supports of the low- and high-pass filters at the j-th stage, respectively. Upsampling the first-stage filters yields the filters for the subsequent stages; thus, no additional filter is needed to obtain the multiscale property. Our structure differs from that of the separable NSWT. In particular, our structure produces one bandpass image in each stage, leading to a redundancy of J + 1, whereas the NSWT generates three directional sub-bands in each stage, leading to a redundancy of 3J + 1. The advantage of the NSP lies in its ability to use better filters owing to its generality.Figure 1
(a) Schematic diagram of 3-stage NSP decomposition. The aliasing due to upsampling is denoted with lighter gray. (b) Sub-bands on the 2-D frequency plane.

The NSCT has the following steps. First, the NSPFB is used to decompose an image into an HF and an LF sub-band. In multilevel decomposition, an image is finally decomposed into an LF sub-band and a set of HF sub-bands if only its LF sub-band is further iteratively filtered. The redundancy of an X-level NSPFB decomposition is X + 1. For the X-level low-pass and bandpass filters, the ideal frequency-domain supports are $[-\pi/2^{x}, \pi/2^{x}]^2$ and $[-\pi/2^{x-1}, \pi/2^{x-1}]^2 \setminus [-\pi/2^{x}, \pi/2^{x}]^2$, respectively. The decomposition requires no extra filters to acquire the multiscale property; one bandpass image is generated at each stage, giving the redundancy of X + 1, and the structure is significantly superior to that of the wavelet transform. Then, these sub-bands can be decomposed along singular points and multiple directions in various dimensions, and the directions are integrated. The NSDFB is also a two-channel filter bank, which comprises decomposition filters $U_i(z)$, $i = 0,1$, and synthesis filters $V_i(z)$, $i = 0,1$, satisfying the Bézout identity:
$$U_0(z) V_0(z) + U_1(z) V_1(z) = 1. \tag{1}$$
To adopt the ideal frequency-domain support, the two channels are decomposed by $U_0(z)$ and $U_1(z)$. Then, $U_1(z)$ and $U_0(z)$ are upsampled, instead of subsampled, by the sampling matrices at each level to obtain the direction filters of the subsequent levels. This completes the image decomposition, and an NSCT transform is schematically presented in Figures 2 and 3. Figure 2 displays an overview of the NSCT, which has a filter bank dividing the 2-D frequency plane into the sub-bands plotted in the bottom left quarter of Figure 1. By using the NSCT, we decompose the 2-D seismic signal data into two shift-invariant components: an NSP structure to ensure the multiscale property (Part 1 of Figure 2) and an NSDFB structure to give directionality (Part 2 of Figure 2). The resulting idealized frequency partitioning diagram is presented in Figure 3. The structure consists of a bank of filters that splits the 2-D frequency plane into several sub-bands. In this paper, we use this non-downsampling mode to reduce sampling distortion in the filters and obtain translation invariance, in which the size of the directional sub-band at each scale is the same as that of the original 2-D seismic signal matrix. With the NSCT, more details are preserved, and the decomposition better maintains the edge information and contour structure of the seismic signals.Figure 2
Schematic diagram of the NSCT of 2D seismic data.Figure 3
Ideal frequency division by the NSCT.

Figure 4 shows the results of applying the two-level NSCT to the synthesized seismic signal data with noise (Figure 4(a)) to yield a low-pass sub-band (Figure 4(b)) and a set of high-pass sub-bands (Figures 4(c) and 4(d)). Here, two and four shearing directions are used for the coarser and the finer scale, respectively. We can see from Figure 4 that the LF record (Figure 4(b)) decomposed by the NSCT basically contains the effective synthesized seismic signals. The HF records at scale 1 (Figure 4(d)) contain only noise, while the HF records at scale 2 (Figure 4(c)) contain partially effective signals as well as noise, which need to be further processed via signal-noise separation.Figure 4
Example of NSCT processing. (a) Synthesized seismic signal data with noise. (b) Approximate NSCT coefficients. Seismic signals of the detailed NSCT coefficients at scale 2, 2 directions (c) and scale 1, 4 directions (d).
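The non-subsampled pyramid stage can be illustrated with the à trous idea referenced above: the same low-pass kernel is reapplied with its taps spread apart by $2^k$ zeros, so every sub-band keeps the size of the input, and summing the bandpass images with the low-pass residual reconstructs the data exactly. Below is a minimal single-kernel sketch in Python, assuming the common B3-spline kernel; this stands in for, and is simpler than, the paper's NSPFB filters, and the directional NSDFB stage is omitted.

```python
import numpy as np
from scipy import ndimage

def atrous_pyramid(img, levels=3):
    """Non-subsampled (a trous) pyramid: returns `levels` bandpass images
    plus one low-pass residual, each the same size as the input."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0    # B3-spline low-pass kernel
    low, bands = img.astype(float), []
    for k in range(levels):
        hk = np.zeros(4 * 2**k + 1)
        hk[::2**k] = h                                 # insert 2^k - 1 zeros between taps
        smooth = ndimage.convolve1d(low, hk, axis=0, mode="reflect")
        smooth = ndimage.convolve1d(smooth, hk, axis=1, mode="reflect")
        bands.append(low - smooth)                     # bandpass = detail at scale k
        low = smooth
    return bands, low

# Redundancy is levels + 1, matching the NSP property stated above
x = np.random.default_rng(1).normal(size=(128, 128))
bands, low = atrous_pyramid(x, levels=3)
assert np.allclose(x, sum(bands) + low)                # perfect reconstruction by summation
```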
### 2.2. Denoising Seismic Data Using the Threshold Shrink in the NSCT Domain
To remarkably suppress seismic random noise without damaging the effective signal during denoising, this paper proposes a novel NSCT-based scheme with an adaptive threshold setting for suppressing seismic random noise. The NSCT, with its multiscale, multidirectional, and relatively optimized sparsity properties, can not only calculate the exact contourlet coefficients through multiresolution analysis but also give an almost optimized approximation. Because the NSCT is multidirectional, large coefficients are obtained when the direction of the NSCT basis function approximately matches the direction of the seismic signals, while small coefficients are obtained when the directions differ greatly. Thus, the random noise is distributed over the small coefficients, and we can remove smaller coefficients to achieve random noise attenuation using an appropriate thresholding operator.

We first analyze the sparsity of the NSCT before giving the denoising steps. It is known that the degree of approximation of the decomposed effective data determines the effect of noise suppression [23]; that is to say, the denoising effect depends on the sparsity of the approach. Figure 5 presents the reconstruction error on synthesized data (Figure 6(a)) in the wavelet transform domain, the curvelet transform domain, and the NSCT domain. Clearly, the reconstruction error of the NSCT is the smallest at the same percentage of coefficients, and it is approximately zero when 6% of the coefficients are retained, showing its optimal sparsity. In Figure 6, we compare the high-frequency coefficients of the NSCT, the curvelet transform, and the wavelet transform, where we can clearly see that the NSCT represents curvature more accurately.Figure 5
Reconstruction errors in three transform domains.Figure 6
Comparison of the NSCT, curvelet transform, and WT coefficients on the synthetic seismic data. (a) The original synthetic seismic data, and the fusion of HF coefficients of (b) NSCT, (c) discrete curvelet transform, and (d) WT.
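The sparsity comparison behind Figure 5 can be reproduced for any transform by keeping only the largest p% of coefficients and measuring the reconstruction error. Below is a minimal sketch of the wavelet curve with PyWavelets; the db4 wavelet, 3 levels, the relative L2 error metric, and the synthetic section are all assumptions (the NSCT curve would require an NSCT implementation, for which no standard Python package exists).

```python
import numpy as np
import pywt

def recon_error(data, keep_percent, wavelet="db4", level=3):
    """Zero all but the largest keep_percent% of wavelet coefficients,
    reconstruct, and return the relative L2 reconstruction error."""
    coeffs = pywt.wavedec2(data, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, int(arr.size * keep_percent / 100.0))
    thresh = np.sort(np.abs(arr).ravel())[-k]
    arr_k = np.where(np.abs(arr) >= thresh, arr, 0.0)
    rec = pywt.waverec2(pywt.array_to_coeffs(arr_k, slices, output_format="wavedec2"), wavelet)
    rec = rec[:data.shape[0], :data.shape[1]]          # waverec2 may pad odd sizes
    return np.linalg.norm(data - rec) / np.linalg.norm(data)

# Synthetic "seismic" section: a few dipping events (illustrative, not the paper's data)
t = np.linspace(0, 1, 256)
data = np.array([np.sin(40 * np.pi * (t - 0.002 * g)) * np.exp(-8 * (t - 0.3 - 0.001 * g) ** 2)
                 for g in range(128)]).T
for p in (1, 2, 4, 6, 8):
    print(f"{p}% kept: error = {recon_error(data, p):.4f}")
```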
Generally, one threshold is used for the whole image/signal (or sub-band) in threshold-shrink-based denoising techniques. Obviously, the threshold value should be smaller if the signals contain more effective information, and larger if the signals have more smooth regions. In both cases, a larger threshold value should correspond to a higher noise level. Evidently, a threshold optimal for detailed information does not function adequately for smooth regions and vice versa. Therefore, the threshold setting can be further optimized by introducing adaptive thresholds for different regions of the seismic signals, exploiting the fact that most signals consist of smooth regions plus effective seismic signal information.

Specifically, the two-dimensional noisy seismic data can be modeled as
$$f(t,g) = x(t,g) + n(t,g), \tag{2}$$
where $t$ denotes time, $g$ represents the trace number, and $x(t,g)$, $n(t,g)$, and $f(t,g)$ represent the effective seismic data, the additive random noise, and the noisy observed seismic signals, respectively. The signal $x(t,g)$ is to be recovered from $f(t,g)$.

In NSCT-based denoising approaches, a threshold is properly set for the NSCT coefficients so that seismic signals can be retrieved from the acquired noisy seismic data. The proposed seismic random noise attenuation has the following main steps.

Step 1.
Decomposing the noisy seismic data with a $K$-level NSCT to yield one low-pass sub-band and a set of high-pass sub-bands $D_{k,j}$, $k = 1,2,\ldots,K$, $j = 1,2,\ldots,J$, where $k$ is the current scale, $j$ the decomposition orientation, and $J$ the total number of decomposition directions.

Step 2.
Calculating the denoising threshold values of all sub-bands $D_{k,j}$. The level-adaptive Bayesian threshold [24] is used and calculated as follows:

(i) Using the robust median estimator to calculate the noise level $\delta$ from the sub-bands:
$$\delta = \frac{\operatorname{Median}\left(\left|C(x,y)\right|\right)}{0.6745}, \quad C(x,y) \in D_{K,J}. \tag{3}$$

(ii) Using the maximum likelihood estimator (MLE) [24] to estimate the signal standard deviation $\delta_{k,j}$ for the noisy coefficients of each detail sub-band $D_{k,j}$:
$$\delta_{k,j} = \sqrt{\max\left(0,\ \frac{1}{mn}\sum_{x=1}^{m}\sum_{y=1}^{n} C_{k,j}(x,y)^2 - \delta^2\right)}, \tag{4}$$
where $C_{k,j}(x,y) \in D_{k,j}$, and $m$ and $n$ denote the size of the seismic signals.

(iii) Calculating the discriminating threshold $\delta_{th}$ using the near-exponential prior of NSCT coefficients across scales:
$$\delta_{th} = \delta \cdot \frac{\sum_{k} \delta_{k,j} \cdot 2^{-k}}{\sum_{k} k^2 \cdot 2^{-k}}, \tag{5}$$
where $k$ denotes the current scale.

(iv) Calculating the denoising threshold $T_{k,\delta_{k,j}}$ of each sub-band for $\delta_{k,j} < \delta_{th}$:
$$T_{k,\delta_{k,j}} = 2^{(k - J/2)/J} \cdot \frac{\delta^2}{\delta_{k,j}}, \tag{6}$$
where $\delta_{k,j}$ denotes the standard deviation of sub-band $D_{k,j}$.

Step 3.
Processing the noise-related NSCT coefficients in the high-frequency sub-bands $D_{k,j}$ with the well-known soft-thresholding method [25]:
$$\hat{C}_{k,j}(x,y) = \begin{cases} \operatorname{sgn}\left(C_{k,j}(x,y)\right) \cdot \left(\left|C_{k,j}(x,y)\right| - T_{k,\delta_{k,j}}\right), & \left|C_{k,j}(x,y)\right| \ge T_{k,\delta_{k,j}}, \\ 0, & \text{otherwise}, \end{cases} \tag{7}$$
where $\hat{C}_{k,j}(x,y)$ and $C_{k,j}(x,y)$ are the matrices of the coefficients after and before denoising in the NSCT domain, respectively, and $T_{k,\delta_{k,j}}$ denotes the adaptive Bayesian threshold.

Step 4.
Reconstructing the denoised seismic data by conducting an inverse NSCT on these denoised NSCT sub-bands.
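The threshold computation of Steps 2 and 3 is straightforward to prototype. A minimal Python/NumPy sketch under the equations above follows; the NSCT decomposition and reconstruction of Steps 1 and 4 are assumed to come from an external implementation (`nsct_decompose` and `nsct_reconstruct` are hypothetical placeholders, not an existing library API), and the δ_th gating of equation (5) is omitted for brevity.

```python
import numpy as np

def noise_sigma(finest_subband):
    # Step 2(i), eq. (3): robust median (MAD) estimate of the noise level
    # from a finest-scale high-frequency sub-band.
    return np.median(np.abs(finest_subband)) / 0.6745

def bayes_threshold(subband, sigma, k, J):
    # Step 2(ii) and (iv), eqs. (4) and (6): signal standard deviation via
    # the MLE, then the level-adaptive Bayesian threshold for scale k.
    sigma_kj = np.sqrt(max(0.0, float(np.mean(subband ** 2)) - sigma ** 2))
    if sigma_kj == 0.0:
        return np.inf  # pure-noise sub-band: every coefficient shrinks to zero
    return (2.0 ** ((k - J) / 2.0) / J) * sigma ** 2 / sigma_kj

def soft_threshold(subband, T):
    # Step 3, eq. (7): soft shrinkage of the noisy coefficients.
    return np.sign(subband) * np.maximum(np.abs(subband) - T, 0.0)

def denoise(noisy, K=2):
    # Steps 1-4 wired together. `highs` is assumed to be a list (one entry
    # per scale) of lists of directional sub-bands.
    low, highs = nsct_decompose(noisy, levels=K)    # hypothetical
    sigma = noise_sigma(highs[-1][0])
    denoised = [
        # J is taken per scale here, a simplification of eq. (6)
        [soft_threshold(d, bayes_threshold(d, sigma, k + 1, len(dirs)))
         for d in dirs]
        for k, dirs in enumerate(highs)
    ]
    return nsct_reconstruct(low, denoised)          # hypothetical
```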
The workflow of our method is illustrated in Figure 7, where a two-level NSCT is performed on synthetic data to yield a low-pass sub-band and a set of high-pass sub-bands; two and four shearing directions are used for the coarser and the finer scale, respectively.
Figure 7
Demonstration of the analysis framework of seismic random noise denoising in the NSCT domain. (a) Before denoising. (b) After denoising.
## 3. Experiments
In this section, we evaluate our proposed method against two classic sparse-transform-based methods (the wavelet- and curvelet-based thresholding schemes) on synthetic seismic data and field data. The hardware used in the experiments is an Intel 6226R CPU @ 2.90 GHz, 93 GB of memory, and an NVIDIA RTX 3090 24 GB graphics card. The software used is MATLAB R2016b.
### 3.1. Synthetic Seismic Example
To demonstrate the performance, an example of hyperbolic-events synthetic data is used. Figure 8(a) presents the synthesized signals of 150 traces with a 1 ms time sampling interval. The Ricker wavelet is expressed by
(8) x(t) = (1 − 2π²f²t²) · e^(−π²f²t²),
where f is the dominant frequency.
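Equation (8) is easy to generate directly. The paper's experiments used MATLAB; for illustration, a minimal Python/NumPy sketch is given below, where the 30 Hz dominant frequency and 128 ms window are assumptions chosen only for the example (the text specifies only the 1 ms sampling interval).

```python
import numpy as np

def ricker(f, dt, length):
    # Eq. (8): x(t) = (1 - 2*pi^2*f^2*t^2) * exp(-pi^2*f^2*t^2),
    # sampled on a symmetric window centered at t = 0.
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return t, (1.0 - 2.0 * a) * np.exp(-a)

# Example: 1 ms sampling as in the text; 30 Hz dominant frequency and a
# 128 ms window are illustrative assumptions.
t, w = ricker(f=30.0, dt=0.001, length=0.128)
```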
Figure 8
Processed results on synthesized data with noise for various approaches. (a) Noise-free. (b) Synthesized data with noise. Denoised data and removed noise by the wavelet-based method (c, f), the curvelet-based threshold denoising method (d, g), and our approach (e, h), respectively.
Figure 8(b) is the corresponding noisy synthetic data, which is denoised using two existing methods, namely the wavelet- and curvelet-based threshold denoising approaches, and the proposed approach; all three methods use the same thresholding scheme. We set K = 2 and J = 2 for the proposed approach, so the denoising threshold T(k, δ_{k,j}) can be obtained by computing formulas (3)–(6) step by step, and the denoised high-frequency results then follow from the soft-thresholding method. Figures 8(c)–8(e) show the results obtained by the three methods, respectively; the result of our approach is clearly superior to those of the other two. Concretely, the results are evaluated using the SNR [26]:
(9) SNR = 20 · log₁₀( ‖x₀‖₂ / ‖x₁ − x₀‖₂ ),
where x₀ and x₁ represent the noise-free data and the noisy or denoised data, respectively. The resulting SNR values for Figures 8(b)–8(e) are −2.2063 dB, 5.8541 dB, 8.2496 dB, and 13.2078 dB, respectively. The wavelet- and curvelet-based approaches remove the noise insufficiently, while our approach attenuates most of the random noise and significantly improves the SNR. Figures 8(f)–8(h) show the noise sections removed by the three approaches, respectively. The wavelet- and curvelet-based approaches lose part of the useful signals (red arrow), whereas the proposed approach does not visibly harm the useful signals. In addition, Tables 1 and 2 summarize the results for various SNRs before and after denoising; our approach shows the best denoising performance, especially for low-SNR seismic data.
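The SNR of equation (9) translates directly into code. A small Python/NumPy helper (a sketch, not the authors' MATLAB implementation):

```python
import numpy as np

def snr_db(clean, estimate):
    # Eq. (9): SNR = 20 * log10(||x0||_2 / ||x1 - x0||_2), in dB.
    return 20.0 * np.log10(np.linalg.norm(clean)
                           / np.linalg.norm(estimate - clean))
```

Applied to the data of Figure 8, `snr_db` should reproduce the values reported above (e.g., about −2.21 dB for the noisy input of Figure 8(b)).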
Table 1
Comparison of SNRs before and after denoising (dB).
| Noisy data | Wavelet-based denoising | Curvelet-based denoising | Our method |
| --- | --- | --- | --- |
| −8.2293 | −3.1246 | −0.4234 | 8.6838 |
| −2.2063 | 5.8541 | 8.2496 | 13.2078 |
| 1.3116 | 7.7566 | 10.5394 | 15.9510 |
| 7.3169 | 11.9148 | 13.8784 | 19.4674 |

Table 2
Comparison of SNRs before and after denoising in the NSCT domain (dB).
| Noisy data | Hard thresholding | Soft thresholding | Shrink thresholding |
| --- | --- | --- | --- |
| −6.4468 | 6.4325 | 6.8746 | 7.2314 |
| 0.4326 | 14.2376 | 14.3628 | 15.3071 |
| 5.6183 | 16.9283 | 17.1426 | 17.9552 |
| 8.6341 | 21.6875 | 22.4328 | 22.9457 |

In addition, to validate the processing result of our method, real noisy seismic data were acquired with the same excitation and reception in the same survey area. Figure 9 presents the acquired noisy signals (Figure 9(a)) and the denoising result of the proposed approach (Figure 9(b)). Several highlighted effective signals, a clearer interlayer structure, and improved event continuity can be observed in the denoised data, which significantly improves the SNR. Figure 10(a) shows a real stacked profile. Similarly, after processing with our approach, the effective signals are highlighted, the information between layers is richer, and the noise is effectively suppressed, significantly improving the SNR (Figure 10(b)); the removed noise (Figure 10(c)) shows essentially no loss of effective signal.
Figure 9
(a) Real migration profile. (b) Processed result by our approach.
Figure 10
(a) Real stacked profile. (b) Processed result by the proposed approach. (c) Noise removed by our approach.
## 4. Conclusion
This article presents a novel NSCT-based method for suppressing seismic random noise. The superior representation of the NSCT, combined with an appropriate thresholding operator, yields excellent denoising results for seismic signals: the proposed method attenuates seismic random noise while retaining the effective signals to the maximum degree. Experiments on both synthesized and real seismic signals demonstrate the effectiveness of our approach compared with existing ones. In the future, in view of their powerful learning and feature recognition abilities, we will consider deep-learning-based techniques to denoise seismic data with low SNR, aiming to highlight effective signals and suppress false ones.
---
*Source: 1013623-2022-08-08.xml* | 1013623-2022-08-08_1013623-2022-08-08.md | 39,509 | Denoising Seismic Data via a Threshold Shrink Method in the Non-Subsampled Contourlet Transform Domain | Yu Yang; Qi Ran; Kang Chen; Cheng Lei; Yusheng Zhang; Han Liang; Song Han; Cong Tang | Mathematical Problems in Engineering
(2022) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1013623 | 1013623-2022-08-08.xml | ---
## Abstract
In seismic exploration, effective seismic signals can be seriously distorted by and interfered with noise, and the performance of traditional seismic denoising approaches can hardly meet the requirements of high-precision seismic exploration. To remarkably enhance signal-to-noise ratios (SNR) and adapt to high-precision seismic exploration, this work exploits the non-subsampled contourlet transform (NSCT) and threshold shrink method to design a new approach for suppressing seismic random noise. NSCT is an excellent multiscale, multidirectional, and shift-invariant image decomposition scheme, which can not only calculate exact contourlet transform coefficients through multiresolution analysis but also give an almost optimized approximation. It has better high-frequency response and stronger ability to describe curves and surfaces. Specifically, we propose to utilize the superior performance NSCT to decomposing the noisy seismic data into various frequency sub-bands and orientation response sub-bands, obtaining fine enough transform high frequencies to effectively achieve the separation of signals and noises. Besides, we use the adaptive Bayesian threshold shrink method instead of traditional handcraft threshold scheme for denoising the high-frequency sub-bands of NSCT coefficients, which pays more attention to the internal characteristics of the signals/data itself and improve the robustness of method, which can work better for preserving richer structure details of effective signals. The proposed method can achieve seismic random noise attenuation while retaining effective signals to the maximum degree. Experimental results reveal that the proposed method is superior to wavelet-based and curvelet-based threshold denoising methods, which increases synthetic seismic data with lower SNR from −8.2293 dB to 8.6838 dB, and 11.8084 dB and 9.1072 dB higher than two classic sparse transform based methods, respectively. Furthermore, we also apply the proposed method to process field data, which achieves satisfactory results.
---
## Body
## 1. Introduction
In recent years, high-precision seismic exploration has been a key subject in modern seismic exploration. This technique will be hindered if the noise in acquired seismic signals cannot be removed perfectly. The traditional seismic data denoising approaches can hardly meet the requirements of high-precision seismic exploration because the level and complexity of the accompanying noise in seismic signals have significantly increased due to the increasingly complex exploration environment and the increase in exploration depth with the extension of field of seismic exploration. So, it is crucial to design new effective techniques to remarkably enhance the signal-to-noise ratio (SNR).At present, many seismic denoising approaches have been proposed including the initial seismic data denoising method [1], traditional transform domain based denoising methods [2–4], sparse transform based methods [5–8] for solving multitasks, learning-based methods [9, 10], and other methods [11–14]. Actually, as the most common seismic noise, random noise can penetrate the whole time domain and severely distort and interfere with effective seismic data. Thus, since Canales [1] first developed a random noise reduction approach, a lot of random noise attenuation methods have been presented on this basis, such as the sparse transform based approaches, the empirical mode decomposition (EMD) based approaches, and fast dictionary learning-based approaches [10]. Chen and Ma [4] removed random noise with predictive filtering of f-x empirical mode decomposition. Chen and Fomel [6] developed an EMD-Seislet transform based method to remove the seismic random noise. Liu et al. [7] presented variational mode decomposition for suppressing the random noise of seismic data. In reality, one type of the intensively used and most efficient seismic random noise attenuation approaches are based on the sparse transform of multiscale geometric analysis. Zhang and Lu [2] removed noise and improved the resolution of seismic data by using the applied wavelet transform. Neelamani et al. [3] attenuated random noise with the curvelet transform, and subsequently, several variants [8, 12] with good results have been reported. Lin [14] proposed a three-dimensional (3-D) steerable pyramid decomposition-based suppression method of seismic random noise. Sang et al. [15] presented an unconventional technique on the basis of a proximal classifier with consistency (PCC) in transform domain for attenuating seismic random noise, and they also proposed another seismic denoising approach [16] via the deep neural network and simultaneously suppressed seismic coherent and incoherent noises [17] based on the deep neural network.High SNR data are the important guarantee of high-precision seismic exploration. But, the existing transform domain based methods are difficult to obtain higher SNR data due to not fine enough transform high frequencies such as wavelet transform or curvelet transform. Compared with the existing sparse transform based methods, that is, wavelet-based transform and curvelet-based transform methods, the NSCT presents multiscale, multidirectional, and shift-invariant decomposition scheme, which has better high-frequency response and stronger ability to describe curves and surfaces. Besides, they often conduct rough threshold operation by using manual threshold processing methods such as hard thresholding or soft thresholding. There is often a loss of effective signals. 
Therefore, to remarkably enhance SNRs and adapt to high precision seismic exploration, we exploit an effective seismic data denoising method in this paper. The contributions are as follows:(i)
We propose to utilize the new sparse transform technique, non-subsampled contourlet transform (NSCT), to decomposing the noisy seismic data into various frequency sub-bands and orientation response sub-bands, obtaining fine enough transform high frequencies to effectively achieve the separation of signals and noises.(ii)
We use the adaptive Bayesian threshold shrink method instead of the traditional handcraft threshold scheme for denoising the high-frequency sub-bands of NSCT coefficients, which pays more attention to the internal characteristics of the signals/data itself and improve the robustness of method, which can work better for preserving richer structure details of effective signals.(iii)
We conduct the experiments on synthetic and field data, which reveals that our approach is superior to the wavelet and the curvelet transform based classical ones, achieving higher signal-to-noise ratio (SNR) values.The remainder of this paper is organized as follows. We present our method in Section2. Experiments and performance evaluation are presented in Sections 3. Conclusion is drawn in Section 4.
## 2. Method
In this paper, we focus on transform domain based thresholding methods due to their good performance. The wavelet-based thresholding scheme is the most classic method for seismic data denoising. Wavelets can sparsely represent one-dimensional (1-D) digital data with smoothed point discontinuities and have been successfully used for representing digital signals [18]. However, wavelets cannot efficiently handle higher dimensional data because of the usual presence of other kinds of singularities. As a matter of fact, curvelets [19], contourlets [20], bandelets [21], and some other image/signal representations can take the advantages of the anisotropic regularity of a surface along edges, but these representations all have their own disadvantages, such as lack of a multiresolution geometry representation for curvelets, extremely limited clear directional features for contourlets, and computationally expensive geometry optimization for bandelets. The non-subsampled contourlet transform (NSCT) [22] is an excellent multiscale, multidirectional, and shift-invariant image decomposition scheme, which can not only calculate exact contourlet coefficients through multiresolution analysis but also give an almost optimized approximation. It has better high-frequency response and stronger ability to describe curves and surfaces. Therefore, we attempt to utilize NSCT to denoise seismic data in this paper.
### 2.1. NSCT for Seismic Data
The NSCT [22] primarily consists of a cascade of non-subsampled pyramid filter bank (NSPFB) and non-subsampled direction filter bank (NSDFB). First, the NSPFB is utilized to decompose an image and the sub-bands obtained are used as inputs of the NSDFB to generate decomposition results of the initial image in multiple directions and dimensions. The NSCT conducts K-level decomposition on an image to produce one low-frequency (LF) and several high-frequency (HF) sub-bands, and the size of all these sub-bands is identical with that of the original image. So, the full reconstruction of NSCT is possible since NSPFB and NSDFB can both be completely rebuilt.The shift-invariant filtering structure of the NSCT results in its multiscale feature. By using a bank of non-subsampled 2-D two-channel filters, we have sub-band decomposition like the Laplacian pyramid. Figure1 shows a 3-stage non-subsampled pyramid (NSP) decomposition, whose expansion has an alike concept with the 1-D non-subsampled wavelet transform (NSWT) using the àtrous algorithm [22]. For a J-stage decomposition, the redundancy will be J + 1. The region −π/2j,π/2j2 and its complement are the ideal passband support of the low- and high-pass filter at the j-th stage, respectively. Upsampling the filter for the first stage can yield the filters for subsequent stages; thus, no additional filter is needed to give the multiscale property. Our structure differs from that of the separable NSWT. Particularly, our structure produces one bandpass image in each stage leading to J + 1 redundancy. However, the NSWT generates three directional sub-bands in each stage, leading to 3J + 1 redundancy. The advantage of NSP lies in its ability to generate better filters due to its generality.Figure 1
(a) Schematic diagram of 3-stage NSP decomposition. The aliasing due to upsampling is denoted with lighter gray. (b) Sub-bands on the 2-D frequency plane.The NSCT has following steps. First, the NSPFB is used to multidimensionally decompose an image into an HF and an LF sub-band. In multilevel decomposition, an image is finally decomposed to an LF sub-band and a set of HF sub-bands if only its LF sub-band is further iteratively filtered. The redundancy of anX-level NSPFB decomposition will be X + 1. With regard to the X-level low-pass and the bandpass filter, the ranges of the ideal support of frequency domain are −π/2x−1,π/2x−12 and −π/2x−1,π/2x−12∪−π/2x,π/2x2, respectively. The decomposition does not extra filter during acquiring multidimension properties. So, the generated redundancy of X + 1 will be generated in each stage with a bandpass image, and the structure is significantly superior to that of the wavelet transform. Then, these sub-bands can be decomposed along singular points and multiple directions in various dimensions, and the directions are integrated. The NSDFB also belongs to a two-channel filter bank which comprises decomposition filters Uiz,i=0,1 and synthesis filters Viz,i=0,1 satisfying Bézout’s identity:(1)U0z+V0z=U1z+V1z.To adopt ideal support of the frequency domain, two channels are decomposed byU0z and U1z. Then, U1z and U0z are upsampled instead of subsampled by all sampling matrices in each level to get direction filters in the subsequent levels. This completes the image decomposition and an NSCT transform is schematically presented in Figures 2 and 3. Figure 2 displays an overview of NSCT which has a filter bank for dividing the 2-D frequency plane into sub-bands plotted in the bottom left quarter of Figure 1. By using NSCT, we decompose the 2-D seismic signal data into two shift-invariant components: an NSP structure to ensure multiscale properties (Part 1 of Figure 2) and an NSDFB structure to give directionality (Part 2 of Figure 2). The obtained idealized frequency partitioning diagram is presented in Figure 3. The structure consists in a bank of filters that splits the 2-D frequency plane into several sub-bands. In this paper, we use this mode of non-downsampling to reduce the sampling distortion in the filters and obtain translation invariance, in which the size of the directional sub-band at each scale is the same as that of the original 2-D seismic signal matrix. The NSCT has more details to be preserved, and the decomposition can better maintain the edge information and contour structure of the seismic signals.Figure 2
Schematic diagram of the NSCT of 2D seismic data.Figure 3
Ideal frequency division by the NSCT.Figure4 shows processing results of implementing the two-level NSCT on the synthesized seismic signal data with noise (Figure 4(a)) to yield a low-pass sub-band (Figure 4(b)) and a set of high-pass sub-bands (Figures 4(c) and 4(d)). Here, two and four shearing directions are used for the coarser and the finer scale, respectively. We can see from Figure 4 that the LF record (Figure 4(b)) decomposed by the NSCT basically contains the effective synthesized seismic signals. For HF records (Figure 4(d)) of scale 1, all they contain is noise, while HF records (Figure 4(c)) of scale 2 contain partially effective signals and noise, which needs to be further processed via signal-noise separation.Figure 4
Example of NSCT processing. (a) Synthesized seismic signal data with noise. (b) Approximate NSCT coefficients. Seismic signals of the detailed NSCT coefficients at scale 2, 2 directions (c) and scale 1, 4 directions (d).
(a)(b)(c)(d)
### 2.2. Denoising Seismic Data Using the Threshold Shrink in the NSCT Domain
To remarkably suppress seismic random noise while not damaging the effective signal in the process of denoising, this paper proposes a novel NSCT-based scheme with an adaptive threshold value setting for suppressing seismic random noise. The NSCT with the properties of multiscale, multidirection, and relative optimized sparsity can not only calculate the exact contourlet coefficients through multiresolution analysis but also give an almost optimized approximation. As the NSCT is multidirectional, large coefficients can be obtained when the direction of the NSCT basic function is approximately the same as the direction of the seismic signals, while small coefficients can be obtained when they have large difference. Thus, the random noises are distributed on small coefficients, so we can remove smaller coefficients to achieve random noise attenuate using an appropriate thresholding operator.We first analyze the sparsity of NSCT before giving the steps of denoising. It is known that the degree of approximation of the decomposed effective data determines the effect of noise suppression [23]. That is to say, the denoising effect depends on the sparsity of the approach. Figure 5 presents the reconstruction error on synthesized data (Figure 6(a)) in the wavelet transform domain, curvelet transform domain, and NSCT domain. Clearly, the construction error of NSCT is the smallest at the same percentage of coefficients, and it is approximate to zero at 6% coefficient, showing its optimal sparsity. In Figure 6, we compare the high-frequency coefficients of NSCT, curvelet transform, and wavelet transform, where we can clearly see that the NSCT represents the curvature more accurately.Figure 5
Reconstruction errors in three transform domains.Figure 6
Comparison of the NSCT, curvelet transform, and WT coefficients on the synthetic seismic data. (a) The original synthetic seismic data, and the fusion of HF coefficients of (b) NSCT, (c) discrete curvelet transform, and (d) WT.
(a)(b)(c)(d)Generally, one threshold is used for the whole image/signals (or sub-bands) in signal denoising techniques based on the threshold shrink. Obviously, the threshold value should be smaller if the signals contain more effective information, and it should be larger if the signals have more smooth regions. For the two cases, a larger threshold value should be correspondingly used for a higher noise level. Evidently, detailed information with an optimal threshold value does not function adequately for smooth regions and vice versa. Therefore, setting for threshold value can be further optimized by introducing adaptive threshold for different regions in seismic signals to exploit the fact that most signals consist of smooth regions and effective seismic signal information.Specifically, the two-dimensional noisy seismic data can be calculated by(2)ft,g=xt,g+nt,g,where t denotes time, g represents the trace number, and xt,g, nt,g, and ft,g represent the effective seismic data, the additive random noise, and the noisy observed seismic signals, respectively. Signal xt,g will be recovered from ft,g.In NSCT-based denoising approaches, a threshold is properly set for the NSCT coefficients so that seismic signals can be retrieved from the acquired noisy seismic data. The proposed seismic random noise attenuation has the following main steps.Step 1.
Decomposing noisy seismic data with aK-level NSCT to yield one low-pass sub-band and one set of high-pass sub-bands Dk,jk=1,2,…,K;j=1,2,…,J, with the current scale k, the decomposition orientation j, and the total number J of decomposition directions.Step 2.
Calculating denoising threshold values of all sub-bandsDk,j. The level adaptive Bayesian threshold [24] is used and calculated as below:(i)
Using the robust median estimator to calculate noise varianceδ from sub-bands:(3)δ=MedianCx,y0.6745,Cx,y∈DK,J.(ii)
Using the maximum likelihood estimator (MLE) [24] to estimate signal variance δk,j for the noisy coefficients of each detail sub-band Dk,j:(4)δk,j=max0,1mn∑x=1m∑y=1nCk,jx,y2−δ2,whereCk,jx,y∈Dk,j, and m and n denote the size of seismic signals(iii)
Calculating discriminating thresholdδth with the near exponential prior of NSCT coefficients across scales:(5)δth=δ⋅∑kδk,j⋅2−k∑kk2⋅2−k,wherek denotes the current scale(iv)
Calculating denoising thresholdTk,δk,j of each sub-band for δk,j<δth:(6)Tk,δk,j=2k−J/2/J⋅δ2δk,j,where δk,j denotes the standard deviation of sub-band Dk,j.Step 3.
Processing the noise-related NSCT coefficients in high-frequency sub-bandsDk,j with the well-known soft-thresholding method [25]:(7)C^k,jx,y=0,otherwise,sgnCk,jx,y⋅Ck,jx,y−Tk,δk,j,Ck,jx,y≥Tk,δk,j,where C^k,jx,y and Ck,jx,y are the matrices of the coefficients after and before denoising in the NSCT domain, respectively. Tk,δk,j denotes the adaptive Bayesian thresholdStep 4.
Reconstructing the denoised seismic data by conducting an inverse NSCT on these denoised NSCT sub-bands.
The workflow of our method is figuratively presented in Figure7, where a two-level NSCT is performed on synthetic data to yield a low-pass sub-band and a set of high-pass sub-bands, and two and four shearing directions are used for coarser and finer scale, respectively.Figure 7
Demonstration for the analysis framework of seismic random denoising in the NSCT domain. (a) Before attack. (b) After attack.
## 2.1. NSCT for Seismic Data
The NSCT [22] primarily consists of a cascade of non-subsampled pyramid filter bank (NSPFB) and non-subsampled direction filter bank (NSDFB). First, the NSPFB is utilized to decompose an image and the sub-bands obtained are used as inputs of the NSDFB to generate decomposition results of the initial image in multiple directions and dimensions. The NSCT conducts K-level decomposition on an image to produce one low-frequency (LF) and several high-frequency (HF) sub-bands, and the size of all these sub-bands is identical with that of the original image. So, the full reconstruction of NSCT is possible since NSPFB and NSDFB can both be completely rebuilt.The shift-invariant filtering structure of the NSCT results in its multiscale feature. By using a bank of non-subsampled 2-D two-channel filters, we have sub-band decomposition like the Laplacian pyramid. Figure1 shows a 3-stage non-subsampled pyramid (NSP) decomposition, whose expansion has an alike concept with the 1-D non-subsampled wavelet transform (NSWT) using the àtrous algorithm [22]. For a J-stage decomposition, the redundancy will be J + 1. The region −π/2j,π/2j2 and its complement are the ideal passband support of the low- and high-pass filter at the j-th stage, respectively. Upsampling the filter for the first stage can yield the filters for subsequent stages; thus, no additional filter is needed to give the multiscale property. Our structure differs from that of the separable NSWT. Particularly, our structure produces one bandpass image in each stage leading to J + 1 redundancy. However, the NSWT generates three directional sub-bands in each stage, leading to 3J + 1 redundancy. The advantage of NSP lies in its ability to generate better filters due to its generality.Figure 1
(a) Schematic diagram of 3-stage NSP decomposition. The aliasing due to upsampling is denoted with lighter gray. (b) Sub-bands on the 2-D frequency plane.The NSCT has following steps. First, the NSPFB is used to multidimensionally decompose an image into an HF and an LF sub-band. In multilevel decomposition, an image is finally decomposed to an LF sub-band and a set of HF sub-bands if only its LF sub-band is further iteratively filtered. The redundancy of anX-level NSPFB decomposition will be X + 1. With regard to the X-level low-pass and the bandpass filter, the ranges of the ideal support of frequency domain are −π/2x−1,π/2x−12 and −π/2x−1,π/2x−12∪−π/2x,π/2x2, respectively. The decomposition does not extra filter during acquiring multidimension properties. So, the generated redundancy of X + 1 will be generated in each stage with a bandpass image, and the structure is significantly superior to that of the wavelet transform. Then, these sub-bands can be decomposed along singular points and multiple directions in various dimensions, and the directions are integrated. The NSDFB also belongs to a two-channel filter bank which comprises decomposition filters Uiz,i=0,1 and synthesis filters Viz,i=0,1 satisfying Bézout’s identity:(1)U0z+V0z=U1z+V1z.To adopt ideal support of the frequency domain, two channels are decomposed byU0z and U1z. Then, U1z and U0z are upsampled instead of subsampled by all sampling matrices in each level to get direction filters in the subsequent levels. This completes the image decomposition and an NSCT transform is schematically presented in Figures 2 and 3. Figure 2 displays an overview of NSCT which has a filter bank for dividing the 2-D frequency plane into sub-bands plotted in the bottom left quarter of Figure 1. By using NSCT, we decompose the 2-D seismic signal data into two shift-invariant components: an NSP structure to ensure multiscale properties (Part 1 of Figure 2) and an NSDFB structure to give directionality (Part 2 of Figure 2). The obtained idealized frequency partitioning diagram is presented in Figure 3. The structure consists in a bank of filters that splits the 2-D frequency plane into several sub-bands. In this paper, we use this mode of non-downsampling to reduce the sampling distortion in the filters and obtain translation invariance, in which the size of the directional sub-band at each scale is the same as that of the original 2-D seismic signal matrix. The NSCT has more details to be preserved, and the decomposition can better maintain the edge information and contour structure of the seismic signals.Figure 2
Schematic diagram of the NSCT of 2D seismic data.Figure 3
Ideal frequency division by the NSCT.Figure4 shows processing results of implementing the two-level NSCT on the synthesized seismic signal data with noise (Figure 4(a)) to yield a low-pass sub-band (Figure 4(b)) and a set of high-pass sub-bands (Figures 4(c) and 4(d)). Here, two and four shearing directions are used for the coarser and the finer scale, respectively. We can see from Figure 4 that the LF record (Figure 4(b)) decomposed by the NSCT basically contains the effective synthesized seismic signals. For HF records (Figure 4(d)) of scale 1, all they contain is noise, while HF records (Figure 4(c)) of scale 2 contain partially effective signals and noise, which needs to be further processed via signal-noise separation.Figure 4
Example of NSCT processing. (a) Synthesized seismic signal data with noise. (b) Approximate NSCT coefficients. Seismic signals of the detailed NSCT coefficients at scale 2, 2 directions (c) and scale 1, 4 directions (d).
(a)(b)(c)(d)
## 2.2. Denoising Seismic Data Using the Threshold Shrink in the NSCT Domain
To remarkably suppress seismic random noise while not damaging the effective signal in the process of denoising, this paper proposes a novel NSCT-based scheme with an adaptive threshold value setting for suppressing seismic random noise. The NSCT with the properties of multiscale, multidirection, and relative optimized sparsity can not only calculate the exact contourlet coefficients through multiresolution analysis but also give an almost optimized approximation. As the NSCT is multidirectional, large coefficients can be obtained when the direction of the NSCT basic function is approximately the same as the direction of the seismic signals, while small coefficients can be obtained when they have large difference. Thus, the random noises are distributed on small coefficients, so we can remove smaller coefficients to achieve random noise attenuate using an appropriate thresholding operator.We first analyze the sparsity of NSCT before giving the steps of denoising. It is known that the degree of approximation of the decomposed effective data determines the effect of noise suppression [23]. That is to say, the denoising effect depends on the sparsity of the approach. Figure 5 presents the reconstruction error on synthesized data (Figure 6(a)) in the wavelet transform domain, curvelet transform domain, and NSCT domain. Clearly, the construction error of NSCT is the smallest at the same percentage of coefficients, and it is approximate to zero at 6% coefficient, showing its optimal sparsity. In Figure 6, we compare the high-frequency coefficients of NSCT, curvelet transform, and wavelet transform, where we can clearly see that the NSCT represents the curvature more accurately.Figure 5
Reconstruction errors in three transform domains.Figure 6
Comparison of the NSCT, curvelet transform, and WT coefficients on the synthetic seismic data. (a) The original synthetic seismic data, and the fusion of HF coefficients of (b) NSCT, (c) discrete curvelet transform, and (d) WT.
(a)(b)(c)(d)Generally, one threshold is used for the whole image/signals (or sub-bands) in signal denoising techniques based on the threshold shrink. Obviously, the threshold value should be smaller if the signals contain more effective information, and it should be larger if the signals have more smooth regions. For the two cases, a larger threshold value should be correspondingly used for a higher noise level. Evidently, detailed information with an optimal threshold value does not function adequately for smooth regions and vice versa. Therefore, setting for threshold value can be further optimized by introducing adaptive threshold for different regions in seismic signals to exploit the fact that most signals consist of smooth regions and effective seismic signal information.Specifically, the two-dimensional noisy seismic data can be calculated by(2)ft,g=xt,g+nt,g,where t denotes time, g represents the trace number, and xt,g, nt,g, and ft,g represent the effective seismic data, the additive random noise, and the noisy observed seismic signals, respectively. Signal xt,g will be recovered from ft,g.In NSCT-based denoising approaches, a threshold is properly set for the NSCT coefficients so that seismic signals can be retrieved from the acquired noisy seismic data. The proposed seismic random noise attenuation has the following main steps.Step 1.
Decomposing noisy seismic data with aK-level NSCT to yield one low-pass sub-band and one set of high-pass sub-bands Dk,jk=1,2,…,K;j=1,2,…,J, with the current scale k, the decomposition orientation j, and the total number J of decomposition directions.Step 2.
Calculating denoising threshold values of all sub-bandsDk,j. The level adaptive Bayesian threshold [24] is used and calculated as below:(i)
Using the robust median estimator to calculate noise varianceδ from sub-bands:(3)δ=MedianCx,y0.6745,Cx,y∈DK,J.(ii)
Using the maximum likelihood estimator (MLE) [24] to estimate signal variance δk,j for the noisy coefficients of each detail sub-band Dk,j:(4)δk,j=max0,1mn∑x=1m∑y=1nCk,jx,y2−δ2,whereCk,jx,y∈Dk,j, and m and n denote the size of seismic signals(iii)
Calculating discriminating thresholdδth with the near exponential prior of NSCT coefficients across scales:(5)δth=δ⋅∑kδk,j⋅2−k∑kk2⋅2−k,wherek denotes the current scale(iv)
Calculating denoising thresholdTk,δk,j of each sub-band for δk,j<δth:(6)Tk,δk,j=2k−J/2/J⋅δ2δk,j,where δk,j denotes the standard deviation of sub-band Dk,j.Step 3.
Processing the noise-related NSCT coefficients in high-frequency sub-bandsDk,j with the well-known soft-thresholding method [25]:(7)C^k,jx,y=0,otherwise,sgnCk,jx,y⋅Ck,jx,y−Tk,δk,j,Ck,jx,y≥Tk,δk,j,where C^k,jx,y and Ck,jx,y are the matrices of the coefficients after and before denoising in the NSCT domain, respectively. Tk,δk,j denotes the adaptive Bayesian thresholdStep 4.
Reconstructing the denoised seismic data by conducting an inverse NSCT on these denoised NSCT sub-bands.
The workflow of our method is figuratively presented in Figure7, where a two-level NSCT is performed on synthetic data to yield a low-pass sub-band and a set of high-pass sub-bands, and two and four shearing directions are used for coarser and finer scale, respectively.Figure 7
Demonstration for the analysis framework of seismic random denoising in the NSCT domain. (a) Before attack. (b) After attack.
## 3. Experiments
In this section, we evaluate our proposed method with two classic sparse transform based methods (wavelet- and the curvelet-based thresholding schemes) on synthetic seismic data and field data. The hardware used in this experiment is Intel 6226R CPU @2.90 GHz processor, 93 GB memory and NVIDIA RTX 3090 24G graphics card. The software used is Matlab R2016b.
### 3.1. Synthetic Seismic Example
To demonstrate the performance, an example of hyperbolic-events synthetic data is used. Figure8(a) presents the synthesized signals of 150 traces with 1 ms time sampling interval. The Ricker wavelet is expressed by(8)xt=1−2π2f2t2⋅e−π2f2t2.Figure 8
Processed results on synthesized data with noise for various approaches. (a) Noise-free. (b) Synthesized data with noise. Denoised data and removed noise by the wavelet-based method (c, f), the curvelet-based threshold denoising method (d, g), and our approach (e, h), respectively.
(a)(b)(c)(d)(e)(f)(g)(h)Figure8(b) is the corresponding noisy synthetic data, which is denoised by using two existing methods, namely, the wavelet- and the curvelet-based threshold denoising approach and the proposed approach. These three methods use the same threshold method. We set up K = 2 and J = 2 for the proposed approach. So, we can acquire the denoising threshold Tk,δk,j by computing formulas (3-6) step by step; thus, we can further obtain the denoised high-frequency results by the soft-thresholding method. Figures 8(c)–8(e) show the obtained results by three methods, respectively. Obviously, the result of our approach is much superior to the ones by other two approaches. Concretely, the results are evaluated by using SNRs [26]:(9)SNR=20⋅log10x02x1−x02,where x0 and x1 represent the noise-free data and the noisy or denoised data, respectively. The resulted SNR values for Figures 8(b)–8(e)) are −2.2063 dB, 5.8541 dB, 8.2496 dB, and 13.2078 dB, respectively. Obviously, wavelet- and the curvelet-based approaches implement insufficient noise removal, while our approach conducts good performance in attenuating most random noise and the SNR value has been significantly improved. Figures 8(f)–8(h) show the removed noise sections by these three approaches, respectively. The wavelet- and the curvelet-based approaches lose part of useful signals (red arrow). Obviously, the proposed approach does not harm any useful signals. Besides, Tables 1 and 2 present the summary of the results with various SNRs before and after denoising. Our approach shows the better denoising performance, especially for the low SNR seismic data.Table 1
Comparison of various SNRs before denoising and after denoising (dB).
Noisy dataWavelet-based denoisingCurvelet-based denoisingOur method−8.2293−3.1246−0.42348.6838−2.20635.85418.249613.20781.31167.756610.539415.95107.316911.914813.878419.4674Table 2
Comparison of various SNRs before denoising and after denoising in the NSCT domain (dB).
Noisy dataHard thresholdingSoft thresholdingShrink thresholding−6.44686.43256.87467.23140.432614.237614.362815.30715.618316.928317.142617.95528.634121.687522.432822.9457In addition, to validate the processing result of our method, real noisy seismic data are measured with same excitation and reception in the identical data area. Figure9 presents the acquired noisy signals (Figure 9(a)) and the denoising result with the proposed approach (Figure 9(b)). We can see that several highlighted effective signals, clearer interlayer structure, and improved event continuity can be observed from the patterns of the denoised data, which significantly improves the SNR value. Figure 10(a) shows the real stacked profile. Similarly, after processing using our proposed approach, effective signals are highlighted; information between layers is richer and noises are effectively suppressed, significantly improving the SNR (Figure 10(b)); it can be seen from the removed noise that there is basically no effective signal loss.Figure 9
(a) Real migration profile. (b) Processed result by our approach.
(a)(b)Figure 10
(a) Real stacked profile. (b) Processed result by the proposed approach. (c) Noise removed by our approach.
(a)(b)(c)
## 4. Conclusion
This article presents a novel NSCT-based method for attenuating random noise in seismic data. The superior performance of the NSCT, combined with an appropriate thresholding operator, brings excellent denoising results for seismic signals. The proposed method attenuates seismic random noise while retaining effective signals to the maximum degree. Experiments on both synthesized and real seismic signals demonstrate the effectiveness of our approach compared with existing ones. In the future, we will consider deep-learning-based techniques for denoising seismic data with low SNR, in view of their powerful learning and feature-recognition abilities, with the aim of highlighting effective signals and suppressing false ones.
---
*Source: 1013623-2022-08-08.xml* | 2022 |
# Identification of Phosphohistone H3 Cutoff Values Corresponding to Original WHO Grades but Distinguishable in Well-Differentiated Gastrointestinal Neuroendocrine Tumors
**Authors:** Min Jeong Kim; Mi Jung Kwon; Ho Suk Kang; Kyung Chan Choi; Eun Sook Nam; Seong Jin Cho; Hye-Rim Park; Soo Kee Min; Jinwon Seo; Ji-Young Choe; Hyoung-Chul Park
**Journal:** BioMed Research International
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1013640
---
## Abstract
Mitotic counts in the World Health Organization (WHO) grading system have narrow cutoff values. True mitotic figures, however, are not always distinguishable from apoptotic bodies and darkly stained nuclei, complicating the use of the WHO grading system for well-differentiated neuroendocrine tumors (NETs). The mitosis-specific marker phosphohistone H3 (PHH3) can identify true mitoses and grade tumors reliably. The aim of this study was to investigate the correspondence between tumor grades determined by the PHH3 mitotic index (MI) and by mitotic counts according to WHO criteria, and to determine clinically relevant cutoffs of PHH3 MI in rectal and nonrectal gastrointestinal NETs. Mitotic counts correlated with both the Ki-67 labeling index and PHH3 MI, but the correlation with PHH3 MI was slightly higher. A PHH3 MI cutoff ≥4 correlated most closely with the original WHO grades for both rectal and nonrectal NETs. This cutoff, which could distinguish between G1 and G2 tumors, was associated with disease-free survival in patients with rectal NETs, whereas it showed only marginal significance for overall survival in those patients. In conclusion, the use of PHH3 ≥4 correlated most closely with the original WHO grades.
---
## Body
## 1. Introduction
Neuroendocrine tumors (NETs) are an uncommon, heterogeneous group of neoplasms, with most (54%) developing in the gastrointestinal tract [1–4]. The incidence and prognosis of gastrointestinal NETs depend on the primary tumor site, with the highest frequencies observed in the rectum (17.7%), small intestine (17.3%), and colon (10.1%), followed by the stomach (6.0%) and appendix (3.1%), and with survival ranging from 6 months to more than 20 years [1–4]. Gastrointestinal NETs largely arise from enterochromaffin and enteroglucagon cells found in the lamina propria and submucosa [5]. Histologically, NETs are composed of an organoid pattern of cells arranged into trabeculae, acini, or solid nests, separated by delicate, vascular stroma, which allows easy recognition on low-power microscopic examination [5] (Figures 1(a)-1(b)). Well-differentiated NETs, which have malignant potential, are characterized cytologically by bland, uniform cells with round to oval nuclei, indistinct nucleoli, and coarsely granular chromatin [6, 7]. Distant metastasis resulting from unexpected tumor aggressiveness is therefore of clinical concern in patients with well-differentiated NETs [8–11].

Figure 1
Mitotic figures (arrows) in a rectal neuroendocrine tumor (a, d, g) and a colonic neuroendocrine tumor (b, e, h) stained with H&E (a)–(c), Ki-67 (d)–(f), and PHH3 (g)–(i). (d)-(e) Ki-67 is more frequently positive in tumor cells, whereas (g)-(h) PHH3 highlights mitosis-specific nuclei, aiding recognition. (c) Apoptotic bodies (dotted arrows) mimicking mitosis are found in gastric neuroendocrine tumors. (f) Faint Ki-67 staining in an apoptotic nucleus, an apparent false positive. (i) Lack of PHH3 staining of apoptotic cells.
The most important prognostic indicator in gastrointestinal NETs is the World Health Organization (WHO) grading system, which categorizes gastrointestinal NETs into three grades (G1, G2, and G3) based on mitotic counts and/or the Ki-67 labeling index (LI). G1 NETs are low-grade tumors, with <2 mitoses/10 high-power fields (HPFs) and/or Ki-67 LI <3%; G2 NETs are intermediate-grade tumors (2–20 mitoses/10 HPFs and/or Ki-67 LI 3%–20%); and G3 NETs are high-grade tumors (>20 mitoses/10 HPFs and/or Ki-67 LI >20%) [12, 13]. Most gastrointestinal NETs are G1 (59.7%) or G2 (31.2%), with few (9.1%) classified as G3 [4]. Because true mitotic figures are sometimes indistinguishable from darkly stained and/or shrunken irregular nuclei, apoptotic bodies, and karyorrhectic debris on hematoxylin and eosin (H&E) staining, identification of true mitotic figures is not always straightforward [5] (Figure 1(c)). Discrepancies have therefore been observed between Ki-67 and mitotic counts in various tumor types [14–16], and it may be difficult to unequivocally distinguish a mitotic figure from apoptotic or karyorrhectic cells [16]. Manually calculating the Ki-67 LI in 500–2000 cells is highly labor-intensive [14, 17]. The narrow cutoffs in mitotic counts and Ki-67 LI between G1 and G2 well-differentiated NETs may result in false upgrading or downgrading of tumors. A supportive method for counting mitotic figures and determining Ki-67 LI is therefore needed to overcome the limitations of the current criteria and precisely determine the prognosis of patients with gastrointestinal well-differentiated NETs [14, 17].

Phosphohistone H3 (PHH3), the phosphorylated form of the core histone protein H3, reaches its maximum during mitosis and is therefore a mitosis-specific marker, useful for counting mitotic figures and for mitotic grading. PHH3 facilitates the counting of mitoses and can be used to predict prognosis in patients with several types of gastrointestinal neoplasm, including pancreatic NETs [14, 18–20]. However, the ability of the PHH3 mitotic index (MI) to grade gastrointestinal NETs, especially to differentiate between G1 and G2 well-differentiated NETs, has not yet been fully evaluated. Furthermore, the clinically relevant cutoffs for PHH3 MI in rectal and nonrectal NETs have not yet been determined.

The aim of this study was to compare tumor grades determined using the PHH3 MI with those determined by mitotic counts according to WHO criteria and to determine clinically relevant cutoffs of PHH3 MI. Ki-67 LI was calculated digitally, because manual calculation may be a confounding factor.
## 2. Materials and Methods
### 2.1. Patients and Histologic Evaluation
This study retrospectively evaluated 141 patients with primary gastrointestinal NETs who underwent endoscopic or surgical resection at Hallym University Sacred Heart Hospital between 2005 and 2015. Only patients diagnosed with primary gastrointestinal NETs, who had not been treated with chemotherapy or targeted drug therapy at the time of tumor excision and whose formalin-fixed, paraffin-embedded (FFPE) tumor tissue blocks were available for analysis, were included in this study. The medical records of each patient were reviewed, and their demographic information, radiological data, treatment details, tumor recurrence, and survival status were recorded. All H&E-stained slides were reviewed by a gastrointestinal pathologist (MJK) to confirm the diagnosis and to reevaluate histopathological characteristics, including tumor size, mitotic count, tumor grade, resection margins, depth of invasion, lymphatic invasion, venous invasion, and perineural invasion. Staging was based on the 8th edition of the American Joint Committee on Cancer staging system. The study was approved by the Institutional Review Board of the Hallym University Sacred Heart Hospital.
### 2.2. Immunohistochemistry
Immunohistochemical staining was performed on 4 μm thick FFPE tumor tissue sections using the BenchMark XT automated tissue staining system (Ventana Medical Systems, Inc., Tucson, AZ, USA), according to the manufacturer’s instructions, as described in [21–24]. The primary antibodies were directed against PHH3 (polyclonal, 1 : 100; Cell Marque, Rocklin, CA, USA) and Ki-67 (1 : 250, clone MIB-1, Dako). Slides were incubated with the primary antibody at 37°C for 40 min, washed, and incubated with a secondary antibody (universal horseradish peroxidase (HRP) Multimer; Ventana Medical Systems) for 8 min at 37°C. After washing, the tissue sections were incubated with the chromogen diaminobenzidine (ultraView Universal DAB Kit, Ventana Medical Systems) and counterstained with hematoxylin.
### 2.3. Slide Scoring
Mitoses on both H&E- and PHH3-stained slides were counted in 50 high-power fields (HPFs; 40× objective, 10× eyepiece, field diameter 0.55 mm, area 0.237 mm²; Olympus BX51 microscope, Tokyo, Japan). Mitotic counts and PHH3 MI were then expressed as the mean number of mitoses per 10 HPFs and the mean number of PHH3-positive nuclei per 10 HPFs, respectively [14, 18, 25]. Mitotic figures were defined as cells in metaphase (clumped chromatin or chromatin arranged in a plane) and anaphase/telophase (separated clumped chromatin), as previously described [14]. Hyperchromatic or pyknotic nuclei were not counted, because such cells could represent necrosis or apoptosis [14].

Ki-67 LI was assessed using a GenASIs capture and analysis system (Applied Spectral Imaging, Carlsbad, CA, USA). Briefly, the most highly labeled region was selected at low magnification and viewed at ×200 magnification. The captured images were analyzed with GenASIs software to quantify the positive tumor cells in each tumor region; Ki-67-positive lymphocytes were manually removed. At least 500 tumor cells per sample were counted, and the percentage of Ki-67-positive cells (Ki-67 LI) was calculated automatically.

Grades of H&E- and anti-PHH3-stained sections were determined independently. Tumors were classified as G1 (<2 mitoses per 10 HPFs and/or Ki-67 LI <3%), G2 (2–20 mitoses per 10 HPFs and/or Ki-67 LI 3%–20%), or G3/NEC (>20 mitoses per 10 HPFs and/or Ki-67 LI >20%), according to the WHO 2010 classification [12, 13].
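To make the scoring and grading rules above concrete, here is a minimal Python sketch; it normalizes a raw count made over 50 HPFs to a mean per 10 HPFs and assigns a WHO 2010 grade. The helper names are ours, and the sketch assumes the common convention that the higher of the mitosis-based and Ki-67-based grades is assigned when the two criteria disagree.

```python
def per_10_hpf(total_count: float, hpfs_examined: int = 50) -> float:
    """Convert a raw count over `hpfs_examined` fields into a mean per 10 HPFs."""
    return 10.0 * total_count / hpfs_examined

def who_2010_grade(mitoses_per_10hpf: float, ki67_li: float) -> str:
    """WHO 2010 grade from the mitotic rate (per 10 HPFs) and Ki-67 LI (%).
    Thresholds follow the classification quoted above."""
    mitosis_grade = 1 if mitoses_per_10hpf < 2 else (2 if mitoses_per_10hpf <= 20 else 3)
    ki67_grade = 1 if ki67_li < 3 else (2 if ki67_li <= 20 else 3)
    return f"G{max(mitosis_grade, ki67_grade)}"

# Example: 12 mitoses counted over 50 HPFs with a Ki-67 LI of 1.5%
# -> 2.4 mitoses/10 HPFs -> G2 by the mitotic criterion.
print(who_2010_grade(per_10_hpf(12), 1.5))  # G2
```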
### 2.4. Statistical Analyses
Categorical variables were compared using Pearson’s chi-squared test or the two-tailed Fisher’s exact test, and continuous variables, presented as means ± SD, were compared using Student’s t-test. The Spearman rank correlation test was used to assess the relationships between mitotic counts, Ki-67 LI, and PHH3 MI. The results obtained with the WHO grading system were compared with those derived from PHH3-applied modified grading by assessing the concordance rate together with the kappa (κ) statistic. The concordance rate was defined as the proportion of cases in which the two grading methods agreed among the total number of cases, and the kappa value measured the degree of agreement between the two methods. Kappa values ≤0.20, 0.21–0.40, 0.41–0.60, 0.61–0.80, and ≥0.81 were regarded as indicating slight, fair, moderate, substantial, and almost perfect agreement, respectively. A receiver operating characteristic (ROC) curve was drawn, and the area under the curve was used to determine the optimal PHH3 MI cutoff, in terms of sensitivity and specificity, for separating WHO grade 1 from grades 2-3.

Overall survival was defined as the time from the date of initial surgery until death or the end of the study (May 2017). Disease-free survival was defined as the time from the date of initial surgery until a documented relapse, including locoregional recurrence and distant metastasis, or the end of the study. Survival curves were calculated using the Kaplan-Meier method and compared by log-rank tests. All statistical analyses were performed using SPSS software (version 18; SPSS Inc., Chicago, IL, USA), with P values <0.05 considered statistically significant.
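As a minimal sketch of the agreement statistics described above (assuming NumPy and scikit-learn, which implements Cohen's kappa; the helper name is ours):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def grading_agreement(who_grades, modified_grades):
    """Concordance rate (fraction of tumors given the same grade) and
    Cohen's kappa between WHO grades and a PHH3-modified grading."""
    who = np.asarray(who_grades)
    mod = np.asarray(modified_grades)
    concordance = float(np.mean(who == mod))
    return concordance, cohen_kappa_score(who, mod)
```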
## 3. Results
### 3.1. Patient and Tumor Characteristics
Table 1 summarizes the characteristics of patients with rectal and nonrectal NETs. The study enrolled 141 patients, 88 men and 53 women, of median age 49 years (range 10–80 years). Of these patients, 115 (81.6%) had rectal NETs and 26 (18.4%) had nonrectal NETs. The nonrectal NETs included 12 (8.5%) originating from the stomach, eight (5.7%) from the appendix, three (2.1%) from the duodenum, and three (2.1%) from the colon. Tumor tissue was obtained by endoscopic resection in 125 cases: 112 (89.6%) rectal NETs and 13 (10.4%) nonrectal NETs. The remaining three rectal and 13 nonrectal NETs were resected surgically. Mean tumor size was 0.65 cm (range, 0.1–3.5 cm). Resection margins were positive in 22 (15.6%) tumors. Thirteen patients experienced recurrences and eight died during the follow-up period.

Table 1
Associations of the clinicopathological characteristics of rectal and other gastrointestinal neuroendocrine tumors.
| Characteristic | Category | Rectal NET, n = 115 (%) | Nonrectal NET, n = 26 (%) | P |
| --- | --- | --- | --- | --- |
| Sex | M | 75 (65.2) | 13 (50.0) | 0.148 |
| | F | 40 (34.8) | 13 (50.0) | |
| Age (y) | <60 | 98 (85.2) | 15 (57.7) | 0.001* |
| | ≥60 | 17 (14.8) | 11 (42.3) | |
| Tumor size (cm) | 0.1–1 | 111 (96.5) | 20 (76.9) | <0.001* |
| | >1 | 4 (3.5) | 6 (23.1) | |
| Tumor depth | T1 | 114 (99.1) | 21 (80.8) | 0.001* |
| | T2-3 | 1 (0.9) | 5 (19.2) | |
| LN metastasis | N0 | 113 (98.3) | 25 (96.2) | 0.460 |
| | N1 | 2 (1.7) | 1 (3.8) | |
| Distant metastasis | M0 | 115 (100) | 25 (96.2) | 0.184 |
| | M1 | 0 (0.0) | 1 (3.8) | |
| Stage | I | 112 (97.4) | 22 (84.6) | 0.007* |
| | II-III | 3 (2.6) | 4 (15.4) | |
| Grade | G1 | 96 (83.5) | 15 (57.7) | 0.001* |
| | G2 | 19 (16.5) | 9 (34.6) | |
| | G3 | 0 (0.0) | 2 (7.7) | |
| Mitosis/10 HPF | Mean ± SD | 0.55 ± 0.79 | 2.62 ± 7.03 | <0.001* |
| | <2 | 100 (87.0) | 17 (65.4) | 0.004* |
| | 2–20 | 15 (13.0) | 8 (30.8) | |
| | >20 | 0 (0.0) | 1 (3.8) | |
| Ki-67 LI (%) | Mean ± SD | 1.15 ± 1.02 | 4.06 ± 7.87 | 0.002* |
| | <3 | 109 (94.8) | 20 (76.9) | 0.001* |
| | 3–20 | 6 (5.2) | 4 (15.4) | |
| | >20 | 0 (0.0) | 2 (7.7) | |
| PHH3 MI/10 HPF | Mean ± SD | 1.37 ± 1.37 | 2.77 ± 5.42 | 0.014* |
| | <2 | 75 (65.2) | 16 (61.6) | 0.485 |
| | 2–20 | 40 (34.8) | 9 (34.6) | |
| | >20 | 0 (0.0) | 1 (3.8) | |
| Vascular invasion | Positive | 22 (19.1) | 7 (26.9) | 0.375 |
| | Negative | 93 (80.9) | 19 (73.1) | |
| Lymphatic invasion | Positive | 18 (15.7) | 6 (23.1) | 0.363 |
| | Negative | 97 (84.3) | 20 (76.9) | |
| Perineural invasion | Positive | 12 (10.4) | 0 (0.0) | 0.123 |
| | Negative | 103 (89.6) | 26 (100) | |
| Resection margin | R0 | 97 (84.3) | 22 (84.6) | 1.000 |
| | R1 | 18 (15.7) | 4 (15.4) | |
| Recurrence | Yes | 6 (5.2) | 7 (26.9) | 0.001* |
| | No | 109 (94.8) | 19 (73.1) | |
| Died | Yes | 4 (3.5) | 4 (15.4) | 0.018* |
| | No | 111 (96.5) | 22 (84.6) | |

NET, neuroendocrine tumor; HPF, high-power field; LI, labeling index; MI, mitotic index. *Statistically significant (P < 0.05).

Several demographic and clinical characteristics differed significantly between patients with rectal and nonrectal NETs. Patients with rectal NETs were significantly younger (48 versus 56 years, P=0.001) and had smaller tumors (0.58±0.35 versus 0.92±0.90 cm, P<0.001). The depth of tumor invasion was more superficial in patients with rectal NETs, 99.1% of whom had tumors confined to the submucosa, whereas a higher percentage of nonrectal NETs (19.2%) infiltrated the muscle layer or adipose tissue (P=0.001). Tumor stage (P=0.007) and tumor grade (P=0.001) were significantly lower in patients with rectal than with nonrectal NETs, with 83.5% and 58.3%, respectively, having grade 1 tumors. Recurrence (5.2% versus 26.9%, P=0.001) and mortality (3.5% versus 15.4%, P=0.018) rates were also significantly lower in patients with rectal than with nonrectal NETs.
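As a hedged illustration of the categorical comparisons in Table 1, the recurrence row forms a 2×2 table that can be tested in SciPy; the paper does not state which test was applied to each individual row, so Fisher's exact test is used here purely as an example.

```python
from scipy.stats import fisher_exact

# Recurrence from Table 1: rows = rectal / nonrectal NETs, columns = yes / no
table = [[6, 109],
         [7, 19]]
odds_ratio, p_value = fisher_exact(table)
print(f"P = {p_value:.3f}")  # should land near the tabulated P = 0.001
```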
### 3.2. Mitotic Counts, PHH3, and Ki-67 LI of Rectal and Nonrectal NETs
In all 141 NETs, significant positive correlations were observed between mitotic counts and Ki-67 LI (r=0.739, P<0.001), between mitotic counts and PHH3 MI (r=0.839, P<0.001) (Figure 2(a)), and between PHH3 MI and Ki-67 LI (r=0.724, P<0.001). All three parameters, however, differed significantly between rectal and nonrectal NETs. The mean mitotic count (0.55±0.79/10 HPFs [range, 0–3/10 HPFs] versus 2.62±7.03/10 HPFs [range, 0–35/10 HPFs], P<0.001), mean Ki-67 LI (1.15%±1.02% [range, 0%–5.3%] versus 4.06%±7.87% [range, 0%–35%], P=0.002), and mean PHH3 MI (1.37±1.37/10 HPFs [range, 0–6/10 HPFs] versus 2.77±5.42/10 HPFs [range, 0–25/10 HPFs], P=0.014) were all significantly lower in rectal than in nonrectal NETs (Figures 2(b)–2(d)).

Figure 2
(a) Correlations of mitotic counts obtained from H&E slides with Ki-67 LI and PHH3 MI. Comparisons of mitosis (b), Ki-67 LI (c), and PHH3 MI (d) in rectal and nonrectal neuroendocrine tumors of the gastrointestinal tract. (e) Receiver operating characteristic (ROC) curve for PHH3 MI with original WHO grades 1 and 2.
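The rank correlations just reported can be recomputed from per-tumor measurements with SciPy; a sketch (function and argument names are ours):

```python
from scipy.stats import spearmanr

def rank_correlations(mitoses, ki67, phh3):
    """Spearman rank correlations among the three proliferation measures,
    each given as a per-tumor sequence (n = 141 in this study).
    Each entry is a (correlation, P value) pair."""
    return {
        "mitosis_vs_ki67": spearmanr(mitoses, ki67),  # reported r = 0.739
        "mitosis_vs_phh3": spearmanr(mitoses, phh3),  # reported r = 0.839
        "phh3_vs_ki67": spearmanr(phh3, ki67),        # reported r = 0.724
    }
```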
### 3.3. Comparisons between Original WHO Grades and Grades Modified by PHH3
Classification of the 141 NETs according to the WHO grading system showed that 110 (78.0%) were grade 1, 29 (20.6%) grade 2, and two (1.4%) grade 3.

To determine the PHH3 MI cutoff values that most closely matched the established WHO grades, we applied PHH3 MI in two ways (Table 2): (1) counting PHH3 MI in the same way as the mitotic count on H&E slides, followed by application of PHH3 MI to the WHO grading system instead of mitosis; and (2) using a PHH3 MI cutoff value of 4, followed by application of PHH3 MI to the WHO grading system instead of mitosis or Ki-67 LI. We then generated a ROC curve to validate the optimal cutoff value, which showed an area under the curve of 0.701 (95% confidence interval, 0.561–0.826), a statistically significant result (P=0.007) (Figure 2(e)). At the optimal cutoff of 4, the sensitivity and specificity of PHH3 MI for differentiating WHO grade 1 from grades 2-3 were 73.3% and 31%, respectively.
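The paper does not spell out how the cutoff of 4 was read off the ROC curve; a common choice is the threshold maximizing Youden's J (sensitivity + specificity − 1), sketched below with scikit-learn (names are ours, for illustration only).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_cutoff(is_grade2_or_3, phh3_mi):
    """AUC and the PHH3 MI threshold maximizing Youden's J for separating
    WHO grade 1 (label 0) from grades 2-3 (label 1)."""
    fpr, tpr, thresholds = roc_curve(is_grade2_or_3, phh3_mi)
    best = int(np.argmax(tpr - fpr))
    return roc_auc_score(is_grade2_or_3, phh3_mi), thresholds[best]
```

Table 2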
Comparison of histologic grades combined with PHH3 staining and cutoff value of ≥4/10 HPFs.
| Modified grade | Total, N = 141 (%) | WHO grade 1, n = 110 (%) | WHO grade 2, n = 29 (%) | WHO grade 3, n = 2 (%) | P | Kappa |
| --- | --- | --- | --- | --- | --- | --- |
| Grades with H&E mitosis replaced by PHH3 | | | | | <0.001* | 0.428 |
| Grade 1 | 86 (61.0) | 80 (80.0) | 6 (20.7) | 0 (0.0) | | |
| Grade 2 | 53 (37.6) | 30 (30.0) | 23 (79.3) | 0 (0.0) | | |
| Grade 3 | 2 (1.4) | 0 (0.0) | 0 (0.0) | 2 (100) | | |
| Grades with PHH3 cutoff ≥4/10 HPFs | | | | | <0.001* | 0.810 |
| Grade 1 | 104 (73.8) | 102 (92.7) | 2 (6.9) | 0 (0.0) | | |
| Grade 2 | 35 (24.8) | 8 (7.3) | 27 (93.1) | 0 (0.0) | | |
| Grade 3 | 2 (1.4) | 0 (0.0) | 0 (0.0) | 2 (100) | | |

HPF, high-power field. *Statistically significant (P < 0.05).

Replacement of mitotic counts with the PHH3 MI in the WHO grading system resulted in 86 (61.0%) tumors being classified as grade 1, 53 (37.6%) as grade 2, and two (1.4%) as grade 3. The concordance rate of this modified system with the WHO grades was 75.9%. Replacement of mitotic counts with the PHH3 MI changed the grade of 36 tumors (25.5%), with 30 (21.3%) changed from grade 1 to grade 2 and six (4.3%) changed from grade 2 to grade 1. The agreement between these modified grades and the WHO grades was moderate (κ=0.428) but statistically significant (P<0.001).

The application of a PHH3 MI cutoff ≥4 in the WHO grading system resulted in 104 (73.8%) tumors being classified as grade 1 and 35 (24.8%) as grade 2. This modified grading changed the grade of 10 (7.1%) tumors, with eight (5.7%) changed from grade 1 to grade 2 and two (1.4%) changed from grade 2 to grade 1. The concordance rate of these modified grades with the original WHO grades was 92.9%, with almost perfect agreement between the two (κ=0.810), a statistically significant result (P<0.001).

Use of PHH3 ≥4 combined with the WHO grading criteria resulted in 10 tumors being reclassified (Table 3): nine rectal NETs and one gastric NET. Eight of these 10 tumors were upgraded by the addition of PHH3 MI to the WHO grading system compared with mitotic counts alone.

Table 3
Clinicopathological features of the tumors whose grades changed when PHH3 MI (cutoff ≥4/10 HPFs) was used in place of mitotic counts.
| Patient number | Sex/age | Location | Size (cm) | Mitosis | Ki-67 LI | PHH3 MI | WHO grade | PHH3 grade |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | F/37 | Rectum | 0.5 | 0 | 0.7 | 4 | 1 | 2 |
| 2 | M/47 | Rectum | 0.7 | 2 | 1.6 | 2 | 2 | 1 |
| 3 | M/46 | Rectum | 0.5 | 2 | 2.8 | 1 | 2 | 1 |
| 4 | M/49 | Rectum | 0.5 | 0 | 2.1 | 5 | 1 | 2 |
| 5 | M/47 | Rectum | 0.5 | 0 | 2.5 | 6 | 1 | 2 |
| 6 | F/35 | Rectum | 0.4 | 0 | 2 | 4 | 1 | 2 |
| 7 | M/60 | Rectum | 1 | 0 | 2.4 | 4 | 1 | 2 |
| 8 | F/55 | Rectum | 0.5 | 0 | 2.5 | 4 | 1 | 2 |
| 9 | M/21 | Rectum | 0.5 | 0 | 2.5 | 6 | 1 | 2 |
| 10 | F/56 | Stomach | 1 | 1 | 0.5 | 4 | 1 | 2 |

MI, mitotic index; HPF, high-power field; LI, labeling index.
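Table 3 is consistent with the following reading of the modified scheme, in which PHH3 MI (cutoff 4) stands in for the H&E mitotic count while the Ki-67 criterion is unchanged; this Python sketch is our reconstruction, not code from the study.

```python
def phh3_modified_grade(phh3_mi: float, ki67_li: float) -> str:
    """Grade with PHH3 MI in place of mitotic counts: the G1/G2 boundary is
    moved to 4 PHH3-positive nuclei/10 HPFs, the Ki-67 LI thresholds are kept,
    and the higher of the two component grades is assigned."""
    phh3_grade = 1 if phh3_mi < 4 else (2 if phh3_mi <= 20 else 3)
    ki67_grade = 1 if ki67_li < 3 else (2 if ki67_li <= 20 else 3)
    return f"G{max(phh3_grade, ki67_grade)}"

# Patient 1 of Table 3: mitoses 0, Ki-67 0.7, PHH3 MI 4 -> upgraded to G2
assert phh3_modified_grade(4, 0.7) == "G2"
# Patient 2: mitoses 2 (WHO G2), Ki-67 1.6, PHH3 MI 2 -> downgraded to G1
assert phh3_modified_grade(2, 1.6) == "G1"
```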
### 3.4. Prognostic Significance of the Inclusion of the PHH3 Cutoff
Because the use of PHH3 ≥4 in the WHO grading criteria yielded grades closest to those of the original WHO grading system, we analyzed the prognostic relevance of the combined criteria for overall survival and disease-free survival in patients with rectal NETs (Figures 3(a)-3(b)). Under the modified grading system, disease-free survival was significantly worse (96.49±7.10 months versus 150.81±2.22 months; P=0.001), and overall survival tended to be worse (P=0.063), in patients with G2 than with G1 rectal NETs.

Figure 3
Impact of using PHH3 ≥4 combined with WHO grading criteria on overall survival and recurrence-free survival in patients with rectal NETs. Associations of PHH3 MI with (a) disease-free survival and (b) overall survival in patients with rectal NETs.
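A sketch of the survival comparison behind Figure 3, assuming the lifelines package (data arrays and names are placeholders):

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(months_g1, events_g1, months_g2, events_g2):
    """Kaplan-Meier estimates and a log-rank test for modified-grade G1
    versus G2 rectal NETs. Durations are in months; event flags are 1 for
    relapse (or death, for overall survival) and 0 for censored cases."""
    km_g1, km_g2 = KaplanMeierFitter(), KaplanMeierFitter()
    km_g1.fit(months_g1, event_observed=events_g1, label="G1")
    km_g2.fit(months_g2, event_observed=events_g2, label="G2")
    result = logrank_test(months_g1, months_g2,
                          event_observed_A=events_g1, event_observed_B=events_g2)
    return km_g1, km_g2, result.p_value
```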
## 4. Discussion
This study explored the diagnostic utility of PHH3 MI as an ancillary mitotic marker and sought a clinically relevant cutoff value of PHH3 MI in gastrointestinal well-differentiated NETs, by comparing WHO grades with WHO grades modified by PHH3 MI. We found that a PHH3 MI cutoff of 4 yielded grades most similar to the WHO grades.

How to most accurately evaluate mitoses in NETs under the WHO grading system remains unclear, because mitoses may be mimicked by darkly stained or shrunken irregular nuclei, apoptotic bodies, and karyorrhectic debris, yielding false positives. In addition, mitotic grading is limited by the narrow cutoffs in mitotic counts between grades 1 and 2. PHH3 is expressed only during mitosis, not during interphase or apoptosis, making it a specific marker of mitosis [19, 20]. We found that mitotic counts correlated with both the Ki-67 LI and PHH3 MI, but the correlation with PHH3 MI was slightly higher, indicating that PHH3 MI is more closely associated with mitosis in gastrointestinal NETs. PHH3 stains cells only during the late G2 and M phases of the cell cycle [20], whereas Ki-67 is expressed throughout the cell cycle except in the G0 phase [26]. PHH3 would therefore stain far fewer tumor cells than Ki-67, resulting in a lower PHH3 MI.

Most determinations of the prognostic impact of mitoses in gastrointestinal NETs are based on the evaluation of mitoses by H&E staining [21]. Although results using PHH3 correlate with mitoses on H&E slides [16, 27], it is unclear whether these two measures of mitosis have the same prognostic impact. In addition, no standards have yet been developed for PHH3 quantification in gastrointestinal NETs. PHH3 MI is comparable to the current WHO grading system, but superior to H&E and Ki-67, in predicting disease-free survival, and PHH3 appears to be both easier to interpret and more accurate than current prognostic markers [14]. In the present study, evaluation of PHH3 MI in place of mitotic counts found that a PHH3 MI cutoff of 3 was no better than 3 mitotic counts per 10 HPFs in the WHO grading system for predicting outcomes in patients with rectal NETs. Of the 141 tumors, 36 showed discrepancies from the original WHO grades, with 30 upgraded and six downgraded, when PHH3 MI directly replaced mitotic counts. Similarly, approximately one-third of discordant gastrointestinal stromal tumors were upgraded when graded by PHH3 rather than by H&E-stained slides [15], and the use of PHH3 in melanomas has been reported to upgrade 6–14% of tumors from pT1a to pT1b [16], indicating that replacing mitotic counts with PHH3 MI in a grading system tends to raise tumor grades. In contrast, a PHH3 MI cutoff of 4 could significantly distinguish between grades 1 and 2. Using this criterion, only 10 tumors showed discrepancies, with eight upgraded and two (1.4%) downgraded. Furthermore, use of a PHH3 MI cutoff ≥4 in the WHO grading criteria instead of mitosis or Ki-67 LI showed almost perfect agreement with the original WHO grades (κ=0.810). Therefore, PHH3 MI ≥4 is likely to yield results comparable to the original WHO grades.

Use of a PHH3 MI cutoff ≥4 was associated with disease-free survival in patients with rectal NETs and could distinguish between grade 1 and grade 2 tumors. In contrast, this cutoff value was only marginally significant in predicting overall survival in patients with rectal NETs.
Thus, a PHH3 ≥4 cutoff value could approximate the results of the original WHO grading system in rectal NETs, as well as their prognostic correlations. Similarly, in pancreatic well-differentiated NETs, a histologic grade defined by ≥4 PHH3-stained mitoses/10 HPFs significantly correlated with patient survival [25].

Many studies in American and European populations [1–4] have shown that the majority of gastrointestinal NETs are located in the rectum, followed by the small intestine, colon, stomach, and appendix, and that the incidence of these tumors at all primary sites, especially the rectum and small intestine, increases with age [28]. In the present study, 115 (81.6%) of the 141 gastrointestinal NETs were located in the rectum, whereas only 26 (18.4%) were nonrectal. Compared with nonrectal NETs, rectal NETs were associated with younger age, smaller tumor size, more superficial invasion, lower stage, lower grade, lower recurrence rate, and lower mortality rate. Most (83.5%) rectal NETs were classified as grade 1, whereas 41.3% of nonrectal NETs were grade 2 or 3. The primary tumor site distribution in our study was similar to that previously reported in Korean, Japanese, and Chinese populations [7, 29, 30]. These findings suggest that the distribution of primary sites of gastrointestinal NETs may differ between Asian and Caucasian populations [7, 30].

In conclusion, a PHH3 cutoff value of ≥4 yielded results most similar to the original WHO grades. These findings suggest that this PHH3 MI cutoff may be a helpful adjunct prognostic strategy that closely reflects the original WHO grades of gastrointestinal NETs. Although the number of patients in this study was relatively small, limiting the robustness of our conclusions, PHH3 appears to be a useful ancillary marker for tumor grading. Additional studies are needed to confirm the optimal cutoff value of PHH3 MI for tumor grading of gastrointestinal NETs.
---
*Source: 1013640-2018-03-27.xml* | 1013640-2018-03-27_1013640-2018-03-27.md | 41,478 | Identification of Phosphohistone H3 Cutoff Values Corresponding to Original WHO Grades but Distinguishable in Well-Differentiated Gastrointestinal Neuroendocrine Tumors | Min Jeong Kim; Mi Jung Kwon; Ho Suk Kang; Kyung Chan Choi; Eun Sook Nam; Seong Jin Cho; Hye-Rim Park; Soo Kee Min; Jinwon Seo; Ji-Young Choe; Hyoung-Chul Park | BioMed Research International
(2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1013640 | 1013640-2018-03-27.xml | ---
## Abstract
Mitotic counts in the World Health Organization (WHO) grading system have narrow cutoff values. True mitotic figures, however, are not always distinguishable from apoptotic bodies and darkly stained nuclei, complicating the ability of the WHO grading system to diagnose well-differentiated neuroendocrine tumors (NETs). The mitosis-specific marker phosphohistone H3 (PHH3) can identify true mitoses and grade tumors reliably. The aim of this study was to investigate the correspondence of tumor grades, as determined by PHH3 mitotic index (MI) and mitotic counts according to WHO criteria, and to determine the clinically relevant cutoffs of PHH3 MI in rectal and nonrectal gastrointestinal NETs. Mitotic counts correlated with both the Ki-67 labeling index and PHH3 MI, but the correlation with PHH3 MI was slightly higher. The PHH3 MI cutoff ≥4 correlated most closely with original WHO grades for both rectal NETs. A PHH3 MI cutoff ≥4, which could distinguish between G1 and G2 tumors, was associated with disease-free survival in patients with rectal NETs, whereas that cutoff value showed marginal significance for overall survival in patient with rectal NETs. In conclusion, the use of PHH3 ≥4 correlated most closely with original WHO grades.
---
## Body
## 1. Introduction
Neuroendocrine tumors (NETs) are uncommon, heterogeneous groups of neoplasms, with most (54%) developing in the gastrointestinal tract [1–4]. The incidence and prognosis of gastrointestinal NETs depend on the tumor primary site, with the highest frequencies observed in the rectum (17.7%), small intestine (17.3%), and colon (10.1%), followed by the stomach (6.0%) and appendix (3.1%) and with survival ranging from 6 months to more than 20 years [1–4]. Gastrointestinal NETs largely arise from enterochromaffin and enteroglucagon cells found in the lamina propria and submucosa [5]. Histologically, NETs are composed of an organoid pattern of cells arranged into trabeculae, acini, or solid nests, separated by delicate and vascular stroma, which allows for easy recognition on low-power microscopic examination [5] (Figures 1(a)-1(b)). Well-differentiated NETs, which have malignant potential, are characterized cytologically by bland uniform cells with round to oval nuclei, indistinct nucleoli, and coarsely granular chromatin [6, 7]. Distant metastasis resulting from unexpected tumor aggressiveness is therefore of clinical concern in patients with well-differentiated NETs [8–11].Figure 1
Mitotic figures(Arrows) in a rectal neuroendocrine tumor (a, d, g) and a colonic neuroendocrine tumor (b, e, h) stained with H&E (a)–(c), Ki-67 (d)–(f), and PHH3 (g)–(i). (d)-(e) Ki-67 is more frequently positive in tumor cells, whereas (g)-(h) PHH3 highlights mitosis-specific nuclei, aiding in recognition. (c) Apoptotic bodies(Dotted arrows) mimicking mitosis are found in gastric neuroendocrine tumors. (f) Faint Ki-67 staining in an apoptotic nucleus, apparently false-positive. (i) Lack of PHH3 staining of apoptotic cells.
(a) (b) (c) (d) (e) (f) (g) (h) (i)The most important prognostic indicator in gastrointestinal NETs is the World Health Organization (WHO) grading system, which categorizes gastrointestinal NETs into three grades (G1, G2, and G3), based on mitotic counts and/or Ki-67 labeling index (LI). G1 NETs are low grade tumors, with <2 mitoses/10 high-power fields (HPFs) and/or Ki-67 LI <3%; G2 NETs are intermediate grade tumors (2–20 mitoses/10 HPFs and/or Ki-67 LI 3%–20%), and G3 NETs are high grade tumors (>20 mitoses/10 HPFs and/or Ki-67 LI >20%) [12, 13]. Most gastrointestinal NETs are G1 (59.7%) and G2 (31.2%), with few (9.1%) classified as G3 [4]. Because true mitotic figures are sometimes indistinguishable from darkly stained and/or shrunken irregular nuclei, apoptotic bodies, and karyorrhectic debris on hematoxylin and eosin (H&E) staining, identification of true mitotic figures is not always straightforward [5] (Figure 1(c)). Discrepancies have therefore been observed in correlations between Ki-67 and mitotic counts in various tumor types [14–16]. It may be difficult to unequivocally identify a mitotic figure versus apoptotic cells or karyorrhectic cells [16]. Manually calculating Ki-67 LI in 500–2000 cells is highly labor-intensive [14, 17]. The narrow cutoffs in mitotic counts and Ki-67 LI between G1 and G2 well-differentiated NETs may result in false upgrading or downgrading of tumors. Therefore, the supportive method for counting mitotic figures and Ki-67 LI is necessary to confirm the limitation of the current criteria for precisely determining the prognosis of patients with gastrointestinal well-differentiated NETs [14, 17].Phosphohistone H3 (PHH3), a core histone protein reaching a maximum during mitosis, is a mitosis-specific marker, making it useful in counting mitotic figures and for mitotic grading. PHH3 facilitates the counting of mitoses and can be used to predict prognosis in patients with several types of gastrointestinal neoplasm, including pancreatic NETs [14, 18–20]. However, the ability of PHH3 mitotic index (MI) to grade gastrointestinal NETs, especially for differentiating between G1 and G2 well-differentiated NETs, has not yet been fully evaluated. Furthermore, the clinically relevant cutoffs for PHH3 MI in rectal and nonrectal NETs have not yet been determined.The aim of this study was to compare tumor grades determined using the PHH3 MI and those determined by mitotic counts according to WHO criteria and to determine the clinically relevant cutoffs of PHH3 MI. In this study, Ki-67 LI was calculated digitally, because manual calculation may be a confounding factor.
## 2. Materials and Methods
### 2.1. Patients and Histologic Evaluation
This study retrospectively evaluated 141 patients with primary gastrointestinal NETs who underwent endoscopic or surgical resection at Hallym University Sacred Heart Hospital between 2005 and 2015. Only patients diagnosed with primary gastrointestinal NETs, who had not been treated with chemotherapy or targeted drug therapy at the time of tumor excision and whose formalin-fixed, paraffin-embedded (FFPE) tumor tissue blocks were available for analysis, were included in this study. The medical records of each patient were reviewed, and their demographic information, radiological data, treatment details, tumor recurrence, and survival status were recorded. All H&E-stained slides were reviewed by a gastrointestinal pathologist (MJK) to confirm the diagnosis and to reevaluate histopathological characteristics, including tumor size, mitotic count, tumor grade, resection margins, depth of invasion, lymphatic invasion, venous invasion, and perineural invasion. Staging was based on the 8th edition of American Joint Committee on Cancer staging system. The study was approved by the Institutional Review Board of the Hallym University Sacred Heart Hospital.
### 2.2. Immunohistochemistry
Immunohistochemical staining was performed on 4μm thick FFPE tumor tissue sections using the BenchMark XT automated tissue staining system (Ventana Medical Systems, Inc., Tucson, AZ, USA), according to the manufacturer’s instructions, as described in [21–24]. The primary antibodies were directed against PHH3 (polyclonal, 1 : 100; Cell Marque, Rocklin, CA, USA) and Ki-67 (1 : 250, clone MIB-1, Dako). Slides were incubated with primary antibody 37°C for 40 min, washed, and incubated with a secondary antibody (universal horseradish peroxidase (HRP) Multimer; Ventana Medical Systems) for 8 min at 37°C. After washing, the tissue sections were incubated with a chromogen diaminobenzidine (ultraView Universal DAB Kit, Ventana Medical Systems) and counterstained with hematoxylin.
### 2.3. Slide Scoring
Mitotic counts on both H&E- and PHH3-stained slides were counted in 50 high-powered fields (HPFs; 40 × objective, 10 × eyepiece with a field diameter of 0.55 mm and an area of 0.237 mm2; Olympus microscope BX51, Tokyo, Japan). PHH3 MI was calculated from the mean mitotic count (mean number of mitoses/10 HPFs) and the mean numbers of PHH3-positive nuclei/10 HPFs were calculated as the number of mitoses/10 HPFs and the number of PHH3-positive nuclei/10 HPFs to attain the PHH3 MI, respectively [14, 18, 25]. Mitotic figures were considered as cells in metaphase (clumped chromatin and chromatin arranged in a plane) and anaphase/telophase (separated clumped chromatin), as previously described [14]. Hyperchromatic or pyknotic nuclei were not counted, because these cells could represent cells undergoing necrosis or apoptosis, as previously described [14].Ki-67 LI was assessed using a GenASIs capture and analysis system (Applied Spectral Imaging, Carlsbad, CA, USA). Briefly, the highest labeled region at low magnification was selected, and the area was viewed at ×200 magnification. These captured images were analyzed with GenASIs software to quantify the positive tumor cells in each tumor region. Ki-67-positive lymphocytes were manually removed. At least 500 tumor cells per sample were counted to determine the percentage of cells that were positive for Ki-67, and Ki-67 LI was automatically calculated.Grades of H&E- and anti-PHH3-stained sections were determined independently. Tumors were classified as G1 (<2 mitoses per 10 HPFs and/or Ki-67 LI <3%), G2 (2–20 mitoses per 10 HPFs and/or Ki-67 LI 3%–20%), and G3 or NEC (>20 mitoses per HPF or Ki-67 >20%), according to the WHO 2010 classification [12, 13].
### 2.4. Statistical Analyses
Categorical variables were compared using Pearson’s chi-squared test or two-tailed Fisher’s exact test, and continuous variables, which were presented as means ± SD, were compared using Student’st-test. The Spearman rank correlation test was used to assess the relationships between mitotic counts, Ki-67 LI, and PHH3 mitotic index. The results obtained with the WHO grading system with those derived from PHH3-applied modified grading were compared by assessing the concordance rate (number of samples in which the two methods agreed/number of total samples) with the kappa (κ) statistic. Concordance rate was defined as the proportion of similar results achieved using 2 different methods, among total number of cases. The kappa value was evaluated to measure the degree of agreement between 2 different grading methods. Kappa values ≤0.20, 0.21–0.40, 0.41–0.60, 0.61–0.80, and ≥0.81 were regarded as indicating slight, fair, moderate, substantial, and almost perfect agreement, respectively. The volume under the receiver operator characteristic (ROC) curve was drawn to determine the optimal cutoff value in terms of sensitivity and specificity for WHO grades 1 and 2 or 3 by PHH3 MI.Overall survival was defined as the time from the date of initial surgery until death or the end of the stay (May 2017). Disease-free survival was defined as the time from the date of initial surgery until a documented relapse, including locoregional recurrence and distant metastasis, or the end of the study. Survival parameters were calculated using the Kaplan-Meier method and compared by log-rank tests. All statistical analyses were performed using SPSS software (version 18; SPSS Inc., Chicago, IL, USA), withP values <0.05 considered statistically significant.
## 3. Results
### 3.1. Patient and Tumor Characteristics
Table 1 summarizes the characteristics of patients with rectal and nonrectal NETs. The study enrolled 141 patients, 88 men and 53 women, with a median age of 49 years (range, 10–80 years). Of these patients, 115 (81.6%) had rectal NETs and 26 (18.4%) had nonrectal NETs. The nonrectal NETs included 12 (8.5%) originating from the stomach, eight (5.7%) from the appendix, three (2.1%) from the duodenum, and three (2.1%) from the colon. Tumor tissue was obtained by endoscopic resection in 125 patients, comprising 112 (89.6%) rectal NETs and 13 (10.4%) nonrectal NETs; the remaining three rectal and 13 nonrectal NETs were resected surgically. Mean tumor size was 0.65 cm (range, 0.1–3.5 cm). Resection margins were positive in 22 (15.6%) tumors. Thirteen patients experienced recurrences and eight died during the follow-up period.
Table 1
Associations of the clinicopathological characteristics of rectal and other gastrointestinal neuroendocrine tumors.
| Characteristic | Rectal NET, n = 115 (%) | Nonrectal NET, n = 26 (%) | P |
| --- | --- | --- | --- |
| **Sex** | | | 0.148 |
| M | 75 (65.2) | 13 (50.0) | |
| F | 40 (34.8) | 13 (50.0) | |
| **Age (y)** | | | 0.001* |
| <60 | 98 (85.2) | 15 (57.7) | |
| ≥60 | 17 (14.8) | 11 (42.3) | |
| **Tumor size (cm)** | | | <0.001* |
| 0.1–1 | 111 (96.5) | 20 (76.9) | |
| >1 | 4 (3.5) | 6 (23.1) | |
| **Tumor depth** | | | 0.001* |
| T1 | 114 (99.1) | 21 (80.8) | |
| T2-3 | 1 (0.9) | 5 (19.2) | |
| **LN metastasis** | | | 0.460 |
| N0 | 113 (98.3) | 25 (96.2) | |
| N1 | 2 (1.7) | 1 (3.8) | |
| **Distant metastasis** | | | 0.184 |
| M0 | 115 (100) | 25 (96.2) | |
| M1 | 0 (0.0) | 1 (3.8) | |
| **Stage** | | | 0.007* |
| I | 112 (97.4) | 22 (84.6) | |
| II-III | 3 (2.6) | 4 (15.4) | |
| **Grade** | | | 0.001* |
| G1 | 96 (83.5) | 15 (57.7) | |
| G2 | 19 (16.5) | 9 (34.6) | |
| G3 | 0 (0.0) | 2 (7.7) | |
| **Mitoses/10 HPFs, mean ± SD** | 0.55 ± 0.79 | 2.62 ± 7.03 | <0.001* |
| <2 | 100 (87.0) | 17 (65.4) | 0.004* |
| 2–20 | 15 (13.0) | 8 (30.8) | |
| >20 | 0 (0.0) | 1 (3.8) | |
| **Ki-67 LI (%), mean ± SD** | 1.15 ± 1.02 | 4.06 ± 7.87 | 0.002* |
| <3 | 109 (94.8) | 20 (76.9) | 0.001* |
| 3–20 | 6 (5.2) | 4 (15.4) | |
| >20 | 0 (0.0) | 2 (7.7) | |
| **PHH3 MI/10 HPFs, mean ± SD** | 1.37 ± 1.37 | 2.77 ± 5.42 | 0.014* |
| <2 | 75 (65.2) | 16 (61.6) | 0.485 |
| 2–20 | 40 (34.8) | 9 (34.6) | |
| >20 | 0 (0.0) | 1 (3.8) | |
| **Vascular invasion** | | | 0.375 |
| Positive | 22 (19.1) | 7 (26.9) | |
| Negative | 93 (80.9) | 19 (73.1) | |
| **Lymphatic invasion** | | | 0.363 |
| Positive | 18 (15.7) | 6 (23.1) | |
| Negative | 97 (84.3) | 20 (76.9) | |
| **Perineural invasion** | | | 0.123 |
| Positive | 12 (10.4) | 0 (0.0) | |
| Negative | 103 (89.6) | 26 (100) | |
| **Resection margin** | | | 1.000 |
| R0 | 97 (84.3) | 22 (84.6) | |
| R1 | 18 (15.7) | 4 (15.4) | |
| **Recurrence** | | | 0.001* |
| Yes | 6 (5.2) | 7 (26.9) | |
| No | 109 (94.8) | 19 (73.1) | |
| **Died** | | | 0.018* |
| Yes | 4 (3.5) | 4 (15.4) | |
| No | 111 (96.5) | 22 (84.6) | |

NET, neuroendocrine tumor; HPF, high-power field; LI, labeling index; MI, mitotic index. *Statistically significant (P < 0.05).

Several demographic and clinical characteristics differed significantly between patients with rectal and nonrectal NETs. Patients with rectal NETs were significantly younger (48 versus 56 years, P=0.001) and had smaller tumors (0.58±0.35 versus 0.92±0.90 cm, P<0.001). The depth of tumor invasion was more superficial in patients with rectal NETs, with 99.1% of these patients having tumors confined to the submucosa, whereas a higher percentage of nonrectal NETs (19.2%) infiltrated the muscle layer or adipose tissue (P=0.001). Tumor stage (P=0.007) and tumor grade (P=0.001) were significantly lower in patients with rectal than nonrectal NETs, with 83.5% and 57.7%, respectively, having grade 1 tumors. Recurrence (5.2% versus 26.9%, P=0.001) and mortality (3.5% versus 15.4%, P=0.018) rates were also significantly lower in patients with rectal than nonrectal NETs.
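For illustration, the categorical comparisons in Table 1 can be reproduced with SciPy; the sketch below applies the chi-squared and Fisher's exact tests to the recurrence counts from Table 1 (a hedged example, not the authors' code):

```python
# Chi-squared and Fisher's exact tests on the 2x2 recurrence table from
# Table 1 (rectal NETs: 6 recurred / 109 did not; nonrectal: 7 / 19).
from scipy.stats import chi2_contingency, fisher_exact

table = [[6, 109],   # rectal NETs: recurrence yes, no
         [7, 19]]    # nonrectal NETs: recurrence yes, no

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)  # two-tailed by default

# Fisher's exact test is preferred when an expected cell count is small.
print(f"chi-squared p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
print(f"smallest expected count = {expected.min():.1f}")
```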
### 3.2. Mitotic Counts, PHH3, and Ki-67 LI of Rectal and Nonrectal NETs
In all 141 NETs, significant positive correlations were observed between mitotic counts and Ki-67 LI (r=0.739, P<0.001), between mitotic counts and PHH3 MI (r=0.839, P<0.001) (Figure 2(a)), and between PHH3 MI and Ki-67 LI (r=0.724, P<0.001). All three parameters, however, differed significantly between rectal and nonrectal NETs. The mean mitotic counts (0.55±0.79/10 HPFs [range, 0–3/10 HPFs] versus 2.62±7.03/10 HPFs [range, 0–35/10 HPFs], P<0.001), mean Ki-67 LI (1.15%±1.02% [range, 0%–5.3%] versus 4.06%±7.87% [range, 0%–35%], P=0.002), and mean PHH3 MI (1.37±1.37/10 HPFs [range, 0–6/10 HPFs] versus 2.77±5.42/10 HPFs [range, 0–25/10 HPFs], P=0.014) were all significantly lower in rectal than in nonrectal NETs (Figures 2(b)–2(d)).
Figure 2
(a) Correlations of mitotic counts obtained from H&E slides with Ki-67 LI and PHH3 MI. Comparisons of mitosis (b), Ki-67 LI (c), and PHH3 MI (d) in rectal and nonrectal neuroendocrine tumors of the gastrointestinal tract. (e) Receiver operating characteristic (ROC) curve for PHH3 MI with original WHO grades 1 and 2.
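The correlation and ROC analyses shown in Figure 2 follow a standard pattern. A minimal Python sketch with hypothetical per-tumor values (not the study data) might look like the following, using SciPy for the Spearman correlations and scikit-learn for the ROC curve; Youden's index is one common way to pick an "optimal" cutoff and is an assumption here, since the paper does not name its criterion:

```python
# Spearman correlations (Figure 2(a)) and a ROC-based cutoff (Figure 2(e)).
# All input arrays are hypothetical examples, not the study data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_curve, roc_auc_score

mitoses = np.array([0, 1, 0, 2, 3, 0, 1, 5, 0, 2])       # H&E mitoses/10 HPFs
ki67_li = np.array([0.5, 1.2, 0.8, 3.1, 4.0, 0.3, 1.5, 6.2, 0.9, 2.8])  # %
phh3_mi = np.array([1, 2, 1, 4, 5, 0, 2, 7, 1, 4])       # PHH3 MI/10 HPFs

for name, values in [("Ki-67 LI", ki67_li), ("PHH3 MI", phh3_mi)]:
    rho, p = spearmanr(mitoses, values)
    print(f"mitoses vs {name}: rho = {rho:.3f}, p = {p:.4f}")

# ROC curve of PHH3 MI against WHO grade >= 2; Youden's J = TPR - FPR
# peaks at the cutoff that best balances sensitivity and specificity.
grade2_or_3 = np.array([0, 0, 0, 1, 1, 0, 0, 1, 0, 1])
fpr, tpr, thresholds = roc_curve(grade2_or_3, phh3_mi)
print(f"AUC = {roc_auc_score(grade2_or_3, phh3_mi):.3f}, "
      f"optimal cutoff = {thresholds[np.argmax(tpr - fpr)]}")
```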
### 3.3. Comparisons between Original WHO Grades and Grades Modified by PHH3
Classification of the 141 NETs according to the WHO grading system showed that 110 (78.0%) were grade 1, 29 (20.6%) were grade 2, and two (1.4%) were grade 3. To determine the PHH3 MI cutoff values that most closely matched the established WHO grades, we applied PHH3 MI in two ways (Table 2): (1) substituting PHH3 MI for the H&E mitotic count in the WHO grading system, using the same cutoffs; and (2) using a PHH3 MI cutoff value of 4, followed by application of PHH3 MI to the WHO grading system instead of mitosis or Ki-67 LI. We then generated a ROC curve to validate the optimal cutoff value; the area under the curve was 0.701 (95% confidence interval, 0.561–0.826), which was statistically significant (P=0.007) (Figure 2(e)). At the optimal cutoff of 4, the sensitivity and specificity of PHH3 MI for differentiating WHO grade 1 from grades 2-3 were 73.3% and 31%, respectively.
Table 2
Comparison of histologic grades combined with PHH3 staining and cutoff value of ≥4/10 HPFs.
| Modified grade | Total, N = 141 (%) | WHO grade 1, n = 110 (%) | WHO grade 2, n = 29 (%) | WHO grade 3, n = 2 (%) | P | Kappa |
| --- | --- | --- | --- | --- | --- | --- |
| **Replacement of H&E mitosis by PHH3** | | | | | <0.001* | 0.428 |
| Grade 1 | 86 (61.0) | 80 (80.0) | 6 (20.7) | 0 (0.0) | | |
| Grade 2 | 53 (37.6) | 30 (30.0) | 23 (79.3) | 0 (0.0) | | |
| Grade 3 | 2 (1.4) | 0 (0.0) | 0 (0.0) | 2 (100) | | |
| **PHH3 cutoff ≥4/10 HPFs** | | | | | <0.001* | 0.810 |
| Grade 1 | 104 (73.8) | 102 (92.7) | 2 (6.9) | 0 (0.0) | | |
| Grade 2 | 35 (24.8) | 8 (7.3) | 27 (93.1) | 0 (0.0) | | |
| Grade 3 | 2 (1.4) | 0 (0.0) | 0 (0.0) | 2 (100) | | |

HPF, high-power field. *Statistically significant (P < 0.05).

Replacement of mitotic counts with the PHH3 MI in the WHO grading system resulted in 86 (61.0%) tumors being classified as grade 1, 53 (37.6%) as grade 2, and two (1.4%) as grade 3. The concordance rate of this modified system with the WHO grades was 75.9%. Replacement of mitotic counts with the PHH3 MI changed the grade of 36 tumors (25.5%), with 30 (21.3%) changed from grade 1 to grade 2 and six (4.3%) changed from grade 2 to grade 1. The agreement between these modified grades and the WHO grades was moderate (κ=0.428) but statistically significant (P<0.001).

The application of a PHH3 MI cutoff ≥4 in the WHO grading system resulted in 104 (73.8%) tumors being classified as grade 1 and 35 (24.8%) as grade 2. This modified grading system changed the grade of 10 (7.1%) tumors, with eight (5.7%) changed from grade 1 to grade 2 and two (1.4%) changed from grade 2 to grade 1. The concordance rate of these modified grades with the original WHO grades was 92.9%, with almost perfect agreement between the two (κ=0.810), a result that was statistically significant (P<0.001).

Use of PHH3 ≥4 combined with the WHO grading criteria resulted in 10 tumors being reclassified (Table 3): nine rectal NETs and one gastric NET. Eight of these 10 tumors were upgraded when PHH3 MI was added to the WHO grading system, compared with the WHO grading system using mitotic counts alone.
Table 3
Clinicopathological features of the 10 tumors whose grade changed when PHH3 MI (≥4/10 HPFs) was used to determine mitotic counts.
| Patient number | Sex/age | Location | Size (cm) | Mitoses/10 HPFs | Ki-67 LI (%) | PHH3 MI/10 HPFs | WHO grade | PHH3 grade |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | F/37 | Rectum | 0.5 | 0 | 0.7 | 4 | 1 | 2 |
| 2 | M/47 | Rectum | 0.7 | 2 | 1.6 | 2 | 2 | 1 |
| 3 | M/46 | Rectum | 0.5 | 2 | 2.8 | 1 | 2 | 1 |
| 4 | M/49 | Rectum | 0.5 | 0 | 2.1 | 5 | 1 | 2 |
| 5 | M/47 | Rectum | 0.5 | 0 | 2.5 | 6 | 1 | 2 |
| 6 | F/35 | Rectum | 0.4 | 0 | 2 | 4 | 1 | 2 |
| 7 | M/60 | Rectum | 1 | 0 | 2.4 | 4 | 1 | 2 |
| 8 | F/55 | Rectum | 0.5 | 0 | 2.5 | 4 | 1 | 2 |
| 9 | M/21 | Rectum | 0.5 | 0 | 2.5 | 6 | 1 | 2 |
| 10 | F/56 | Stomach | 1 | 1 | 0.5 | 4 | 1 | 2 |

MI, mitotic index; HPF, high-power field; LI, labeling index.
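To make the two grading schemes in Tables 2 and 3 concrete, the sketch below encodes the WHO 2010 rules and the PHH3-modified variant. It assumes the Ki-67 criterion is retained and only the mitotic criterion is replaced by PHH3 MI, which is one plausible reading of the procedure described above; the function names are illustrative, not the authors' code:

```python
# WHO 2010 grading vs. the PHH3-modified grading with a >=4/10 HPFs cutoff.
def who_grade(mitoses_per_10hpf: float, ki67_li: float) -> int:
    """WHO 2010 grade from H&E mitotic count and Ki-67 LI (higher wins)."""
    if mitoses_per_10hpf > 20 or ki67_li > 20:
        return 3
    if mitoses_per_10hpf >= 2 or ki67_li >= 3:
        return 2
    return 1

def phh3_grade(phh3_mi: float, ki67_li: float, cutoff: float = 4) -> int:
    """Same rules, but grade 1 vs. 2 decided by PHH3 MI >= cutoff."""
    if phh3_mi > 20 or ki67_li > 20:
        return 3
    if phh3_mi >= cutoff or ki67_li >= 3:
        return 2
    return 1

# Patient 1 from Table 3: 0 mitoses, Ki-67 LI 0.7%, PHH3 MI 4/10 HPFs.
print(who_grade(0, 0.7), phh3_grade(4, 0.7))  # -> 1 2 (upgraded by PHH3)
```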
### 3.4. Prognostic Significance of the Inclusion of the PHH3 Cutoff
Because the use of PHH3 ≥4 in the WHO grading criteria yielded grades closest to those determined by the original WHO grading system, we analyzed the prognostic relevance of the combined criteria for overall survival and disease-free survival in patients with rectal NETs (Figures 3(a) and 3(b)). Under the modified grading system, disease-free survival was significantly worse (96.49±7.10 months versus 150.81±2.22 months; P=0.001), and overall survival tended to be worse (P=0.063), in patients with G2 than in those with G1 rectal NETs.
Figure 3
Impact of using PHH3 ≥4 combined with WHO grading criteria on overall survival and recurrence-free survival in patients with rectal NETs. Associations of PHH3 MI with (a) disease-free survival and (b) overall survival in patients with rectal NETs.
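Survival comparisons of this kind are commonly reproduced with the lifelines package; the following sketch uses hypothetical follow-up times and relapse flags, not patient data:

```python
# Kaplan-Meier curves and a log-rank test for two grade groups.
# Times (months) and event flags below are hypothetical, not patient data.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

g1_months, g1_relapsed = [60, 72, 90, 110, 150], [0, 0, 0, 0, 0]
g2_months, g2_relapsed = [12, 40, 55, 80, 96],   [1, 1, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(g1_months, event_observed=g1_relapsed, label="G1")
ax = kmf.plot_survival_function()          # disease-free survival, G1
kmf.fit(g2_months, event_observed=g2_relapsed, label="G2")
kmf.plot_survival_function(ax=ax)          # overlay G2 on the same axes

result = logrank_test(g1_months, g2_months,
                      event_observed_A=g1_relapsed,
                      event_observed_B=g2_relapsed)
print(f"log-rank p = {result.p_value:.4f}")
```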
## 4. Discussion
This study was designed to explore the diagnostic utility of PHH3 MI as an ancillary mitotic marker, and the clinically relevant cutoff value of PHH3 MI, in patients with gastrointestinal well-differentiated NETs, by comparing WHO grades with WHO grades modified by PHH3 MI. We found that a PHH3 MI cutoff of 4 yielded grades most similar to the original WHO grades.

Accurate evaluation of mitoses under the WHO grading system remains difficult, because mitoses may be mimicked by darkly stained or shrunken irregular nuclei, apoptotic bodies, and karyorrhectic debris, yielding false positives. In addition, grading is limited by the narrow cutoffs in mitotic counts between grades 1 and 2. PHH3 is expressed only during mitosis, not during interphase or apoptosis, making PHH3 a specific marker of mitosis [19, 20]. We found that mitotic counts correlated with both Ki-67 LI and PHH3 MI, but the correlation with PHH3 MI was slightly higher, indicating that PHH3 MI is more closely associated with mitosis in gastrointestinal NETs. PHH3 stains cells only during the late G2 and M phases of the cell cycle [20], whereas Ki-67 is expressed throughout the cell cycle except in the G0 phase [26]. PHH3 would therefore stain far fewer tumor cells than Ki-67, resulting in a lower PHH3 MI.

Most determinations of the prognostic impact of mitoses in gastrointestinal NETs are based on the evaluation of mitoses by H&E staining [21]. Although results using PHH3 correlate with mitoses on H&E slides [16, 27], it is unclear whether these two types of mitoses have the same prognostic impact. In addition, no standards have yet been developed for the quantification of PHH3-positive mitoses in gastrointestinal NETs. PHH3 MI is comparable to the current WHO grading system, but superior to H&E and Ki-67, in predicting disease-free survival, with PHH3 appearing to be both easier to interpret and more accurate than current prognostic markers [14]. In the present study, evaluation of the prognostic utility of PHH3 MI instead of mitotic counts found that a PHH3 MI cutoff of 3 was no better than 3 mitotic counts per 10 HPFs in the WHO grading system for predicting outcomes in patients with rectal NETs. Of the 141 tumors, 36 showed discrepancies from the original WHO grades, with 30 upgraded and six downgraded when PHH3 MI directly replaced the mitotic count. Similarly, approximately one-third of discordant gastrointestinal stromal tumors were upgraded when graded by PHH3 application compared with H&E-stained slides [15]. The use of PHH3 in melanomas has been reported to upgrade 6–14% of tumors from pT1a to pT1b [16], indicating that replacement of mitotic counts by PHH3 MI in a grading system tends to raise tumor grades. In contrast, a PHH3 MI cutoff of 4 could significantly distinguish between grades 1 and 2. Using this criterion, only 10 tumors showed discrepancies, with eight upgraded and two (1.4%) downgraded. Furthermore, use of a PHH3 MI cutoff ≥4 in the WHO grading criteria instead of mitosis or Ki-67 LI showed almost perfect agreement with the original WHO grades (κ=0.810). Therefore, PHH3 MI ≥4 is likely to yield results comparable to the original WHO grades.

Use of a PHH3 MI cutoff ≥4 was associated with disease-free survival in patients with rectal NETs and could distinguish between grade 1 and grade 2 tumors. In contrast, this cutoff value was only marginally significant in predicting overall survival in patients with rectal NETs.
Thus, a PHH3 ≥4 cutoff value could approximate the results of the original WHO grading system in rectal NETs, as well as their prognostic correlations. Similarly, in pancreatic well-differentiated NETs, a histologic grade determined using ≥4 PHH3-stained mitoses/10 HPFs significantly correlated with patient survival [25].

Many studies in American and European populations [1–4] have shown that the majority of gastrointestinal NETs are located in the rectum, followed by the small intestine, colon, stomach, and appendix, and that the incidence of these tumors at all primary sites, especially the rectum and small intestine, increases with age [28]. In the present study, 115 (81.6%) of the 141 gastrointestinal NETs were located in the rectum, whereas only 26 (18.4%) were nonrectal NETs. Compared with nonrectal NETs, rectal NETs were associated with younger age, smaller tumor size, more superficial invasion, lower stage, lower grade, lower recurrence rate, and lower mortality rate. Most (83.5%) rectal NETs were classified as grade 1, whereas 42.3% of nonrectal NETs were of grade 2 or 3. The primary tumor site distribution in our study was similar to that previously reported in Korean, Japanese, and Chinese populations [7, 29, 30]. These findings suggest that the distribution of primary sites of gastrointestinal NETs may differ between Asian and Caucasian populations [7, 30].

In conclusion, a cutoff value of PHH3 ≥4 yielded results most similar to the original WHO grades. These findings suggest that this PHH3 MI cutoff may be a helpful adjunct prognostic tool that closely reflects the original WHO grading of gastrointestinal NETs. Although the number of patients in this study was relatively small, limiting the robustness of our conclusions, PHH3 appears to be a useful ancillary marker for tumor grading. Additional studies are needed to confirm the optimal cutoff value of PHH3 MI for tumor grading of gastrointestinal NETs.
---
*Source: 1013640-2018-03-27.xml* | 2018 |
# Postpartum Ovarian Vein Thrombosis: Two Cases and Review of Literature
**Authors:** Amos A. Akinbiyi; Rita Nguyen; Michael Katz
**Journal:** Case Reports in Medicine
(2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101367
---
## Abstract
Introduction. We present two cases of late-presenting postpartum ovarian vein thrombosis, one following vaginal delivery and one following cesarean section, seen within a short period in our institution. Both patients had pelvic pain after delivery, associated with fever and chills. In one, computed tomography showed a large thrombophlebitic vein measuring about 10×6×5 centimeters. Both were initially treated for urinary tract infection, and the large ovarian vein thrombosis in the second patient was not diagnosed until her emergency department admission.
Conclusion. Ovarian vein thrombosis is rare, can present late, and is difficult to diagnose; hence, it should be considered in the differential diagnosis of a postpartum woman with fever and a tender pelvic mass.
---
## Body
## 1. Introduction
Ovarian vein thrombosis is an uncommon but potentially serious disorder that is associated with a variety of pelvic conditions, most notably recent childbirth. It can also be associated with pelvic inflammatory disease, malignancies, and pelvic surgery. Ovarian vein thrombosis occurs in 0.05–0.18% of pregnancies and is diagnosed on the right side in 80–90% of affected postpartum patients [1–3]. Prompt diagnosis and treatment of this condition are needed to avoid the morbidity and mortality related both to the thrombosis and to any associated infection/sepsis. One of these cases illustrates the importance of including ovarian vein thrombosis in the differential diagnosis of women who present in the postpartum period with a tender pelvic mass.
## 2. Case Presentation
### 2.1. Case 1
A 26-year-old woman presented at 13 days postpartum to an emergency department with severe, stabbing, right flank pain. The pain had been present since postpartum day 2, with associated fever (temperature 39.5 degrees centigrade) and chills. At that time, a diagnosis of urinary tract infection was made by urinalysis and culture, which confirmed Escherichia coli as the infective organism, and the patient was treated with amoxicillin based on the sensitivity results. Unfortunately, the pain did not resolve. There was no associated vaginal bleeding, nausea, or vomiting.

On examination, the patient was afebrile and had tenderness in the umbilical and right flank area. Ultrasonography was not performed after the delivery because it was not considered appropriate.

Her antenatal period was uneventful. She had a spontaneous vaginal delivery of a liveborn term female. The immediate postpartum period was unremarkable. There was no other significant past medical or surgical history.

Investigations showed a white blood cell count of 12.6 × 10⁹/L, hemoglobin of 114 g/L, and reactive thrombocytosis with a platelet count of 587 × 10⁹/L. The rest of the laboratory investigations were within normal limits. A pelvic computed tomography showed findings consistent with a thrombosed right ovarian vein, measuring 8×5×5 cm (Figure 1).
Figure 1
Longitudinal section, right and left ovarian veins, Case 2.

A consult to internal medicine was subsequently made and the diagnosis confirmed; the patient was initiated on low-molecular-weight heparin (dalteparin sodium) at a dose of 12,500 units/day by subcutaneous injection and discharged home, to be followed up by internal medicine.
### 2.2. Case 2
A 19-year-old G3 P2-0-1-3 female presented with a right-sided abdominal mass and abdominal pain and cramping of 1 week's duration, 3 weeks after a lower segment cesarean section for monochorionic diamniotic twins. The pain was described as gas-like and nonradiating, and she admitted to passing flatus and having a bowel movement that day. The patient was afebrile and denied nausea, vomiting, diarrhoea, and difficulty with voiding, but did admit to a fever the previous week; this fever was not based on any objective measurement. Her babies were reported to be doing well. The patient was otherwise healthy, with no allergies, and was taking only iron. She was a nonsmoker and denied alcohol or drug use.

On physical examination, the patient looked well with normal vital signs. Her abdomen was distended and nontender, and an 8 cm × 10 cm mass of irregular consistency, suggestive of an ovarian mass, was found below the right costal margin. Her incision had healed well. On pelvic examination, a 10-centimeter-long mass, slightly mobile and nontender, was felt in the right lower abdominal region, extending from just below the right renal vein down to the right iliac fossa. The uterus was barely palpable above the pubic symphysis, which was considered normal. The rest of her physical examination was unremarkable.

A complete blood count was performed, and an abdominal ultrasound showed numerous hypoechoic tubular structures just inferior to the right kidney. A computed tomogram of the abdomen/pelvis with contrast identified numerous nonenhancing dilated tubular structures extending from the right renal vein down to the ovary, measuring 10×6×5 cm. The left side showed a similar but less obvious structure (see Figures 2 and 3). A large amount of air was also found within the endometrial cavity, concerning for endometritis. The patient was admitted, treated for septic pelvic thrombophlebitis, and anticoagulated. She was commenced on low-molecular-weight heparin at a dose of 12,500 units per day, while cefazolin 1 g was given every 8 hours intravenously for five days. The patient had an uneventful stay in hospital and was discharged home after 5 days, with a follow-up computed tomogram in two weeks. She was also given an appointment to see her primary care provider in one week should any problem arise.
Figure 2
Coronal section, right ovarian vein, Case 1.
Figure 3
Coronal section, right ovarian vein, Case 2.
## 3. Discussion
Women are five times more likely to suffer a thromboembolic event when they are pregnant [1]. The overall incidence of thromboembolic events ranges from 0.3% to 1.2% [2]. The most common postpartum thromboembolic events are deep vein thrombosis and pulmonary embolism; ovarian vein thrombosis, by contrast, complicates only 0.05%–0.18% of pregnancies [3–5].

The first case of postpartum ovarian vein thrombosis was described by Austin in 1956 [6]. The pathophysiology of ovarian vein thrombosis is ascribed to Virchow's triad of hypercoagulability, venous stasis, and endothelial trauma. Pregnancy is a period in which women are in a hypercoagulable state due to normal physiological changes. These changes include an increase in clotting factors such as factors VII, VIII, IX, X, and XII, vWF, and fibrinogen, while free levels of protein S are decreased. There is venous stasis of the lower limbs due to compression of the pelvic veins and inferior vena cava by the uterus. Increased levels of estrogen and increased local production of nitric oxide and prostacyclin also contribute to increased deep vein capacitance. Endothelial trauma can occur at the time of delivery or from local inflammation. These pregnancy-induced changes help protect women from hemorrhagic complications during placentation and labour; however, they also place women at an increased risk of venous thromboembolic disease.

The right ovarian vein is implicated in 90% of cases of ovarian vein thrombosis [3]. Several explanations have been proposed for this right-sided predominance: retrograde drainage from the left ovarian vein with anterograde flow into the right ovarian vein in the postpartum setting; dextrorotation of the enlarging uterus, which compresses the right ovarian vein and right ureter as they cross the pelvic brim; and the fact that the right ovarian vein is longer than the left, so that when it dilates its valves become incompetent, making it easier for a thrombus to form [2, 3, 7, 8].

Patients with ovarian vein thrombosis typically present with fever, pelvic pain, and a "ropelike" palpable abdominal mass [5, 9]. Case 2, however, did not present with fever, although she gave a history of being feverish a few days prior to her second admission; we nevertheless decided to treat her with antibiotics despite the lack of any objective evidence of fever. We do not understand the reason for the air in her uterus, but we were very suspicious of endometritis, hence our decision to treat her with antibiotics. Case 1, in contrast, was discharged prematurely, with follow-up to be conducted by her family physician; in hindsight, this patient should have been kept for a few more days. The incidence peaks around postpartum day 2 for full-term deliveries, and ovarian vein thrombosis occurs within 10 days postpartum in 90% of cases [9]. As symptoms are nonspecific, the diagnosis of ovarian vein thrombosis may be delayed. The differential diagnosis for ovarian vein thrombosis includes appendicitis, endometritis, adnexal torsion, pyelonephritis, and septic pelvic thrombophlebitis. Ovarian vein thrombosis is differentiated clinically from septic pelvic thrombophlebitis in that patients with septic thrombophlebitis appear clinically well apart from continuing high spiking fevers, and their physical examination is otherwise normal [3].

The diagnosis of ovarian vein thrombosis is ideally made with pelvic CT scanning, which will show an enlarged vein with a low-density lumen and sharply defined walls [9, 10].
However, ultrasound is commonly used as the first radiographic investigation in postpartum women. The ultrasound scan in Case 2 was not conclusive, but the computed tomogram enabled us to make a definitive diagnosis. Ovarian vein thrombosis on ultrasound appears as an anechoic to hypoechoic mass between the adnexa and the inferior vena cava, with absence of blood flow [3]. The sensitivity for diagnosing ovarian vein thrombosis is 100% for CT scanning and 52% for Doppler ultrasonography [2]. Magnetic resonance imaging is considered ideal for its sensitivity and lack of ionizing radiation.

Treatment for ovarian vein thrombosis includes antibiotics and anticoagulation. Appropriate antibiotics include clindamycin, gentamicin, or a second- or third-generation cephalosporin. Although low-molecular-weight heparins have been shown to be as effective as unfractionated heparin for treating ovarian vein thrombosis, the studies providing this evidence are small and their data unsatisfactory; further investigation is required to determine whether low-molecular-weight heparins are appropriate for the treatment of ovarian vein thrombosis [3, 9]. We were quite concerned about the size of the thrombosed ovarian vein in Case 2; despite the lack of any objective evidence of fever, she was given cefazolin, to which she responded well.

Complications of ovarian vein thrombosis include sepsis, extension of the thrombus to the inferior vena cava and renal veins, and pulmonary embolism. The incidence of pulmonary embolism is reported to be 13.2% [5]. These complications can be managed surgically with thrombectomy or with an inferior vena cava filter. Mortality due to ovarian vein thrombosis is less than 5%, with most deaths due to pulmonary embolism [3]. Some degree of morbidity can be expected in cases that are not promptly and appropriately managed.
## 4. Conclusion
Ovarian vein thrombosis, rare as it is, can present late in postpartum women with serious consequences; hence, a high index of suspicion is required for its diagnosis and management, to avoid the associated mortality and morbidity. There was no mortality in these two patients, and morbidity was reduced by prompt diagnosis and appropriate treatment.
---
*Source: 101367-2009-09-30.xml* | 101367-2009-09-30_101367-2009-09-30.md | 16,031 | Postpartum Ovarian Vein Thrombosis: Two Cases and Review of Literature | Amos A. Akinbiyi; Rita Nguyen; Michael Katz | Case Reports in Medicine
(2009) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2009/101367 | 101367-2009-09-30.xml | ---
## Abstract
Introduction. We presented two cases of late presentation of ovarian vein thrombosis postpartum following vaginal delivery and cesarean section within a short period in our institution. Both of them had pelvic pain following their deliveries which was associated with fever and chills. One of them was quite a big-sized thrombophlebitic vein which was about 10×6×5 centimeters following a computed tomography. They were both treated initially for urinary tract infection, while a large ovarian vein thrombosis was not diagnosed in the second patient until her emergency department admission.
Conclusion. Ovarian vein thrombosis is rare, but could present late, and difficult to diagnose, hence, should be considered as a differential diagnosis in a postpartum woman with fever and tender pelvic mass.
---
## Body
## 1. Introduction
Ovarian vein thrombosis is an uncommon but potentially serious disorder that is associated with a variety of pelvic conditions—most notably, recent childbirth. It could also be associated with pelvic inflammatory disease, malignancies, and pelvic surgery. Ovarian vein thrombosis occurs in 0.05–0.18% of pregnancies and is diagnosed on the right side in 80–90% of the affected postpartum patients [1–3]. Prompt diagnosis and treatment of this condition is needed to avoid the morbidity and mortality that are related both to the thrombosis and to any associated infection/sepsis. One of these cases illustrates the importance of including ovarian vein thrombosis as a differential diagnosis in women who present in the postpartum period with a tender pelvic mass.
## 2. Case Presentation
### 2.1. Case 1
A 26-year-old woman presented at 13 days postpartum to an emergency department with severe, stabbing, right flank pain. The pain had been present since postpartum day 2, with associated fever (temperature 39.5 degrees centigrade) and chills. At that time, diagnosis of urinary tract infection was made by urinalysis and culture which confirmedEscherichiacoli as the infective organism, and the patient was treated with amoxicillin based on the sensitivity results. Unfortunately, the pain did not resolve. There was no associated vagina bleeding, nausea, or vomiting.On examination, the patient was afebrile and had tenderness in the umbilical and right flank area. Ultrasonography was not performed after the delivery because it was not considered appropriate.Her antenatal period was uneventful. She had a spontaneous vaginal delivery of a live born-term female. The immediate postpartum period was unremarkable. There was no other significant past medical or surgical history.Investigations showed white blood cell count of12.6×109/L, hemoglobin of 114 g/L, and reactive thrombocytosis with a platelet count of 587×109/L. The rest of the laboratory investigations were within normal limit. A pelvic computed tomography showed findings consistent with a thrombosed right ovarian vein, measuring 8×5×5 cm (Figure 1).Figure 1
Longitudinal section right and left ovarian veins case 2.A consult to internal medicine was subsequently made, the diagnosis confirmed, and the patient was initiated on therapy of low-molecular-weight heparin (Dalteparin sodium) at a dose of 12,500 units/day by subcutaneous injection and discharged home, to be followed up with internal medicine.
### 2.2. Case 2
A 19-year-old G3 P 2-0-1-3 female presented 3 weeks after a lower segment cesarean section for monochorionic diamniotic twins with a right-sided abdominal mass and abdominal pain and cramping of 1-week duration. The pain was described as gas-like, nonradiating, and admitted to passing flatus and a bowel movement that day. The patient was afebrile, and also denied nausea, vomiting, diarrhoea, and difficulty with voiding but did admit to a fever the previous week. Her fever was not based on any objective measurement. Her babies were reported to be doing well. The patient was otherwise healthy with no allergies and only taking iron. The patient was nonsmoker and denied alcohol or drug use.On physical examination, the patient looked well with normal vital signs. Her abdomen was distended, nontender, and an 8 cm× 10 cm mass was found below the right costal margin with a consistency of an ovarian mass. The mass felt irregular in consistency. Her incision had healed well. On pelvic examination, a 10-centimeter long mass was felt in the right lower abdominal region which was slightly mobile and nontender, which extends from just below the right renal vein down to the right iliac fossa. The uterus was barely palpable above the pubic symphysis which was considered normal.The rest of her physical examination was unremarkable. Complete blood count and an abdominal ultrasound showed numerous hypoechoic tubular structures just inferior to the right kidney. A computed tomogram of the abdomen/pelvis with contrast identified numerous nonenhancing dilated tubular structures extending from the right renal vein down to the ovary measuring10×6×5 cm. The left side also showed a similar but less obvious structure (see Figures 2 and 3). There was also found a large amount of air within the endometrial cavity concerning for endometritis. The patient was admitted and treated as a pelvic septic thrombophlebitis and anticoagulated. She was commenced on low-molecular weight heparin at a dose of 12,500 units per day while Cefazolin 1 gm was given every 8 hours intravenously for five days. The patient had an uneventful stay in hospital and was discharged home after 5 days with a followup-computed tomogram in two weeks. She was also given an appointment to see her primary care provider in one week if any problem arise.Figure 2
Coronal section right ovarian vein case 1.Figure 3
Coronal section right ovarian vein case 2.
## 2.1. Case 1
A 26-year-old woman presented at 13 days postpartum to an emergency department with severe, stabbing, right flank pain. The pain had been present since postpartum day 2, with associated fever (temperature 39.5 degrees centigrade) and chills. At that time, diagnosis of urinary tract infection was made by urinalysis and culture which confirmedEscherichiacoli as the infective organism, and the patient was treated with amoxicillin based on the sensitivity results. Unfortunately, the pain did not resolve. There was no associated vagina bleeding, nausea, or vomiting.On examination, the patient was afebrile and had tenderness in the umbilical and right flank area. Ultrasonography was not performed after the delivery because it was not considered appropriate.Her antenatal period was uneventful. She had a spontaneous vaginal delivery of a live born-term female. The immediate postpartum period was unremarkable. There was no other significant past medical or surgical history.Investigations showed white blood cell count of12.6×109/L, hemoglobin of 114 g/L, and reactive thrombocytosis with a platelet count of 587×109/L. The rest of the laboratory investigations were within normal limit. A pelvic computed tomography showed findings consistent with a thrombosed right ovarian vein, measuring 8×5×5 cm (Figure 1).Figure 1
Longitudinal section right and left ovarian veins case 2.A consult to internal medicine was subsequently made, the diagnosis confirmed, and the patient was initiated on therapy of low-molecular-weight heparin (Dalteparin sodium) at a dose of 12,500 units/day by subcutaneous injection and discharged home, to be followed up with internal medicine.
## 2.2. Case 2
A 19-year-old G3 P 2-0-1-3 female presented 3 weeks after a lower segment cesarean section for monochorionic diamniotic twins with a right-sided abdominal mass and abdominal pain and cramping of 1-week duration. The pain was described as gas-like, nonradiating, and admitted to passing flatus and a bowel movement that day. The patient was afebrile, and also denied nausea, vomiting, diarrhoea, and difficulty with voiding but did admit to a fever the previous week. Her fever was not based on any objective measurement. Her babies were reported to be doing well. The patient was otherwise healthy with no allergies and only taking iron. The patient was nonsmoker and denied alcohol or drug use.On physical examination, the patient looked well with normal vital signs. Her abdomen was distended, nontender, and an 8 cm× 10 cm mass was found below the right costal margin with a consistency of an ovarian mass. The mass felt irregular in consistency. Her incision had healed well. On pelvic examination, a 10-centimeter long mass was felt in the right lower abdominal region which was slightly mobile and nontender, which extends from just below the right renal vein down to the right iliac fossa. The uterus was barely palpable above the pubic symphysis which was considered normal.The rest of her physical examination was unremarkable. Complete blood count and an abdominal ultrasound showed numerous hypoechoic tubular structures just inferior to the right kidney. A computed tomogram of the abdomen/pelvis with contrast identified numerous nonenhancing dilated tubular structures extending from the right renal vein down to the ovary measuring10×6×5 cm. The left side also showed a similar but less obvious structure (see Figures 2 and 3). There was also found a large amount of air within the endometrial cavity concerning for endometritis. The patient was admitted and treated as a pelvic septic thrombophlebitis and anticoagulated. She was commenced on low-molecular weight heparin at a dose of 12,500 units per day while Cefazolin 1 gm was given every 8 hours intravenously for five days. The patient had an uneventful stay in hospital and was discharged home after 5 days with a followup-computed tomogram in two weeks. She was also given an appointment to see her primary care provider in one week if any problem arise.Figure 2
Coronal section right ovarian vein case 1.Figure 3
Coronal section right ovarian vein case 2.
## 3. Discussion
Women are five times more likely to suffer from a thromboembolic event when they are pregnant [1]. The overall incidence of thromboembolic events ranges from 0.3% to 1.2% [2]. The most common postpartum thromboembolic events include deep vein thrombosis and pulmonary emboli. However, ovarian vein thrombosis complicates 0.05%–0.18% of pregnancies [3–5].The first case of postpartum ovarian vein thrombosis was described by Austin in 1956 [6]. The pathophysiology of ovarian vein thrombosis is ascribed to Virchow’s triad of hypercoagulability, venous stasis, and endothelial trauma. Pregnancy is a period where women are at a hypercoagulable state due to normal physiological changes. These changes include an increase in clotting factors such as factors VII, VIII, IX, X, XII, vWF, and fibrinogen. As well, free levels of protein S are decreased. There is venous stasis of the lower limbs due to compression of the pelvic veins and inferior vena cava by the uterus. Increased levels of estrogen and increased local production of nitric oxide and prostacyclin also contribute to increased deep vein capacitance.Endothelial trauma can occur at the time delivery or from local inflammation. These pregnancy-induced changes help protect women from hemorrhagic complications during placentation and labour; however, they also place women at an increased risk of venous thromboembolic diseases.The right ovarian vein is implicated in 90% of cases of ovarian vein thrombosis [3]. Several explanations have been proposed for this skew towards the right ovarian vein ranging from retrograde drainage from the left ovarian vein and anterograde flow into the right ovarian vein in the postpartum setting to dextrorotation of the enlarging uterus, causing compression of the right ovarian vein and right ureter as they cross the pelvic brim and the fact that the right ovarian vein is longer than the left, and when dilated, the valves become incompetent, making it easier for a thrombus to form [2, 3, 7, 8].Patients with ovarian vein thrombosis typically present with fever, pelvic pain, and a “ropelike’’ palpable abdominal mass [5, 9]. Case 2, however, did not present with fever, but she gave a history of being feverish few days prior to her second admission. We, however, decided to treat her with antibiotics despite her lack of any evidence of fever. We do not understand the reason for air in her uterus but we were very suspicious of endometritis hence we considered to treat her with antibiotics. Case 1, however, was discharged prematurely with the followup to be conducted by her family physician. This patient should have been kept for few more days in hindsight. The incidence peaks around postpartum day 2 for full-term deliveries and occurs within 10 days postpartum in 90% of cases [9]. As symptoms are nonspecific, the diagnosis of ovarian vein thrombosis may be delayed. The differential diagnosis for ovarian vein thrombosis includes appendicitis, endometritis, adnexal torsion, pyelonephritis, and septic pelvic thrombophlebitis. Ovarian vein thrombosis is differentiated clinically from septic thrombophlebitis in that patients with septic thrombophlebitis appear clinically well but have continuing high spiking fevers, but the physical examination is also normal [3].The diagnosis of ovarian vein thrombosis is ideally made with pelvic CT scanning, which will show an enlarged vein with a low-density lumen and sharply defined walls [9, 10]. 
However, ultrasound is commonly used as the first radiographic investigation in postpartum women. Ultrasound scan in Case 2 was not conclusive, but the computed tomogram enabled us to make a definitive diagnosis. Ovarian vein thrombosis on ultrasound will appear as an anechoic to hypoechoic mass between the adnexa and the inferior vena cava, with absence of blood flow [3]. The sensitivity of CT scanning for diagnosing ovarian vein thrombosis is 100%, and 52% for Doppler ultrasonography [2]. Magnetic resonance image is considered ideal for its sensitivity and lack of ionizing radiation.Treatment for ovarian vein thrombosis includes antibiotics and anticoagulation. Appropriate antibiotics include clindamycin, or gentamicin, or a second- or third-generation cephalosporin. Although low-molecular-weight heparins have been shown to be as effective as unfractionated heparin for treating ovarian vein thrombosis, the studies providing this evidence are of small design with unsatisfactory data. Further investigation is required to determine if low-molecular-weight heparins are appropriate to use in treatment of ovarian vein thrombosis [3, 9]. We were quite concerned about the size of the thrombosed ovarian vein in Case 2 despite that, without any evidence of fever, she was given Cefazolin to which she responded quite well.Complications of ovarian vein thrombosis include sepsis, thrombus extending to the inferior vena cava and renal veins, and pulmonary embolism. The incidence of pulmonary embolism is reported to be 13.2% [5]. These complications can be managed surgically with thrombectomy or with an inferior vena cava filter. Mortality due to ovarian vein thrombosis is less than 5%, most cases of which are due to pulmonary embolism [3]. Some degree of morbidity could be encountered in cases inappropriately and promptly managed.
## 4. Conclusion
Ovarian vein thrombosis, rare as it is, can present late in postpartum women with serious consequences; a high index of suspicion is therefore required for diagnosis and management to avoid the associated mortality and morbidity. There was no mortality in these two patients, and morbidity was reduced by prompt diagnosis and appropriate treatment.
---
*Source: 101367-2009-09-30.xml* | 2009 |
# Service Composition Recommendation Method Based on Recurrent Neural Network and Naive Bayes
**Authors:** Ming Chen; Junqiang Cheng; Guanghua Ma; Liang Tian; Xiaohong Li; Qingmin Shi
**Journal:** Scientific Programming
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1013682
---
## Abstract
Due to the lack of domain and interface knowledge, it is difficult for users to create suitable service processes according to their needs. Thus, the paper puts forward a new service composition recommendation method. The method is composed of two steps: the first step is service component recommendation based on recurrent neural network (RNN). When a user selects a service component, the RNN algorithm is exploited to recommend other matched services to the user, aiding the completion of a service composition. The second step is service composition recommendation based on Naive Bayes. When the user completes a service composition, considering the diversity of user interests, the Bayesian classifier is used to model their interests, and other service compositions that satisfy the user interests are recommended to the user. Experiments show that the proposed method can accurately recommend relevant service components and service compositions to users.
---
## Body
## 1. Introduction
With the rapid development of Web 2.0, users increasingly participate in the creation of web content. However, it is becoming more and more difficult to meet users’ complex needs with a single service. Thus, users have begun to combine different services to generate their own service compositions [1–3]. A service composition links several services in a certain logical order to form an integrated application. For example, IFTTT (If This Then That) has been used to customize smog-alert SMS notifications. IFTTT has started a new trend, shifting users from creating content to creating service compositions. The traditional service system, however, is too complicated and scales poorly, making it difficult for users to combine services. Therefore, the lightweight Web API has become the future direction of service composition, owing to its easy access, extensibility, and ease of development.

User-oriented lightweight service composition allows users to drag and drop service components on a lightweight composition platform to generate a new service sequence, and it can thus fulfill users’ individual needs. Generally speaking, lightweight service composition platforms support graphical components encapsulated by third parties, such as RSS/Atom feeds, web services, and various programming APIs (Google Maps, Flickr). Users can create service compositions through a visual interface without programming skills. Both industry and academia have shown great interest in this user-oriented lightweight service composition method.

Although service composition tools are recognized by users, users still need strategic guidance when combining lightweight services [4]. This guidance includes initial user guidance and user interest extraction. For initial user guidance, at the beginning of service selection, when a user selects a service component, other service components that associate effectively with the selected service should be added to the recommendation list and recommended to the user, since the user’s interests are still unknown. For user interest extraction, owing to the diversity of user interests, the current interest scenario should be modelled from the user’s selections, and other service compositions similar to those interests should be recommended to the user.

In view of the above, this paper puts forward a service composition recommendation method based on a recurrent neural network and Naive Bayes. The method is divided into two stages: (1) when a user’s initial interests are unknown, the method recommends the n service components most highly correlated with the user’s selection, using the RNN algorithm; (2) when the user completes a service composition, considering the diversity of user interests, a Bayesian classifier is used to model these interests, and other service compositions matching them are recommended to the user. The RNN algorithm recommends related services to users, which is likely to alleviate the problem of service mismatch. The Naive Bayes algorithm provides users with other service compositions that satisfy their interests; it not only accommodates the diversity of user interests but also promotes the reuse of excellent service compositions from the template library. Experiments show that the proposed method is able to accurately recommend service components and service compositions to users.
## 2. Related Works
Previous researchers mainly utilized topic models, for instance Latent Dirichlet Allocation (LDA) [5], to obtain latent topics and improve recommendation accuracy. However, the training of topic models was time-consuming. Subsequently, matrix factorization was widely applied in service recommendation [6]. As matrix factorization was not suited to general prediction tasks, a general predictor named Factorization Machines (FM) [7] was proposed. By exploiting Factorization Machines, Cao et al. [8] proposed a Self-Organizing Map-based functionality clustering algorithm and a Deep Factorization Machine-based quality prediction algorithm to recommend API services. In addition, to address the sparsity of historical interactions, Cao et al. [9] used topic models to extract the relationships between mashups and to model latent topics. Although the above methods generated several satisfactory results, traditional service recommendation approaches usually overlooked the dynamic nature of usage patterns. Therefore, Bai et al. [10] suggested incorporating both textual content and historical usage to build latent vector models for service recommendation. Meanwhile, to address the cold-start problem, Ma et al. [11] proposed learning the interaction information between candidate services and mashups based on content and history; a multilayer perceptron was then used to predict the rank of candidate services from the interaction vectors. Using the user’s historical service access records, Gao et al. [12] utilized a PLSA-based semantic analysis model to capture the user’s interests and to recommend services matching the user’s preference.

In recent years, several researchers have begun to study service recommendation from the perspective of Quality of Service (QoS) [9, 13–19]. Focusing on network resource consumption, Zhou et al. [13] formulated microservice mashup problems as integer nonlinear programs and designed an approximation algorithm for the NP-hard problem. Xia et al. [14] proposed determining each service’s virtual cost according to the service’s attributes and the user’s preference, so that the service composition with the least total virtual cost is recommended to users. In terms of service function, Almarimi et al. [20], by balancing maximal service co-usage, maximal functional diversity, and maximal functional matching, used a nondominated sorting genetic algorithm to extract an optimal set of services for creating a mashup. Shin et al. [21] proposed a service composition method based on functional semantics, and Shi et al. [22] employed a Long Short-Term Memory- (LSTM-) based method with functional and contextual attention mechanisms to recommend services. In terms of semantic relevance, Ge et al. [23] suggested effectively using existing service compositions and semantic associations to expand the scope of service recommendation. Duan et al. [24] adopted an integration of the probabilistic matrix factorization (PMF) model and the probabilistic latent semantic index (PLSI) model to recommend services to users; in that study, the PLSI model was trained on user access records. By mining historical execution trajectories, He et al. [25] discovered potential behavior patterns based on context and user characteristics and established context-related and preference-related user activity selection probability models, potentially supporting the construction and recommendation of optimized personalized mashups.

In summary, when a user clicks on the service “Flickr” and has previously clicked the service “Facebook,” the system should recommend other services linked to the sequence Facebook -> Flickr. After the user finishes a service process, because of the diversity of user interests, it is also necessary to recommend other service processes of potential interest. However, most schemes in previous studies focus on only one of these points, weakening the user experience. In addition, service component recommendation based on association rules ignores the relevance between word orders and thus has relatively low recommendation accuracy. Service composition recommendation based on QoS mainly addresses users’ nonfunctional needs; in the Mobile Internet and 5G era, however, users pay more attention to their functional requirements. Therefore, this paper proposes a service composition recommendation method based on the RNN and Naive Bayes. The RNN is used to capture the relevance between word orders. Naive Bayes is adopted to identify users’ potential interests according to component function and to provide users with excellent service processes from the template library.
## 3. Algorithm Description
### 3.1. Service Component Recommendation Based on Recurrent Neural Network
In this section, service compositions are first sent to the RNN for training. The training is divided into two phases: forward propagation and back propagation. The error losses of the output layers at different times are obtained through forward propagation. Then, using the cross entropy of the error losses, the weight increments $\nabla U, \nabla V, \nabla W$ are calculated through back propagation. Finally, the weights $U, V, W$ are updated by gradient descent.
#### 3.1.1. Preprocessing of Service Process Call Records
To train a suitable RNN, the service process call records must be preprocessed into input data and predefined output data. For each service composition in the training set, the last service component is deleted and the remaining components are inserted into the list x_data as one list element; likewise, the first service component is deleted and the remaining components are inserted into the list y_data as the corresponding list element. For example, if two service compositions are Facebook -> Flickr -> GoogleMaps and Time -> Weather -> Text, where -> represents the link between service components, then x_data = [["Facebook", "Flickr"], ["Time", "Weather"]] and y_data = [["Flickr", "GoogleMaps"], ["Weather", "Text"]]. Here x_data_j is used as the input data for forward propagation, and y_data_j as the predefined output data for back propagation. x_data_j and y_data_j must be converted to one-hot vectors before training: if there are L words in the dictionary and the position of a service in the dictionary is j, the service is represented as an L-dimensional vector whose jth dimension is set to 1, with all remaining dimensions set to 0.
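For concreteness, this preprocessing can be sketched in Python as follows; the toy compositions and helper names such as `one_hot` are illustrative, not taken from the paper:

```python
import numpy as np

# Two toy service compositions, as in the example above.
compositions = [["Facebook", "Flickr", "GoogleMaps"],
                ["Time", "Weather", "Text"]]

# Dictionary of all L distinct service components.
dictionary = sorted({s for comp in compositions for s in comp})
index = {s: i for i, s in enumerate(dictionary)}
L = len(dictionary)

x_data = [comp[:-1] for comp in compositions]  # drop the last component
y_data = [comp[1:] for comp in compositions]   # drop the first component

def one_hot(service):
    """Encode a service as an L-dimensional vector with a 1 at its dictionary position."""
    v = np.zeros(L)
    v[index[service]] = 1.0
    return v

# e.g. x_data[0] == ["Facebook", "Flickr"], y_data[0] == ["Flickr", "GoogleMaps"]
x_vecs = [[one_hot(s) for s in seq] for seq in x_data]
y_vecs = [[one_hot(s) for s in seq] for seq in y_data]
```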
#### 3.1.2. Forward Propagation
The forward propagation process of the RNN is shown in Figure 1. Here $U$ represents the weight between the input layer and the hidden layer, $V$ the weight between the hidden layer and the output layer, and $W$ the weight between adjacent hidden layers.

Figure 1
The forward propagation process of RNN.

At time $t$, $x_t$ is the input value and $s_t$ is the state of the hidden layer, which depends on the input value $x_t$ and the state $s_{t-1}$ of the previous hidden layer: $s_t = f(Ux_t + Ws_{t-1})$, where $f$ is the activation function of the hidden layer; in this paper, $f = \tanh$. $\hat{y}_t$ is the output value at time $t$: $\hat{y}_t = g(Vs_t)$, where $g$ is the activation function of the output layer; in this paper, $g = \operatorname{softmax}$.
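A minimal NumPy sketch of this forward pass is given below; the weight shapes ($U$ of size $H \times L$, $W$ of size $H \times H$, $V$ of size $L \times H$, for hidden size $H$) are assumptions consistent with the equations rather than details stated in the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(x_seq, U, W, V):
    """One forward pass over a sequence of one-hot input vectors.
    Returns hidden states s_t and outputs y_hat_t for every time step."""
    H = W.shape[0]
    s_prev = np.zeros(H)                     # s_0: initial hidden state
    states, outputs = [], []
    for x_t in x_seq:
        s_t = np.tanh(U @ x_t + W @ s_prev)  # s_t = f(U x_t + W s_{t-1}), f = tanh
        y_hat = softmax(V @ s_t)             # y^_t = g(V s_t), g = softmax
        states.append(s_t)
        outputs.append(y_hat)
        s_prev = s_t
    return states, outputs
```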
#### 3.1.3. Back Propagation
The RNN uses back propagation to add up the error losses of the output layers at different times to obtain the total error loss $E$, then calculates the gradient of each weight $U, V, W$ to obtain the weight increments $\nabla U, \nabla V, \nabla W$, and finally employs gradient descent to update each weight.

(1) Error Loss Function. For each time $t$, there is an error loss $e_t$ between the output value $\hat{y}_t$ of the RNN and the predefined output value $y_t$. Using the cross entropy as the error loss function, the total error loss is $E = \sum_{t=1}^{N} e_t$ with $e_t = -\sum_{i=1}^{L} y_{t,i} \ln \hat{y}_{t,i}$, where $N$ is the length of x_data_j or y_data_j, $L$ is the length of the one-hot vector, and $x_t \in$ x_data_j, $y_t \in$ y_data_j.

(2) Gradient Calculation. Since $V$ does not depend on the previous states, $\nabla V$ is relatively easy to obtain. For $\nabla W$ and $\nabla U$, however, the chain rule is needed:

$$\nabla U = \frac{\partial e_t}{\partial U} = \left( \frac{\partial e_t}{\partial \hat{y}_t} \cdot \frac{\partial \hat{y}_t}{\partial s_t} + \frac{\partial e_{t+1}}{\partial \hat{y}_{t+1}} \cdot \frac{\partial \hat{y}_{t+1}}{\partial s_{t+1}} \cdot \frac{\partial s_{t+1}}{\partial s_t} \right) \cdot \frac{\partial s_t}{\partial U}, \quad
\nabla V = \frac{\partial e_t}{\partial V} = \frac{\partial e_t}{\partial \hat{y}_t} \cdot \frac{\partial \hat{y}_t}{\partial V}, \quad
\nabla W = \frac{\partial e_t}{\partial W} = \left( \frac{\partial e_t}{\partial \hat{y}_t} \cdot \frac{\partial \hat{y}_t}{\partial s_t} + \frac{\partial e_{t+1}}{\partial \hat{y}_{t+1}} \cdot \frac{\partial \hat{y}_{t+1}}{\partial s_{t+1}} \cdot \frac{\partial s_{t+1}}{\partial s_t} \right) \cdot \frac{\partial s_t}{\partial W}. \tag{1}$$

Writing the error variation of the hidden layer as $\delta_t^h = \frac{\partial e_t}{\partial \hat{y}_t} \cdot \frac{\partial \hat{y}_t}{\partial s_t} + \frac{\partial e_{t+1}}{\partial \hat{y}_{t+1}} \cdot \frac{\partial \hat{y}_{t+1}}{\partial s_{t+1}} \cdot \frac{\partial s_{t+1}}{\partial s_t}$ and the error variation of the output layer as $\delta_t^o = \frac{\partial e_t}{\partial \hat{y}_t}$, the increments can be expressed as

$$\nabla U = \delta_t^h \cdot \frac{\partial s_t}{\partial U}, \quad \nabla V = \delta_t^o \cdot \frac{\partial \hat{y}_t}{\partial V}, \quad \nabla W = \delta_t^h \cdot \frac{\partial s_t}{\partial W}. \tag{2}$$

In the RNN, back propagation proceeds from the back to the front. At each moment, the accumulated weight increments are updated as

$$\Delta U \leftarrow \Delta U + \delta_t^h \cdot \frac{\partial s_t}{\partial U}, \quad \Delta V \leftarrow \Delta V + \delta_t^o \cdot \frac{\partial \hat{y}_t}{\partial V}, \quad \Delta W \leftarrow \Delta W + \delta_t^h \cdot \frac{\partial s_t}{\partial W}. \tag{3}$$

(3) Weight Update. When the training pass over a service composition is complete, the RNN uses gradient descent to update $U, V, W$ along the negative gradient direction:

$$U \leftarrow U - lr \cdot \Delta U, \quad V \leftarrow V - lr \cdot \Delta V, \quad W \leftarrow W - lr \cdot \Delta W. \tag{4}$$

Here the initial $U, V, W$ are randomly generated, and $lr$ is the step length (learning rate) of gradient descent. After the update of $U, V, W$ is completed, the loop is repeated until the error loss $E$ reaches the threshold. At this point, the weights $U, V, W$ are used to predict the output for given input data.
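Equations (1)–(4) amount to standard back propagation through time with a softmax output and cross-entropy loss. The following sketch, which reuses the illustrative `forward` function above, shows one such update; it is a simplified reading of the procedure, not the authors' implementation:

```python
import numpy as np

def train_step(x_seq, y_seq, U, W, V, lr=0.05):
    """One BPTT update for a single service composition (sequence of one-hot vectors)."""
    states, outputs = forward(x_seq, U, W, V)
    dU = np.zeros_like(U); dW = np.zeros_like(W); dV = np.zeros_like(V)
    s_prevs = [np.zeros(W.shape[0])] + states[:-1]   # s_{t-1} for each t
    delta_h_next = np.zeros(W.shape[0])
    for t in reversed(range(len(x_seq))):
        # For softmax + cross-entropy, the gradient at the output logits is y^_t - y_t.
        delta_o = outputs[t] - y_seq[t]
        dV += np.outer(delta_o, states[t])
        # Error reaching the hidden layer: from the output at t and from step t+1,
        # through the tanh derivative (1 - s_t^2); this is delta_t^h in equation (2).
        delta_h = (V.T @ delta_o + W.T @ delta_h_next) * (1 - states[t] ** 2)
        dU += np.outer(delta_h, x_seq[t])
        dW += np.outer(delta_h, s_prevs[t])
        delta_h_next = delta_h
    # Gradient descent along the negative gradient direction, as in equation (4).
    U -= lr * dU; W -= lr * dW; V -= lr * dV
    return U, W, V
```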
#### 3.1.4. Service Components Recommendation
When the user selects a service component, it is sent to the RNN, which uses the weights $U, V, W$ obtained in Section 3.1.3 to compute the predicted next services; the top n predictions are then placed in the recommendation list and posted to the user, as sketched below.
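Building on the previous sketches (`one_hot`, `dictionary`, and `forward` are the illustrative helpers defined there), the recommendation step could look as follows:

```python
import numpy as np

def recommend(selected_services, U, W, V, top_n=5):
    """Feed the user's selected components through the trained RNN and return
    the top-n most probable next services (names are illustrative)."""
    x_seq = [one_hot(s) for s in selected_services]
    _, outputs = forward(x_seq, U, W, V)
    probs = outputs[-1]                       # distribution over the next component
    best = np.argsort(probs)[::-1][:top_n]
    return [dictionary[i] for i in best]

# e.g. recommend(["Facebook"], U, W, V) might return ["Flickr", ...]
```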
### 3.2. Service Composition Recommendation Based on Naive Bayes
Service components selected by the user are first reduced through information gain; the Naive Bayes classifier is then exploited to extract user interests from the reduced service component set. Finally, similar service compositions are recommended to the user according to those interests. The Bayesian classifier can quickly and efficiently identify the user’s interest from the few service components the user has clicked, and compositions of similar interest in the user template library can then be matched directly against those common components.
#### 3.2.1. Information Gain
After the user finishes a service composition, user interests must be determined from it. To decrease the interference of noncritical service components, the information gain algorithm is used to reduce the service component set. The gain value $IG(SC_j)$ of each service component in the composition is calculated, the components are sorted by this value, and the first n components are taken as the reduced service component set. The process is as follows:

(1) The entropy of each service component $SC_j$ in the service composition $SC$ is calculated, which is $H(SC_j|SC)$.

(2) The entropy without this service component $SC_j$ in the service composition $SC$ is calculated, which is $H(SC_j|\overline{SC})$.

(3) The difference between the two entropies is the classification gain value $IG(SC_j)$ of this service component (see the sketch after this list):

$$IG(SC_j) = H(SC_j|SC) - H(SC_j|\overline{SC}) = -\sum_{i=1}^{n} P(c_i)\log_2 P(c_i) + \sum_{i=1}^{n} P(c_i|SC_j)\log_2 P(c_i|SC_j) + \sum_{i=1}^{n} P(c_i|\overline{SC_j})\log_2 P(c_i|\overline{SC_j}), \tag{5}$$

where $P(c_i|SC_j) = n(SC_j|c_i)/n(c_i)$ and $P(c_i|\overline{SC_j}) = n(\overline{SC_j}|c_i)/n(c_i)$. Here $P(c_i|SC_j)$ represents the probability of the service component $SC_j$ belonging to interest $c_i$, and $P(c_i|\overline{SC_j})$ the probability of it not belonging to interest $c_i$. $n(SC_j|c_i)$ is the number of service compositions in interest $c_i$ that include $SC_j$, $n(\overline{SC_j}|c_i)$ the number that exclude it, and $n(c_i)$ the number of service compositions in interest $c_i$. $P(c_i)$ is the proportion of service compositions belonging to interest $c_i$ among all service compositions.

(4) The service components are sorted according to the classification gain value, and the first n components form the reduced service component set.
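The following sketch computes formula (5) directly from composition counts; the input format, a mapping from each interest category to its list of compositions, is an assumption made for illustration:

```python
from math import log2

def entropy(probs):
    """Shannon entropy, skipping zero probabilities."""
    return -sum(p * log2(p) for p in probs if p > 0)

def information_gain(sc_j, compositions_by_interest):
    """IG for one service component per formula (5): how much knowing whether
    SC_j occurs reduces uncertainty about the interest category."""
    interests = list(compositions_by_interest)
    total = sum(len(v) for v in compositions_by_interest.values())
    p_c = [len(compositions_by_interest[c]) / total for c in interests]
    p_with, p_without = [], []
    for c in interests:
        comps = compositions_by_interest[c]
        n_with = sum(1 for comp in comps if sc_j in comp)  # n(SC_j | c_i)
        p_with.append(n_with / len(comps))                 # P(c_i | SC_j)
        p_without.append((len(comps) - n_with) / len(comps))
    return entropy(p_c) - entropy(p_with) - entropy(p_without)

# Toy usage (interest labels and compositions are invented):
data = {"social": [["Facebook", "Flickr"], ["Facebook", "Twitter"]],
        "utility": [["Time", "Weather", "Text"]]}
print(information_gain("Facebook", data))
```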
#### 3.2.2. User Interest Modeling
According to the reduced service component set, the Naive Bayes classifier is exploited to determine the user interests. The process is specified as follows:

(1) As discussed in Section 3.2.1, the probability $P(c_i|SC)$ of the reduced service component set belonging to each interest category is calculated by the Naive Bayes classifier. According to Bayes’ formula, and assuming that $SC_1, SC_2, \ldots, SC_n$ are independent,

$$P(c_i|SC) = P(c_i|SC_1, SC_2, \ldots, SC_n) \propto P(SC_1, SC_2, \ldots, SC_n|c_i)\,P(c_i), \qquad P(SC_1, SC_2, \ldots, SC_n|c_i) = \prod_{j=1}^{n} P(SC_j|c_i), \tag{6}$$

where $SC$ denotes the sequence of reduced service components $(SC_1, SC_2, \ldots, SC_n)$.

(2) From formula (6), $P(c_i|SC) \propto P(c_i)\prod_{j=1}^{n} P(SC_j|c_i)$. This paper selects the interest category with the highest probability as the user interest; therefore,

$$\arg\max_i P(c_i|SC) = \arg\max_i P(c_i)\prod_{j=1}^{n} P(SC_j|c_i). \tag{7}$$
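A minimal sketch of this classifier is given below. Computing in log space and applying Laplace smoothing are standard practical safeguards added here; the paper does not specify them:

```python
from collections import Counter
from math import log

def fit_naive_bayes(training_data):
    """training_data: list of (composition, interest) pairs (an assumed format).
    Returns log P(c_i) and a Laplace-smoothed log P(SC_j | c_i) function."""
    class_counts = Counter(interest for _, interest in training_data)
    comp_counts = {c: Counter() for c in class_counts}
    vocab = set()
    for comp, interest in training_data:
        comp_counts[interest].update(comp)
        vocab.update(comp)
    total = sum(class_counts.values())
    vocab_size = len(vocab)
    log_prior = {c: log(n / total) for c, n in class_counts.items()}
    def log_likelihood(sc, c):
        # Laplace smoothing keeps unseen components from zeroing the product.
        return log((comp_counts[c][sc] + 1) / (sum(comp_counts[c].values()) + vocab_size))
    return log_prior, log_likelihood

def classify(reduced_components, log_prior, log_likelihood):
    """argmax_c P(c_i) * prod_j P(SC_j | c_i), computed in log space (formula (7))."""
    return max(log_prior, key=lambda c: log_prior[c] +
               sum(log_likelihood(sc, c) for sc in reduced_components))

# Toy usage:
prior, lik = fit_naive_bayes([(["Facebook", "Flickr"], "social"),
                              (["Time", "Weather"], "utility")])
print(classify(["Flickr"], prior, lik))   # -> "social"
```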
#### 3.2.3. Service Compositions Recommendation
After the user interests have been determined as in Section 3.2.2, the N-gram distance is used to compute the distance between service compositions, and compositions are recommended to the user in order of similarity, from high to low. The process is specified as follows (a sketch is given after this list):

(1) In the service composition data set, service compositions consistent with the user interests are selected.

(2) The N-gram distance between each selected service composition and the reduced service component set is computed, and the compositions most similar to the reduced set are recommended:

$$\operatorname{distance}(SC_p, SC_q) = GN(SC_p) + GN(SC_q) - 2 \times GN(SC_p \cap SC_q). \tag{8}$$

(3) Here $GN(SC_p)$ denotes the number of service components in service composition $SC_p$, $GN(SC_q)$ the number in $SC_q$, and $GN(SC_p \cap SC_q)$ the number of service components shared by the two compositions. The similarity between two service compositions increases as their distance decreases.
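Formula (8) and the induced ranking can be sketched as follows, treating each composition as a set of components, which is one reading of the paper's GN notation:

```python
def ngram_distance(sc_p, sc_q):
    """Distance per formula (8): GN(SC_p) + GN(SC_q) - 2 * GN(SC_p ∩ SC_q)."""
    shared = len(set(sc_p) & set(sc_q))
    return len(sc_p) + len(sc_q) - 2 * shared

# Smaller distance means more similar; recommend in ascending order of distance.
candidates = [["Facebook", "Flickr", "GoogleMaps"], ["Time", "Weather", "Text"]]
reduced = ["Flickr", "GoogleMaps"]
ranked = sorted(candidates, key=lambda c: ngram_distance(reduced, c))
```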
## 4. Experiments
The experiments in this paper verify the effectiveness of the RNN and Naive Bayes. Section 4.1 describes the data set used in Sections 3.1 and 3.2. Section 4.2 reports the linked prediction performance of the RNN, including the number of iterations, the precision, and a training time comparison with traditional algorithms (Apriori and N-gram). Section 4.3 reports the classification performance of Naive Bayes. Section 4.4 explores the recommendation performance of the N-gram distance.
### 4.1. Dataset
This paper uses service process call records and a service composition data set from the ProgrammableWeb website. The service process call records cover 20,035 users. To improve the precision of the experiments, records of inactive users are eliminated: users who have called service processes fewer than 3 times are regarded as inactive. This leaves 11,730 service process call records for the experiments. The service composition data set includes 13,082 service processes, labeled with 24 classification categories. The inactive-user filter is sketched below.
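A minimal sketch of the filter, assuming a `(user_id, service_process)` record format chosen here for illustration:

```python
from collections import Counter

def filter_inactive(records, min_calls=3):
    """Keep only records of users with at least min_calls service process calls."""
    calls_per_user = Counter(user for user, _ in records)
    return [(u, p) for u, p in records if calls_per_user[u] >= min_calls]
```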
### 4.2. The Linked Prediction Performance of RNN
#### 4.2.1. The Number of Iterations
The mean loss is given as

$$\text{mean loss} = \frac{\sum E}{\text{number of iterations}}, \tag{9}$$

where $E$ represents the loss of each iteration round. This paper adopts the free-running mode for training; the training results are shown in Figure 2. As the number of iterations increases, the mean loss of each epoch gradually decreases; when the number of iterations reaches 2000, the RNN algorithm converges.

Figure 2
The number of RNN’s iterations.
#### 4.2.2. Algorithm Comparison
This section compares the RNN algorithm with the traditional Apriori algorithm and the N-gram algorithm. The Apriori algorithm is a common association rule algorithm in data mining, mainly used in recommendation systems. The N-gram algorithm is also used in recommendation systems and can effectively reduce the recommendation space by learning context. The comparison results demonstrate the feasibility of the RNN algorithm.

(1) Comparison of the Recommendation Precision between RNN(1), Apriori(1), and N-gram(1). RNN(1), Apriori(1), and N-gram(1) denote the respective algorithms recommending after the user has called one service component. The recommendation precision (the ordinate of Figure 3) is defined as

$$\text{precision} = \frac{L(\operatorname{Rec}(sc_1 \ldots sc_i) \cap sc_{i+1})}{L(\operatorname{Rec}(sc_1 \ldots sc_i))}, \tag{10}$$

where $\operatorname{Rec}(sc_1 \ldots sc_i)$ is the list of service components recommended for the called sequence $sc_1 \ldots sc_i$, $L(\cdot)$ denotes its length, and $sc_{i+1}$ is the component actually linked next, so the numerator equals 0 or 1 (a short evaluation sketch is given after Figure 5). The abscissa Top-P is the number of service components to be recommended; in practice, due to the predefined threshold $T = 0.42$, $L(\operatorname{Rec}(sc_1 \ldots sc_i)) \le \text{Top-}P$. As can be seen, the precision of RNN(1) is superior to those of Apriori(1) and N-gram(1). When Top-P is 5, RNN(1) performs best: its precision is 0.41, against 0.17 for Apriori(1) and 0.24 for N-gram(1). This is because the RNN and the N-gram learn the linked relationships between service components through training, whereas Apriori learns only the correlations between service components and cannot capture their linked order. Meanwhile, because the N-gram is limited by the Markov assumption, the RNN shows better context learning than the N-gram.

Figure 3
Comparison of the recommendation precision between RNN(1), Apriori(1), and N-gram(1).

(2) Comparison of the Recommendation Precision between RNN(2), Apriori(2), and N-gram(2). As shown in Figure 4, RNN(2), Apriori(2), and N-gram(2) denote the respective algorithms recommending after the user has called two components. When the user's initial selection contains more than one service component, the precision of RNN(2) is still superior to those of Apriori(2) and N-gram(2). When Top-P is 5, the recommendation precision is 0.85 for RNN(2), 0.65 for Apriori(2), and 0.79 for N-gram(2), all higher than the corresponding precisions of RNN(1), Apriori(1), and N-gram(1). This is mainly because a longer initial component sequence leaves fewer plausible subsequent service components, so the recommendation precision rises.

Figure 4
Comparison of the recommendation precision between RNN(2), Apriori(2), and N-gram(2).

(3) Comparison of Training Time. As shown in Figure 5, when the training data is small, the training times of Apriori(2) and N-gram(2) are shorter than that of RNN(2). But as the training data grows, the amount of data processed by Apriori(2) and N-gram(2) increases exponentially, and RNN(2) trains faster than both. When the data density is 80%, RNN(2) takes 720 minutes to train, while Apriori(2) takes 1065 minutes and N-gram(2) takes 1123 minutes.

Figure 5
Comparison of the training time between RNN(2), Apriori(2), and N-gram(2).
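As promised above, a hypothetical harness for the evaluation in formula (10) might average the per-query precision over held-out sequences; the `recommend_fn` signature is an assumption:

```python
def precision_at_p(recommend_fn, test_sequences, top_p=5):
    """Average of formula (10) over held-out sequences: check whether the actually
    linked next component appears in the recommendation list."""
    hits, total = 0.0, 0
    for seq in test_sequences:
        called, actual_next = seq[:-1], seq[-1]
        rec_list = recommend_fn(called, top_p)
        if rec_list:                      # L(Rec) may be < top_p due to threshold T
            hits += (1 if actual_next in rec_list else 0) / len(rec_list)
            total += 1
    return hits / total if total else 0.0
```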
### 4.3. The Classification Performance of Naive Bayes
As shown in Figure 6, the classification precision refers to the precision of classification prediction on the service composition data set: the predicted label is compared with the real label to obtain the classification precision of the algorithm. As can be seen, the precision rises as the training data increases. When the training data density is 80% and the test data density is 20%, the classification precision of Naive Bayes reaches 89.1%.

Figure 6
The classification performance of Naive Bayes.
### 4.4. The Recommendation Performance of N-Gram Distance
Figure 7 analyzes the recommendation performance of the N-gram distance. As the length of the recommendation list increases, the recommendation precision first increases and then decreases; performance is optimal at a list length of 13, where the recommendation precision is 21.3%.

Figure 7
The recommendation performance of N-gram distance.
## 5. Conclusions
To better assist users in their decision-making, this paper proposes a service composition recommendation method based on the RNN and Naive Bayes. The method makes the following contributions:

(1) Unlike traditional algorithms, it uses context learning to reduce the recommendation space and provides users with more accurately linked service components.

(2) To accommodate the diversity of user interests, it adopts interest modeling to recommend other service processes that match users' current interests, which effectively promotes the reuse of the template library.

It is worth noting, however, that the interest modeling with Naive Bayes does not take semantic similarity into consideration. Future research will therefore consider using semantic analysis to model user interests.
---
*Source: 1013682-2021-10-29.xml*
# Maxillofacial Fractures in the Province of Pescara, Italy: A Retrospective Study
**Authors:** Giuliano Ascani; Francesca Di Cosimo; Michele Costa; Paolo Mancini; Claudio Caporale
**Journal:** ISRN Otolaryngology
(2014)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2014/101370
---
## Abstract
The aim of the present study was to assess the etiology and pattern of maxillofacial fractures in the Province of Pescara, Abruzzo, Central Italy. A retrospective review was performed of patients treated at the Department of Maxillofacial Surgery of Spirito Santo Hospital from January 2010 to December 2012. Data collected and analyzed included sex, age, cause of injury, site of fracture, monthly distribution, and alcohol misuse. A total of 306 patients sustaining 401 maxillofacial fractures were treated. There were 173 males (56.5%) and 133 females (43.5%). Most of the patients (36.9%) were in the age group of 18–44 years. The most common causes of injuries were road traffic accidents (26.4%); the second leading cause was interpersonal violence (23.2%), followed by injuries associated with falls (19.2%). Fractures of the mandible (31%) and zygoma (23%) were the most common maxillofacial fractures in our study. The monthly distribution peaked in the summer (July and August, 30.4%) and in October (13.1%). In conclusion, this study confirms the close correlation between the incidence and etiology of facial fractures and the geographical, cultural, and socioeconomic features of a population. The data obtained provide important information for the design of future plans for injury prevention and for the education of citizens.
---
## Body
## 1. Introduction
There is a remarkable regional variation in the incidence, sex and age distributions, aetiology, and site distribution of maxillofacial fractures depending upon geographic conditions, cultural characteristics, and socioeconomic trends [1–6].The Province of Pescara is a province in the Abruzzo region of Italy; it has an area of 1,187 km² and a total population of 314,391 inhabitants.Since the Department of Maxillofacial Surgery in the Hospital of Pescara opened in January 2010, acting as a facial trauma centre, no study had been carried out to establish the epidemiology of maxillofacial fractures in our province.We therefore assessed the etiology and pattern of maxillofacial fractures in patients treated at our centre, with the aim of giving valuable information to both health care providers and government officials that can be used for the development of public health programs for education and prevention.
## 2. Materials and Methods
We carried out a retrospective analysis of all patients with maxillofacial fractures surgically treated at the Department of Maxillofacial Surgery of Spirito Santo Hospital, Pescara, Italy, from January 2010 to December 2012.Data collected and analyzed included sex, age, cause of injury, site of fracture, monthly distribution, and alcohol misuse.The aetiological factors were classified into road traffic accidents (RTA), interpersonal violence, falls, work-related accidents, and others (iatrogenic, gunshot, pathological, etc.).In patients with multiple facial bone fractures, each affected bone was evaluated as a separate case.Patients with dentoalveolar fractures and patients with nasal bone fractures were excluded from the study because in our hospital they are treated by dentists and otorhinolaryngologists, respectively.
## 3. Results
A total of 306 patients sustaining 401 maxillofacial fractures were treated at our centre between January 2010 and December 2012.There were 173 males (56.5%) and 133 females (43.5%), giving a male to female ratio of 1.3 : 1. Distribution of patients according to gender and causes of injuries is shown in Figure 1.Figure 1
Distribution of patients according to gender and causes of injuries.Most of the patients (125; 36.9%) were in the age group of 18–44 years, while the smallest number of patients (32; 10.4%) was over the age of 80. Distribution of patients according to age group and causes of fractures is shown in Figure 2.Figure 2
Distribution of patients according to age group and causes of fractures.The most common causes of injuries were road traffic accidents (81 patients; 26.4%); of these 81 patients, 35 (43.2%) were involved in bicycle accidents, 24 (29.6%) in motorcycle accidents, and 22 (27.2%) in car accidents.The second leading cause was interpersonal violence (71 patients; 23.2%), followed by injuries associated with falls, accounting for 59 (19.2%) of all injuries; of these 59 patients, 56 (94.9%) injuries were associated with falls on the ground and 3 (5.1%) with falls from height.Sport-related injuries were documented in 48 (15.7%) patients: 23 playing soccer, 6 rugby, 6 capoeira, 12 bicycle racing, 1 volleyball, and 1 basketball.Work-related injuries caused maxillofacial fractures in 24 (7.8%) patients.In 23 (7.5%) patients other causes were recorded: 11 pathological fractures, 8 iatrogenic fractures, 2 collisions with a heavy object, 1 horse kick, and 1 gunshot.Of the total 306 patients, 91 (29.7%) presented two or more sites of fracture, with an average of 1.3 fractures per patient; panfacial fractures were treated in 11 (3.6%) patients.Fractures of the mandible (123; 31%) and zygoma (92; 23%) were the most common; the site distribution of fractures is shown in Figure 3.Figure 3
Site distribution of fractures.Among the 306 patients, 121 (39.5%) were under the influence of alcohol at the time of injury.The monthly distribution peaked in the summer (July and August, 30.4%) and in October (13.1%); the yearly and monthly distribution of maxillofacial fractures is shown in Figure 4.Figure 4
Yearly and monthly distribution of maxillofacial fractures.
## 4. Discussion
The epidemiological features of maxillofacial fractures are strongly influenced by environmental, cultural, and socioeconomic factors, with great variations among populations of different countries and even within the same country [1–7].In line with previous studies, our findings show that maxillofacial fractures are more common in males, but the male to female ratio in our study (1.3 : 1) was lower than those reported in the international literature [3–8].In this study, the peak incidence was in the age group of 18–44 years, in agreement with the results of many other authors [3, 4, 9] and reflecting the fact that people in this age range are more active with regard to work, sports, violent activities, and high-speed transportation.As elsewhere in the world [3, 9–12], in our province the primary causes of maxillofacial fractures are road traffic accidents, interpersonal violence, and falls. According to this study, road traffic accidents remain the leading cause of injuries in both males and females, although the numbers of females injured by road traffic accidents and by interpersonal violence are practically equal (35 versus 34) (Figure 1). In contrast to other previous reports [2, 9, 11, 12], violence-related fractures proved to be higher in females (25.6%) than in males (21.4%). In this study, interpersonal violence was the main cause of maxillofacial fractures in patients between 18 and 44 years, while falls were the most common cause in children (<18 years) and the elderly (>65 years) (Figure 2).Among road traffic accidents, two-wheelers were responsible for the majority of maxillofacial fractures (59 patients; 72.8% of RTAs); these results are similar to previous studies reported in the literature [2, 3, 10, 11] and can be explained by the fact that bicycles and motorcycles are very popular means of transport in our province because of its geographical and climatic features.Among sports-related fractures, we recorded 6 patients injured while practicing capoeira. All of these patients were treated within a period of 5 months, between September 2011 and January 2012; during that period capoeira spread widely in Italy and many people took up the sport. These data confirm the variability over time of the etiology of maxillofacial trauma and its close correlation with sociocultural trends.In the present series, the most commonly involved bones were the mandible followed by the zygoma. These findings are consistent with studies in several other countries [2–4, 9, 10] but contrast with one recent study from Italy [13] that indicates the zygoma as the most fractured anatomical site, followed by isolated fractures of the orbital floor.Our study shows a close association between maxillofacial injuries and alcohol consumption (39.5% of patients); these findings are similar to previous studies from several countries [3, 10, 14, 15]. Alcohol significantly impairs judgment and coordination, facilitates aggression and interpersonal violence, and is a major cause of road traffic accidents.The monthly distribution of maxillofacial fractures in our province showed two peaks.The first was in July-August and was related to the tourist season, during which the population of our territory increases sharply. This was similar to other studies [4, 13]. The second peak was in October and can be explained by the olive-harvesting season.
In our province, there is a long tradition of olive oil production, which is often a family activity. For this reason many people, even the elderly and those with little experience, take part in the olive harvest, resulting in a significant number of related maxillofacial traumas.
## 5. Conclusions
This study confirms the close correlation between the incidence and etiology of facial fractures and the geographical, cultural, and socioeconomic features of a population.The data obtained provide important information for the design of future plans for injury prevention and for education of citizens.
---
*Source: 101370-2014-01-23.xml* | 2014 |
# RBF-Based 3D Visual Detection Method for Chinese Martial Art Wrong Movements
**Authors:** Xi Wang; Yi-Hsiang Pan; Zongbai Li; Bing Li
**Journal:** Wireless Communications and Mobile Computing
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013714
---
## Abstract
The accuracy of action detection is limited by the quality of the extracted action features, and existing methods suffer from high processing complexity and low efficiency. Therefore, a three-dimensional visual detection method for wrong martial art actions based on RBF is proposed. After noise reduction and weighting of the martial art action video images, a 3D visual transformation model of martial art actions is established. Based on this 3D visual model, C3D features are used to represent martial art actions. The video is segmented using sparse coding to determine the detection range. An RBF neural network model is established, and the combination of the above 3D visual model and the network parameters obtained from sample training is used to detect wrong martial art actions. Experimental results show that, under different conditions, the proposed method improves detection precision by an average of at least 5% and achieves high detection efficiency and stability.
---
## Body
## 1. Introduction
Martial art is an ancient discipline within China's traditional sports, with attack actions as its main content and routines and combat as its main movement forms, emphasizing the unity of internal and external training in traditional ethnic sports [1]. As an excellent traditional culture of China, Chinese martial art has formed its own unique means of expression and development over thousands of years of evolution. At present, the teaching of Chinese martial art at home and abroad mostly relies on traditional face-to-face instruction or on practitioners simply following videos [2]. On the one hand, inheritors of Chinese martial art are scarce, and it is difficult for people to access authentic face-to-face teaching; on the other hand, learning skillful movements only by following videos suffers from poor intuitiveness and low efficiency, and practitioners are very likely to perform wrong movements and cause muscle damage. Therefore, an effective human movement posture detection method can help correct wrong movements during athletes' regular training. Action detection is widely used in industrial production, daily safety behavior monitoring, social operation management, and other areas, and scholars at home and abroad have produced a number of research results. Ohl and Rolfs showed that the human visual system adapts to causal relationships between action directions, using causality as a key low-level feature of visual events to detect motion direction [3]. Reference [4] combines perceptual learning and statistical learning to improve information acquired through experience and achieves action detection through the statistical cooccurrence of environmental features. Reference [5] uses the frame difference method to subtract the background and achieve effective detection of slight motion. Reference [6] uses the YOLOv4 deep-learning motion target detection algorithm to localize and recognize moving targets. Real-time detection of pictures, videos, and camera streams is achieved by identifying and tagging the location and type of objects contained in the image, which improves both the accuracy and the speed of detection. NagiReddy et al. proposed a novel background modeling mechanism using a bias-illumination-field fuzzy C-means algorithm to separate nonstationary pixels from stationary ones by background subtraction. Feature extraction under noise and illumination changes is accomplished with this fuzzy C-means method, and detection accuracy is improved through clustering [7]. The above methods can achieve high-quality motion detection in different environments, but the strong coherence of martial art actions and the limited accuracy of extracted wrong-action features degrade their overall detection performance when wrong actions occur.The neural network can simulate systems that cannot be described by mathematical models and has strong learning and adaptation abilities, together with pronounced nonlinear characteristics. The radial basis function (RBF) neural network is a feedforward network; its structure is similar to that of a multilayer feedforward network, and it has a three-layer architecture [8].
The transformation function of the hidden-layer neurons is a radial basis function: a nonlinear function that is radially symmetric about a center and decays with distance from it [9]. To better detect wrong Wushu movements, future research should also examine martial art movements with rapid changes at the transitions between movements, so as to further improve detection accuracy. On the basis of the above analysis, this paper studies an RBF-based three-dimensional visual detection method for wrong martial art actions, combining the advantages of the RBF neural network with a three-dimensional visual model to detect wrong martial art actions, and tests the comprehensive performance of the method.
## 2. Research on RBF-Based 3D Visual Detection Method for Martial Art Error Movements
### 2.1. Martial Art Action Video Image Processing
#### 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The continuous-signal image taken in photogrammetry is an analog image, and its two-dimensional function is represented by $p(x,y)$; any $(x,y)$ in the image can serve as a two-dimensional coordinate point. The original image as captured contains noise interference, and the filtering process inevitably loses some picture detail, so during noise reduction we must at the same time do our best to preserve the quality of the original picture. In this study, the commonly used mean filter and median filter are applied to Chinese martial art action video images to make them smooth [10].In the actual mean filtering, a filtering template is set around a point pixel $(x,y)$, composed of the surrounding pixels excluding this pixel. The average computed over the pixels in this template replaces the value of this pixel, giving the gray value $h(x,y)$ corresponding to this point in the digital image. Replacing the pixel value at each position of the original image by the mean computed over the filter template is the basic principle of mean filtering. The formula is as follows [11]:
(1) $h(x,y) = \frac{1}{n}\sum_{i=1}^{n} p_i(x_i, y_i)$. In the above formula, $n$ is the total number of pixels in the filtering template around the target pixel.Median filtering, a nonlinear smoothing technique, was adopted for noise suppression. It relies on order statistics: a template is built around each pixel, the pixel values selected in the template are sorted, and the value in the middle of the sorted sequence replaces the gray value of the target pixel, thus reducing the gray values of noisy image pixels. In practical operation, the neighborhood around the target pixel is sorted by gray value, and the median of the two-dimensional sequence is taken as the filter output, expressed as follows [12]:
(2) $h'(x,y) = \operatorname{Med}\{h(x-q, y-w)\},\ (q,w)\in M$. In the above formula, $M$ is a two-dimensional template; the function $\operatorname{Med}$ returns the median of the entire two-dimensional sequence; $h'(x,y)$ represents the gray value of the image after this processing, and $h(x,y)$ represents the gray value of the image after the preceding mean-filtering step.Image enhancement can be described by both spatial-domain and frequency-domain methods. In the spatial-domain method, the gray value of each image point is processed directly to ensure the enhancement effect. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel form a template, so that the original gray value can be represented by the average computed over the filtering template. The spatial-domain processing is as follows [13]:
(3) $H(x,y) = H_T\left[h'(x,y)\right]$. In the above formula, $H_T$ is a spatial operation with respect to $h'$; $H(x,y)$ is the Chinese martial art action video image after enhancement; $h'(x,y)$ is the Chinese martial art action video image after smoothing and noise reduction.
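As a concrete illustration of equations (1) and (2) (a minimal sketch, not the authors' implementation), the following Python code applies mean and median filtering with a 3×3 template to a grayscale frame; the array `frame` is a hypothetical stand-in for a martial art video image.

```python
# Minimal sketch of mean and median filtering (equations (1) and (2))
# with a 3x3 template; `frame` is a hypothetical grayscale image.
import numpy as np

def mean_filter(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            # Equation (1): replace the pixel by the template mean.
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def median_filter(img, k=3):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            # Equation (2): replace the pixel by the template median.
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

frame = np.random.randint(0, 256, size=(48, 64), dtype=np.uint8)
smoothed = median_filter(mean_filter(frame))
```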
#### 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and routine human movement is that Chinese martial art action exhibits obvious changes in three-dimensional space, so the detection of wrong Chinese martial art actions must also be judged from a three-dimensional perspective. Therefore, combined with the collection of Chinese martial art movements, this paper establishes a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement are acquired with a line-structured-light 3D visual collector. Based on the principle of optical triangulation, the optical projector projects structured light onto the surface of the human body, and the camera captures the light strip, yielding a two-dimensional distorted image of the strip. The degree to which the strip is distorted depends on the position of the light plane relative to the camera and on the surface shape of the object. Since the brightness of the strip differs markedly from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained using a suitable image processing method. The mathematical model of the line-structured-light sensor establishes the mapping between the image plane coordinate system and the world coordinate system; according to this model, the coordinates of points on the strip can be calculated from the pixel coordinates of the image. The camera model, the basic imaging model often referred to as the pinhole model, is given by a central projection transformation from three-dimensional space to the plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the line-structured-light image collector is shown in Figure 1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object is $R = (x_s, y_s, z_s)$ in the spatial coordinate system, and the equation $\alpha x_s + \beta y_s + \gamma z_s + \theta = 0$ of the spatial plane on which the imaging object lies in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4) $\frac{x_s}{x_s - X_s} = \frac{y_s}{y_s - Y_s} = \frac{z_s}{z_s - Z_s}$. In the above formula, $(X_s, Y_s, Z_s)$ is the coordinate position of the measured object on the collector imaging plane. By substituting the equation of the spatial plane of the imaging object in the collector coordinate system into formula (4), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model $H'(x,y,z) = \mathrm{new}(X_s, Y_s, Z_s)$ of martial art action can be established.
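To make the triangulation step concrete, here is a minimal sketch under assumed conventions (camera center at the origin, ray through the image point; not the authors' code) that intersects the viewing ray with the known light plane $\alpha x + \beta y + \gamma z + \theta = 0$ to recover the 3D surface point, in the spirit of formula (4).

```python
# Minimal sketch of line-structured-light triangulation (cf. formula (4)):
# intersect the ray from the camera center through the image point with
# the calibrated light plane alpha*x + beta*y + gamma*z + theta = 0.
import numpy as np

def triangulate(image_point, plane):
    """image_point: (Xs, Ys, Zs) on the imaging plane, camera at origin.
    plane: (alpha, beta, gamma, theta). Returns the 3D surface point."""
    d = np.asarray(image_point, dtype=float)   # ray direction from origin
    alpha, beta, gamma, theta = plane
    normal = np.array([alpha, beta, gamma])
    denom = normal @ d
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = -theta / denom                          # ray parameter at the plane
    return t * d                                # (xs, ys, zs) on the surface

# Hypothetical calibration values, for illustration only.
point = triangulate(image_point=(0.1, -0.05, 1.0),
                    plane=(0.0, 0.7, -0.7, 0.35))
print(point)
```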
### 2.2. Generate Chinese Martial Art Error Action Fragments to Be Detected
In this paper, C3D features are used for video motion representation; C3D features show excellent performance in motion recognition tasks. C3D features are generated by a 3D-CNN deep network. Compared with traditional features, C3D features better represent the characteristics of action videos in time and space, and compared with a 2D-CNN, a 3D-CNN better extracts the temporal features of videos, which makes it well suited to motion detection tasks. The C3D network consists of 8 convolution layers, 5 max-pooling layers, 2 fully connected layers, and one softmax output layer. All the convolution layers use $3\times3\times3$ 3D convolution kernels; the first pooling layer uses a $1\times2\times2$ pooling kernel, and the other pooling layers use $2\times2\times2$ pooling kernels [16, 17].After feature extraction with the C3D network, a visual dictionary must be established before sparse coding can be used to generate the action fragments to be detected. The visual dictionary is built from the sample video set. For each video sample, a spatiotemporal interest point detection algorithm is run, and 3D HOG features are extracted at each detected interest point location to obtain a feature vector. The feature vector cannot be used directly as a visual word because of its high dimension and large variance, so it must be quantized. Moreover, since the feature quantity of the whole sample set is very large, the calculation is usually carried out on a subset [18]. All feature vectors extracted from all video sets are taken as a set, and a subset is obtained by random sampling. A clustering algorithm is performed on the subset to obtain K categories. The center of each category is computed as the visual word of that category, and these words constitute the dictionary of visual features on this data set.The traditional sparse coding method with multiple dictionaries is used, and the basic sparse dictionary learning method is used to learn each dictionary. $T$ represents the features of the action fragments used to train the dictionary, and $Z_D$ represents the dictionary to be learned. Dictionary learning for each category uses the following formula [19, 20]:
(5) $(Z_D, B) = \underset{Z_D, B}{\arg\min}\ \frac{1}{b}\left\|T - Z_D B\right\|^2 + \lambda\left\|B\right\|_t^2$, where $B$ is the sparse representation coefficient matrix and $\lambda$ is the length of the sparse window. The dictionary is learned in the same alternating manner in which it is used: in each iteration, the coefficient matrix is first fixed and the dictionary is updated, then the dictionary is fixed and the coefficient matrix is updated, until the minimizer of formula (5) is obtained. Each learned dictionary is used to encode the candidate fragments. Formula (6) is used to calculate the reconstruction error under each dictionary, and the corresponding fragment score for each dictionary is calculated using a normalization formula. At this point each candidate fragment has a different score under each category dictionary, and the final score of the candidate fragment is obtained by combining these scores [21].
(6) $B_m = \underset{B_m}{\arg\min}\ \frac{1}{b_m}\left\|U_m - Z_D B_m\right\|^2 + \lambda\left\|B_m\right\|_t^2$. According to the fragment scores obtained above, the video clips that may contain wrong moves are selected using the correlation-coefficient coding between Chinese martial art moves. After selecting the video clips that may contain wrong moves, the Chinese martial art move features in the images are extracted by combining the 3D visual transformation model of Chinese martial art moves, and the wrong Chinese martial art moves are detected using RBF.
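A minimal sketch of the alternating scheme behind equations (5) and (6) follows (an illustrative reimplementation, not the authors' solver: for tractability it relaxes the sparsity term to an $\ell_2$ penalty, giving closed-form ridge updates); `T` is a hypothetical matrix with one C3D fragment feature per column.

```python
# Minimal sketch of alternating dictionary learning (cf. equations (5)-(6)).
# The true sparsity constraint is relaxed to an l2 penalty here, so both
# alternating updates have closed forms; T is a hypothetical feature matrix.
import numpy as np

def learn_dictionary(T, n_atoms=16, lam=0.1, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    d, n = T.shape
    ZD = rng.standard_normal((d, n_atoms))
    ZD /= np.linalg.norm(ZD, axis=0)            # unit-norm atoms
    I = np.eye(n_atoms)
    for _ in range(iters):
        # Fix the dictionary, update coefficients (penalized least squares).
        B = np.linalg.solve(ZD.T @ ZD + lam * I, ZD.T @ T)
        # Fix the coefficients, update the dictionary.
        ZD = T @ B.T @ np.linalg.inv(B @ B.T + 1e-8 * I)
        ZD /= np.linalg.norm(ZD, axis=0) + 1e-12
    return ZD, B

def fragment_score(u, ZD, lam=0.1):
    # Reconstruction error of one candidate fragment under a category
    # dictionary, as in equation (6); lower error = better match.
    b = np.linalg.solve(ZD.T @ ZD + lam * np.eye(ZD.shape[1]), ZD.T @ u)
    return np.linalg.norm(u - ZD @ b)

T = np.random.default_rng(1).standard_normal((64, 200))
ZD, _ = learn_dictionary(T)
print(fragment_score(T[:, 0], ZD))
```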
### 2.3. The RBF Model Is Used to Detect the Wrong Action of Chinese Martial Art
Since people differ in their initial position relative to the camera and in body orientation when performing actions, which significantly affects the description of human posture and action recognition, it is necessary first to normalize the coordinate system so that the initial position and orientation of the human skeleton are the same after the coordinate transformation. Since the set of 3D trajectories of all skeletal joint points contains the full information of the complete action, the original action data can be reconstructed from three projections, i.e., from a 2D sequence of motion units. 2D human joint point detection is performed using a cascaded pyramid network (CPN) to determine the relative positions of the human joint points in each frame of the Chinese martial art action in the video. The k-means clustering algorithm is used to extract the Chinese martial art movements and their action characteristics, which are then fused using the convolutional network [22].In the feature fusion structure, multiple video segments are fed into the network structure at the same time; in this paper only one network model is used, and the segments fed into the network simultaneously share all the parameters of the convolutional layers and some of the parameters of the fully connected layers. More specifically, a given video is first segmented into multiple nonoverlapping video segments of the same duration, and a sequence of images is then obtained from each video segment using a certain sampling strategy [23]. In the proposed framework, the extracted image sequences are fed into the 3D convolutional neural network, and each image sequence is assigned a corresponding spatiotemporal feature. These features are merged in the training phase, and the resulting features are regarded as the spatiotemporal features of the whole video and used in the subsequent optimization. In this way, throughout the learning process the optimization target becomes the loss of the whole video rather than the loss of a single video segment or slice [24].After extracting and fusing the Chinese martial art action features from the video clip, an RBF neural network model is built to detect wrong Chinese martial art actions. The radial basis function most often used in RBF neural networks is the Gaussian function, from which the activation function of the RBF neural network can be represented by the following equation [25]:
(7) $\operatorname{Rbf}\left(\left\|x_r - c_g\right\|\right) = \exp\left(-\frac{1}{2\sigma^2}\left\|x_r - c_g\right\|^2\right)$, where $\left\|x_r - c_g\right\|$ is a Euclidean norm, $c_g$ represents the center of the Gaussian function, and $\sigma$ is the width of the Gaussian. Thus, the relationship between the output $O_i$ and the input $I_j$ of the RBF neural network is as follows:
(8) $O_i = \sum_{j=1}^{n} Q_{ij}\operatorname{Rbf}\left(\left\|x_r - c_g\right\|\right) I_j$. Based on the above analysis, this paper designs a stereo matching reconstruction model with four input nodes and three output nodes in the constructed RBF network model. The input nodes take, in turn, the pixel values of the standard martial art action video image and of the action video image to be detected, and the output nodes give the three-dimensional coordinates of the corresponding points. The RBF network is trained with the training sample set, and the network parameters that minimize the output error are selected as the parameters of the final detection model. The processed martial art action video image is input into the RBF network model, and the output vector is the martial art action detection result; comparing it with the standard martial art action reveals whether wrong martial art actions are present. In this way, three-dimensional visual detection of wrong Chinese martial art movements is realized using radial basis function neural network technology.
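For concreteness, a minimal sketch of the RBF forward pass in equations (7) and (8) follows (an illustrative implementation, not the authors' trained model); the centers, width, and weights below are hypothetical, and in practice they would be fitted on the training sample set as described above.

```python
# Minimal sketch of an RBF network forward pass (equations (7)-(8)):
# Gaussian hidden units followed by a linear output layer.
import numpy as np

def rbf_forward(x, centers, sigma, Q):
    """x: input vector (n_in,); centers: (n_hidden, n_in) Gaussian centers;
    sigma: Gaussian width; Q: (n_out, n_hidden) output weights."""
    # Equation (7): Gaussian activation of each hidden unit.
    dist2 = np.sum((centers - x) ** 2, axis=1)
    hidden = np.exp(-dist2 / (2.0 * sigma ** 2))
    # Equation (8): weighted sum of hidden activations per output node.
    return Q @ hidden

rng = np.random.default_rng(0)
centers = rng.standard_normal((8, 4))   # 8 hidden units, 4 inputs
Q = rng.standard_normal((3, 8))         # 3 outputs (3D coordinates)
x = rng.standard_normal(4)              # e.g., pixel values of two images
print(rbf_forward(x, centers, sigma=1.0, Q=Q))
```

The 4-input/3-output shape mirrors the paper's design, in which the pixel values of the standard and the to-be-detected images map to the three-dimensional coordinates of the corresponding points.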
## 2.1. Martial Art Action Video Image Processing
### 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The image with a continuous signal taken in photogrammetry is the analog image, and its two-dimensional function is represented bypx,y. Any x,y in the image can be used as the two-dimensional coordinate point here. In the intake of the original image, there will be noise interference before certain processing, in the filtering process, and will lose part of the details of the picture, so in the process of noise; at the same time, we need to do our best to ensure the quality of the original picture. In this study, the commonly used mean filter and median filter are used to process Chinese martial art action video images to make the images smooth [10].In the actual mean filtering, a filtering template is also set based on a point pixelx,y, which is composed of the remaining surrounding pixels except for this pixel. The average value can be calculated by using the pixels in this template to replace the value of this point pixel xi,yi. The gray value hx,y corresponding to this point in the digital image can be obtained. In this way, the pixel value of each position in the original image can be replaced by the mean value solved by the filter template, which is the basic principle in the application of mean filtering. The formula is as follows [11]:
(1)hx,y=1n∑i=1npixi,yi.In the above formula,n is the total number of pixels in the filtering template after the target pixels.Median filtering was adopted while on noise suppression to eliminate the nonlinear smoothing technique, which uses the principle of order statistics, based on the different pixels to build a template. The selected pixel values in the template, sorted somewhere in the middle of pixel values to replace the target pixel gray value, thus reduce the image noise pixel gray value. In practical operation, the neighborhood around the target pixel is required to conduct size-sorting statistics of some columns according to gray value, and the median of the two-dimensional sequence is filtered, which is expressed as follows [12]:
(2)h′x,y=Medhx−q,y−w,q,w∈M.In the above formula,M is defined as a two-dimensional template. The function Med is used to get the median of the entire two-dimensional sequence; h′x,y represents the gray value of the image after processing, and hx,y represents the gray value of the image after the previous step of mean filtering.Using the method of spatial domain and frequency domain method, the two methods are used to describe the image-enhancement processing. In the spatial domain method, the gray value of each image point is directly processed to ensure the enhancement effect of the image. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel can be set as a template, so that the original gray value can be represented by the average value solved by the filtering template. The processing of spatial domain method is as follows [13]:
(3)Hx,y=HTh′x,y.In the above formula,HT is a spatial operation with respect to h′; Hx,y is the Chinese martial art action video image after enhanced processing; h′x,y is the Chinese martial art action video image after smooth noise reduction.
### 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and the human body’s routine action is that Chinese martial art action has certain obvious changes in three-dimensional space, and the detection of Chinese martial art wrong action also needs to be discriminated from the perspective of three-dimensional space. Therefore, combined with the collection of Chinese martial art movements, this paper will establish a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement is accomplished by using the 3D visual collector of line structured light. Based on the principle of optical triangulation, the optical projector projects the structured light onto the surface of the human body, and the camera captures the information of the light strip, so as to obtain the two-dimensional distorted image of the light strip. The degree to which the strip varies depends on the position of the light plane relative to the camera and the surface shape of the object. Since the brightness of the strip is obviously different from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained by using a specific image processing method. The mathematical model of line structured light sensor is used to establish the mapping between the image plane coordinate system and world coordinate system. According to this model, the coordinates of points on the strip can be calculated according to the pixel coordinates of the image. The camera model, the basic imaging model, often referred to as the basic pinhole model, is given by a central projection transformation from three-dimensional space to plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the online structured light image collector is shown in Figure1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object isR=xs,ys,zs in the spatial coordinate system, and the equation αxs+βys+γzs+θ=0 of the spatial plane where the imaging object is located in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4)xsxs−Xs=ysys−Ys=zszs−Zs.In the above formula,Xs,Ys,Zs is the coordinate position of the measured object on the collector imaging plane. By putting the equation of the space plane of the imaging object in the collector coordinate system into formula (3), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model H′x,y,z=newXs,Ys,Zs of martial art action can be established.
## 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The image with a continuous signal taken in photogrammetry is the analog image, and its two-dimensional function is represented bypx,y. Any x,y in the image can be used as the two-dimensional coordinate point here. In the intake of the original image, there will be noise interference before certain processing, in the filtering process, and will lose part of the details of the picture, so in the process of noise; at the same time, we need to do our best to ensure the quality of the original picture. In this study, the commonly used mean filter and median filter are used to process Chinese martial art action video images to make the images smooth [10].In the actual mean filtering, a filtering template is also set based on a point pixelx,y, which is composed of the remaining surrounding pixels except for this pixel. The average value can be calculated by using the pixels in this template to replace the value of this point pixel xi,yi. The gray value hx,y corresponding to this point in the digital image can be obtained. In this way, the pixel value of each position in the original image can be replaced by the mean value solved by the filter template, which is the basic principle in the application of mean filtering. The formula is as follows [11]:
(1)hx,y=1n∑i=1npixi,yi.In the above formula,n is the total number of pixels in the filtering template after the target pixels.Median filtering was adopted while on noise suppression to eliminate the nonlinear smoothing technique, which uses the principle of order statistics, based on the different pixels to build a template. The selected pixel values in the template, sorted somewhere in the middle of pixel values to replace the target pixel gray value, thus reduce the image noise pixel gray value. In practical operation, the neighborhood around the target pixel is required to conduct size-sorting statistics of some columns according to gray value, and the median of the two-dimensional sequence is filtered, which is expressed as follows [12]:
(2)h′x,y=Medhx−q,y−w,q,w∈M.In the above formula,M is defined as a two-dimensional template. The function Med is used to get the median of the entire two-dimensional sequence; h′x,y represents the gray value of the image after processing, and hx,y represents the gray value of the image after the previous step of mean filtering.Using the method of spatial domain and frequency domain method, the two methods are used to describe the image-enhancement processing. In the spatial domain method, the gray value of each image point is directly processed to ensure the enhancement effect of the image. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel can be set as a template, so that the original gray value can be represented by the average value solved by the filtering template. The processing of spatial domain method is as follows [13]:
(3)Hx,y=HTh′x,y.In the above formula,HT is a spatial operation with respect to h′; Hx,y is the Chinese martial art action video image after enhanced processing; h′x,y is the Chinese martial art action video image after smooth noise reduction.
## 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and the human body’s routine action is that Chinese martial art action has certain obvious changes in three-dimensional space, and the detection of Chinese martial art wrong action also needs to be discriminated from the perspective of three-dimensional space. Therefore, combined with the collection of Chinese martial art movements, this paper will establish a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement is accomplished by using the 3D visual collector of line structured light. Based on the principle of optical triangulation, the optical projector projects the structured light onto the surface of the human body, and the camera captures the information of the light strip, so as to obtain the two-dimensional distorted image of the light strip. The degree to which the strip varies depends on the position of the light plane relative to the camera and the surface shape of the object. Since the brightness of the strip is obviously different from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained by using a specific image processing method. The mathematical model of line structured light sensor is used to establish the mapping between the image plane coordinate system and world coordinate system. According to this model, the coordinates of points on the strip can be calculated according to the pixel coordinates of the image. The camera model, the basic imaging model, often referred to as the basic pinhole model, is given by a central projection transformation from three-dimensional space to plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the online structured light image collector is shown in Figure1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object isR=xs,ys,zs in the spatial coordinate system, and the equation αxs+βys+γzs+θ=0 of the spatial plane where the imaging object is located in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4)xsxs−Xs=ysys−Ys=zszs−Zs.In the above formula,Xs,Ys,Zs is the coordinate position of the measured object on the collector imaging plane. By putting the equation of the space plane of the imaging object in the collector coordinate system into formula (3), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model H′x,y,z=newXs,Ys,Zs of martial art action can be established.
## 2.2. Generate Chinese Martial Art Error Action Fragments to Be Detected
In this paper, C3D features are used for video motion representation, and C3D features show excellent performance in motion recognition tasks. C3D features are generated by a 3D-CNN deep network. Compared with traditional features, C3D features can better represent the characteristics of action videos in time and space. Compared with 2D-CNN, 3D-CNN can better extract timing features of videos, which is very suitable for motion detection tasks. The C3D network consists of 8 convolution layers, 5 maximum pooling layers, 2 full connection layers, and one softening output layer. All the convolution layers use3×3×3 3D convolution cores, the first pooling layer uses 1×2×2 pooling cores, and the other pooling layers use 2×2×2 pooling cores [16, 17].After feature extraction in the C3D network, the visual dictionary needs to be established when sparse coding is used to generate action fragments to be detected. The establishment of the visual dictionary is based on the sample video set. For each video sample, the run-time space point of the interest detection algorithm is used to extract 3D HOG features at each detected point of interest location to obtain a feature vector. The feature vector cannot be directly used as a visual word due to its high dimension and large variance, so it needs to be quantitatively processed. Moreover, as the feature quantity of the whole sample set is very large, the calculation is usually carried out on its subset [18]. All feature vectors extracted from all video sets are taken as a set, and a subset is obtained by random sampling. The clustering algorithm is performed on the subset to obtain K categories. The center of each category is computed as the visual words of that category, which constitute the dictionary of visual features on this data set.The traditional sparse coding method of multiple dictionaries is used, and the basic sparse dictionary learning method is used to learn each dictionary.T represents the feature of the action fragment used to train the dictionary, and ZD represents the dictionary to be learned. Dictionary learning for each category uses the following formula [19, 20]:
(5)ZD,B=argminZD,B1bT−ZDB2+λBt2,where B is the sparse representation coefficient and λ is the length of the sparse window. The learning process of a dictionary is the same as that of using a dictionary. In each iteration, the dictionary is updated by the fixed coefficient matrix first, then the coefficient matrix is updated by the fixed dictionary, and the result of minimizing formula (5) is finally obtained. Each dictionary learned was used to encode the candidate fragments. Formula (6) was used to calculate the reconstruction error of each dictionary, and the corresponding fragment score of each dictionary was calculated using the normalized formula. At this point, each candidate fragment has different scores from each category dictionary, and the final score of the candidate fragment can be obtained by calculating these scores [21].
(6)Bm=argminbm1bmUm−ZDB2+λBt2.According to the above-obtained clip scores, the video clips that may contain wrong moves are selected using the correlation coefficient coding between Chinese martial art moves. After selecting the video clips that may contain wrong moves, the Chinese martial art move features in the images are extracted by combining the Chinese martial art move 3D visual transformation model, and the Chinese martial art wrong moves are detected using RBF.
## 2.3. The RBF Model Is Used to Detect the Wrong Action of Chinese Martial Art
Since there are differences in the initial position relative to the camera and the body orientation when people perform actions, this has a significant impact on the description of human posture and action recognition. Therefore, it is necessary to first regularize the coordinated system so that the initial position and orientation of the human skeleton after coordinated transformation are the same. Since the set of 3D trajectories of all skeletal joint points contains the full information of the complete action, the original action data can be reconstructed from three projections, i.e., by a 2D sequence of motion units. 2D human joint point detection is performed using a cascade pyramid network (CPN) to determine the relative position between the human joint points corresponding to each frame of the Chinese martial art action in the video. Use thek-means clustering algorithm to extract the Chinese martial art movements and the Chinese martial art action characteristics, and use the convolution network fusion [22].In the feature fusion structure, multiple video segments are fed into the network structure at the same time, but in this paper, only the same network model is used, and these segments fed into the network at the same time will share all the parameters of the convolutional layer and some of the parameters of the fully connected layer in the network. More specifically, for a given video, the video will be segmented for the first time into multiple nonoverlapping video segments of the same duration, and then a sequence of images will be obtained in each video segment using a certain sampling strategy [23]. In the proposed framework, the extracted image sequences will be fed into the 3D convolutional neural network, and each image sequence will be given a corresponding spatiotemporal feature. These features will be merged in the training phase and the resulting features will be considered the spatiotemporal features of the whole video and will be used in the subsequent optimization process. In this way, during the whole learning process, the target of optimization becomes the loss of the whole video, rather than the loss of a video segment or slice [24].After extracting and fusing the Chinese martial art action features from the video clip, an RBF neural network model is built to detect the wrong Chinese martial art action. The radial basis function often used in RBF neural networks is a Gaussian function, from which the activation function involved in the RBF neural network can be represented by the following equation [25].
$$\operatorname{Rbf}\left(\left\|x_r-c_g\right\|\right)=\exp\left(-\frac{1}{2\sigma^2}\left\|x_r-c_g\right\|^2\right) \tag{7}$$

where $\left\|x_r-c_g\right\|$ is the Euclidean distance between the input $x_r$ and the center $c_g$ of the Gaussian function, and $\sigma^2$ is the variance of the Gaussian. Thus, the relationship between the output $O_i$ and the input $I_j$ of the RBF neural network is as follows:
$$O_i=\sum_{j=1}^{n}Q_{ij}\operatorname{Rbf}\left(\left\|x_r-c_g\right\|\right)I_j \tag{8}$$

Based on the above analysis, this paper designs a stereo-matching reconstruction model with four input nodes and three output nodes in the constructed RBF network. The input nodes take, in turn, the pixel values of the standard martial art action video image and of the action video image to be detected, and the output nodes give the three-dimensional coordinates of the corresponding points. The RBF network is trained on the training sample set, and the network parameters that minimize the output error are selected as the parameters of the final detection model. The processed standard and to-be-detected Wushu action video images are input into the RBF network model, and the output vector is the Wushu action detection result, which is compared with the standard Wushu action to determine whether wrong Wushu actions are present. In this way, three-dimensional visual detection of wrong Chinese martial art movements is realized with radial basis function neural network technology.
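The following sketch makes formulas (7) and (8) concrete: Gaussian hidden units centered at k-means cluster centers and a linear output layer whose weight matrix $Q$ is fitted by least squares. The center count, the width heuristic, and the toy four-input/three-output shapes are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    """Minimal RBF network for formulas (7) and (8): Gaussian hidden units
    phi_g(x) = exp(-||x - c_g||^2 / (2 sigma^2)) and a linear output layer
    whose weights Q are fitted by least squares."""
    def __init__(self, n_centers=10):
        self.n_centers = n_centers

    def _hidden(self, X):
        # Squared Euclidean distances from every input to every center c_g.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))     # formula (7)

    def fit(self, X, Y):
        # Centers c_g from k-means on the training inputs.
        km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
        self.centers = km.cluster_centers_
        # A common width heuristic: mean distance between distinct centers.
        dists = np.linalg.norm(self.centers[:, None] - self.centers[None, :], axis=-1)
        self.sigma = dists[dists > 0].mean()
        # Output weights Q solving min ||phi(X) Q - Y||^2  (formula (8)).
        phi = self._hidden(X)
        self.Q, *_ = np.linalg.lstsq(phi, Y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.Q                  # formula (8)

# Toy use: four input values per sample -> three output coordinates,
# mirroring the four-input/three-output design described in the text.
X = np.random.default_rng(0).random((200, 4))            # stand-in pixel inputs
Y = X @ np.random.default_rng(1).random((4, 3))          # stand-in 3D targets
model = RBFNet(n_centers=16).fit(X, Y)
pred = model.predict(X[:5])                              # (5, 3) coordinates
```

Selecting the trained parameters that minimize the output error, as the text describes, would correspond here to choosing the center count and width that give the lowest validation error.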
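For the segment-level feature fusion described at the start of this subsection, the PyTorch sketch below passes every nonoverlapping segment of a video through the same 3D-CNN backbone (shared weights), averages the per-segment spatiotemporal features, and computes a single loss for the whole video. The backbone depth, channel widths, and segment sizes are placeholders, not the C3D configuration used in the paper.

```python
import torch
import torch.nn as nn

class SmallC3D(nn.Module):
    """A toy 3D-CNN feature extractor standing in for the C3D backbone;
    the channel sizes here are illustrative only."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                    # x: (batch, 3, frames, H, W)
        return self.fc(self.features(x).flatten(1))

class SegmentFusionNet(nn.Module):
    """All segments of one video pass through the SAME backbone (shared
    parameters); their spatiotemporal features are averaged, so the loss is
    computed once for the whole video rather than per segment."""
    def __init__(self, n_classes=2, feat_dim=128):
        super().__init__()
        self.backbone = SmallC3D(feat_dim)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, segments):             # (batch, n_seg, 3, frames, H, W)
        b, s = segments.shape[:2]
        feats = self.backbone(segments.flatten(0, 1))    # shared weights across segments
        fused = feats.view(b, s, -1).mean(dim=1)         # merge segment features
        return self.classifier(fused)

# One training step on a dummy video: 4 nonoverlapping 8-frame segments.
model = SegmentFusionNet()
video = torch.randn(2, 4, 3, 8, 56, 56)
labels = torch.tensor([0, 1])                # e.g., correct vs. wrong move
loss = nn.functional.cross_entropy(model(video), labels)
loss.backward()                              # gradient of the whole-video loss
```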
## 3. Experimental Study
In order to verify the feasibility of the above theoretical design and the performance of the proposed detection method, an experimental study is conducted in this section. Based on the analysis of the final experimental data, the overall performance and practical applicability of the proposed motion detection method are evaluated.
### 3.1. Experiment Content
In this experiment, the experimental standards were calibrated for a single-user test environment to support the subsequent threshold settings, and all experimental factors were held constant except for the control variables. The motion detection method based on the YOLOv algorithm in reference [6] and the method based on the fuzzy C-means algorithm in reference [7] were selected as comparison method 1 and comparison method 2, respectively. The two comparison methods were applied under the same experimental conditions and compared comprehensively with the RBF-based 3D visual detection method proposed in this paper. To verify the stability of the proposed method, tests were also simulated in different background environments, with several subjects performing in a simple background and in a complex background, respectively.
### 3.2. Experimental Data and Preparation
Considering that martial art actions include kicking, hitting, falling, holding, splitting, stabbing, and other techniques, this paper uses the KTH, UCF101, HMDB51, and Kinetics data sets as the test data sets for algorithm performance. These four data sets contain a variety of movements similar to Wushu movements. The KTH data set consists of six simple action classes performed by 25 adults in four different scenes (walking, jogging, running, boxing, hand waving, and hand clapping), with a total of 2391 video samples; the fixed camera and single background used for its acquisition do not closely reflect real-world scenes. The UCF101 data set comes partly from sports footage collected from BBC/ESPN broadcast channels and partly from videos downloaded from the Internet, with YouTube being the largest single source; it contains 13320 video samples divided into 101 categories. Most samples in the HMDB51 data set are collected from movies, which are harder to interpret than videos of natural scenes; it has 6849 samples in 51 categories, each containing at least 101 samples. The Kinetics data set comes from YouTube and covers 400 action classes, with about 300000 videos in total. In addition, professional martial art practitioners were invited to record demonstration videos, which serve as the standard reference samples for the action detection methods. On the data sets described above, the YOLOv algorithm, the fuzzy C-means algorithm, and the RBF neural network used by the three methods are tested.
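As a sketch of how the two quantities reported in the next subsection (average detection accuracy and per-sample processing time) can be tallied on these data sets, the helper below benchmarks any detector exposed as a callable; the detector interface and sample format are hypothetical.

```python
import time

def benchmark(detector, samples, labels):
    """Average detection accuracy (%) and mean per-sample time (ms) for one
    detector on one data set; `detector` is any callable sample -> label."""
    correct, elapsed = 0, 0.0
    for sample, label in zip(samples, labels):
        t0 = time.perf_counter()
        pred = detector(sample)
        elapsed += time.perf_counter() - t0
        correct += int(pred == label)
    n = len(labels)
    return 100.0 * correct / n, 1000.0 * elapsed / n
```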
### 3.3. Experimental Results
Table 1 compares the average detection accuracy and processing time of the three algorithms and models across the selected test data sets.

Table 1: Comparison of average detection accuracy and time consumption between the processing algorithms and models.
| Data set | YOLOv accuracy (%) | YOLOv time (ms) | Fuzzy C-means accuracy (%) | Fuzzy C-means time (ms) | RBF accuracy (%) | RBF time (ms) |
|---|---|---|---|---|---|---|
| KTH | 97.6 | 151.5 | 97.4 | 149.8 | 98.8 | 102.3 |
| UCF101 | 96.5 | 232.6 | 95.7 | 241.7 | 98.0 | 137.8 |
| HMDB51 | 96.9 | 298.1 | 93.2 | 301.5 | 97.6 | 151.4 |
| Kinetics | 95.7 | 364.3 | 90.3 | 344.3 | 97.1 | 156.9 |

Analyzing the data in Table 1: on the KTH data set, the detection accuracy of the three models is essentially the same. As the complexity of the samples in a data set increases, the accuracy of all three algorithms decreases and the processing time grows rapidly. The RBF neural network, however, maintains an accuracy above 97% on all four data sets, and even its longest processing time, 156.9 ms, is far below that of the other two algorithms, indicating that its overall performance is better.

To further verify the effectiveness of the proposed method, the average accuracy of wrong-action detection with the three methods is compared in a simple background and a complex background; the results are shown in Figures 2 and 3. A simple background is a single, uniform background with low noise and stable illumination; a complex background contains many interfering factors, more noise, and unclear light-dark contrast.

Figure 2: Comparison of detection accuracy in simple scenes.

Figure 3: Comparison of detection accuracy in complex scenes.

Comparing Figures 2 and 3 shows that in simple scenes there is little difference in detection accuracy among the three methods, mainly because a single background with low noise and stable illumination has little impact on feature extraction for any of them. In complex scenes, where scene complexity grows with the scene number, interfering factors such as background clutter and noise increase, and the detection accuracy of all three methods gradually declines, with the two comparison methods declining the most. Across both simple and complex detection scenarios, the detection accuracy of the proposed method is higher than that of the two comparison methods, by at least 5% on average. The proposed method is therefore more efficient, is less affected by the background environment, and exhibits good stability.

Summarizing the above test data, the proposed RBF-based three-dimensional visual detection method for martial art wrong actions achieves high detection accuracy and sensitivity, applies stably across different detection conditions, and offers clearly improved overall performance and practical value. The method meets the research expectations.
## 4. Conclusion
With the continuous maturation of video image processing technology, computer-based image processing is increasingly being applied to the detection of martial art movements. To address the problems of low detection accuracy, slow detection speed, and slow detection response, this paper proposes an RBF-based three-dimensional visual detection method for martial art wrong actions. Drawing on the characteristics of the radial basis function neural network, the method achieves accurate detection of wrong martial art actions. Experiments show that the proposed three-dimensional visual detection method offers high precision, high efficiency, and good stability and can effectively detect wrong martial art movements.
---
*Source: 1013714-2022-04-30.xml*
## Abstract
The accuracy of action detection is limited by the extracted action, and there are problems of high processing complexity and low efficiency. Therefore, a three-dimensional visual detection method of martial art wrong action based on RBF is proposed. After noise reduction and weighting processing of martial art action video images, a martial art action 3D visual transformation model is established. According to the 3D visual model, C3D features are used to represent martial art actions. The video is segmented using sparse coding to determine the detection range. RBF neural network model is established, and the combination of the above 3D visual model and network parameters is obtained by sample training to detect martial art wrong actions. The test method of the experimental results shows the detection of the research under the condition of different degrees of precision, an average of at least 5%, and the method of detection of high efficiency and stability.
---
## Body
## 1. Introduction
Martial art is an ancient science in China’s traditional sports, with attack action as the main content, routines, and combat as the main movement form, paying attention to the internal and external repairing of traditional ethnic sports [1]. As an excellent national traditional culture in China, Chinese martial art has formed its own unique expression and development means in the process of development and derivation for thousands of years. At present, the teaching of Chinese martial art at home and abroad mostly stays in the traditional way of face-to-face teaching or practitioners only follow video learning [2]. On the one hand, the inheritors of Chinese martial art are scarce, and it is difficult for people to get access to authentic face-to-face teaching; on the other hand, there are problems such as poor intuition and low efficiency in learning skillful movements only by following videos, and practitioners are very likely to do wrong movements and cause muscle damage. Therefore, an effective human movement posture detection method can play a role in correcting wrong movements for athletes’ regular training. Action detection is widely used in industrial production, daily safety behavior monitoring, social operation management, and other work areas, and scholars at home and abroad have made some research results. Ohl and Rolfs use causality in the human visual system to adapt to show the causal relationship linked between action directions. It is used as a key low-level feature of visual events to detect motion direction [3]. Reference [4] combines perceptual learning and statistical learning to improve information acquired through experience and achieve action detection by statistical cooccurrence between environmental features. Reference [5] uses the frame difference method to subtract the background to achieve effective detection of the slightest motion. Reference [6] uses the YOLOv4 deep-learning motion target detection algorithm to achieve localization and recognition of motion targets. Real-time image detection of pictures, videos, and cameras is achieved by identifying and tagging the location and type of objects contained in the image, which improves the accuracy and speed of detection. NagiReddy et al. proposed a novel background modeling mechanism using a bias illumination field fuzzy C-means algorithm to separate nonstationary pixels from stationary ones by background subtraction. Feature extraction under the condition of noise and illumination changes is completed by using the fuzzy C-means method of biased lighting field, and the detection accuracy is improved through clustering [7]. The above methods can achieve high-quality motion detection in different environments, but the overall detection performance of the method is degraded due to the strong coherence of martial art actions and the influence of the accuracy of extracting wrong action features when wrong actions occur.The neural network can simulate some systems that cannot be described by mathematical models, and it has strong learning and adaptability, but it also has obvious nonlinear characteristics. The radial basis function (RBF) neural network belongs to the forward neural network; this kind of network to the structure of the multilayer forward network control is similar, and it is a kind of forward neural network with three layers structure [8]. 
The transformation function of neurons in the hidden layer refers to the radial basis function and is aimed at the center of radial symmetry and nonlinear function with an attenuation trend [9]. In order to better detect the wrong movements of Wushu, in future research, we should conduct in-depth research on the martial art movements with rapid changes in the movement connection, so as to improve the detection accuracy of the detection method. On the basis of the above analysis, this paper will study the three-dimensional visual detection method of martial art wrong action based on RBF, combined with the advantages of the RBF neural network, combined with the three-dimensional visual model to realize martial art wrong action detection and test the comprehensive performance of this method.
## 2. Research on RBF-Based 3D Visual Detection Method for Martial Art Error Movements
### 2.1. Martial Art Action Video Image Processing
#### 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The image with a continuous signal taken in photogrammetry is the analog image, and its two-dimensional function is represented bypx,y. Any x,y in the image can be used as the two-dimensional coordinate point here. In the intake of the original image, there will be noise interference before certain processing, in the filtering process, and will lose part of the details of the picture, so in the process of noise; at the same time, we need to do our best to ensure the quality of the original picture. In this study, the commonly used mean filter and median filter are used to process Chinese martial art action video images to make the images smooth [10].In the actual mean filtering, a filtering template is also set based on a point pixelx,y, which is composed of the remaining surrounding pixels except for this pixel. The average value can be calculated by using the pixels in this template to replace the value of this point pixel xi,yi. The gray value hx,y corresponding to this point in the digital image can be obtained. In this way, the pixel value of each position in the original image can be replaced by the mean value solved by the filter template, which is the basic principle in the application of mean filtering. The formula is as follows [11]:
(1)hx,y=1n∑i=1npixi,yi.In the above formula,n is the total number of pixels in the filtering template after the target pixels.Median filtering was adopted while on noise suppression to eliminate the nonlinear smoothing technique, which uses the principle of order statistics, based on the different pixels to build a template. The selected pixel values in the template, sorted somewhere in the middle of pixel values to replace the target pixel gray value, thus reduce the image noise pixel gray value. In practical operation, the neighborhood around the target pixel is required to conduct size-sorting statistics of some columns according to gray value, and the median of the two-dimensional sequence is filtered, which is expressed as follows [12]:
(2)h′x,y=Medhx−q,y−w,q,w∈M.In the above formula,M is defined as a two-dimensional template. The function Med is used to get the median of the entire two-dimensional sequence; h′x,y represents the gray value of the image after processing, and hx,y represents the gray value of the image after the previous step of mean filtering.Using the method of spatial domain and frequency domain method, the two methods are used to describe the image-enhancement processing. In the spatial domain method, the gray value of each image point is directly processed to ensure the enhancement effect of the image. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel can be set as a template, so that the original gray value can be represented by the average value solved by the filtering template. The processing of spatial domain method is as follows [13]:
(3)Hx,y=HTh′x,y.In the above formula,HT is a spatial operation with respect to h′; Hx,y is the Chinese martial art action video image after enhanced processing; h′x,y is the Chinese martial art action video image after smooth noise reduction.
#### 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and the human body’s routine action is that Chinese martial art action has certain obvious changes in three-dimensional space, and the detection of Chinese martial art wrong action also needs to be discriminated from the perspective of three-dimensional space. Therefore, combined with the collection of Chinese martial art movements, this paper will establish a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement is accomplished by using the 3D visual collector of line structured light. Based on the principle of optical triangulation, the optical projector projects the structured light onto the surface of the human body, and the camera captures the information of the light strip, so as to obtain the two-dimensional distorted image of the light strip. The degree to which the strip varies depends on the position of the light plane relative to the camera and the surface shape of the object. Since the brightness of the strip is obviously different from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained by using a specific image processing method. The mathematical model of line structured light sensor is used to establish the mapping between the image plane coordinate system and world coordinate system. According to this model, the coordinates of points on the strip can be calculated according to the pixel coordinates of the image. The camera model, the basic imaging model, often referred to as the basic pinhole model, is given by a central projection transformation from three-dimensional space to plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the online structured light image collector is shown in Figure1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object isR=xs,ys,zs in the spatial coordinate system, and the equation αxs+βys+γzs+θ=0 of the spatial plane where the imaging object is located in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4)xsxs−Xs=ysys−Ys=zszs−Zs.In the above formula,Xs,Ys,Zs is the coordinate position of the measured object on the collector imaging plane. By putting the equation of the space plane of the imaging object in the collector coordinate system into formula (3), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model H′x,y,z=newXs,Ys,Zs of martial art action can be established.
### 2.2. Generate Chinese Martial Art Error Action Fragments to Be Detected
In this paper, C3D features are used for video motion representation, and C3D features show excellent performance in motion recognition tasks. C3D features are generated by a 3D-CNN deep network. Compared with traditional features, C3D features can better represent the characteristics of action videos in time and space. Compared with 2D-CNN, 3D-CNN can better extract timing features of videos, which is very suitable for motion detection tasks. The C3D network consists of 8 convolution layers, 5 maximum pooling layers, 2 full connection layers, and one softening output layer. All the convolution layers use3×3×3 3D convolution cores, the first pooling layer uses 1×2×2 pooling cores, and the other pooling layers use 2×2×2 pooling cores [16, 17].After feature extraction in the C3D network, the visual dictionary needs to be established when sparse coding is used to generate action fragments to be detected. The establishment of the visual dictionary is based on the sample video set. For each video sample, the run-time space point of the interest detection algorithm is used to extract 3D HOG features at each detected point of interest location to obtain a feature vector. The feature vector cannot be directly used as a visual word due to its high dimension and large variance, so it needs to be quantitatively processed. Moreover, as the feature quantity of the whole sample set is very large, the calculation is usually carried out on its subset [18]. All feature vectors extracted from all video sets are taken as a set, and a subset is obtained by random sampling. The clustering algorithm is performed on the subset to obtain K categories. The center of each category is computed as the visual words of that category, which constitute the dictionary of visual features on this data set.The traditional sparse coding method of multiple dictionaries is used, and the basic sparse dictionary learning method is used to learn each dictionary.T represents the feature of the action fragment used to train the dictionary, and ZD represents the dictionary to be learned. Dictionary learning for each category uses the following formula [19, 20]:
(5)ZD,B=argminZD,B1bT−ZDB2+λBt2,where B is the sparse representation coefficient and λ is the length of the sparse window. The learning process of a dictionary is the same as that of using a dictionary. In each iteration, the dictionary is updated by the fixed coefficient matrix first, then the coefficient matrix is updated by the fixed dictionary, and the result of minimizing formula (5) is finally obtained. Each dictionary learned was used to encode the candidate fragments. Formula (6) was used to calculate the reconstruction error of each dictionary, and the corresponding fragment score of each dictionary was calculated using the normalized formula. At this point, each candidate fragment has different scores from each category dictionary, and the final score of the candidate fragment can be obtained by calculating these scores [21].
(6)Bm=argminbm1bmUm−ZDB2+λBt2.According to the above-obtained clip scores, the video clips that may contain wrong moves are selected using the correlation coefficient coding between Chinese martial art moves. After selecting the video clips that may contain wrong moves, the Chinese martial art move features in the images are extracted by combining the Chinese martial art move 3D visual transformation model, and the Chinese martial art wrong moves are detected using RBF.
### 2.3. The RBF Model Is Used to Detect the Wrong Action of Chinese Martial Art
Since there are differences in the initial position relative to the camera and the body orientation when people perform actions, this has a significant impact on the description of human posture and action recognition. Therefore, it is necessary to first regularize the coordinated system so that the initial position and orientation of the human skeleton after coordinated transformation are the same. Since the set of 3D trajectories of all skeletal joint points contains the full information of the complete action, the original action data can be reconstructed from three projections, i.e., by a 2D sequence of motion units. 2D human joint point detection is performed using a cascade pyramid network (CPN) to determine the relative position between the human joint points corresponding to each frame of the Chinese martial art action in the video. Use thek-means clustering algorithm to extract the Chinese martial art movements and the Chinese martial art action characteristics, and use the convolution network fusion [22].In the feature fusion structure, multiple video segments are fed into the network structure at the same time, but in this paper, only the same network model is used, and these segments fed into the network at the same time will share all the parameters of the convolutional layer and some of the parameters of the fully connected layer in the network. More specifically, for a given video, the video will be segmented for the first time into multiple nonoverlapping video segments of the same duration, and then a sequence of images will be obtained in each video segment using a certain sampling strategy [23]. In the proposed framework, the extracted image sequences will be fed into the 3D convolutional neural network, and each image sequence will be given a corresponding spatiotemporal feature. These features will be merged in the training phase and the resulting features will be considered the spatiotemporal features of the whole video and will be used in the subsequent optimization process. In this way, during the whole learning process, the target of optimization becomes the loss of the whole video, rather than the loss of a video segment or slice [24].After extracting and fusing the Chinese martial art action features from the video clip, an RBF neural network model is built to detect the wrong Chinese martial art action. The radial basis function often used in RBF neural networks is a Gaussian function, from which the activation function involved in the RBF neural network can be represented by the following equation [25].
(7)Rbfxr−cg=exp−12σ2xr−cg2,where xr−cg is a Euclidean norm, cg represents the center of the Gaussian function, and σ is the variance of the Gaussian. Thus, the relationship between processing output Oi and input Ij of the RBF neural network is as follows:
(8)Oi=∑i=1nQijRbfxr−cgIj.Based on the above analysis, this paper designs a stereo matching reconstruction model including four input nodes and three output nodes in the constructed RBF network model. The input nodes take the pixel values of the standard martial art action video image and the action video image to be detected in turn, and the output node is the three-dimensional coordinates of the corresponding points. The RBF network is trained with the training sample set, and the network parameters that minimize the output error are selected as the parameters of the final detection model. Input the processed standard Wushu action video image into the RBF network model, and the processed output result vector is the Wushu action detection result. Compare it with the standard Wushu action to find out whether there are wrong Wushu actions. Above, the research of three-dimensional visual detection of Chinese martial art wrong movements is realized by using radial basis function neural network technology.
## 2.1. Martial Art Action Video Image Processing
### 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The image with a continuous signal taken in photogrammetry is the analog image, and its two-dimensional function is represented bypx,y. Any x,y in the image can be used as the two-dimensional coordinate point here. In the intake of the original image, there will be noise interference before certain processing, in the filtering process, and will lose part of the details of the picture, so in the process of noise; at the same time, we need to do our best to ensure the quality of the original picture. In this study, the commonly used mean filter and median filter are used to process Chinese martial art action video images to make the images smooth [10].In the actual mean filtering, a filtering template is also set based on a point pixelx,y, which is composed of the remaining surrounding pixels except for this pixel. The average value can be calculated by using the pixels in this template to replace the value of this point pixel xi,yi. The gray value hx,y corresponding to this point in the digital image can be obtained. In this way, the pixel value of each position in the original image can be replaced by the mean value solved by the filter template, which is the basic principle in the application of mean filtering. The formula is as follows [11]:
(1)hx,y=1n∑i=1npixi,yi.In the above formula,n is the total number of pixels in the filtering template after the target pixels.Median filtering was adopted while on noise suppression to eliminate the nonlinear smoothing technique, which uses the principle of order statistics, based on the different pixels to build a template. The selected pixel values in the template, sorted somewhere in the middle of pixel values to replace the target pixel gray value, thus reduce the image noise pixel gray value. In practical operation, the neighborhood around the target pixel is required to conduct size-sorting statistics of some columns according to gray value, and the median of the two-dimensional sequence is filtered, which is expressed as follows [12]:
(2)h′x,y=Medhx−q,y−w,q,w∈M.In the above formula,M is defined as a two-dimensional template. The function Med is used to get the median of the entire two-dimensional sequence; h′x,y represents the gray value of the image after processing, and hx,y represents the gray value of the image after the previous step of mean filtering.Using the method of spatial domain and frequency domain method, the two methods are used to describe the image-enhancement processing. In the spatial domain method, the gray value of each image point is directly processed to ensure the enhancement effect of the image. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel can be set as a template, so that the original gray value can be represented by the average value solved by the filtering template. The processing of spatial domain method is as follows [13]:
(3)Hx,y=HTh′x,y.In the above formula,HT is a spatial operation with respect to h′; Hx,y is the Chinese martial art action video image after enhanced processing; h′x,y is the Chinese martial art action video image after smooth noise reduction.
### 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and the human body’s routine action is that Chinese martial art action has certain obvious changes in three-dimensional space, and the detection of Chinese martial art wrong action also needs to be discriminated from the perspective of three-dimensional space. Therefore, combined with the collection of Chinese martial art movements, this paper will establish a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement is accomplished by using the 3D visual collector of line structured light. Based on the principle of optical triangulation, the optical projector projects the structured light onto the surface of the human body, and the camera captures the information of the light strip, so as to obtain the two-dimensional distorted image of the light strip. The degree to which the strip varies depends on the position of the light plane relative to the camera and the surface shape of the object. Since the brightness of the strip is obviously different from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained by using a specific image processing method. The mathematical model of line structured light sensor is used to establish the mapping between the image plane coordinate system and world coordinate system. According to this model, the coordinates of points on the strip can be calculated according to the pixel coordinates of the image. The camera model, the basic imaging model, often referred to as the basic pinhole model, is given by a central projection transformation from three-dimensional space to plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the online structured light image collector is shown in Figure1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object isR=xs,ys,zs in the spatial coordinate system, and the equation αxs+βys+γzs+θ=0 of the spatial plane where the imaging object is located in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4)xsxs−Xs=ysys−Ys=zszs−Zs.In the above formula,Xs,Ys,Zs is the coordinate position of the measured object on the collector imaging plane. By putting the equation of the space plane of the imaging object in the collector coordinate system into formula (3), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model H′x,y,z=newXs,Ys,Zs of martial art action can be established.
## 2.1.1. Video Image Preprocessing
The video image collection of Chinese martial art action is mainly completed by a high-resolution color camera. The image with a continuous signal taken in photogrammetry is the analog image, and its two-dimensional function is represented bypx,y. Any x,y in the image can be used as the two-dimensional coordinate point here. In the intake of the original image, there will be noise interference before certain processing, in the filtering process, and will lose part of the details of the picture, so in the process of noise; at the same time, we need to do our best to ensure the quality of the original picture. In this study, the commonly used mean filter and median filter are used to process Chinese martial art action video images to make the images smooth [10].In the actual mean filtering, a filtering template is also set based on a point pixelx,y, which is composed of the remaining surrounding pixels except for this pixel. The average value can be calculated by using the pixels in this template to replace the value of this point pixel xi,yi. The gray value hx,y corresponding to this point in the digital image can be obtained. In this way, the pixel value of each position in the original image can be replaced by the mean value solved by the filter template, which is the basic principle in the application of mean filtering. The formula is as follows [11]:
(1)hx,y=1n∑i=1npixi,yi.In the above formula,n is the total number of pixels in the filtering template after the target pixels.Median filtering was adopted while on noise suppression to eliminate the nonlinear smoothing technique, which uses the principle of order statistics, based on the different pixels to build a template. The selected pixel values in the template, sorted somewhere in the middle of pixel values to replace the target pixel gray value, thus reduce the image noise pixel gray value. In practical operation, the neighborhood around the target pixel is required to conduct size-sorting statistics of some columns according to gray value, and the median of the two-dimensional sequence is filtered, which is expressed as follows [12]:
(2)h′x,y=Medhx−q,y−w,q,w∈M.In the above formula,M is defined as a two-dimensional template. The function Med is used to get the median of the entire two-dimensional sequence; h′x,y represents the gray value of the image after processing, and hx,y represents the gray value of the image after the previous step of mean filtering.Using the method of spatial domain and frequency domain method, the two methods are used to describe the image-enhancement processing. In the spatial domain method, the gray value of each image point is directly processed to ensure the enhancement effect of the image. Specifically, taking a target pixel and its adjacent pixels as a whole, the pixels excluding the target pixel can be set as a template, so that the original gray value can be represented by the average value solved by the filtering template. The processing of spatial domain method is as follows [13]:
(3)Hx,y=HTh′x,y.In the above formula,HT is a spatial operation with respect to h′; Hx,y is the Chinese martial art action video image after enhanced processing; h′x,y is the Chinese martial art action video image after smooth noise reduction.
## 2.1.2. The 3D Visual Transformation Model of Chinese Martial Art Action Is Established
The difference between Chinese martial art action and the human body’s routine action is that Chinese martial art action has certain obvious changes in three-dimensional space, and the detection of Chinese martial art wrong action also needs to be discriminated from the perspective of three-dimensional space. Therefore, combined with the collection of Chinese martial art movements, this paper will establish a three-dimensional visual transformation model of Chinese martial art movements. The 3D visual data of Chinese martial art movement is accomplished by using the 3D visual collector of line structured light. Based on the principle of optical triangulation, the optical projector projects the structured light onto the surface of the human body, and the camera captures the information of the light strip, so as to obtain the two-dimensional distorted image of the light strip. The degree to which the strip varies depends on the position of the light plane relative to the camera and the surface shape of the object. Since the brightness of the strip is obviously different from that of the unilluminated region, the two-dimensional coordinates of the strip in the camera image can be obtained by using a specific image processing method. The mathematical model of line structured light sensor is used to establish the mapping between the image plane coordinate system and world coordinate system. According to this model, the coordinates of points on the strip can be calculated according to the pixel coordinates of the image. The camera model, the basic imaging model, often referred to as the basic pinhole model, is given by a central projection transformation from three-dimensional space to plane. The relation diagram of the Chinese martial art action acquisition object in the coordinate system of the online structured light image collector is shown in Figure1 [14, 15].Figure 1
Coordinate diagram of the three-dimensional visual-spatial relationship of the detected object.If the coordinate of the Chinese martial art movement acquisition object isR=xs,ys,zs in the spatial coordinate system, and the equation αxs+βys+γzs+θ=0 of the spatial plane where the imaging object is located in the collector coordinate system is known, then the spatial linear equation between the collector imaging spot center and the measured object is as follows:
(4)xsxs−Xs=ysys−Ys=zszs−Zs.In the above formula,Xs,Ys,Zs is the coordinate position of the measured object on the collector imaging plane. By putting the equation of the space plane of the imaging object in the collector coordinate system into formula (3), the three-dimensional space coordinates of any point on the light plane in the camera coordinate system can be worked out, and the three-dimensional visual transformation model H′x,y,z=newXs,Ys,Zs of martial art action can be established.
## 2.2. Generate Chinese Martial Art Error Action Fragments to Be Detected
In this paper, C3D features are used for video motion representation, and C3D features show excellent performance in motion recognition tasks. C3D features are generated by a 3D-CNN deep network. Compared with traditional features, C3D features can better represent the characteristics of action videos in time and space. Compared with 2D-CNN, 3D-CNN can better extract timing features of videos, which is very suitable for motion detection tasks. The C3D network consists of 8 convolution layers, 5 maximum pooling layers, 2 full connection layers, and one softening output layer. All the convolution layers use3×3×3 3D convolution cores, the first pooling layer uses 1×2×2 pooling cores, and the other pooling layers use 2×2×2 pooling cores [16, 17].After feature extraction in the C3D network, the visual dictionary needs to be established when sparse coding is used to generate action fragments to be detected. The establishment of the visual dictionary is based on the sample video set. For each video sample, the run-time space point of the interest detection algorithm is used to extract 3D HOG features at each detected point of interest location to obtain a feature vector. The feature vector cannot be directly used as a visual word due to its high dimension and large variance, so it needs to be quantitatively processed. Moreover, as the feature quantity of the whole sample set is very large, the calculation is usually carried out on its subset [18]. All feature vectors extracted from all video sets are taken as a set, and a subset is obtained by random sampling. The clustering algorithm is performed on the subset to obtain K categories. The center of each category is computed as the visual words of that category, which constitute the dictionary of visual features on this data set.The traditional sparse coding method of multiple dictionaries is used, and the basic sparse dictionary learning method is used to learn each dictionary.T represents the feature of the action fragment used to train the dictionary, and ZD represents the dictionary to be learned. Dictionary learning for each category uses the following formula [19, 20]:
(5)ZD,B=argminZD,B1bT−ZDB2+λBt2,where B is the sparse representation coefficient and λ is the length of the sparse window. The learning process of a dictionary is the same as that of using a dictionary. In each iteration, the dictionary is updated by the fixed coefficient matrix first, then the coefficient matrix is updated by the fixed dictionary, and the result of minimizing formula (5) is finally obtained. Each dictionary learned was used to encode the candidate fragments. Formula (6) was used to calculate the reconstruction error of each dictionary, and the corresponding fragment score of each dictionary was calculated using the normalized formula. At this point, each candidate fragment has different scores from each category dictionary, and the final score of the candidate fragment can be obtained by calculating these scores [21].
(6)Bm=argminbm1bmUm−ZDB2+λBt2.According to the above-obtained clip scores, the video clips that may contain wrong moves are selected using the correlation coefficient coding between Chinese martial art moves. After selecting the video clips that may contain wrong moves, the Chinese martial art move features in the images are extracted by combining the Chinese martial art move 3D visual transformation model, and the Chinese martial art wrong moves are detected using RBF.
## 2.3. The RBF Model Is Used to Detect the Wrong Action of Chinese Martial Art
Since there are differences in the initial position relative to the camera and the body orientation when people perform actions, this has a significant impact on the description of human posture and action recognition. Therefore, it is necessary to first regularize the coordinated system so that the initial position and orientation of the human skeleton after coordinated transformation are the same. Since the set of 3D trajectories of all skeletal joint points contains the full information of the complete action, the original action data can be reconstructed from three projections, i.e., by a 2D sequence of motion units. 2D human joint point detection is performed using a cascade pyramid network (CPN) to determine the relative position between the human joint points corresponding to each frame of the Chinese martial art action in the video. Use thek-means clustering algorithm to extract the Chinese martial art movements and the Chinese martial art action characteristics, and use the convolution network fusion [22].In the feature fusion structure, multiple video segments are fed into the network structure at the same time, but in this paper, only the same network model is used, and these segments fed into the network at the same time will share all the parameters of the convolutional layer and some of the parameters of the fully connected layer in the network. More specifically, for a given video, the video will be segmented for the first time into multiple nonoverlapping video segments of the same duration, and then a sequence of images will be obtained in each video segment using a certain sampling strategy [23]. In the proposed framework, the extracted image sequences will be fed into the 3D convolutional neural network, and each image sequence will be given a corresponding spatiotemporal feature. These features will be merged in the training phase and the resulting features will be considered the spatiotemporal features of the whole video and will be used in the subsequent optimization process. In this way, during the whole learning process, the target of optimization becomes the loss of the whole video, rather than the loss of a video segment or slice [24].After extracting and fusing the Chinese martial art action features from the video clip, an RBF neural network model is built to detect the wrong Chinese martial art action. The radial basis function often used in RBF neural networks is a Gaussian function, from which the activation function involved in the RBF neural network can be represented by the following equation [25].
(7)Rbfxr−cg=exp−12σ2xr−cg2,where xr−cg is a Euclidean norm, cg represents the center of the Gaussian function, and σ is the variance of the Gaussian. Thus, the relationship between processing output Oi and input Ij of the RBF neural network is as follows:
(8)Oi=∑i=1nQijRbfxr−cgIj.Based on the above analysis, this paper designs a stereo matching reconstruction model including four input nodes and three output nodes in the constructed RBF network model. The input nodes take the pixel values of the standard martial art action video image and the action video image to be detected in turn, and the output node is the three-dimensional coordinates of the corresponding points. The RBF network is trained with the training sample set, and the network parameters that minimize the output error are selected as the parameters of the final detection model. Input the processed standard Wushu action video image into the RBF network model, and the processed output result vector is the Wushu action detection result. Compare it with the standard Wushu action to find out whether there are wrong Wushu actions. Above, the research of three-dimensional visual detection of Chinese martial art wrong movements is realized by using radial basis function neural network technology.
## 3. Experimental Study
In order to verify the feasibility of the above theoretical design and the performance of the proposed detection method, experimental research on the detection method will be conducted in this section. According to the final experimental data analysis, the comprehensive performance and practical application of the proposed motion detection method are evaluated.
### 3.1. Experiment Content
In this experiment, various experimental standards were calibrated for a single user test environment for subsequent threshold settings. In the experiment, all experimental factors were consistent except for experimental control variables. The motion detection method based on the YOLOv algorithm in reference [6] and the motion detection method based on the fuzzy C-means algorithm in reference [7] are selected as comparison method 1 and comparison method 2, respectively. The two comparison methods were applied to the same experimental background and comprehensively compared with the 3D vision detection method based on RBF proposed in this paper. In order to verify the stability of this method, this experiment simulates the method test in different background environments and selects several subjects to test the method in the simple background and complex background, respectively.
### 3.2. Experimental Data and Preparation
Considering that martial art actions include kicking, hitting, falling, holding, falling, hitting, splitting, stabbing, and other actions, this paper uses KTH, UCF101, HMDB51, and dynamics data sets as the test data sets of algorithm performance in the experiment. The above four data sets contain different movements similar to Wushu movements. Among them, the contents of the KTH data set are simple six types of actions completed by 25 adults in four different scenes, including walking, jogging, running, boxing, hand waiting, and hand clipping, with a total of 2391 video samples. The fixed camera and single background used for image acquisition in this data set are not close to the objective and real scene performance. Part of the data set UCF101 comes from various sports samples collected by BBC/ESPN radio and television channels, and part comes from video samples downloaded from the Internet. The video website with the most sources is YouTube. UCF101 contains 13320 video samples, which are divided into 101 categories in total. Most of the data samples in the HMDB51 data set are collected from movies, which are more difficult to understand than videos in natural scenes. The data set has 6849 samples and 51 categories, and each category contains at least 101 data samples. The kinetics data set comes from YouTube and contains 400 kinds of actions, with a total of about 300000 videos. Professional practitioners in the martial art industry are invited to make a demonstration video and compare the video data as the detection standard sample of action detection methods. Using the known data set with parameters as above, the YOLOv algorithm, fuzzy C-means algorithm, and RBF neural network used in the three methods are tested.
### 3.3. Experimental Results
Table 1 compares the average detection accuracy and time consumption of the three processing algorithms and models on the selected test data sets.Table 1
Comparison of average detection accuracy and time consumption among the processing algorithms and models.
| Data set | YOLOv accuracy (%) | YOLOv time (ms) | Fuzzy C-means accuracy (%) | Fuzzy C-means time (ms) | RBF network accuracy (%) | RBF network time (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| KTH | 97.6 | 151.5 | 97.4 | 149.8 | 98.8 | 102.3 |
| UCF101 | 96.5 | 232.6 | 95.7 | 241.7 | 98.0 | 137.8 |
| HMDB51 | 96.9 | 298.1 | 93.2 | 301.5 | 97.6 | 151.4 |
| Kinetics | 95.7 | 364.3 | 90.3 | 344.3 | 97.1 | 156.9 |

Analyzing the data in Table 1, the detection accuracy of the three algorithm models is essentially the same on the KTH data set. As the complexity of the samples in the data sets increases, the accuracy of all three algorithms decreases and the processing time grows rapidly. The RBF neural network keeps its accuracy above 97% on all four data sets, with a processing time of at most 156.9 ms, far below that of the other two algorithms, indicating comparatively better performance. To further verify the effectiveness of the proposed method, the average accuracy of martial art error action detection is compared for the three methods in a simple background and a complex background; the results are shown in Figures 2 and 3. A simple background is a single background with low noise and stable illumination, whereas a complex background contains many interference factors, more noise, and unclear light and dark contrast.Figure 2
Comparison of detection accuracy in simple scenes.Figure 3
Comparison of detection accuracy in complex scenes.

Comparing Figures 2 and 3, in simple scenes the detection accuracy of the three methods differs little, mainly because a single background with low noise and stable illumination has little impact on action extraction for any of the methods. In complex scenes, the scene complexity grows with the scene number, and the increasing background and noise interference causes the detection accuracy of all three methods to decline gradually, with the largest decline for the comparison methods. In both simple and complex detection scenarios, the accuracy of the proposed method is higher than that of the two comparison methods, on average by at least 5%. The proposed method is therefore more efficient, less affected by the background environment, and more stable. Summarizing the test data, the RBF-based three-dimensional visual detection method for wrong martial art actions has high detection accuracy and sensitivity and good stability, is suitable for different detection conditions, and offers significantly improved comprehensive performance and higher practical value. The method meets the research expectation.
## 4. Conclusion
With the continuous maturing of video image processing technology, using computers to process images for detecting martial art movements is gradually being adopted. To address the problems of low detection accuracy, slow detection speed, and slow detection response, this paper proposes a three-dimensional visual detection method for wrong martial art actions based on RBF. Using the characteristics of the radial basis function neural network, the method realizes accurate detection of wrong martial art actions. Experiments show that the proposed three-dimensional vision detection method has high precision, high efficiency, and good stability and can effectively detect wrong martial art movements.
---
*Source: 1013714-2022-04-30.xml* | 2022 |
# Structural Characterization of Prosopis africana Populations (Guill., Perrott., and Rich.) Taub in Benin
**Authors:** Towanou Houètchégnon; Dossou Seblodo Judes Charlemagne Gbèmavo; Christine Ajokè Ifètayo Nougbodé Ouinsavi; Nestor Sokpon
**Journal:** International Journal of Forestry Research
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101373
---
## Abstract
The structural characterization of Prosopis africana in Benin was studied on the basis of a forest inventory conducted in three vegetation types (savannah, fallow, and field) and three climate zones. The data collected in 139 plots of 1000 m2 each covered the diameter at breast height (1.3 m above ground), the total height, and the identification and measurement of P. africana trees. Dendrometric parameters such as the Blackman and Green indices, basal area, average diameter, Lorey height, and density were calculated and interpreted. Dendrometric parameters by vegetation type and climate zone (Guinean, Sudano-Guinean, and Sudanian) were compared through analysis of variance (ANOVA). There is a significant difference in dendrometric parameters according to vegetation type and climate zone. Basal area, density, and average diameter are, respectively, 4.47 m2/ha, 34.95 stems/ha, and 37.02 cm in the fields; 3.01 m2/ha, 34.74 stems/ha, and 33.66 cm in fallows; and 3.31 m2/ha, 52.39 stems/ha, and 29.61 cm in the savannahs. The diameter and height distributions, adjusted to the theoretical Weibull distribution, show that the populations of the species everywhere present positively skewed (left asymmetric) distributions, a characteristic of single-species stands with a predominance of young individuals or of small diameters or heights.
---
## Body
## 1. Introduction
A wide range of perennial woody species distributed in the humid tropics meet many needs of indigenous populations [1]. In Benin, 172 species are consumed by the local population as food plants [2] and 814 are used as medicinal plants [3]. With the continual increase in demand for products derived from these species, traditional collection methods have gradually given way to irrational methods of collection [4]. Woody species of great usefulness to local communities are threatened in their distribution areas because of the pressure exerted on them and/or their habitats: Adansonia digitata [5], Afzelia africana and Khaya senegalensis [6], Garcinia lucida [7], Anogeissus leiocarpa [8], Pentadesma butyracea [9], and Prosopis africana [10] are edifying cases. The frequency of P. africana, for example, is becoming weaker across its range because of excessive exploitation by cutting of its stems and branches, which limits its natural regeneration capacity [11]. P. africana enriches the soil by fixing nitrogen; its leaves are rich in protein, and its sugary pods are used as foodstuffs for feeding ruminants in Nigeria [12]. The pulp of the pods contains 9.6% protein, 3% fat, and 53% carbohydrate and provides an energy value of 1168 J [13]. In some areas of Nigeria, its fermented seeds are used as a condiment in food preparations [14, 15]. As with Parkia biglobosa, the seeds of Prosopis africana are fermented and used as condiments [16]. P. africana seeds are used in Nigeria and Benin in the preparation of a local condiment [17, 18]. Similarly, P. africana is used in the preparation of foods such as soups and baked products and in the manufacture of sausages and cakes. The pods of some mesquite species are used as a staple food by many native populations in the deserts of Mexico and the Southwest United States (Simpson [19], quoted by Geesing et al. [20, 21]). According to Kaka and Seydou [22], cited by Geesing et al. [20, 21], tasting panels have found that a partial substitution of mesquite flour for corn, sorghum, or millet flour at a rate of 10% does not affect the taste of traditional dishes but helps to elevate the flavor. The pods are very palatable to cattle in Burkina Faso [23, 24]. Despite the recognized importance of the species for the rural population, the report of Benin on food tree species has clearly mentioned a near absence of information and scientific data on its ecology, its production, and its management in traditional agroforestry systems [4]. P. africana is often found in fallows, on sandy clay soil above the laterite. The strong anthropic pressure due to slash-and-burn agriculture, practiced by 70% of the agricultural population of Benin, and increasingly short fallow periods locally affect the population structure of P. africana. This is compounded by the fact that until today the species exists only in natural stands, and no management or regeneration study has been conducted in Benin, while the structure, regeneration, and likely risk of disappearance of the species remain little studied. However, the acquisition of reliable data on the ecology, distribution, and structure of a forest species is necessary for the development of an optimal and effective management and conservation plan [7, 25, 26]. The purpose of this study is to describe the dendrometric characteristics of the populations of P. africana in different plant communities for future management. Specifically, the study aims
(1) to determine the dendrometric characteristics of the different plant formations (savannahs, fallows, and fields) hosting Prosopis africana in the different climatic zones (Guinean, Sudano-Guinean, and Sudanian) of Benin,
(2) to determine the structure of P. africana trees in each of the different plant formations and climatic zones and to compare them. We made the assumptions that (i) the dendrometric characteristics of P. africana vary from one plant formation to another and from one climatic zone to another and (ii) the structure of P. africana trees varies among the different plant formations (savannahs, fallows, and fields) and the different climate zones.
## 2. Material and Methods
### 2.1. Study Area
Benin is situated around 9°30′N and 2°15′E, with an annual mean rainfall of 1039 mm and a mean temperature of 35°C. It covers a surface area of 114763 km2 with a population of 6769914 inhabitants, dominated by women (3485795) [27]. Three climatic zones associated with their vegetation types can broadly be distinguished (Figure 1):
(1) The southern zone gathering the coastal and Guineo-Congolese zones: from the coast up to latitude 7°N, the climate is subequatorial, with two rainy seasons alternating with a long dry season from December to February. The coastal zone is dominated by mangrove swamps with predominant species such as Ipomea pescaprae, Remirea maritima, Rhizophora racemosa, Avicennia germinans, and Dalbergia ecastaphyllum. The Guineo-Congolese zones are dominated by semideciduous forests with predominant species such as Dialium guineense, Triplochiton scleroxylon, Strombosia glaucescens, Cleistopholis patens, Ficus mucuso, Cola cordifolia, Ceiba pentandra, Trilepisium madagascariense, Celtis spp., Albizia spp., Antiaris toxicaria, Diospyros mespiliformis, Drypetes floribunda, Memecylon afzelii, Celtis brownii, Mimusops andogensis, Daniellia oliveri, Parkia spp., and Vitellaria paradoxa [28–31].
(2) The transition zone: this zone is situated between latitudes 7°N and 9°N. The climate becomes tropical and subhumid, tending toward a pattern of one rainy season and one dry season; the attenuation of the two rainfall peaks indicates a tendency toward a unimodal rainfall regime. Dominant vegetation types are gallery forests and savannahs with predominant species such as Isoberlinia doka, I. tomentosa, Monotes kerstingii, Uapaca togoensis, Anogeissus leiocarpa, Antiaris toxicaria, Ceiba pentandra, Blighia sapida, Dialium guineense, Combretum fragrans, Entada africana, Maranthes polyandra, Pterocarpus erinaceus, Terminalia laxiflora, and Detarium microcarpum [31].
(3) The northern zone or Sudanian zone: this zone is characterized by a tropical climate with a unimodal rainfall regime. The rainy season lasts on average seven months, from April to October, with its maximum in August or September. Dominant vegetation types are dry woodland and savannahs. Predominant species are Haematostaphis barteri, Lannea spp., Khaya senegalensis, Anogeissus leiocarpa, Tamarindus indica, Capparis spinosa, Ziziphus mucronata, Combretum spp., and Cissus quadrangularis. The high pressure of human activities on forests in this zone led to the extinction of species such as Milicia excelsa, Khaya senegalensis, Afzelia africana, and Pterocarpus erinaceus [31, 32]. This is the case of Prosopis africana, which has become rare in fallows according to von Maydell [16].Figure 1
Map showing zones of study.
### 2.2. Data Collection
The ecological and structural characterization of P. africana was done using inventories in three habitats of P. africana (farm, fallow, and savannah) according to climatic zones. Adult trees (DBH ≥ 10 cm) were measured within circular plots of 1000 m², and regeneration was measured within 5 subplots of about 28 m2 each. A standard distance of 100 m was kept between two plots in each vegetation type. Table 1 shows the plot distribution according to the ecological zones of the country. Variables measured on each tree included the diameter at breast height (DBH ≥ 10 cm) and the total and bole heights.Table 1
Plot distribution according to ecological zones.

| Climatic zone | Farm | Fallow | Savannah | Total |
| --- | --- | --- | --- | --- |
| Guinean | 16 | 12 | 11 | 39 |
| Sudano-Guinean | 14 | 14 | 18 | 46 |
| Sudanian | 10 | 21 | 23 | 54 |
| Total | 40 | 47 | 52 | 139 |
### 2.3. Data Analysis
To determine the dendrometric characteristics of P. africana, dendrometric parameters were calculated. These parameters are presented in Table 2.Table 2

Dendrometric parameters and their formulas.

| Parameter | Formula |
| --- | --- |
| Density | $N = 10000\,n/s$ |
| Mean basal area | $G = \frac{\pi}{40000\,s}\sum_{i=1}^{n} d_i^{2}$ |
| Lorey height of individuals | $H_L = \frac{\sum_{i=1}^{n} g_i h_i}{\sum_{i=1}^{n} g_i}$ |
| Diameter of the tree of mean basal area | $D_g = \sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}}$ |
| Blackman index | $I_B = \frac{S_N^{2}}{\bar{N}}$ |
| Green index | $I_G = \frac{I_B - 1}{n - 1}$ |

Notes: n, total number of trees within one plot; s, plot area; d_i, diameter of the ith tree; g_i and h_i, basal area and height of the ith tree; S_N^2, variance of the number of trees; N̄, mean number of trees.

The structural characterization of P. africana according to vegetation types and climatic zones was done using the diameter and height class-size distributions. The frequency histograms of diameters and heights were adjusted to the 3-parameter Weibull distribution using the software Minitab 16. This distribution was used because it is simple to use [33]. According to Rondeux [34], its probability density function is given by the following equation:

(1) $f(x) = \frac{c}{b}\left(\frac{x-a}{b}\right)^{c-1}\exp\left[-\left(\frac{x-a}{b}\right)^{c}\right]$.

In this equation, x denotes the diameter or height of trees, and a, b, and c are, respectively, the position, scale, and form parameters. Depending on the form parameter, different forms of distribution can be distinguished. Table 3 shows the different forms of distribution for the 3-parameter Weibull model.Table 3
Distribution forms from the 3-parameter Weibull model according to the parameter c.

| Value of c | Type of distribution | Reference |
| --- | --- | --- |
| c < 1 | Inverse J distribution describing multispecific groups of species | [35] |
| c = 1 | Exponential distribution describing populations in extinction | |
| 1 < c < 3.6 | Left-skew distribution describing monospecific groups of trees with small diameters | |
| c = 3.6 | Bell-shaped distribution describing monospecific groups or plantation species | |
| c > 3.6 | Positively skewed distribution describing monospecific groups of trees with big diameters | |

Dendrometric parameters according to vegetation types and climatic zones were compared using two-way ANOVA with the software Minitab 16.
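To illustrate the formulas in Table 2, the following is a minimal Python sketch that computes the plot-level parameters from one plot's measurements; the function names, the 1000 m² default plot area, and the sample trees are hypothetical, and the sketch assumes diameters in cm and heights in m.

```python
import math

def dendrometric_parameters(dbh_cm, heights_m, plot_area_m2=1000.0):
    """Compute the stand parameters of Table 2 for one plot.

    dbh_cm    : diameters at breast height (cm) of trees with DBH >= 10 cm
    heights_m : corresponding total heights (m)
    """
    n = len(dbh_cm)
    density = n * 10000.0 / plot_area_m2                       # N, stems/ha
    dg = math.sqrt(sum(d * d for d in dbh_cm) / n)             # Dg, cm
    g = math.pi / (4 * 10000.0) * sum(d * d for d in dbh_cm)   # basal area, m^2
    g_per_ha = g * 10000.0 / plot_area_m2                      # G, m^2/ha
    # Lorey height: basal-area-weighted mean height
    gi = [math.pi / 4 * (d / 100.0) ** 2 for d in dbh_cm]
    h_lorey = sum(w * h for w, h in zip(gi, heights_m)) / sum(gi)
    return density, dg, g_per_ha, h_lorey

def blackman_green(tree_counts):
    """Blackman (IB) and Green (IG) indices; n is taken here as the
    number of plots, in the spirit of Table 2."""
    n = len(tree_counts)
    mean = sum(tree_counts) / n
    var = sum((x - mean) ** 2 for x in tree_counts) / (n - 1)
    ib = var / mean            # IB = S_N^2 / mean count
    ig = (ib - 1) / (n - 1)    # IG near 0 suggests a random distribution
    return ib, ig

# hypothetical plot with five trees
print(dendrometric_parameters([12.0, 25.0, 31.0, 18.0, 40.0],
                              [6.0, 10.0, 12.0, 8.0, 14.0]))
print(blackman_green([3, 5, 2, 7, 4, 6]))
```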
## 3. Results
### 3.1. Dendrometric Parameters according to Vegetation Types
Table 4 shows the dendrometric characteristics of P. africana populations and of the whole stands according to vegetation types. Parameter means compared with the Student t-test revealed significant differences (P<0.01). In fact, the diameter, basal area, and Lorey height of P. africana populations range, respectively, from 30 to 37 cm, 3 to 4 m²/ha, and 9 to 11 m. High values of diameter and basal area were observed in the farms, whereas high values of height were observed in fallows. As shown in Table 4, the probability values indicate a significant difference of parameters (density, diameter, and basal area) according to vegetation types. Besides, the regeneration density was found to be high in habitats under low pressure.Table 4
Dendrometric characterization of P. africana according to vegetation types.

| Parameter | Farms (pl = 40), M (SE) | Fallows (pl = 47), M (SE) | Savannah (pl = 52), M (SE) | P value |
| --- | --- | --- | --- | --- |
| **P. africana** | | | | |
| Density (N, stems/ha) | 34.95ab (5.54) | 34.74b (5.16) | 52.39a (4.97) | 0.022 |
| Diameter (Dg, cm) | 37.02a (2.36) | 33.66a (2.20) | 29.61a (2.12) | 0.067 |
| Basal area (G, m²/ha) | 4.47a (0.71) | 3.01a (0.67) | 3.31a (0.65) | 0.303 |
| Lorey height (HL, m) | 9.25a (0.42) | 10.72b (0.39) | 8.66b (0.38) | 0.001 |
| Contribution to basal area (Cs, %) | 86.99a (4.35) | 73.96ab (4.06) | 65.69b (3.90) | 0.002 |
| Regeneration density (Nr, stems/ha) | 7.28a (12.3) | 23.21a (11.48) | 27.96a (11.05) | 0.438 |
| **Global** | | | | |
| Density (N, stems/ha) | 58.23b (16.81) | 108.62ab (15.68) | 126.28a (15.09) | 0.010 |
| Diameter (Dg, cm) | 10.73a (0.66) | 8.91ab (0.62) | 8.57b (0.60) | 0.041 |
| Basal area (G, m²/ha) | 4.44a (0.89) | 5.47a (0.83) | 4.97a (0.80) | 0.696 |

Note: M = mean, SE = standard deviation. Means on the same line followed by the same letter (a, b, or ab) are not significantly different at the 5% level (Tukey test).
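For readers who want to reproduce this kind of comparison outside Minitab 16, here is a hedged sketch of a two-way ANOVA followed by a Tukey pairwise test using statsmodels; the per-plot records below are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical per-plot records: density by vegetation type and climatic zone
df = pd.DataFrame({
    "density":    [30, 42, 55, 28, 39, 60, 35, 33, 48, 31, 37, 58],
    "vegetation": ["farm", "fallow", "savannah"] * 4,
    "zone":       ["Guinean"] * 3 + ["Sudano-Guinean"] * 3
                  + ["Sudanian"] * 3 + ["Guinean"] * 3,
})

# two-way ANOVA (type II): density ~ vegetation type + climatic zone
model = ols("density ~ C(vegetation) + C(zone)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey pairwise comparison, the kind of test behind the a/b/ab letters
print(pairwise_tukeyhsd(df["density"], df["vegetation"], alpha=0.05))
```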
### 3.2. Dendrometric Parameters according to Climatic Zones
Table 5 shows the dendrometric characteristics of the whole vegetation and of P. africana populations according to the different climatic zones. Considering the whole populations, comparison of parameter means using the Student t-test revealed a significant difference (P<0.01) of parameters (density, diameter, and basal area). As for P. africana populations, the diameter and Lorey height were on average, respectively, 33 cm and 10 m. The table shows that the diameter increases along the rainfall gradient; in fact, the diameter increases as the vegetation becomes better watered. But the climatic gradient did not affect tree height: mean heights were, respectively, 11 m in the Guinean zone, 9 m in the Sudano-Guinean zone, and 9 m in the Sudanian zone. Parameter means compared using the Student t-test revealed a significant difference (P<0.05) of parameters (density, diameter, and basal area) according to climatic zones. The basal areas were, respectively, 3 m²/ha in the Guinean zone, 6 m²/ha in the Sudano-Guinean zone, and 1 m²/ha in the Sudanian zone.Table 5
Dendrometric characterization of P. africana according to climate zone.

| Parameter | Guinean zone (pl = 39), M (SE) | Sudanian zone (pl = 54), M (SE) | Sudano-Guinean zone (pl = 46), M (SE) | P value |
| --- | --- | --- | --- | --- |
| **P. africana** | | | | |
| Density (N, stems/ha) | 28.45b (5.59) | 35.41b (4.98) | 58.21a (5.09) | 0.000 |
| Diameter (Dg, cm) | 40.38a (2.38) | 22.63b (2.12) | 37.28a (2.17) | 0.000 |
| Basal area (G, m²/ha) | 3.33b (0.73) | 1.38b (0.65) | 6.08a (0.66) | 0.000 |
| Lorey height (HL, m) | 11.25a (0.43) | 8.88b (0.38) | 8.51b (0.39) | 0.000 |
| Contribution to basal area (Cs, %) | 69.38b (4.39) | 83.37a (3.91) | 73.89ab (4.00) | 0.051 |
| Regeneration density (Nr, stems/ha) | 4.85a (12.44) | 11.10a (11.08) | 42.50a (11.31) | 0.051 |
| **Global** | | | | |
| Density (N, stems/ha) | 108.15ab (16.98) | 61.71b (15.13) | 123.27a (15.45) | 0.014 |
| Diameter (Dg, cm) | 11.57a (0.67) | 6.70b (0.60) | 9.94a (0.61) | 0.000 |
| Basal area (G, m²/ha) | 5.07a (0.90) | 2.10b (0.80) | 7.72a (0.82) | 0.000 |

Note: M = mean, SE = standard deviation. Means on the same line followed by the same letter (a, b, or ab) are not significantly different at the 5% level (Tukey test).

Besides, the Blackman index (IB) obtained was 32.20 and the Green index 0.054, which is near 0 and shows a random distribution of P. africana populations across climatic zones.
### 3.3. Diameter Class-Size Distribution of P. africana Populations
Figures 2, 3, and 4 show the diameter class-size distribution of P. africana populations in the three climatic zones of Benin. Figure 3 indicates an inverse J distribution of P. africana populations, describing multispecific groups of species. The two other figures (Figures 2 and 4) indicate a left-skew distribution describing monospecific groups of trees dominated by subjects with small diameters. In fact, subjects with diameters ranging from 10 to 70 cm predominate in the Guinean zone, and subjects with diameters over 80 cm are quasi-absent there. Unlike the Guinean zone, small subjects in the 10–20 cm diameter class were found to predominate in the Sudano-Guinean zone. As for the Sudanian zone, subjects with diameters between 10 and 30 cm are the most abundant, and subjects with diameters over 90 cm are quasi-absent.Figure 2
Diameter class-size distribution in the Guinean zone.Figure 3
Diameter class-size distribution in the Sudano-Guinean zone.Figure 4
Diameter class-size distribution in the Sudanian zone.
### 3.4. Height Class-Size Distribution of P. africana Populations
Figures 5, 6, and 7 show the height class-size distribution of P. africana populations in the three climatic zones of Benin. On the whole, the form parameter (c) ranges between 1 and 3.6, indicating a left-skew distribution describing monospecific groups dominated by subjects with small heights. In fact, subjects with heights between 8 and 12 m predominate in the Guinean zone. Unlike the Guinean zone, subjects with heights between 6 and 10 m were found to predominate in the Sudano-Guinean zone. As for the Sudanian zone, subjects with heights between 6 and 12 m are the most abundant. Besides, subjects with heights over 21 m are quasi-absent in the Guinean zone, those over 23 m are quasi-absent in the Sudano-Guinean zone, and those over 22 m are quasi-absent in the Sudanian zone.Figure 5
Height class-size distribution in the Guinean zone.Figure 6
Height class-size distribution in the Sudano-Guinean zone.Figure 7
Height class-size distribution in the Sudanian zone.
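As a complement to these results, here is a minimal sketch of fitting equation (1) to a diameter sample and reading the form parameter c against Table 3, using scipy's weibull_min (shape = c, loc = a, scale = b); the simulated DBH values and the choice to fix a just below the 10 cm inventory threshold are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# hypothetical DBH sample (cm) for one climatic zone
rng = np.random.default_rng(1)
dbh = 10.0 + rng.weibull(2.0, size=200) * 20.0  # mostly small stems

# fit the 3-parameter Weibull of equation (1); a is fixed slightly below
# the 10 cm inventory threshold to keep the likelihood finite
c, a, b = stats.weibull_min.fit(dbh, floc=9.9)
print(f"form c = {c:.2f}, position a = {a:.1f}, scale b = {b:.1f}")

# interpret c following Table 3
if c < 1:
    form = "inverse J distribution (multispecific stand)"
elif 1 < c < 3.6:
    form = "left-skew distribution (monospecific stand, small diameters)"
elif c > 3.6:
    form = "positively skewed distribution (monospecific stand, big diameters)"
else:
    form = "exponential or bell-shaped boundary case"
print(form)
```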
## 4. Discussion
### 4.1. Dendrometric Characterization of P. africana Populations
#### 4.1.1. Dendrometric Features
Dendrometric parameters are important tools in forestry. Sokpon [36] reported that the average tree diameter is a useful parameter and is often recommended in forestry. The average density noted in savannah stands (52.39 trees/ha) is significantly lower than the values obtained by Glèlè Kakaï et al. [37] in stands of Pterocarpus erinaceus (169.4 trees/ha), by Sagbo [38] in stands dominated by Isoberlinia spp. (205 trees/ha), and by Ouédraogo et al. [39] in Burkina Faso (4000 individuals/ha). The values in the Sudano-Guinean zone (58.21 stems/ha) are also lower than those of Gbesso et al. [40] for Borassus aethiopum in Benin (78 and 133 stems/ha) in this same area. These differences may be partly due to the inventory methods used and to the fact that the inventoried stands are not exactly the same. They may also result, in part, from the strong anthropic pressure exerted by local populations on valuable forest trees. The diameter of P. africana populations is higher in fields and in the Guinean and Sudano-Guinean zones. This can be explained by the abundance of rainfall, which could have a positive effect on diameter. Note that conservation in the fields by local people [18] for human food purposes (the seeds are condiments sold in the markets of Effèoutè in Kétou, Dassa-Zoumé, Glazoué, Aplahoué, and Klouékanmè) in these climatic zones could also have a positive effect: trees in the fields benefit from the care given to the crops. Regarding the basal area of the plant groups studied, it varies between 3.31 and 4.47 m2/ha across vegetation types and between 1.38 and 6.08 m²/ha across climate zones. This reveals the interest of P. africana for exploitation in arboriculture. The number of adult trees per hectare is between 34.74 and 52.39 across vegetation types and between 28.45 and 58.21 across climate zones. As for regeneration, the variation is between 7.28 and 27.96 across vegetation types and between 4.85 and 42.50 across climate zones. These results indicate that the population is denser in savannahs than in anthropic formations, which can mean that the species is under pressure in anthropogenic environments. These results are similar to those of Assogbadjo et al. [41] in the Wari-Maro forest reserve, which showed that the dendrometric features of Anogeissus leiocarpa have higher values in stands under low pressure. Similar results were also obtained by Kiki [42] on Vitex doniana and by Fandohan et al. [43] on Tamarindus indica, showing that human pressure has a negative effect on dendrometric parameters such as adult density and regeneration density but a positive effect on the mean diameter. Heights are greater in the wettest (Guinean) zone than in the other zones; thus we find individuals of 11.25 m. These values are lower than those of Ouinsavi et al. [44], who obtained palmyra palms over 15 m high in the Sudanian region, and those of Bonou et al. [45] (16.9 m) and Sinsin et al. [46] (17 m) for Afzelia africana.
### 4.2. Structures in Diameter and Height
The management of forest stands requires mastery of the diameter and height structure of the trees. These structures are indicative of events related to the life of the stands [34]. Forest stands, depending on whether they are single-species or multispecies, even-aged, young, or old, have typical structure types, and it is known that the diameter and height structures of these forest types can be adjusted to known theoretical distributions [35]. According to Rondeux [34], Philip [47], and McElhinny et al. [48], in an even-aged structure, the numbers of stems per diameter class have a typical distribution often resembling a Gaussian curve, which can become asymmetrical or bimodal in certain circumstances. According to the same authors, in an even-aged stand all trees have the same or a close age, with a low variability of height that is mainly explained by their social position (dominant, codominant). The horizontal structures of the populations studied here mostly show a left asymmetry characteristic of single-species stands with a predominance of young individuals, small diameters, or low heights. According to Arbonnier [49], the Sudanian climate is suitable for the optimal development of the African mesquite; there is thus generally a relationship between the temperament of a species and its stem diameter class distribution. However, the diameter structure of the Sudano-Guinean zone presented an "inverted J" distribution, which, according to Rondeux [34] and Husch et al. [35], is characteristic of multispecies stands. In the present study, only the diameter structure of the Sudano-Guinean zone shows this "inverted J" appearance, reflecting a relative predominance of individuals with small diameters. Likewise, the structure of P. africana diameters in this climatic zone has a bell-shaped appearance, characteristic of single-species stands; this distribution is left-skewed (1<c<3.6), characteristic of a relative predominance of young individuals or small diameters. But individuals of P. africana in this climatic zone are not all of the same age or young, and the observed left asymmetries cannot be explained by the youth of the population of the species but rather by disruption or vulnerability at certain stages of its development. Regarding the distribution of tree heights, it generally has a Gaussian shape that may become asymmetrical depending on the living conditions of the stand; here the height structure has a bell shape with the left asymmetry characteristic of stands dominated by individuals of low height. According to Bonou et al. [45], the Weibull probability density function is becoming increasingly popular for modeling the diameter distributions of both even- and uneven-aged forest stands. Its popularity derives from its flexibility to take on a number of different shapes corresponding to many observed unimodal tree-diameter distributions. In addition, the cumulative distribution function of the Weibull exists in closed form and thus allows a quick and easy estimation of the number of trees by diameter class, without integration of the probability density function, once the parameters have been fitted.
The bell-shaped diameter and height class distributions of the African mesquite, with a left dissymmetry, with the notable exception of the diameter structure of the Sudano-Guinean zone, corroborate the results of Cassou and Depomier [50] for the African fan palm population of Wolokonto in Burkina Faso and those of Ouinsavi et al. [44] for palm trees in Benin. Similar results were also obtained by Kperkouma et al. [51] for the shea butter trees of Donfelgou in Togo, and Bonou et al. [45] obtained the same distribution for Afzelia africana populations in Benin. However, this structure might derive not only from the species' temperament but also from human pressure.
## 4.1. Dendrometric Characterization ofP. africanaPopulations
### 4.1.1. Dendrometric Features
Dendrometric parameters are important tools used in forestry. Sokpon [36] reported that the average diameter of the tree is a useful parameter of interest and is often recommended in forestry. The average density values noted in savannah stands (52.39 trees/ha) are significantly lower than those obtained by Glèlè Kakaï et al. [37] in stands ofPterocarpus erinaceus (169.4 trees/ha), by Sagbo [38] in the stands dominated byIsoberliniaspp. (205 trees/ha), and by Ouédraogo et al. [39] in Burkina Faso (4000 individuals/ha). The values in the Sudano-Guinean zone (58.21 stems/ha) are also lower than Gbesso et al. [40] in BeninBorassus aethiopum (78 and 133 stems/ha) in this same area. These differences may be partly due to inventory methods used and also because the inventoried stands are not exactly the same. They can also reduce, in part, the strong anthropic pressure from local populations on forest trees of value. The diameter of the populations ofP. africana is higher in fields and in Guinean and Sudano-Guinean areas. This can be explained by the abundance of rainfall that could have a positive effect on the size of diameters. Note that conservation in the fields by local people [18] to human food purposes (because seeds are condiments that sell in markets of Effèoutè in Kétou, Dassa-Zoumé, Glazoué, Aplahoué, and Klouékanmè) in these climatic zones could have a positive effect. Trees have benefited interviews from crop in the fields. Regarding the basal area of the plants groups studied, it varies between 3.31 and 4.47 m2/ha in the vegetation and then 1.38 and 6.08 m²/ha in climate zones. This reveals the importance in the exploitation ofP. africana in arboriculture. The number of plants per hectare is between 34.74 and 52.39 for adults trees in vegetation and 28.45 and 58.21 in climate zones. As for regeneration, the variation is between 7.28 and 27.96 in the vegetation and then between 4.85 and 42.50 in climate zones. These results indicate that the population is very dense in savannahs than in the anthropic formations, which can mean that the species is under pressure in anthropogenic environments. These results are similar to those of Assogbadjo et al. [41] in the forest reserve Wari-Maro which showed that dendrometric features have more values forAnogeissus leiocarpa in stands under low pressure. Such results were also obtained by Kiki [42] onVitex doniana and Fandohan et al. [43] on theTamarindus indica and have shown that human pressures have a negative effect on dendrometric parameters such as density and regeneration adult density but a positive effect on the mean diameter. The heights are higher in the wettest area (Guinea) than other areas. Thus we find individuals of 11.25 m. These results are lower than those of Ouinsavi et al. [44] who obtained, respectively, the palmyra over 15 m high in the Sudan region and those of Bonou et al. [45] (16.9 m) and Sinsin et al. [46] (17 m) ofAfzelia africana.
## 4.1.1. Dendrometric Features
Dendrometric parameters are important tools used in forestry. Sokpon [36] reported that the average diameter of the tree is a useful parameter of interest and is often recommended in forestry. The average density values noted in savannah stands (52.39 trees/ha) are significantly lower than those obtained by Glèlè Kakaï et al. [37] in stands ofPterocarpus erinaceus (169.4 trees/ha), by Sagbo [38] in the stands dominated byIsoberliniaspp. (205 trees/ha), and by Ouédraogo et al. [39] in Burkina Faso (4000 individuals/ha). The values in the Sudano-Guinean zone (58.21 stems/ha) are also lower than Gbesso et al. [40] in BeninBorassus aethiopum (78 and 133 stems/ha) in this same area. These differences may be partly due to inventory methods used and also because the inventoried stands are not exactly the same. They can also reduce, in part, the strong anthropic pressure from local populations on forest trees of value. The diameter of the populations ofP. africana is higher in fields and in Guinean and Sudano-Guinean areas. This can be explained by the abundance of rainfall that could have a positive effect on the size of diameters. Note that conservation in the fields by local people [18] to human food purposes (because seeds are condiments that sell in markets of Effèoutè in Kétou, Dassa-Zoumé, Glazoué, Aplahoué, and Klouékanmè) in these climatic zones could have a positive effect. Trees have benefited interviews from crop in the fields. Regarding the basal area of the plants groups studied, it varies between 3.31 and 4.47 m2/ha in the vegetation and then 1.38 and 6.08 m²/ha in climate zones. This reveals the importance in the exploitation ofP. africana in arboriculture. The number of plants per hectare is between 34.74 and 52.39 for adults trees in vegetation and 28.45 and 58.21 in climate zones. As for regeneration, the variation is between 7.28 and 27.96 in the vegetation and then between 4.85 and 42.50 in climate zones. These results indicate that the population is very dense in savannahs than in the anthropic formations, which can mean that the species is under pressure in anthropogenic environments. These results are similar to those of Assogbadjo et al. [41] in the forest reserve Wari-Maro which showed that dendrometric features have more values forAnogeissus leiocarpa in stands under low pressure. Such results were also obtained by Kiki [42] onVitex doniana and Fandohan et al. [43] on theTamarindus indica and have shown that human pressures have a negative effect on dendrometric parameters such as density and regeneration adult density but a positive effect on the mean diameter. The heights are higher in the wettest area (Guinea) than other areas. Thus we find individuals of 11.25 m. These results are lower than those of Ouinsavi et al. [44] who obtained, respectively, the palmyra over 15 m high in the Sudan region and those of Bonou et al. [45] (16.9 m) and Sinsin et al. [46] (17 m) ofAfzelia africana.
## 4.2. Structures in Diameter and Height
The development of forest stands requires the mastery of the structure diameter and height of trees. These structures are indicative of events related to the life of stands [34]. Forest stands, according to whether they are single species or multispecies, even-aged, young, and old, have structures types. It is known that the structures in diameter and height of these forest types are adjusted to known theoretical distributions [35]. According to Rondeux [34], Philip [47], and McElhinny et al. [48], in even-aged structure, sizes by diameter classes have typical distribution often resembling a Gaussian curve that can become asymmetrical bimodal seen in certain circumstances. According to the same authors, in an even-aged stand, all trees have the same age or close with a low variable height which is mainly explained by their social position (dominant, codominant). The horizontal structures of populations have mostly left asymmetrical characteristic of single-species stands with predominance of young individuals or small diameters or low heights. According to Arbonnier [49], Sudanian climate was suitable for the optimal development of the African mesquite trees. There is thus generally a relationship between species temperament and their stem diameter class distribution. However, the diameter structure of the Sudano-Guinean zone presented a distribution whose appearance is in “inverted J” which, according to Rondeux [34] and Husch et al. [35], is a characteristic of multispecies stands. According to the same authors, in an even-aged stand, all trees have the same age or close with a low variable height that is mainly explained by their social position (dominant, codominant). In the case of this study, only the diameter structure at the Sudano-Guinean zone has a distribution whose appearance is in “inverted J” feature of multispecies stands. This reflects a relative predominance of individuals with small diameters. Similarly, the structure ofP. africana diameters, at this climatic zone, has a bell shaped appearance, a characteristic of single-species stands. This distribution is left-skewed (1<c<3.6), a characteristic of a relative predominance of young individuals or small diameters. But we can say that individuals ofP. africana of this climatic zone are not all the same age or young, and the observed left asymmetries cannot be explained by the youth of the population of the species but by their disruption or vulnerability at certain stages of their development. Regarding the distribution of tree height, it generally has a Gaussian shape which may be asymmetrical in the conditions of life of the stand. As for the overhead structure, the assembly has a bell shape of a distribution to the left asymmetry characteristic of stands with predominance of individuals with low heights. According to Bonou et al. [45], the use of Weibull distribution probability density function is becoming increasingly popular for modeling the diameter distributions of both even- and uneven-aged forest stands. The popularity of Weibull is derived from its flexibility to take on a number of different shapes corresponding to many different observed unimodal tree-diameter distributions. In addition, the cumulative distribution function of Weibull exists in closed form and thus allows for quick and easy estimation of the number of trees by diameter class without integration of the probability density function once the parameters have been fitted. 
The bell-shaped distribution obtained for the diameter and height classes of the African mesquite, with a left dissymmetry (the diameter structure of the Sudano-Guinean zone being a notable exception), corroborates the results of Cassou and Depomier [50] for the African fan palm population of Wolokonto in Burkina Faso and those of Ouinsavi et al. [44] for palm trees in Benin. Similar results were also obtained by Kperkouma et al. [51] for the shea butter trees of Donfelgou in Togo, and Bonou et al. [45] obtained the same distribution for Afzelia africana populations in Benin. However, this structure might derive not only from the species' temperament but also from human pressure.
## 5. Conclusion
The structural characterization of P. africana populations has provided the dendrometric and horizontal structuring of P. africana stand groups, distinct in the specific traits induced by climatic conditions and by the vegetation strata (fields, fallows, and savannahs). The structural characteristics of the populations varied greatly from one climatic zone to another and from one plant formation to another. It can be concluded that the species is present in all climatic zones of Benin, with varied densities, and that it is quite abundant in the Sudanian and Sudano-Guinean areas. The density averages 126.28 stems/ha, 109 stems/ha, and 58 stems/ha in savannahs, fallows, and fields, respectively. The mean tree diameter for these formations varies between 9 and 11 cm, with the highest value in the fields. The basal area averages 4.44 m²/ha in the fields, 5 m²/ha in the savannahs, and 5 m²/ha in the fallows. Regeneration density varies between 5 and 43 stems/ha: it is higher in the Sudano-Guinean (43 individuals/ha) and Sudanian (11 individuals/ha) areas and lowest in the Guinean zone (5 individuals/ha). The diameter of the tree of mean basal area is most interesting in the Guinean area compared to the other areas; the stands of this area offer potential for timber and service-wood development, which would draw added value from the sale of wood. The ecological structure of the African mesquite populations of Benin, adjusted to the Weibull distribution, showed a bell-shaped curve with a left dissymmetry, indicating the predominance of young trees within these populations.
---
*Source: 101373-2015-07-29.xml* | 101373-2015-07-29_101373-2015-07-29.md | 49,917 | Structural Characterization ofProsopis africana Populations (Guill., Perrott., and Rich.) Taub in Benin | Towanou Houètchégnon; Dossou Seblodo Judes Charlemagne Gbèmavo; Christine Ajokè Ifètayo Nougbodé Ouinsavi; Nestor Sokpon | International Journal of Forestry Research
(2015) | Earth and Environmental Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101373 | 101373-2015-07-29.xml | ---
## Abstract
The structural characterization of Prosopis africana in Benin was studied on the basis of a forest inventory conducted in three vegetation types (savannah, fallow, and field) and three climatic zones. The data, collected in 139 plots of 1000 m² each, comprised the diameter at breast height (1.3 m above ground), total height, and identification of P. africana trees. Dendrometric parameters such as the Blackman and Green indices, basal area, average diameter, Lorey height, and density were calculated and interpreted. Dendrometric parameters by vegetation type and climatic zone (Guinean, Sudano-Guinean, and Sudanian) were compared through analysis of variance (ANOVA). There is a significant difference in dendrometric parameters according to vegetation type and climatic zone. Basal area, density, and average diameter are, respectively, 4.47 m²/ha, 34.95 stems/ha, and 37.02 cm in the fields; 3.01 m²/ha, 34.74 stems/ha, and 33.66 cm in fallows; and 3.31 m²/ha, 52.39 stems/ha, and 29.61 cm in the savannahs. The observed diameter and height distributions, fitted to the theoretical Weibull distribution, show that the diameter and height structures of the species' populations are all positively skewed (left-asymmetric), characteristic of single-species stands with a predominance of young individuals or of small diameters or heights.
---
## Body
## 1. Introduction
A wide range of perennial woody species distributed in the humid tropics meet many needs of indigenous populations [1]. In Benin, 172 species are consumed by the local population as food plants [2] and 814 are used as medicinal plants [3]. With the continual increase in demand for products derived from these species, traditional collection methods have gradually given way to irrational methods of collection [4]. Woody species of great usefulness to local communities are threatened in their distribution areas because of the pressure exerted on them and/or their habitats: Adansonia digitata [5], Afzelia africana and Khaya senegalensis [6], Garcinia lucida [7], Anogeissus leiocarpa [8], Pentadesma butyracea [9], and Prosopis africana [10] are edifying cases. The frequency of P. africana, for example, is becoming lower across its range because of overexploitation by cutting of its stems and branches, which limits its natural regeneration capacity [11]. P. africana enriches the soil by fixing nitrogen; its leaves are rich in protein, and its sugary pods are used as feed for ruminants in Nigeria [12]. The pulp of the pods contains 9.6% protein, 3% fat, and 53% carbohydrate and provides an energy value of 1168 J [13]. In some areas of Nigeria, its fermented seeds are used as a condiment in food preparations [14, 15]. As with Parkia biglobosa, the seeds of Prosopis africana are fermented and used as condiments [16]; they are used in Nigeria and Benin in the preparation of a local condiment [17, 18]. Similarly, P. africana is used in the preparation of foods such as soups and baked products and in the manufacture of sausages and cakes. The pods of some mesquite species are used as a staple food by many native populations in the deserts of Mexico and the southwestern United States (Simpson [19], quoted by Geesing et al. [20, 21]). Kaka and Seydou [22], cited by Geesing et al. [20, 21], report that tasting panels found that a partial substitution of mesquite flour for maize, sorghum, or millet flour at a rate of 10% does not affect the taste of traditional dishes and even helps to elevate the flavor. The pods are very palatable to cattle in Burkina Faso [23, 24]. Despite the recognized importance of the species for the rural population, the report of Benin on food tree species clearly mentions the near absence of information and scientific data on its ecology, its production, and its management in traditional agroforestry systems [4]. P. africana is often found in fallows, on sandy-clay soil above laterite. The strong anthropic pressure due to slash-and-burn agriculture, practiced by 70% of the agricultural population of Benin, and increasingly short fallow periods locally affect the population structure of P. africana. This is compounded by the fact that, to date, the species exists only in natural stands and has not been the subject of any management or regeneration study in Benin, while its structure, regeneration, and likely risk of disappearance remain little studied. However, the acquisition of reliable data on the ecology, distribution, and structure of a forest species is necessary for the development of an effective management and conservation plan [7, 25, 26]. The purpose of this study is to describe the dendrometric characteristics of P. africana populations in different plant communities with a view to future management. Specifically, the study aims:
(1) to determine the dendrometric characteristics of P. africana in the different plant formations (savannahs, fallows, and fields) and in the different climatic zones (Guinean, Sudano-Guinean, and Sudanian) of Benin;
(2) to determine the structure of P. africana trees in each of the plant formations and climatic zones and to compare them. We made the assumptions that (i) the dendrometric characteristics of P. africana vary from one plant formation to another and from one climatic zone to another and (ii) the structure of P. africana trees varies among the different plant formations (savannahs, fallows, and fields) and the different climatic zones.
## 2. Material and Methods
### 2.1. Study Area
Benin is situated between 9°30′N and 2°15′E, with an annual mean rainfall of 1039 mm and a mean temperature of 35°C. It covers a surface area of 114,763 km² with a population of 6,769,914 inhabitants, the majority of them women (3,485,795) [27]. Three climatic zones, with their associated vegetation types, can broadly be distinguished (Figure 1):
(1) The southern zone, gathering the coastal and Guineo-Congolese zones: from the coast up to latitude 7°N, the climate is subequatorial, with two rainy seasons alternating with a long dry season from December to February. The coastal zone is dominated by mangrove swamps with predominant species such as Ipomea pescaprae, Remirea maritima, Rhizophora racemosa, Avicennia germinans, and Dalbergia ecastaphyllum. The Guineo-Congolese zone is dominated by semideciduous forests with predominant species such as Dialium guineense, Triplochiton scleroxylon, Strombosia glaucescens, Cleistopholis patens, Ficus mucuso, Cola cordifolia, Ceiba pentandra, Trilepisium madagascariense, Celtis spp., Albizia spp., Antiaris toxicaria, Diospyros mespiliformis, Drypetes floribunda, Memecylon afzelii, Celtis brownii, Mimusops andogensis, Daniellia oliveri, Parkia spp., and Vitellaria paradoxa [28–31].
(2) The transition zone, situated between latitudes 7°N and 9°N: the climate becomes tropical and subhumid, with a tendency toward one rainy season and one dry season, the two rainfall peaks merging into a unimodal regime. Dominant vegetation types are gallery forests and savannahs with predominant species such as Isoberlinia doka, I. tomentosa, Monotes kerstingii, Uapaca togoensis, Anogeissus leiocarpa, Antiaris toxicaria, Ceiba pentandra, Blighia sapida, Dialium guineense, Combretum fragrans, Entada africana, Maranthes polyandra, Pterocarpus erinaceus, Terminalia laxiflora, and Detarium microcarpum [31].
(3) The northern or Sudanian zone, characterized by a tropical climate with a unimodal rainfall regime: the rainy season lasts on average seven months, from April to October, with its maximum in August or September. Dominant vegetation types are dry woodland and savannahs. Predominant species are Haematostaphis barteri, Lannea spp., Khaya senegalensis, Anogeissus leiocarpa, Tamarindus indica, Capparis spinosa, Ziziphus mucronata, Combretum spp., and Cissus quadrangularis. The high pressure of human activities on the forests of this zone has led to the local extinction of species such as Milicia excelsa, Khaya senegalensis, Afzelia africana, and Pterocarpus erinaceus [31, 32]. This is the case for Prosopis africana, which has become rare in fallows according to von Maydell [16].
Figure 1. Map showing the study zones.
### 2.2. Data Collection
The ecological and structural characterization of P. africana was carried out using inventories in its three habitat types (field, fallow, and savannah) across the climatic zones. Adult trees (DBH ≥ 10 cm) were measured within circular plots of 1000 m², and regeneration was measured within 5 subplots of about 28 m² each. A standard distance of 100 m was kept between two plots in each vegetation type. Table 1 shows the distribution of plots among the ecological zones of the country. Variables measured on each tree included the diameter at breast height (DBH ≥ 10 cm) and the total and bole heights.
Table 1. Distribution of plots among the ecological zones.

| Climatic zones | Farm | Fallows | Savannah | Total |
| --- | --- | --- | --- | --- |
| Guinean | 16 | 12 | 11 | 39 |
| Sudano-Guinean | 14 | 14 | 18 | 46 |
| Sudanian | 10 | 21 | 23 | 54 |
| Total | 40 | 47 | 52 | 139 |
### 2.3. Data Analysis
To determine the dendrometric characteristics of P. africana, the parameters presented in Table 2 were calculated.

Table 2. Dendrometric parameters and their formulas.

| Parameter | Formula |
| --- | --- |
| Density (stems/ha) | $N = n/s$ |
| Basal area (m²/ha) | $G = \dfrac{\pi}{40000\,s}\sum_{i=1}^{n} d_i^{2}$ |
| Lorey height (m) | $H_L = \dfrac{\sum_{i=1}^{n} g_i h_i}{\sum_{i=1}^{n} g_i}$ |
| Diameter of the tree of mean basal area (cm) | $D_g = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n} d_i^{2}}$ |
| Blackman index | $I_B = \dfrac{S_N^{2}}{\bar{N}}$ |
| Green index | $I_G = \dfrac{I_B - 1}{n - 1}$ |

Notes: n, total number of trees within one plot; s, plot area (ha); $d_i$, diameter of the ith tree; $g_i$ and $h_i$, basal area and total height of the ith tree; $S_N^2$, variance of tree counts across plots; $\bar{N}$, mean tree count per plot.

The structural characterization of P. africana according to vegetation types and climatic zones was done using the diameter and height class-size distributions. Frequency histograms of diameters and heights were fitted to the 3-parameter Weibull distribution using the software Minitab 16; this distribution was used because it is simple to use [33]. According to Rondeux [34], its probability density function is

$$f(x) = \frac{c}{b}\left(\frac{x-a}{b}\right)^{c-1}\exp\left[-\left(\frac{x-a}{b}\right)^{c}\right], \qquad (1)$$

where x denotes the diameter or height of trees, and a, b, and c are, respectively, the position, scale, and form parameters. Depending on the form parameter, different distribution shapes can be distinguished (Table 3).
Table 3. Distribution forms of the 3-parameter Weibull model according to the form parameter c [35].

| Value of c | Type of distribution |
| --- | --- |
| c < 1 | Inverse-J distribution, describing multispecific stands |
| c = 1 | Exponential distribution, describing populations in extinction |
| 1 < c < 3.6 | Left-skewed distribution, describing monospecific stands of trees with small diameters |
| c = 3.6 | Bell-shaped distribution, describing monospecific stands or plantations |
| c > 3.6 | Positively skewed distribution, describing monospecific stands of trees with large diameters |

Dendrometric parameters by vegetation type and climatic zone were compared using two-way ANOVA with the software Minitab 16.
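To make these computations concrete, the sketch below shows how the parameters of Table 2 and the Weibull-based classification of Table 3 could be reproduced in Python. This is an illustrative reconstruction, not the authors' Minitab workflow: the plot size follows Section 2.2, scipy's `weibull_min` plays the role of the 3-parameter Weibull in (1) (shape c, location a, scale b), and all numeric data are invented.

```python
# Illustrative sketch only: per-plot dendrometric parameters (Table 2) and a
# 3-parameter Weibull fit interpreted with Table 3. All data are invented.
import numpy as np
from scipy import stats

PLOT_AREA_HA = 0.1  # circular plots of 1000 m^2 (Section 2.2)

def plot_parameters(dbh_cm, height_m):
    """Density N, basal area G, Lorey height H_L, quadratic mean diameter Dg."""
    d = np.asarray(dbh_cm, float)
    h = np.asarray(height_m, float)
    g_i = np.pi * d**2 / 40000.0                 # per-tree basal area, m^2
    density = d.size / PLOT_AREA_HA              # N, stems/ha
    basal_area = g_i.sum() / PLOT_AREA_HA        # G, m^2/ha
    lorey_height = (g_i * h).sum() / g_i.sum()   # H_L, m (basal-area weighted)
    dg = np.sqrt((d**2).mean())                  # Dg, cm
    return density, basal_area, lorey_height, dg

def classify_shape(c):
    """Read the Weibull form parameter c against Table 3."""
    if np.isclose(c, 1.0):
        return "exponential: population in extinction"
    if c < 1.0:
        return "inverse-J: multispecific stand"
    if np.isclose(c, 3.6):
        return "bell-shaped: monospecific stand or plantation"
    if c < 3.6:
        return "left-skewed: monospecific, small diameters predominate"
    return "positively skewed: monospecific, large diameters predominate"

# One invented plot.
print(plot_parameters([12.0, 15.5, 22.3, 30.1, 41.7], [6.2, 7.0, 8.5, 9.8, 11.4]))

# Fit the 3-parameter Weibull (1) to pooled diameters: scipy returns (c, a, b).
diameters = 10.0 + np.random.default_rng(0).weibull(1.8, 300) * 20.0
c, a, b = stats.weibull_min.fit(diameters)
print(f"c = {c:.2f}, a = {a:.1f}, b = {b:.1f} -> {classify_shape(c)}")
```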
## 3. Results
### 3.1. Dendrometric Parameters according to Vegetation Types
Table 4 shows the dendrometric characteristics of P. africana populations and of all woody populations combined, according to vegetation type. Parameter means compared with Student's t-test revealed significant differences (P < 0.01). The diameter, basal area, and Lorey height of P. africana populations range, respectively, from about 30 to 37 cm, from 3 to 4.5 m²/ha, and from 9 to 11 m. The highest diameters and basal areas were observed in the farms, whereas the greatest heights were observed in the fallows. As shown in Table 4, the probability values indicate significant differences in density, diameter, and basal area among vegetation types. Besides, the regeneration density was found to be higher in habitats under low pressure.
Table 4. Dendrometric characterization of P. africana according to vegetation type.

| Parameter | Farms (pl = 40), M (SE) | Fallows (pl = 47), M (SE) | Savannah (pl = 52), M (SE) | P value |
| --- | --- | --- | --- | --- |
| **P. africana** | | | | |
| Density (N, stems/ha) | 34.95 ab (5.54) | 34.74 b (5.16) | 52.39 a (4.97) | 0.022 |
| Diameter (Dg, cm) | 37.02 a (2.36) | 33.66 a (2.20) | 29.61 a (2.12) | 0.067 |
| Basal area (G, m²/ha) | 4.47 a (0.71) | 3.01 a (0.67) | 3.31 a (0.65) | 0.303 |
| Lorey height (HL, m) | 9.25 a (0.42) | 10.72 b (0.39) | 8.66 b (0.38) | 0.001 |
| Contribution to basal area (Cs, %) | 86.99 a (4.35) | 73.96 ab (4.06) | 65.69 b (3.90) | 0.002 |
| Regeneration density (Nr, stems/ha) | 7.28 a (12.3) | 23.21 a (11.48) | 27.96 a (11.05) | 0.438 |
| **Global** | | | | |
| Density (N, stems/ha) | 58.23 b (16.81) | 108.62 ab (15.68) | 126.28 a (15.09) | 0.010 |
| Diameter (Dg, cm) | 10.73 a (0.66) | 8.91 ab (0.62) | 8.57 b (0.60) | 0.041 |
| Basal area (G, m²/ha) | 4.44 a (0.89) | 5.47 a (0.83) | 4.97 a (0.80) | 0.696 |

Note: pl = number of plots; M = mean; SE = standard deviation. Within a row, means followed by the same letter (a, b, or ab) are not significantly different at the 5% level (Tukey test).
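As a complement, here is a minimal sketch of the kind of two-way ANOVA described in Section 2.3 and behind the P values of Tables 4 and 5. It uses statsmodels rather than the Minitab 16 employed by the authors, and the data frame is entirely synthetic.

```python
# Hedged sketch of the two-way ANOVA (vegetation type x climatic zone);
# synthetic data stand in for the field measurements.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "density": rng.gamma(4.0, 10.0, size=139),  # stems/ha, invented values
    "vegetation": rng.choice(["farm", "fallow", "savannah"], size=139),
    "zone": rng.choice(["Guinean", "Sudano-Guinean", "Sudanian"], size=139),
})
model = ols("density ~ C(vegetation) + C(zone)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # P values analogous to Tables 4-5
```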
### 3.2. Dendrometric Parameters according to Climatic Zones
Table 5 shows the dendrometric characteristics of all woody populations combined and of P. africana populations according to climatic zone. Considering the whole populations, comparison of the parameter means (Student's t-test) revealed significant differences (P < 0.01) in density, diameter, and basal area. For P. africana populations, the diameter and Lorey height averaged, respectively, 33 cm and 10 m. The table shows that diameter increases along the rainfall gradient: it increases as the vegetation becomes better watered. The climatic gradient did not, however, affect tree height; mean heights were, respectively, 11 m in the Guinean zone, 9 m in the Sudano-Guinean zone, and 9 m in the Sudanian zone. Comparison of the parameter means revealed significant differences (P < 0.05) in density, diameter, and basal area among climatic zones. Basal areas were, respectively, 3 m²/ha in the Guinean zone, 6 m²/ha in the Sudano-Guinean zone, and 1 m²/ha in the Sudanian zone.
Table 5. Dendrometric characterization of P. africana according to climatic zone.

| Parameter | Guinean zone (pl = 39), M (SE) | Sudanian zone (pl = 54), M (SE) | Sudano-Guinean zone (pl = 46), M (SE) | P value |
| --- | --- | --- | --- | --- |
| **P. africana** | | | | |
| Density (N, stems/ha) | 28.45 b (5.59) | 35.41 b (4.98) | 58.21 a (5.09) | 0.000 |
| Diameter (Dg, cm) | 40.38 a (2.38) | 22.63 b (2.12) | 37.28 a (2.17) | 0.000 |
| Basal area (G, m²/ha) | 3.33 b (0.73) | 1.38 b (0.65) | 6.08 a (0.66) | 0.000 |
| Lorey height (HL, m) | 11.25 a (0.43) | 8.88 b (0.38) | 8.51 b (0.39) | 0.000 |
| Contribution to basal area (Cs, %) | 69.38 b (4.39) | 83.37 a (3.91) | 73.89 ab (4.00) | 0.051 |
| Regeneration density (Nr, stems/ha) | 4.85 a (12.44) | 11.10 a (11.08) | 42.50 a (11.31) | 0.051 |
| **Global** | | | | |
| Density (N, stems/ha) | 108.15 ab (16.98) | 61.71 b (15.13) | 123.27 a (15.45) | 0.014 |
| Diameter (Dg, cm) | 11.57 a (0.67) | 6.70 b (0.60) | 9.94 a (0.61) | 0.000 |
| Basal area (G, m²/ha) | 5.07 a (0.90) | 2.10 b (0.80) | 7.72 a (0.82) | 0.000 |

Note: pl = number of plots; M = mean; SE = standard deviation. Within a row, means followed by the same letter (a, b, or ab) are not significantly different at the 5% level (Tukey test).

Besides, the Blackman index (IB) obtained was 32.20 and the Green index 0.054; the latter, being close to 0, indicates a random distribution of P. africana populations across the climatic zones.
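The two dispersion indices just cited follow directly from the Table 2 definitions. The snippet below, a sketch with invented counts rather than the study data, computes them from per-plot tree counts; taking n in the Green index as the number of plots is our assumption.

```python
# Blackman and Green indices from per-plot tree counts (Table 2 definitions).
import numpy as np

def dispersion_indices(trees_per_plot):
    counts = np.asarray(trees_per_plot, dtype=float)
    i_b = counts.var(ddof=1) / counts.mean()   # Blackman: variance/mean ratio
    i_g = (i_b - 1.0) / (counts.size - 1.0)    # Green: near 0 -> random pattern
    return i_b, i_g

# Invented counts for illustration; the paper reports I_B = 32.20, I_G = 0.054.
counts = np.random.default_rng(1).poisson(5.0, size=139)
i_b, i_g = dispersion_indices(counts)
print(f"I_B = {i_b:.2f}, I_G = {i_g:.3f}")
```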
### 3.3. Diameter Class-Size Distribution of P. africana Populations
Figures 2, 3, and 4 show the diameter class-size distribution of P. africana populations in the three climatic zones of Benin. Figure 3 indicates an inverse-J distribution of P. africana populations, describing multispecific stands. The two other figures (Figures 2 and 4) indicate left-skewed distributions describing monospecific stands dominated by individuals with small diameters. Individuals with diameters ranging from 10 to 70 cm predominate in the Guinean zone, and individuals with diameters over 80 cm are quasi-absent there. Unlike the Guinean zone, small individuals in the 10–20 cm diameter class predominate in the Sudano-Guinean zone. In the Sudanian zone, individuals with diameters between 10 and 30 cm are the most abundant, and individuals with diameters over 90 cm are quasi-absent.

Figure 2. Diameter class-size distribution in the Guinean zone.
Figure 3. Diameter class-size distribution in the Sudano-Guinean zone.
Figure 4. Diameter class-size distribution in the Sudanian zone.
### 3.4. Height Class-Size Distribution of P. africana Populations
Figures 5, 6, and 7 show the height class-size distribution of P. africana populations in the three climatic zones of Benin. On the whole, the form parameter c ranges between 1 and 3.6, indicating left-skewed distributions describing monospecific stands dominated by individuals of low height. Individuals with heights between 8 and 12 m predominate in the Guinean zone, whereas individuals with heights between 6 and 10 m predominate in the Sudano-Guinean zone. In the Sudanian zone, individuals with heights between 6 and 12 m are the most abundant. Besides, individuals taller than 21 m are quasi-absent in the Guinean zone, those taller than 23 m are quasi-absent in the Sudano-Guinean zone, and those taller than 22 m are quasi-absent in the Sudanian zone.

Figure 5. Height class-size distribution in the Guinean zone.
Figure 6. Height class-size distribution in the Sudano-Guinean zone.
Figure 7. Height class-size distribution in the Sudanian zone.
## 4. Discussion
### 4.1. Dendrometric Characterization of P. africana Populations
#### 4.1.1. Dendrometric Features
Dendrometric parameters are important tools in forestry. Sokpon [36] reported that the average tree diameter is a useful parameter and is often recommended in forestry. The average density noted in savannah stands (52.39 trees/ha) is markedly lower than the values obtained by Glèlè Kakaï et al. [37] in stands of Pterocarpus erinaceus (169.4 trees/ha), by Sagbo [38] in stands dominated by Isoberlinia spp. (205 trees/ha), and by Ouédraogo et al. [39] in Burkina Faso (4000 individuals/ha). The value in the Sudano-Guinean zone (58.21 stems/ha) is also lower than those reported by Gbesso et al. [40] for Borassus aethiopum in Benin (78 and 133 stems/ha) in this same area. These differences may be due partly to the inventory methods used and partly to the fact that the inventoried stands are not exactly the same. They may also reflect, in part, the strong anthropic pressure exerted by local populations on forest trees of value. The diameter of P. africana populations is higher in fields and in the Guinean and Sudano-Guinean areas. This can be explained by the abundance of rainfall, which could have a positive effect on diameter growth. Conservation of the species in fields by local people [18] for human food purposes (the seeds are condiments sold in the markets of Effèoutè in Kétou, Dassa-Zoumé, Glazoué, Aplahoué, and Klouékanmè) in these climatic zones could also have a positive effect: trees in fields benefit from the tending given to crops. The basal area of the plant groups studied varies between 3.31 and 4.47 m²/ha across vegetation types and between 1.38 and 6.08 m²/ha across climatic zones. This indicates the potential of P. africana for arboriculture. The number of adult trees per hectare is between 34.74 and 52.39 across vegetation types and between 28.45 and 58.21 across climatic zones. As for regeneration, it varies between 7.28 and 27.96 stems/ha across vegetation types and between 4.85 and 42.50 stems/ha across climatic zones. These results indicate that the population is denser in savannahs than in anthropic formations, which suggests that the species is under pressure in anthropogenic environments. These results are similar to those of Assogbadjo et al. [41] in the Wari-Maro forest reserve, who showed that the dendrometric features of Anogeissus leiocarpa have higher values in stands under low pressure. Similar results were also obtained by Kiki [42] on Vitex doniana and by Fandohan et al. [43] on Tamarindus indica, who showed that human pressure has a negative effect on dendrometric parameters such as adult density and regeneration density but a positive effect on the mean diameter. Heights are greater in the wettest zone (Guinean) than in the other zones, where individuals of 11.25 m are found. These values are lower than those of Ouinsavi et al. [44], who recorded palmyra palms over 15 m high in the Sudanian region, and those of Bonou et al. [45] (16.9 m) and Sinsin et al. [46] (17 m) for Afzelia africana.
### 4.2. Structures in Diameter and Height
The management of forest stands requires mastery of the diameter and height structure of the trees. These structures are indicative of events in the life of stands [34]. Forest stands, depending on whether they are single-species or multispecies, even-aged or uneven-aged, young or old, have typical structures, and it is known that the diameter and height structures of these forest types can be adjusted to known theoretical distributions [35]. According to Rondeux [34], Philip [47], and McElhinny et al. [48], in an even-aged structure the sizes by diameter class typically follow a distribution resembling a Gaussian curve, which can become asymmetrical or bimodal in certain circumstances. According to the same authors, in an even-aged stand all trees have the same or similar ages, with low variability in height that is mainly explained by their social position (dominant, codominant). The horizontal structures of the populations studied are mostly left-asymmetrical, characteristic of single-species stands with a predominance of young individuals, small diameters, or low heights. According to Arbonnier [49], the Sudanian climate is suitable for the optimal development of African mesquite trees; there is thus generally a relationship between a species' temperament and its stem diameter class distribution. However, the diameter structure of the Sudano-Guinean zone presented an "inverted-J" distribution which, according to Rondeux [34] and Husch et al. [35], is characteristic of multispecies stands. In this study, only the diameter structure of the Sudano-Guinean zone shows this inverted-J shape, reflecting a relative predominance of individuals with small diameters. In the other zones, the structure of P. africana diameters has a bell-shaped appearance, characteristic of single-species stands. This distribution is left-skewed (1 < c < 3.6), characteristic of a relative predominance of young individuals or small diameters. However, the individuals of P. africana in these zones are not all of the same age or young, so the observed left asymmetries cannot be explained by the youth of the population alone but rather by disruption or vulnerability of the species at certain stages of its development. The distribution of tree heights generally has a Gaussian shape, which may become asymmetrical depending on the life conditions of the stand. As for the height structure, the ensemble shows a bell-shaped, left-asymmetrical distribution characteristic of stands with a predominance of individuals of low height. According to Bonou et al. [45], the Weibull probability density function is becoming increasingly popular for modeling the diameter distributions of both even- and uneven-aged forest stands. Its popularity derives from its flexibility to take on a number of different shapes corresponding to many different observed unimodal tree-diameter distributions. In addition, the cumulative distribution function of the Weibull exists in closed form and thus allows quick and easy estimation of the number of trees per diameter class, without integration of the probability density function, once the parameters have been fitted.
The bell-shaped distribution obtained for the diameter and height classes of the African mesquite, with a left dissymmetry (the diameter structure of the Sudano-Guinean zone being a notable exception), corroborates the results of Cassou and Depomier [50] for the African fan palm population of Wolokonto in Burkina Faso and those of Ouinsavi et al. [44] for palm trees in Benin. Similar results were also obtained by Kperkouma et al. [51] for the shea butter trees of Donfelgou in Togo, and Bonou et al. [45] obtained the same distribution for Afzelia africana populations in Benin. However, this structure might derive not only from the species' temperament but also from human pressure.
## 5. Conclusion
The structural characterization of P. africana populations has provided the dendrometric and horizontal structuring of P. africana stand groups, distinct in the specific traits induced by climatic conditions and by the vegetation strata (fields, fallows, and savannahs). The structural characteristics of the populations varied greatly from one climatic zone to another and from one plant formation to another. It can be concluded that the species is present in all climatic zones of Benin, with varied densities, and that it is quite abundant in the Sudanian and Sudano-Guinean areas. The density averages 126.28 stems/ha, 109 stems/ha, and 58 stems/ha in savannahs, fallows, and fields, respectively. The mean tree diameter for these formations varies between 9 and 11 cm, with the highest value in the fields. The basal area averages 4.44 m²/ha in the fields, 5 m²/ha in the savannahs, and 5 m²/ha in the fallows. Regeneration density varies between 5 and 43 stems/ha: it is higher in the Sudano-Guinean (43 individuals/ha) and Sudanian (11 individuals/ha) areas and lowest in the Guinean zone (5 individuals/ha). The diameter of the tree of mean basal area is most interesting in the Guinean area compared to the other areas; the stands of this area offer potential for timber and service-wood development, which would draw added value from the sale of wood. The ecological structure of the African mesquite populations of Benin, adjusted to the Weibull distribution, showed a bell-shaped curve with a left dissymmetry, indicating the predominance of young trees within these populations.
---
*Source: 101373-2015-07-29.xml* | 2015 |
# A Review of Piecewise Linearization Methods
**Authors:** Ming-Hua Lin; John Gunnar Carlsson; Dongdong Ge; Jianming Shi; Jung-Fa Tsai
**Journal:** Mathematical Problems in Engineering
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101376
---
## Abstract
Various optimization problems in engineering and management are formulated as nonlinear programming problems. Because of the nonconvex nature of such problems, no efficient approach is available to derive their global optimum, and how to locate a global optimal solution of a nonlinear programming problem is an important issue in optimization theory. In the last few decades, piecewise linearization methods have been widely applied to convert a nonlinear programming problem into a linear programming problem or a mixed-integer convex programming problem in order to obtain an approximate global optimal solution. In the transformation process, extra binary variables, continuous variables, and constraints are introduced to reformulate the original problem, and these extra variables and constraints largely determine the solution efficiency of the converted problem. This study therefore provides a review of piecewise linearization methods and analyzes their computational efficiency.
---
## Body
## 1. Introduction
Piecewise linear functions are frequently used in various applications to approximate nonlinear programs with nonconvex functions in the objective or constraints by adding extra binary variables, continuous variables, and constraints. They naturally appear as cost functions of supply chain problems to model quantity discount functions for bulk procurement and fixed charges. For example, the transportation cost, inventory cost, and production cost in a supply chain network are often constructed as a sum of nonconvex piecewise linear functions due to economies of scale [1]. Optimization problems with piecewise linear costs arise in many application domains, including transportation, telecommunications, and production planning. Specific applications include variants of the minimum cost network flow problem with nonconvex piecewise linear costs [2–7], the network loading problem [8–11], the facility location problem with staircase costs [12, 13], the merge-in-transit problem [14], and the packing problem [15–17]. Other applications also include production planning [18], optimization of electronic circuits [19], operation planning of gas networks [20], process engineering [21, 22], engineering design [23, 24], appointment scheduling [25], and other network flow problems with nonconvex piecewise linear objective functions [7].Various methods of piecewisely linearizing a nonlinear function have been proposed in the literature [26–39]. Two well-known mixed-integer formulations for piecewise linear functions are the incremental cost [40] and the convex combination [41] formulations. Padberg [35] compared the linear programming relaxations of the two mixed-integer programming models for piecewise linear functions in the simplest case when no constraint exists. He showed that the feasible set of the linear programming relaxation of the incremental cost formulation is integral; that is, the binary variables are integers at every vertex of the set. He called such formulations locally ideal. On the other hand, the convex combination formulation is not locally ideal, and it strictly contains the feasible set of the linear programming relaxation of the incremental cost formulation. Then, Sherali [42] proposed a modified convex combination formulation that is locally ideal. Alternatively, Beale and Tomlin [43] suggested a formulation for the piecewise linear function similar to convex combination, except that no binary variable is included in the model and the nonlinearities are enforced algorithmically, directly in the branch-and-bound algorithm, by branching on sets of variables, which they called special ordered sets of type 2 (SOS2). It is also possible to formulate piecewise linear functions similar to incremental cost but without binary variables and enforcing the nonlinearities directly in the branch-and-bound algorithm. Two advantages of eliminating binary variables are the substantial reduction in the size of the model and the use of the polyhedral structure of the problem [44, 45]. Keha et al. [46] studied formulations of linear programs with piecewise linear objective functions with and without additional binary variables and showed that adding binary variables does not improve the bound of the linear programming relaxation. Keha et al. [47] also presented a branch-and-cut algorithm for solving linear programs with continuous separable piecewise-linear cost functions. 
Instead of introducing auxiliary binary variables and other linear constraints to represent the SOS2 constraints used in the traditional approach, they enforced SOS2 constraints by branching on them directly, without auxiliary binary variables.

Due to the broad applications of piecewise linear functions, many studies have been conducted on this topic. Their main purpose is to find a better way to represent a piecewise linear function or to tighten its linear programming relaxation, since a superior representation can effectively reduce the problem size and enhance computational efficiency. However, to express a piecewise linear function of a single variable x with m+1 break points, most methods in textbooks and the literature require adding m extra binary variables and 4m constraints, which may cause a heavy computational burden when m is large. Recently, Li et al. [48] developed a representation method for piecewise linear functions requiring fewer binary variables than the traditional methods. Although their method needs only ⌈log₂ m⌉ extra binary variables to piecewise linearize a nonlinear function with m+1 break points, the approximation process still requires 8 + 8⌈log₂ m⌉ extra constraints, 2m nonnegative continuous variables, and 2⌈log₂ m⌉ free-signed continuous variables. Vielma et al. [39] presented a note on Li et al.'s paper and showed that the two representations for piecewise linear functions introduced by Li et al. [48] are both theoretically and computationally inferior to standard formulations. Tsai and Lin [49] applied the techniques of Vielma et al. [39] to express a piecewise linear function for solving a posynomial optimization problem. Croxton et al. [31] indicated that most models for expressing piecewise linear functions are equivalent to each other. Additionally, it is well known that the numbers of extra variables and constraints required in the linearization process obviously impact the computational performance of the converted problem. This paper therefore focuses on discussing and reviewing recent advances in piecewise linearization methods. Section 2 reviews the piecewise linearization methods. Section 3 compares the formulations of various methods in terms of the numbers of extra binary/continuous variables and constraints. Section 4 discusses error evaluation in piecewise linear approximation. Conclusions are drawn in Section 5.
## 2. Formulations of Piecewise Linearization Functions
Consider a general nonlinear function $f(x)$ of a single variable $x$, where $f(x)$ is continuous and $x$ lies within the interval $[a_0, a_m]$. Most commonly used textbooks on nonlinear programming [26–28] approximate the nonlinear function by a piecewise linear function as follows. First, denote $a_k$ ($k = 0, 1, \dots, m$) as the break points of $f(x)$, with $a_0 < a_1 < \cdots < a_m$; Figure 1 illustrates the piecewise linearization of $f(x)$.

Figure 1: Piecewise linearization of $f(x)$.

$f(x)$ can then be approximately linearized over the interval $[a_0, a_m]$ as
$$
L(f(x)) = \sum_{k=0}^{m} f(a_k)\, t_k, \tag{1}
$$
where $x = \sum_{k=0}^{m} a_k t_k$, $\sum_{k=0}^{m} t_k = 1$, and $t_k \ge 0$, in which only two adjacent $t_k$'s are allowed to be nonzero. A nonlinear function is then converted into the following expressions.

Method 1.
Consider
$$
L(f(x)) = \sum_{k=0}^{m} f(a_k) t_k, \qquad x = \sum_{k=0}^{m} a_k t_k,
$$
$$
t_0 \le y_0, \qquad t_k \le y_{k-1} + y_k \quad \text{for } k = 1, 2, \dots, m-1, \qquad t_m \le y_{m-1},
$$
$$
\sum_{k=0}^{m-1} y_k = 1, \qquad \sum_{k=0}^{m} t_k = 1, \tag{2}
$$
where $y_k \in \{0,1\}$ for $k = 0, 1, \dots, m-1$ and $t_k \ge 0$ for $k = 0, 1, \dots, m$.

The above expressions involve $m$ new binary variables $y_0, y_1, \dots, y_{m-1}$. The number of newly added 0-1 variables for piecewise linearizing a function $f(x)$ equals the number of break intervals (i.e., $m$). If $m$ is large, it may cause a heavy computational burden.
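As a concrete illustration of the convex combination idea behind (1) and Method 1, the following minimal Python sketch (our own construction; none of the function names come from the cited references) evaluates $L(f(x))$ by activating only the two weights $t_k$ adjacent to $x$. In the actual MILP, this adjacency condition is exactly what the binary variables $y_k$ enforce; here we simply select the segment directly.

```python
import math

def convex_combination_weights(x, a):
    """Weights t_0..t_m with at most two adjacent nonzeros, so that
    x = sum(a_k * t_k) and sum(t_k) = 1 (the adjacency condition of (1))."""
    t = [0.0] * len(a)
    for k in range(len(a) - 1):
        if a[k] <= x <= a[k + 1]:
            lam = (x - a[k]) / (a[k + 1] - a[k])  # relative position in segment k
            t[k], t[k + 1] = 1.0 - lam, lam
            return t
    raise ValueError("x lies outside [a_0, a_m]")

def L(f, x, a):
    """Piecewise linear approximation L(f(x)) = sum f(a_k) * t_k."""
    return sum(f(ak) * tk for ak, tk in zip(a, convex_combination_weights(x, a)))

f = math.exp                           # an arbitrary nonlinear test function
a = [0.0, 0.5, 1.0, 1.5, 2.0]          # m + 1 = 5 break points, m = 4 segments
print(L(f, 0.75, a), f(0.75))          # approximation vs. true value
```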
Li and Yu [33] proposed another global optimization method for nonlinear programming problems in which the objective function and the constraints may be nonconvex. A univariate function is initially expressed as a piecewise linear function with a summation of absolute terms. Denote $s_k$ ($k = 0, 1, \dots, m-1$) as the slopes of the line segments between $a_k$ and $a_{k+1}$, expressed as $s_k = [f(a_{k+1}) - f(a_k)]/[a_{k+1} - a_k]$. $f(x)$ can then be written as follows:
$$
L(f(x)) = f(a_0) + s_0 (x - a_0) + \sum_{k=1}^{m-1} \frac{s_k - s_{k-1}}{2} \bigl( |x - a_k| + x - a_k \bigr). \tag{3}
$$
$f(x)$ is convex in the interval $[a_{k-1}, a_k]$ if $s_k - s_{k-1} \ge 0$; otherwise $f(x)$ is nonconvex there and needs to be linearized by adding extra binary variables. By linearizing the absolute terms, Li and Yu [33] converted the nonlinear function into the piecewise linear form shown below.

Method 2.
Consider
$$
L(f(x)) = f(a_0) + s_0 (x - a_0) + \sum_{k:\, s_k > s_{k-1}} (s_k - s_{k-1}) \Bigl( x - a_k + \sum_{l=0}^{k-1} d_l \Bigr) + \frac{1}{2} \sum_{k:\, s_k < s_{k-1}} (s_k - s_{k-1}) \bigl( x - 2 z_k + 2 a_k u_k - a_k \bigr),
$$
$$
x + \sum_{l=0}^{m-2} d_l \ge a_{m-1}, \qquad 0 \le d_l \le a_{l+1} - a_l, \quad \text{where } s_k > s_{k-1},
$$
$$
x + \bar{x} (u_k - 1) \le z_k, \qquad z_k \ge 0, \quad \text{where } s_k < s_{k-1}, \tag{4}
$$
where $x \ge 0$, $d_l \ge 0$, $z_k \ge 0$, and $u_k \in \{0,1\}$; $\bar{x}$ is an upper bound of $x$, and the $u_k$ are extra binary variables used to linearize the nonconvex parts of $f(x)$ on the intervals $[a_{k-1}, a_k]$.

Comparing Method 2 with Method 1: Method 1 uses binary variables to linearize $f(x)$ over the whole interval of $x$, whereas the binary variables in Method 2 are applied only to the nonconvex parts of $f(x)$. Method 2 therefore uses fewer 0-1 variables than Method 1. However, for an $f(x)$ with $q$ nonconvex intervals, Method 2 still requires $q$ binary variables to linearize $f(x)$.
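The absolute-value representation (3) underlying Method 2 is easy to verify numerically. The small sketch below (our own check, not code from [33]) uses the identity $(|x - a_k| + x - a_k)/2 = \max(x - a_k, 0)$ and confirms that (3) reproduces plain segment-by-segment linear interpolation on a nonconvex test function.

```python
import math

def L_abs(f, x, a):
    """Evaluate representation (3): f(a_0) + s_0 (x - a_0)
    + sum_k (s_k - s_{k-1})/2 * (|x - a_k| + x - a_k)."""
    m = len(a) - 1
    s = [(f(a[k + 1]) - f(a[k])) / (a[k + 1] - a[k]) for k in range(m)]
    val = f(a[0]) + s[0] * (x - a[0])
    for k in range(1, m):
        val += (s[k] - s[k - 1]) / 2.0 * (abs(x - a[k]) + x - a[k])
    return val

def L_interp(f, x, a):
    """Plain segmentwise linear interpolation, for comparison."""
    for k in range(len(a) - 1):
        if a[k] <= x <= a[k + 1]:
            s_k = (f(a[k + 1]) - f(a[k])) / (a[k + 1] - a[k])
            return f(a[k]) + s_k * (x - a[k])

f = math.sin                           # nonconvex on [0, 3]
a = [0.0, 0.75, 1.5, 2.25, 3.0]
for x in (0.3, 1.2, 2.8):
    assert abs(L_abs(f, x, a) - L_interp(f, x, a)) < 1e-12
print("representation (3) matches segmentwise interpolation")
```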
Another general form for representing a piecewise linear function is proposed in the articles of Croxton et al. [31], Li [32], Padberg [35], Topaloglu and Powell [36], and Li and Tsai [38]. The expressions are formulated as shown below.

Method 3.

Consider
$$
a_k - (a_m - a_0)(1 - \lambda_k) \le x \le a_{k+1} + (a_m - a_0)(1 - \lambda_k), \quad k = 0, 1, \dots, m-1, \tag{5}
$$
where $\sum_{k=0}^{m-1} \lambda_k = 1$, $\lambda_k \in \{0,1\}$, and
$$
f(a_k) + s_k (x - a_k) - M (1 - \lambda_k) \le f(x) \le f(a_k) + s_k (x - a_k) + M (1 - \lambda_k), \quad k = 0, 1, \dots, m-1, \tag{6}
$$
where $M$ is a large constant and $s_k = (f(a_{k+1}) - f(a_k))/(a_{k+1} - a_k)$.

The above expressions require $m$ extra binary variables and $4m$ constraints when $m+1$ break points are used to represent a piecewise linear function.

From the above discussion, Methods 1, 2, and 3 require numbers of extra binary variables and extra constraints that are linear in $m$ to express a piecewise linear function. When approximating a nonlinear function by a piecewise linear function, the numbers of extra binary variables and constraints significantly influence the computational efficiency: if fewer binary variables and constraints are used to represent a piecewise linear function, then less CPU time is needed to solve the transformed problem.
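To make the role of the big-M terms in (5)-(6) tangible, the short Python sketch below (our own helper, not from the cited articles) checks whether a candidate point $(x, y)$ together with a 0-1 selection vector $\lambda$ satisfies the Method 3 constraints. Setting $\lambda_k = 1$ pins $x$ to the $k$th segment and forces $y$ onto that segment's line, while the inequalities for the other segments become inactive.

```python
def method3_feasible(x, y, lam, a, fvals, M):
    """Check the big-M constraints (5)-(6) for given x, y, and lambda."""
    m = len(a) - 1
    width = a[m] - a[0]
    if sum(lam) != 1:
        return False
    for k in range(m):
        s_k = (fvals[k + 1] - fvals[k]) / (a[k + 1] - a[k])
        # constraint (5): segment selection
        if not (a[k] - width * (1 - lam[k]) <= x <= a[k + 1] + width * (1 - lam[k])):
            return False
        # constraint (6): value pinned to the selected segment's line
        lo = fvals[k] + s_k * (x - a[k]) - M * (1 - lam[k])
        hi = fvals[k] + s_k * (x - a[k]) + M * (1 - lam[k])
        if not (lo <= y <= hi):
            return False
    return True

a = [0.0, 1.0, 2.0, 3.0]
fvals = [0.0, 1.0, 0.5, 2.0]                 # f(a_k) at the break points
x = 1.4                                      # lies in segment k = 1
y = 1.0 + (0.5 - 1.0) * (1.4 - 1.0)          # value on that segment: 0.8
print(method3_feasible(x, y, [0, 1, 0], a, fvals, M=100.0))  # True
print(method3_feasible(x, y, [1, 0, 0], a, fvals, M=100.0))  # False: wrong segment
```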
To decrease the number of extra binary variables involved in the approximation process, Li et al. [48] developed a representation method for piecewise linear functions in which the number of binary variables is logarithmic in $m$. Consider the same piecewise linear function $f(x)$ discussed above, where $x$ lies within the interval $[a_0, a_m]$ and $m+1$ break points exist within $[a_0, a_m]$. Let $\theta$ be an integer, $0 \le \theta \le m-1$, expressed as
$$
\theta = \sum_{j=1}^{h} 2^{j-1} u_j, \qquad h = \lceil \log_2 m \rceil, \qquad u_j \in \{0,1\}. \tag{7}
$$
Let $G(\theta) \subseteq \{1, 2, \dots, h\}$ be the set composed of all indices such that $\sum_{j \in G(\theta)} 2^{j-1} = \theta$; for instance, $G(0) = \emptyset$ and $G(3) = \{1, 2\}$. Denote $\|G(\theta)\|$ as the number of elements in $G(\theta)$; for instance, $\|G(0)\| = 0$ and $\|G(3)\| = 2$.
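In code, $G(\theta)$ is simply the set of positions of the 1-bits in the binary expansion of $\theta$, and $\|G(\theta)\|$ is the number of those bits. The following two-line Python sketch (ours) reproduces the examples in the text.

```python
def G(theta):
    """Indices j (1-based) whose power 2^(j-1) appears in theta's binary expansion."""
    return {j for j in range(1, theta.bit_length() + 1) if (theta >> (j - 1)) & 1}

print(G(0), len(G(0)))   # set() 0   -> G(0) is empty, ||G(0)|| = 0
print(G(3), len(G(3)))   # {1, 2} 2  -> matches the example in the text
```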
To approximate a univariate nonlinear function by a piecewise linear function, the following expressions are deduced by the Li et al. [48] method.

Method 4.

Consider
$$
\sum_{\theta=0}^{m-1} r_\theta a_\theta \le x \le \sum_{\theta=0}^{m-1} r_\theta a_{\theta+1},
$$
$$
f(x) = \sum_{\theta=0}^{m-1} \bigl( f(a_\theta) - s_\theta (a_\theta - a_0) \bigr) r_\theta + \sum_{\theta=0}^{m-1} s_\theta w_\theta, \qquad s_\theta = \frac{f(a_{\theta+1}) - f(a_\theta)}{a_{\theta+1} - a_\theta},
$$
$$
\sum_{\theta=0}^{m-1} r_\theta = 1, \qquad r_\theta \ge 0,
$$
$$
\sum_{\theta=0}^{m-1} r_\theta \|G(\theta)\| + \sum_{j=1}^{h} z_j = 0, \qquad -u_j' \le z_j \le u_j', \quad j = 1, 2, \dots, h,
$$
$$
\sum_{\theta=0}^{m-1} r_\theta c_{\theta,j} - (1 - u_j') \le z_j \le \sum_{\theta=0}^{m-1} r_\theta c_{\theta,j} + (1 - u_j'), \quad j = 1, 2, \dots, h,
$$
$$
\sum_{\theta=0}^{m-1} w_\theta = x - a_0, \qquad \sum_{\theta=0}^{m-1} w_\theta \|G(\theta)\| + \sum_{j=1}^{h} \delta_j = 0,
$$
$$
-(a_m - a_0) u_j' \le \delta_j \le (a_m - a_0) u_j', \quad j = 1, 2, \dots, h,
$$
$$
\sum_{\theta=0}^{m-1} w_\theta c_{\theta,j} - (a_m - a_0)(1 - u_j') \le \delta_j \le \sum_{\theta=0}^{m-1} w_\theta c_{\theta,j} + (a_m - a_0)(1 - u_j'), \quad j = 1, 2, \dots, h,
$$
$$
\sum_{j=1}^{h} 2^{j-1} u_j' \le m, \tag{8}
$$
where $u_j' \in \{0,1\}$; $c_{\theta,j}$, $z_j$, and $\delta_j$ are free continuous variables; $r_\theta$ and $w_\theta$ are nonnegative continuous variables; and all other symbols are as defined before.

The expressions of Method 4 for representing a piecewise linear function $f(x)$ with $m+1$ break points use $\lceil \log_2 m \rceil$ binary variables, $8 + 8\lceil \log_2 m \rceil$ constraints, $2m$ nonnegative variables, and $2\lceil \log_2 m \rceil$ free-signed continuous variables. Compared with Methods 1, 2, and 3, Method 4 indeed reduces the number of binary variables used, so the computational efficiency is improved. Although Li et al. [48] developed a way of expressing a piecewise linear function using fewer binary variables, Vielma et al. [39] showed that this representation for piecewise linear functions is theoretically and computationally inferior to standard formulations. Vielma and Nemhauser [50] recently developed a novel piecewise linear expression requiring fewer variables and constraints than the earlier piecewise linearization techniques for approximating univariate nonlinear functions. Their method needs a logarithmic number of binary variables and constraints to express a piecewise linear function. The formulation is described below.

Let $P = \{0, 1, 2, \dots, m\}$ and $p \in P$. Choose an injective function $B : \{1, 2, \dots, m\} \to \{0,1\}^\theta$, $\theta = \lceil \log_2 m \rceil$, such that the vectors $B(p)$ and $B(p+1)$ differ in at most one component for all $p \in \{1, 2, \dots, m-1\}$ (a Gray code has this property). Let $B(p) = (u_1, u_2, \dots, u_\theta)$, with $u_k \in \{0,1\}$ for $k = 1, 2, \dots, \theta$, and set $B(0) = B(1)$. Some notation is introduced below.

$S^+(k)$: the set composed of all $p$ such that $u_k = 1$ in both $B(p)$ and $B(p+1)$ for $p = 1, 2, \dots, m-1$, or $u_k = 1$ in $B(p)$ for $p \in \{0, m\}$; that is, $S^+(k) = \{p \mid \forall B(p) \text{ and } B(p+1),\, u_k = 1,\, p = 1, 2, \dots, m-1\} \cup \{p \mid \forall B(p),\, u_k = 1,\, p \in \{0, m\}\}$.

$S^-(k)$: the set composed of all $p$ such that $u_k = 0$ in both $B(p)$ and $B(p+1)$ for $p = 1, 2, \dots, m-1$, or $u_k = 0$ in $B(p)$ for $p \in \{0, m\}$; that is, $S^-(k) = \{p \mid \forall B(p) \text{ and } B(p+1),\, u_k = 0,\, p = 1, 2, \dots, m-1\} \cup \{p \mid \forall B(p),\, u_k = 0,\, p \in \{0, m\}\}$.

The linear approximation of a univariate $f(x)$, $a_0 \le x \le a_m$, by the technique of Vielma and Nemhauser [50] is formulated as follows.

Method 5.
Denote $L(f(x))$ as the piecewise linear function of $f(x)$, where $a_0 < a_1 < a_2 < \cdots < a_m$ are the $m+1$ break points of $L(f(x))$. $L(f(x))$ can be expressed as
$$
L(f(x)) = \sum_{p=0}^{m} f(a_p) \lambda_p, \qquad x = \sum_{p=0}^{m} a_p \lambda_p, \qquad \sum_{p=0}^{m} \lambda_p = 1,
$$
$$
\sum_{p \in S^+(k)} \lambda_p \le u_k, \qquad \sum_{p \in S^-(k)} \lambda_p \le 1 - u_k, \qquad \lambda_p \in \mathbb{R}_+, \quad u_k \in \{0,1\}. \tag{9}
$$

Method 5 uses $\lceil \log_2 m \rceil$ binary variables, $m+1$ continuous variables, and $3 + 2\lceil \log_2 m \rceil$ constraints to express a piecewise linear function with $m$ line segments.
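The combinatorial ingredients of Method 5 are the labeling $B$ and the index sets $S^+(k)$, $S^-(k)$. The Python sketch below (our own construction; it assumes the standard reflected binary Gray code as the injective labeling, and the helper names are ours) builds these sets for a small instance, so the constraints $\sum_{p \in S^+(k)} \lambda_p \le u_k$ and $\sum_{p \in S^-(k)} \lambda_p \le 1 - u_k$ in (9) can be written out explicitly.

```python
from math import ceil, log2

def gray(i, width):
    """Reflected binary Gray code of i as a bit tuple (u_1, ..., u_theta);
    consecutive codes differ in exactly one component, as Method 5 requires."""
    g = i ^ (i >> 1)
    return tuple((g >> k) & 1 for k in range(width))

def build_sets(m):
    theta = max(1, ceil(log2(m)))                         # number of binaries u_k
    B = {p: gray(p - 1, theta) for p in range(1, m + 1)}  # injective labeling
    B[0] = B[1]                                           # the text sets B(0) = B(1)
    S_plus = {k: set() for k in range(theta)}
    S_minus = {k: set() for k in range(theta)}
    for k in range(theta):
        for p in range(m + 1):
            if p in (0, m):                       # endpoints: only B(p) matters
                (S_plus if B[p][k] else S_minus)[k].add(p)
            elif B[p][k] == B[p + 1][k]:          # interior: B(p), B(p+1) must agree
                (S_plus if B[p][k] else S_minus)[k].add(p)
    return B, S_plus, S_minus

m = 4                                             # 4 segments, 5 break points
B, Sp, Sm = build_sets(m)
print(B)     # Gray labels for p = 0..m
print(Sp)    # index sets entering sum_{p in S+(k)} lambda_p <= u_k
print(Sm)    # index sets entering sum_{p in S-(k)} lambda_p <= 1 - u_k
```

For $m = 4$ this yields $\theta = 2$ binary variables and, for example, the branching constraint $\lambda_2 \le u_1$ together with $\lambda_0 + \lambda_4 \le 1 - u_1$, which is exactly the SOS2-style dichotomy of logarithmic depth described in the text.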
## 3. Formulation Comparisons
The comparison of the above five methods in terms of the numbers of binary variables, continuous variables, and constraints is given in Table 1. The number of extra binary variables of Methods 1 and 3 is linear in the number of line segments. Methods 4 and 5 use a number of extra binary variables logarithmic in the $m$ line segments, and the number of extra binary variables of Method 2 equals the number of concave piecewise line segments. In deterministic global optimization of a minimization problem, inverse, power, and exponential transformations generate nonconvex expressions that need to be linearly approximated in the reformulated problem. As Table 1 shows, Methods 4 and 5 are therefore superior to Methods 1, 2, and 3 in terms of the numbers of extra binary variables and constraints. Moreover, Method 5 requires fewer extra continuous variables and constraints than Method 4 for linearizing a nonlinear function.
Table 1: Comparison of the five methods for expressing a piecewise linear function with $m$ line segments (i.e., $m+1$ break points).

| Items | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 |
|---|---|---|---|---|---|
| No. of binary variables | $m$ | $q$ (no. of concave piecewise segments) | $m$ | $\lceil \log_2 m \rceil$ | $\lceil \log_2 m \rceil$ |
| No. of continuous variables | $m+1$ | $m+1$ | $0$ | $2m + 2\lceil \log_2 m \rceil$ | $m+1$ |
| No. of constraints | $m+5$ | $m+1$ | $4m$ | $8 + 8\lceil \log_2 m \rceil$ | $3 + 2\lceil \log_2 m \rceil$ |
Till et al. [51] reviewed the literature on the complexity of mixed-integer linear programming (MILP) problems and summarized that the computational complexity varies from $O(d \cdot n^2)$ to $O(2^d \cdot n^3)$, where $n$ is the number of constraints and $d$ is the number of binary variables. Therefore, reducing constraints and binary variables has a greater impact on the computational efficiency of solving MILP problems than reducing continuous variables. When seeking a global solution of a nonlinear programming problem by a piecewise linearization method, a linearization that generates a large number of additional constraints and binary variables decreases the computational efficiency and causes heavy computational burdens. According to the above discussion, Method 5 is more computationally efficient than the other four methods; experimental results from the literature [39, 48, 49] also support this statement.

Beale and Tomlin [43] suggested a formulation for piecewise linear functions using continuous variables in special ordered sets of type 2 (SOS2). Although no binary variables are included in the SOS2 formulation, the nonlinearities are enforced algorithmically and directly in the branch-and-bound algorithm by branching on sets of variables. Since the traditional SOS2 branching schemes have too many dichotomies, the piecewise linearization technique in Method 5 induces an independent branching scheme of logarithmic depth and provides a significant computational advantage [50]. The computational results in Vielma and Nemhauser [50] show that Method 5 outperforms the SOS2 model without binary variables.

The factors affecting the computational efficiency of solving nonlinear programming problems include the tightness of the constructed convex underestimator, the efficiency of the piecewise linearization technique, and the number of transformed variables. An appropriate variable transformation constructs a tighter convex underestimator and reduces the number of break points required in the linearization process to satisfy the same optimality and feasibility tolerances. Vielma and Nemhauser [50] indicated that the formulation of Method 5 is sharp and locally ideal and has favorable tightness properties. They presented experimental results showing that Method 5 significantly outperforms the other methods, especially when the number of break points becomes large. Vielma et al. [39] explained that the formulation of Method 4 is not sharp and is theoretically and computationally inferior to standard MILP formulations (the convex combination and logarithmic convex combination models) for piecewise linear functions.
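To see how quickly these counts diverge, the following Python snippet (our formatting; the formulas are taken directly from Table 1, with Method 2 omitted since its counts depend on the problem-specific $q$) tabulates the extra binary variables and constraints for growing $m$.

```python
from math import ceil, log2

# size formulas copied from Table 1: (no. of binaries, no. of constraints)
rows = {
    "Method 1": lambda m: (m,             m + 5),
    "Method 3": lambda m: (m,             4 * m),
    "Method 4": lambda m: (ceil(log2(m)), 8 + 8 * ceil(log2(m))),
    "Method 5": lambda m: (ceil(log2(m)), 3 + 2 * ceil(log2(m))),
}
print(f"{'m':>6} | " + " | ".join(f"{name:>20}" for name in rows))
for m in (8, 64, 1024):
    cells = [f"{b} bin, {c} cons" for b, c in (rule(m) for rule in rows.values())]
    print(f"{m:>6} | " + " | ".join(f"{cell:>20}" for cell in cells))
```

At $m = 1024$, for example, Method 1 needs 1024 binaries and 1029 constraints, while Method 5 needs only 10 binaries and 23 constraints, which is consistent with the computational advantage reported above.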
## 4. Error Evaluation
For evaluating the error of a piecewise linear approximation, Tsai and Lin [49, 52] and Lin and Tsai [53] utilized the expression $|f(x) - L(f(x))|$ to estimate the error indicated in Figure 2. If $f(x)$ is the objective function, $g_i(x) < 0$ is the $i$th constraint, and $x^*$ is the solution derived from the transformed program, then the linearization needs no further refinement once $|f(x^*) - L(f(x^*))| \le \varepsilon_1$ and $\max_i g_i(x^*) \le \varepsilon_2$, where $|f(x^*) - L(f(x^*))|$ is the evaluated error in the objective, $\varepsilon_1$ is the optimality tolerance, $g_i(x^*)$ is the error in the $i$th constraint, and $\varepsilon_2$ is the feasibility tolerance.

Figure 2: Error evaluation of the linear approximation. (a), (b)

The accuracy of the linear approximation depends significantly on the selection of break points, and more break points increase the accuracy of the approximation. Since adding numerous break points leads to a significant increase in the computational burden, break point selection strategies can be applied to improve the computational efficiency of solving optimization problems by deterministic approaches. Existing break point selection strategies are classified into three categories as follows [54]:

(i) add a new break point at the midpoint of each interval of existing break points;
(ii) add a new break point at the point with the largest approximation error in each interval;
(iii) add a new break point at the previously obtained solution point.

In the deterministic optimization methods for solving nonconvex nonlinear problems [29, 33, 38, 39, 48, 49, 53–56], inverse or logarithmic transformations must be approximated by piecewise linear functions. For example, when the function $y = \ln x$ or $y = x^{-1}$ is piecewise linearized with an appropriate break point selection strategy, adding a new break point at the midpoint of each interval of existing break points or at the point with the largest approximation error doubles the number of line segments in each iteration, whereas adding a new break point at the previously obtained solution point adds only one break point per iteration. How to improve the computational efficiency through a better break point selection strategy still needs further investigation and experiments to reach concrete conclusions.
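As a small numerical illustration of the refinement idea (our own loop, not code from [54]), the sketch below applies strategy (ii) to $f(x) = \ln x$ over $[1, 4]$: each pass estimates $\max |f(x) - L(f(x))|$ on a dense grid and inserts one new break point where the error currently peaks.

```python
import math

def max_error(f, a, samples=2000):
    """Largest |f(x) - L(f(x))| over a dense grid, and where it occurs."""
    worst, x_worst = 0.0, a[0]
    for i in range(samples + 1):
        x = a[0] + (a[-1] - a[0]) * i / samples
        k = max(j for j in range(len(a) - 1) if a[j] <= x)   # active segment
        s = (f(a[k + 1]) - f(a[k])) / (a[k + 1] - a[k])
        err = abs(f(x) - (f(a[k]) + s * (x - a[k])))
        if err > worst:
            worst, x_worst = err, x
    return worst, x_worst

f, a = math.log, [1.0, 4.0]
for _ in range(4):
    # strategy (ii): insert one break point where the approximation error peaks
    err, x_star = max_error(f, a)
    print(f"{len(a) - 1} segments: max error {err:.5f}")
    a = sorted(a + [x_star])
```

The printed errors shrink with each inserted break point, while the number of segments grows only by one per pass, in contrast to the doubling behavior of the midpoint strategy.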
## 5. Conclusions
This study provides an overview of some of the most commonly used piecewise linearization methods in deterministic optimization. From the formulation point of view, the numbers of extra binary variables, continuous variables, and constraints have been decreasing in the most recently developed methods, especially the number of extra binary variables, which can cause heavy computational burdens. Additionally, a good piecewise linearization method must also have favorable tightness properties, such as being sharp and locally ideal. Since an effective break point selection strategy is important for enhancing the computational efficiency of a linear approximation, more work should be done to study the optimal positioning of break points. Although a logarithmic piecewise linearization method with good tightness properties has been proposed, finding an approximately global optimum of a large-scale nonconvex problem is still too time consuming. Developing an efficient polynomial-time algorithm for solving nonconvex problems by piecewise linearization techniques remains a challenging question. This contribution gives only a few preliminary insights and points toward issues deserving additional research.
---
*Source: 101376-2013-11-06.xml*
# Prevalence of Soil-Transmitted Helminthiases and Schistosomiasis in Preschool Age Children in Mwea Division, Kirinyaga South District, Kirinyaga County, and Their Potential Effect on Physical Growth
**Authors:** Stephen Sifuna Wefwafwa Sakari; Amos K. Mbugua; Gerald M. Mkoji
**Journal:** Journal of Tropical Medicine
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1013802
---
## Abstract
Intestinal parasitic infections can significantly contribute to the burden of disease, may cause nutritional and energetic stress, and negatively impact the quality of life in low income countries of the world. This cross-sectional study done in Mwea irrigation scheme, in Kirinyaga, central Kenya, assessed the public health significance of soil-transmitted helminthiases (STH), schistosomiasis, and other intestinal parasitic infections, among 361 preschool age children (PSAC) through fecal examination, by measuring anthropometric indices, and through their parents/guardians, by obtaining sociodemographic information. Both intestinal helminth and protozoan infections were detected, and, among the soil-transmitted helminth parasites, there were Ascaris lumbricoides (prevalence, 3%), Ancylostoma duodenale (<1%), and Trichuris trichiura (<1%). Other intestinal helminths were Hymenolepis nana (prevalence, 3.6%) and Enterobius vermicularis (<1%). Schistosoma mansoni occurred at a prevalence of 5.5%. Interestingly, the protozoan, Giardia lamblia (prevalence, 14.7%), was the most common among the PSAC. Other protozoans were Entamoeba coli (3.9%) and Entamoeba histolytica (<1%). Anthropometric indices showed evidence of malnutrition. Intestinal parasites were associated with hand washing behavior, family size, water purification, and home location. These findings suggest that G. lamblia infection and malnutrition may be significant causes of ill health among the PSAC in Mwea, and, therefore, an intervention plan is needed.
---
## Body
## 1. Introduction
Soil-transmitted helminthiases (STH) and schistosomiasis are listed among the many Neglected Tropical Diseases, with an established association with chronic, disabling, and disfiguring conditions occurring in settings of extreme poverty, and even more so in rural poor and disadvantaged urban populations characterized by poor sanitation [1–3]. They contribute significantly to the burden of disease, causing nutritional and energetic stress and negatively impacting the quality of life, and as such these parasitic infections have also been associated with malnutrition, which contributes to more than one-third of all deaths of under-five children [2].

Estimates show that, in sub-Saharan Africa (SSA), about 198 million people are infected with hookworms [4], 192 million with schistosomiasis [5], 173 million with ascariasis [4], and 162 million with trichuriasis [4]. Based on the initial global percentage prevalence determined over 60 years ago [6], it is believed that the prevalence of STH has remained relatively constant in sub-Saharan Africa [4], where between one-quarter and one-third of the population is affected by one or more STH infections [4], with preschool age children and school age children carrying the highest prevalence and intensities [7, 8]. Available data estimate that over 270 million preschool age children and over 600 million school age children live in areas characterized by intense transmission of intestinal parasites [9]. These infections have also been strongly associated with malnutrition [10], which is known to contribute to more than one-third of all deaths of under-five children [11].

In Kenya, the National Multi Year Strategic Plan for the Control of Neglected Tropical Diseases has prioritized intestinal worms among other NTDs (Neglected Tropical Diseases) as diseases of great public health importance, mostly affecting the poorest of the poor [12]. Recent studies in Kenya estimate that about 6 million people are infected with schistosomiasis and even more are at risk [13]. The prevalence is estimated to range from 5% to over 65% in various communities in Kenya. It is endemic in 56 districts, with the highest prevalence of Schistosoma mansoni occurring in the lower Eastern and Lake Regions of Kenya and in irrigation schemes [14]. The Kenya Demographic and Health Survey has also shown that 35.3% of under-five children were stunted nationwide, 6.7% were wasted, and 16.3% were underweight, suggesting the significance of the burden of malnutrition, particularly in rural Kenya [15]. To what extent the burden of malnutrition is contributed to by intestinal parasites, in particular helminth infections, remains to be accurately determined [16].

The prevalence of intestinal schistosomiasis, STH, and other intestinal parasitic infections in preschool age children (PSAC) in the Mwea rice irrigation scheme of Kirinyaga County in central Kenya is not well documented, but, according to research done in an endemic community in western Kenya, the prevalence in PSAC was demonstrated to be up to 37% [17], indicating the significant risk of infection in this age group in an endemic setting. Although there is a national school deworming programme, which to date is still being implemented at the national level, the control programme has no clear policy for inclusion of PSAC (≤5 years old) in the mass treatment for STH and schistosomiasis.
This highlights the need for a baseline survey to determine the prevalence, intensity, and possible effects on nutritional status of schistosomiasis and STH, among other intestinal parasites, in PSAC. In view of the lack of information regarding preschool age children, this study was undertaken to determine the prevalence of intestinal parasites in this age group, the risk factors favoring the spread of the parasites, and, subsequently, the possible association between the parasitic infections and nutritional status.
## 2. Materials and Methods
### 2.1. Study Area
This study was conducted in the Mwea Division of Kirinyaga South district in Kirinyaga County, central Kenya (00°40′54′′S, 037°20′36′′E). The area is approximately 110 km northeast of Nairobi, and the main agricultural activity is rice farming under flood irrigation. Mwea is situated in the lower altitude zone (approx. 1150 mASL) of the district, on expansive flat land mainly characterized by black cotton and red volcanic soils. Mwea Division has a land area of approximately 542.8 sq. km and a population of 190,512, with an urban population of 7,625 (census, 2009). The specific survey area was Thiba ward, which comprises the Nguka and Thiba sublocations of Kirinyaga County, covers approximately 34 sq. km, and has a population of 31,689. The nearest large town and administrative centre for Thiba ward is Wang'uru Town. The geography of the area is mainly flat, at an altitude ranging from 1150 to 1200 mASL. The area is mainly known for its horticultural crop farming, where the main cash crop is rice grown under flood irrigation, followed by maize. The setting for the study site was largely a rural and peri-urban population.
### 2.2. Study Design
The study was a comparative cross-sectional study carried out to collect both quantitative and qualitative data. Based on the objectives, the study design investigated the possible association between infections and intensities of STH, schistosomiasis, and other intestinal helminth infections, on the one hand, and indicators of current physical growth status, on the other.
### 2.3. Study Population
The target population of the study was preschool age children between 2 and 5 years of age who had lived in the study area for at least the past 6 months. Using a random sampling technique, the study selected 13 schools within the study area. The schools included Kandongu Primary School, Kiorugari Primary School, Mbui Njeru Primary School, Mukou Primary School, Ngurubani Primary School, AIPCA Primary School, Rurumi Nursery School, Thiba Primary School, Midland Day Care, Sibling Day Care, St Joseph Day Care, Thiba Glory Day Care, and Vision Day Care centres. Parents and guardians of all eligible children were invited to a meeting where, out of 517 parents in attendance, 361 consented to allow their children to participate in the study, and 361 children were enrolled into the study.
### 2.4. Data Collection
For every child recruited, a unique identifier number was assigned, and information regarding the child's name, sex, age, and area of residence (i.e., rural or urban) was collected. A questionnaire was also administered to consenting parents and guardians and was used to collect socioeconomic information on the parents/guardians and other behavioral information on the participating children considered relevant to the risk of infection.
#### 2.4.1. Questionnaire
Following the acquisition of an informed consent from the parents or guardians, questionnaires were administered to the parents of the enrolled children. The questionnaires were provided in both English and Swahili. The study also recruited translators in the local language (Kikuyu) to help parents better understand the questionnaire.
#### 2.4.2. Anthropometry
All children were examined by a qualified and registered community nurse/community health worker recruited by the study, who carried out physical examinations and measurements to obtain their weight, age, height, and mid-upper arm circumference. These parameters were collected as per the guidelines in the National Health and Nutrition Examination Survey's Anthropometry Procedures Manual developed by the United States Centers for Disease Control and Prevention (CDC). For purposes of accuracy, the instruments were calibrated regularly, and random repeat measurements were done as a quality control measure. From the measurements, Z-score values for height-for-age (HAZ), weight-for-age (WAZ), and weight-for-height (WHZ) were calculated and used as indices of nutritional status.
#### 2.4.3. Stool Samples Collection and Examination
Each participant was provided with a stool sample collection container with unique identifiers, and, with the help of activity coordinators, approximately 4 grams (g) of fresh stool was collected in polypots from each participating child. From each sample collected, Kato-Katz thick smears were prepared for examination under a compound microscope. The fecal smears were prepared in duplicate on glass microscope slides to improve detection levels. The samples were processed within an hour of collection. The Kato-Katz technique was mainly used to detect eggs and ova of Schistosoma mansoni, Ancylostoma duodenale, Ascaris lumbricoides, and Trichuris trichiura. Where infection was detected, the intensity of infection was also noted and graded as heavy, moderate, or low in accordance with the WHO proposed criteria [18, 19]. Further diagnosis using the formol concentration technique was done to detect the presence of other intestinal parasites of public health significance that may have passed undetected by the Kato-Katz technique. Following diagnosis, subjects were divided into 3 groups: uninfected, infected with a single species, and infected with two or more species of intestinal helminths.
### 2.5. Study Approval
The study protocol was approved by the Scientific and Ethics Review Unit of the Kenya Medical Research Institute. Approval to carry out the study in the area was also sought from administrative authorities in the schools, the Mwea Division Health Administration, and the Kirinyaga County Health Administration. Prior to enrollment of the study subjects, a meeting with parents/guardians of all eligible children was called with the help of the schools' administration, so that the study purpose, objectives, and procedures could be explained, including participants' rights whether they accepted or declined to have their children participate in the study. Written informed consent was obtained, and the children were recruited into the study. The parents/guardians were assured of the privacy and confidentiality of the information collected. All children found to be infected with intestinal parasites received the appropriate medication prescribed by a qualified and registered clinician, where albendazole (for soil-transmitted helminths) and praziquantel (for schistosomiasis) were administered in their recommended doses as per the WHO recommendations [18]. Other infections or conditions were referred to the local health clinic.
### 2.6. Statistical Analysis
The data collected were first entered and stored in Microsoft Excel 2010. The data were verified and cross-checked for errors. A copy of the data was then recoded and exported into the Statistical Package for Social Sciences (SPSS) Version 20, and baseline descriptive statistics were drawn. Comparison of weight and height against infection status was done using an independent t-test to assess significant differences in weight and height between the infected and the noninfected. An ANOVA test was used to assess differences in height and weight between the noninfected, the infected, and those with multiple infections. Anthropometric data were exported to WHO Anthro [20], where WAZ, HAZ, and WHZ were derived and used to determine nutritional status. The anthropometric variables, where applicable, were reported as mean ± standard deviation (SD) with 95% confidence intervals. Based on the Z-score values obtained for WAZ, HAZ, and WHZ, the children were categorized as normal (≤2 and ≥−2 Z-score), underweight (≥−3 and <−2 Z-score), or severely underweight (<−3 Z-score); stunted (≥−3 and <−2 Z-score) or severely stunted (<−3 Z-score); and wasted (≥−3 and <−2 Z-score) or severely wasted (<−3 Z-score). Binary variables were compared using Student's t-test and the Chi-square test where applicable. Demographic and socioeconomic data were entered as categorical variables, and the frequencies and percentages were calculated. They were then assessed using a binary logistic regression model with the baseline category as the least likely to result in an infection outcome. All statistical tests were evaluated for significance at P < 0.05 (95% CI, confidence interval).
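As an illustration of these cut-offs, the short Python sketch below (our own helper, not part of the study's analysis pipeline) applies the quoted Z-score thresholds to a single index value; values above +2 are not assigned a category in the text, so the sketch flags them separately.

```python
def classify(z, label):
    """Apply the quoted Z-score cut-offs to one index (WAZ, HAZ, or WHZ)."""
    if z < -3:
        return "severely " + label
    if z < -2:
        return label                   # >= -3 and < -2
    if z <= 2:
        return "normal"                # <= 2 and >= -2
    return "not categorized (> +2)"    # the text defines no category above +2

# WAZ -> underweight, HAZ -> stunted, WHZ -> wasted
print(classify(-2.5, "underweight"))   # underweight
print(classify(-3.2, "stunted"))       # severely stunted
print(classify(0.4, "wasted"))         # normal
```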
## 2.1. Study Area
This study was conducted in the Mwea Division of Kirinyaga South district in Kirinyaga County, central Kenya (00°40′54′′S, 037°20′36′′E). This area is approximately 110 km North East of Nairobi and the main agricultural activity is rice farming which is grown under flood irrigation. Mwea is situated in the lower altitude zone (approx. 1150 mASL) of the district in an expansive flat land mainly characterized by black cotton and red volcanic soils. Mwea Division has a land area of approximately 542.8 sq. Km. Mwea Division has a population of 190,512 with an urban population of 7,625 (census, 2009). The specific area (survey area) of study was Thiba ward which comprises Nguka and Thiba sublocations of Kirinyaga County which is approximately 34 sq. Km with a population of 31,689. The nearest large town and administrative centre for Thiba ward is Wang’uru Town.The geography of the area is mainly flat at an altitude ranging from 1150 to 1200 mASLThe area is mainly known for its horticultural crop farming where the main cash crop is rice grown under flood irrigation followed by maize.The setting for the study site was largely a rural and peri-urban population.
## 2.2. Study Design
The study was a comparative cross-sectional study carried out to collect both quantitative and qualitative data. Based on the objectives, the study design investigated the possible association between infections and intensities of STH and schistosomiasis among other intestinal helminth infections on the one hand, with indicators of current physical growth status, on the other.
## 2.3. Study Population
The target population of the study was generally preschool age children between ≥2 and ≤5 years of age who have at least lived in the area under study for the past 6 months. Using a random sampling technique, the study selected 13 schools within the study area. The schools included Kandongu Primary School, Kiorugari Primary School, Mbui Njeru Primary School, Mukou Primary School, Ngurubani Primary School, AIPCA Primary School, Rurumi Nursery School, Thiba Primary School, Midland Day Care, Sibling Day Care, St Joseph Day Care, Thiba Glory Day Care, and Vision Day Care centres. Parents and guardians of all eligible children were invited to a meeting where, out of 517 parents in attendance, 361 consented to allow their children to participate in the study, and 361 children were enrolled into the study.
## 2.4. Data Collection
For every child recruited, a unique identifier number was assigned and information regarding the child/infant’s name, sex and age, and area of residence (i.e., rural or urban) was collected. A questionnaire was also administered to consenting parents and guardian and was used to collect socioeconomic information of the parents/guardians and other behavioral information of the participating children considered to be relevant in contributing to the risk of infection.
### 2.4.1. Questionnaire
Following the acquisition of an informed consent from the parents or guardians, questionnaires were administered to the parents of the enrolled children. The questionnaires were provided in both English and Swahili. The study also recruited translators in the local language (Kikuyu) to help parents better understand the questionnaire.
### 2.4.2. Anthropometry
All children were examined by a qualified and registered community nurse/community health worker recruited by the study who carried out physical examination and measurements to obtain their weight, age, height, and mid-upper arm circumference. These parameters were collected as per the guidelines in the National Health and Nutrition Examination Survey’s Anthropometry Procedures Manual developed by the United States Centre for Disease Control and Prevention (CDC). For purposes of accuracy, the instruments were calibrated regularly and random repeat measurements were done as a quality control measure. From the measurements,Z-score values for height-for-age (HAZ), weight-for-age (WAZ), and weight-for-height (WHZ) were calculated and used as indices for nutritional status.
### 2.4.3. Stool Samples Collection and Examination
Each participant was provided with a stool sample collection container with unique identifiers, and with the help of activity coordinators approximately 4 grams (gm) of fresh stool sample was collected using polypots from each participating child.From each sample collected, Kato-Katz thick smears were prepared for examination under a compound microscope. The fecal smears were prepared in duplicate on glass microscope slides to improve detection levels. The samples were processed within an hour of collection time. The Kato-Katz technique was mainly used to detect eggs and ova ofSchistosoma mansoni,Ancylostoma duodenale,Ascaris lumbricoides, andTrichuris trichiura. Where infection was detected, intensity of infection was also noted and graded as either heavy, moderate, or low in accordance with the WHO proposed criteria [18, 19].Further diagnosis using the formol concentration technique was done to detect presence of other intestinal parasites of public health significance that may have passed undetected in the Kato-Katz technique. Following diagnosis, subjects were divided into 3 groups: uninfected, infected with a single species, and infected with two or more species of intestinal helminthes.
### 2.5. Study Approval
The study protocol was approved by the Scientific and Ethics Review Unit of the Kenya Medical Research Institute. Approval to carry out the study in the area was also sought from administrative authorities in the schools, the Mwea Division Health Administration, and the Kirinyaga County Health Administration. Prior to enrollment, a meeting with the parents/guardians of all eligible children was convened with the help of the schools' administration, at which the study purpose, objectives, and procedures were explained, including participants' rights whether they accepted or declined to have their children participate. Written informed consent was obtained and the children were recruited into the study. The parents/guardians were assured of the privacy and confidentiality of the information collected. All children found to be infected with intestinal parasites received appropriate medication prescribed by a qualified and registered clinician: albendazole (for soil-transmitted helminths) and praziquantel (for schistosomiasis) were administered at the recommended doses as per the WHO recommendations [18]. Other infections or conditions were referred to the local health clinic.
### 2.6. Statistical Analysis
The data collected were first entered and stored in Microsoft Excel 2010, then verified and crosschecked for errors. A copy of the data was recoded and exported into the Statistical Package for the Social Sciences (SPSS) Version 20, and baseline descriptive statistics were drawn. Weight and height were compared against infection status using an independent t-test to assess differences between the infected and the noninfected; an ANOVA test was used to assess differences in height and weight among the noninfected, the infected, and those with multiple infections. Anthropometric data were exported to WHO Anthro [20], where WAZ, HAZ, and WHZ were derived and used to determine nutritional status; where applicable, anthropometric variables are reported as mean ± standard deviation (SD) with 95% confidence intervals. Based on the Z-score values obtained for WAZ, HAZ, and WHZ, the children were categorized as normal (≥−2 and ≤2 Z-score); underweight (≥−3 and <−2 Z-score) or severely underweight (<−3 Z-score); stunted (≥−3 and <−2 Z-score) or severely stunted (<−3 Z-score); and wasted (≥−3 and <−2 Z-score) or severely wasted (<−3 Z-score); a short sketch of this categorization follows. Binary variables were compared using Student's t-test and the Chi-square test where applicable. Demographic and socioeconomic data were entered as categorical variables, and frequencies and percentages were calculated; these were then assessed using a binary logistic regression model, with the baseline category being the least likely to result in an infection outcome. All statistical tests were evaluated for significance at P<0.05 with 95% confidence intervals (CI).
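The categorization above maps directly onto the Z-score cut-offs. A minimal Python sketch (the function name is ours; the upper branch simply flags values above +2 SD, which Table 6 reports as obese for WHZ):

```python
def classify(z: float) -> str:
    """Map a Z-score (WAZ, HAZ, or WHZ) onto the study's nutrition categories."""
    if z < -3:
        return "severe"      # severely underweight / stunted / wasted
    if z < -2:
        return "moderate"    # underweight / stunted / wasted
    if z <= 2:
        return "normal"
    return "above +2 SD"     # e.g., obese for WHZ (reported as >2z in Table 6)

for z in (-3.4, -2.1, 0.3, 2.6):
    print(f"{z:+.1f} -> {classify(z)}")
```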
## 3. Results
### 3.1. General Characteristics of the Study Group
A total of 361 children were recruited into the study, of which 50.69% were male (n=183) and 49.31% were female (n=178). The mean age was 46.62 ± 9.68 months (95% CI 45.62–47.62), mean height was 101.78 ± 6.57 cm (95% CI 101.10–102.45), and mean weight was 14.71 ± 2.08 kg (95% CI 14.49–14.92). Table 1 gives an overall summary of the study group demographics, while Table 2 provides the age and sex distribution of the population.
Table 1. Summary of anthropometric descriptive statistics of the sampled study population.

| | Mean ± SD | 95% confidence interval |
|---|---|---|
| **Age (months)** | | |
| Male (n = 183) | 46.30 ± 10.01 | 44.85–47.75 |
| Female (n = 178) | 46.93 ± 9.36 | 45.55–48.30 |
| Total (n = 361) | 46.62 ± 9.68 | 45.62–47.62 |
| **Height (cm)** | | |
| Male (n = 183) | 101.34 ± 6.43 | 100.41–102.27 |
| Female (n = 178) | 102.23 ± 6.69 | 101.24–103.21 |
| Total (n = 361) | 101.78 ± 6.57 | 101.10–102.45 |
| **Weight (kg)** | | |
| Male (n = 183) | 14.80 ± 2.06 | 14.50–15.10 |
| Female (n = 178) | 14.61 ± 2.11 | 14.30–14.92 |
| Total (n = 361) | 14.71 ± 2.08 | 14.49–14.92 |

n = total number of children.
Table 2. Age/sex distribution of the sampled study population (n = 361).

| Age group | Female (count) | Female (%) | Male (count) | Male (%) | Total (count) | Total (%) |
|---|---|---|---|---|---|---|
| <2.5 years | 14 | 3.88% | 20 | 5.54% | 34 | 9.42% |
| 2.5–3.0 years | 13 | 3.60% | 16 | 4.43% | 29 | 8.03% |
| 3.0–3.5 years | 24 | 6.65% | 22 | 6.09% | 46 | 12.74% |
| 3.5–4.0 years | 38 | 10.53% | 39 | 10.80% | 77 | 21.33% |
| 4.0–4.5 years | 44 | 12.19% | 46 | 12.74% | 90 | 24.93% |
| >4.5 years | 45 | 12.47% | 40 | 11.08% | 85 | 23.55% |
| Grand total | 178 | 49.31% | 183 | 50.69% | 361 | 100.00% |

The same number of families participated in the questionnaire on behavioral trends and socioeconomic status; a summary of the responses is tabulated in Table 3.
Table 3. Frequency distribution of socioeconomic characteristics of the sampled study population.

| Attribute | Response | Frequency | % frequency |
|---|---|---|---|
| Knowledge of disease transmission | No | 114 | 31.6% |
| | Yes | 247 | 68.4% |
| Geophagy (soil eating) | No | 74 | 20.5% |
| | Yes | 287 | 79.5% |
| Hand washing (child) | Never | 118 | 32.7% |
| | Sometimes | 213 | 59.0% |
| | Always | 30 | 8.3% |
| Shoe wearing | Sometimes | 325 | 90.0% |
| | Always | 36 | 10.0% |
| Water source (domestic) | River/canal | 292 | 80.9% |
| | Borehole | 43 | 11.9% |
| | Piped | 26 | 7.2% |
| River bathing (child) | No | 98 | 27.1% |
| | Yes | 263 | 72.9% |
| Water purification method | None | 71 | 19.7% |
| | Filtration | 115 | 31.9% |
| | Boiling | 79 | 21.9% |
| | Chlorination | 96 | 26.6% |
| Bathroom waste water disposal | Open ground | 275 | 76.2% |
| | Latrine | 86 | 23.8% |
| Employment status (father) | No | 75 | 20.8% |
| | Yes | 286 | 79.2% |
| Employment status (mother) | No | 236 | 65.4% |
| | Yes | 125 | 34.6% |
| Home ownership | Self-own | 208 | 57.6% |
| | Rental | 153 | 42.4% |
| Home location classification | Rural | 284 | 78.7% |
| | Urban | 77 | 21.3% |
| Family with children above 5 yrs | No | 249 | 69.0% |
| | Yes | 112 | 31.0% |
| House type | Rural | 289 | 80.1% |
| | Wooden | 8 | 2.2% |
| | Iron sheets | 12 | 3.3% |
| | Brick/stone | 52 | 14.4% |
### 3.2. Parasitological Investigations
Out of the 361 children enrolled in the study, 108 (29.9%) were found to be infected with an intestinal parasite, of which 15 (3.9%) had multiple parasite infections. The prevalence of each parasitic infection, combining single and multiple infections, is shown in Table 4: Ancylostoma duodenale 0.6%, Ascaris lumbricoides 3.3%, Entamoeba histolytica 0.3%, Enterobius vermicularis 0.83%, Entamoeba coli 3.88%, Giardia lamblia 14.68%, Hymenolepis nana 3.6%, Schistosoma mansoni 5.54%, and Trichuris trichiura 1.11% (see the computational sketch after Table 4). Prevalence for most infections tended to increase with age, as illustrated in Table 5. There was a significant difference in the prevalence of Schistosoma mansoni infection between boys and girls, with boys showing a higher tendency to be infected with schistosomiasis (t = 3.308; P=0.03; 95% CI 0.026–0.119). All other infections showed no statistically significant difference between boys and girls. Based on independent t-tests comparing the weights and heights of the infected versus the uninfected, there was no statistically significant difference by overall infection status (weight: P=0.07482, t = 1.6520; height: P=0.2230, t = 1.6519); there was, however, a statistically significant difference in weight between those infected with Giardia lamblia and those not infected (P=0.0362, t = 1.8015). All other infections individually showed no significant difference in weight or height between the infected and the noninfected.
Table 4. Prevalence of parasitic infections in the sampled study population in Mwea Division.

| Parasite | Frequency | Percentage | Boys | Percentage | Girls | Percentage |
|---|---|---|---|---|---|---|
| Ancylostoma duodenale | 2 | 0.55% | 1 | 0.53% | 1 | 0.58% |
| Ascaris lumbricoides | 12 | 3.05% | 6 | 3.19% | 5 | 2.89% |
| E. coli | 14∗ | 3.88% | 5 | 2.66% | 9 | 5.20% |
| E. histolytica | 1 | 0.28% | 0 | 0.00% | 1 | 0.58% |
| E. vermicularis | 3 | 0.83% | 1 | 0.53% | 2 | 1.16% |
| G. lamblia | 54 | 14.68% | 29 | 15.43% | 25 | 13.87% |
| H. nana | 13∗ | 3.60% | 4 | 2.13% | 9 | 5.20% |
| No infection | 253 | 66.48% | 123 | 65.43% | 117 | 67.63% |
| Schistosoma mansoni | 20∗ | 5.54% | 17 | 9.04% | 3 | 1.73% |
| Trichuris trichiura | 4∗ | 1.11% | 2 | 1.06% | 2 | 1.16% |
| Grand total | 361 | 100.00% | 188 | 100.00% | 173 | 100.00% |

∗Occurrence as multiple infections.
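As an aside, the prevalence figures in Table 4 can be checked directly from the counts. The short Python sketch below does this; the normal-approximation (Wald) interval is our assumption, since the paper does not state which interval method was used.

```python
from math import sqrt

def prevalence_ci(cases: int, n: int, z: float = 1.96):
    """Point prevalence with a normal-approximation (Wald) 95% CI."""
    p = cases / n
    half = z * sqrt(p * (1 - p) / n)   # standard error times the z quantile
    return p, max(0.0, p - half), p + half

p, lo, hi = prevalence_ci(20, 361)     # S. mansoni: 20 of 361 children
print(f"{p:.2%} (95% CI {lo:.2%} to {hi:.2%})")  # 5.54% (3.18% to 7.90%)
```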
Table 5. Frequency distribution of parasitic infections per age group.

| Age group | S. mansoni | Hookworm | A. lumbricoides | T. trichiura | G. lamblia | H. nana | E. vermicularis | E. histolytica | E. coli |
|---|---|---|---|---|---|---|---|---|---|
| <2.5 yrs | 1 | 0 | 1 | 0 | 4 | 1 | 0 | 0 | 3 |
| 2.5–3.0 yrs | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 |
| 3.0–3.5 yrs | 2 | 0 | 1 | 0 | 6 | 0 | 1 | 0 | 3 |
| 3.5–4.0 yrs | 3 | 1 | 1 | 0 | 9 | 1 | 0 | 1 | 5 |
| 4.0–4.5 yrs | 5 | 1 | 3 | 4 | 15 | 7 | 2 | 0 | 0 |
| 4.5–5.0 yrs | 8 | 0 | 4 | 0 | 19 | 4 | 0 | 0 | 2 |
| Grand total | 20 | 2 | 11 | 4 | 54 | 13 | 3 | 1 | 14 |
### 3.3. Nutritional Status
#### 3.3.1. Weight and Height
Based on the children's weight-for-height, the prevalence of malnutrition was determined and is presented in Table 6. The mean weight of the participants (n=361) was 14.71 kg (95% CI 14.49–14.92) and the mean height was 101.78 cm (95% CI 101.10–102.45). The mean heights and weights of the children showed no statistically significant difference between males and females.
Table 6. Prevalence of malnutrition in PSAC in Mwea Division based on the children's Z-scores.

| Index | Group | Mean Z-score | 95% confidence interval | % of moderately malnourished children | % of severely malnourished children |
|---|---|---|---|---|---|
| WAZ | Male (n=183) | −0.66 ± 1.08 | −0.82 to −0.51 | 14.2% underweight (<−2z) | 2.2% severely underweight (<−3z) |
| | Female (n=178) | −0.64 ± 1.07 | −0.79 to −0.48 | 11.8% underweight (<−2z) | 1.1% severely underweight (<−3z) |
| HAZ | Male (n=183) | −0.11 ± 1.37 | −0.31 to 0.09 | 8.2% stunted (<−2z) | 0.5% severely stunted (<−3z) |
| | Female (n=178) | 0.15 ± 1.25 | −0.04 to 0.33 | 3.4% stunted (<−2z) | 0.16% severely stunted (<−3z) |
| WHZ | Male (n=183) | −0.90 ± 1.12 | −1.07 to −0.74 | 20.8% wasted (<−2z), 0.0% obese (>2z) | 3.8% severely wasted (<−3z) |
| | Female (n=178) | −1.10 ± 1.04 | −1.25 to −0.95 | 20.2% wasted (<−2z), 0.0% obese (>2z) | 3.4% severely wasted (<−3z) |

CI = confidence interval, n = total number of children, and z = Z-score.
The prevalence of severe stunting, severe underweight, and severe wasting was 0.6% (n=2; 95% CI −0.2–1.3), 1.7% (n=6; 95% CI 0.3–3.0), and 3.6% (n=13; 95% CI 2.1–6.2), respectively. Seven boys and 8 girls were found to be severely wasted, 1 boy and 1 girl were severely stunted, and 4 girls and 2 boys were severely underweight. The prevalence of wasting, underweight, and stunting was also noted to increase with age. There were also significant differences in HAZ (P=0.036, t = 2.108, 95% CI = −0.6486 to −0.2251) and WHZ (P=0.022, t = 2.303, 95% CI = 0.0372–0.4738) between boys and girls (a computational sketch follows Table 7). The results for height and weight and the prevalence of malnutrition are shown in Tables 6 and 7.
Table 7. Factors associated with the general prevalence of infection in preschool age children in Mwea Division: a binary logistic regression model.

| Variable | OR | P value | 95% CI |
|---|---|---|---|
| Knowledge of disease transmission | 0.862 | 0.635 | 0.629–2.137 |
| Geophagy | 0.975 | 0.947 | 0.459–2.072 |
| Hand washing: never | 6.478 | 0.010∗ | 1.553–27.015 |
| Hand washing: sometimes | 3.401 | 0.093 | 0.817–14.167 |
| Shoe wearing | 0.405 | 0.155 | 0.117–1.406 |
| Water source: borehole | 0.621 | 0.566 | 0.122–3.167 |
| Water source: river/canal | 0.194 | 0.088 | 0.029–1.278 |
| Water purification: none | 3.602 | 0.008∗ | 1.397–9.288 |
| Water purification: filtration | 0.778 | 0.537 | 0.351–1.725 |
| Water purification: boiling | 1.272 | 0.572 | 0.552–2.932 |
| Family with children above 5 years | 0.390 | 0.007∗ | 1.293–5.088 |
| Constant | 6.206 | 0.216 | |

OR = odds ratio, CI = confidence interval, and ∗ = variables with statistical significance.
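For illustration, the sex comparisons reported above can be reproduced with an independent-samples t-test. The Python sketch below (using SciPy, an assumption, since the paper's analyses were run in SPSS) draws synthetic HAZ values whose means and SDs echo Table 6; it does not use the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic Z-scores only; means/SDs echo Table 6 (HAZ rows).
haz_boys = rng.normal(-0.11, 1.37, 183)
haz_girls = rng.normal(0.15, 1.25, 178)

# Student's t-test with pooled (equal) variances, as in a standard SPSS run.
t, p = stats.ttest_ind(haz_boys, haz_girls)
print(f"t = {t:.3f}, P = {p:.3f}")
```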
Based on the children's general infection status, there was a significant difference in WAZ (P=0.000; t = 3.675; 95% CI = 0.2162–0.7175) and HAZ (P=0.001; t = 3.383; 95% CI = 0.2438–0.9210) between the infected and the noninfected across all parasitic infections. With regard to specific infections, children with Giardia lamblia infection showed significantly lower mean weight (14.14 versus 14.80 kg; P=0.031, t = 2.171; 95% CI = 0.0626–1.2669), mean weight-for-age Z-score (−1.275 versus −0.542; P=0.000, t = 4.728; 95% CI = 0.4285–1.0387), and mean height-for-age Z-score (−0.7582 versus 0.2776; P=0.000, t = 4.728; 95% CI = 0.6075–1.464) than the noninfected children. With regard to wasting, both sexes were affected, with boys showing a slightly higher degree of severe wasting and girls a slightly higher number of moderate wasting cases; a Student's t-test showed that this slight difference was not statistically significant. Table 6 summarizes the percentages of children affected by malnutrition. Figure 1 further shows that the majority of girls, although within normal limits (Z-score values within 2 standard deviations), tended to deviate towards the negative, with a mean Z-score of −1.10; this likely reflects many of the girls recording lower weight-for-height Z-scores while remaining within the normal interval. By contrast, the majority of boys within the normal WHO interval recorded Z-scores closer to the WHO mean. Figure 1 also draws attention to the percentage of children falling outside the −2 standard deviation mark, indicating the percentage of children with wasting.
Figure 1. A plot of weight-for-height Z-scores by gender for the PSAC in Mwea Division against the recommended WHO standards.

With regard to height-for-age, Figure 2 indicates that more boys were affected by stunting, with 8.2% of boys moderately stunted compared to 3.4% of girls (i.e., the percentage of children falling outside the −2 SD WHO standard interval). For severe stunting, boys again showed a slightly higher percentage than girls. This was confirmed by the Student's t-test, which showed a statistically significant difference in HAZ (P=0.036, t = 2.108, 95% CI = −0.6486 to −0.2251) between boys and girls.
Figure 2. A plot of height-for-age Z-scores by gender for the PSAC in Mwea Division against the WHO recommended standards.

The weight-for-age Z-score values show boys to be slightly more affected by malnutrition (14.2%) than girls (11.8%); the same trend is observed for severe malnutrition, as shown in Table 6. As shown in Figure 3, the boys' curve shows some degree of left skewness, although it is centred on the mean, and this skewness translates into the slightly higher percentage of boys affected by malnutrition. This is confirmed by a Student's t-test showing a statistically significant difference in WHZ (P=0.022, t = 2.303, 95% CI = 0.0372–0.4738) between boys and girls. The girls' curve tends to shift slightly to the left, indicating that the girls are centred towards the negative side of the WHO mean.
Figure 3. A plot of weight-for-age Z-scores by gender for the PSAC in Mwea Division against the WHO recommended standards.

With regard to socioeconomic and demographic factors, the mean weight of the children was significantly lower among those whose parents had other children above the age of 5 years (15.02 kg versus 13.96 kg; 95% CI = 0.5931–1.5117; t = 4.507; P=0.000). A summary of the socioeconomic and behavioral characteristics of the study population (see Table 3), focusing on factors that may influence infection and nutritional status, showed that 68.4% of the sampled population were aware of ways to prevent transmission of intestinal parasites; however, the vast majority fell short of applying preventive measures, most of them lacking the means to implement such measures. A binary logistic regression model performed to ascertain the effects of demographic, behavioral, and socioeconomic status on the children's infection status was statistically significant (χ2 = 104.4, P=0.000); it explained 35.6% (Nagelkerke R2) of the variance in infection and correctly classified 78.1% of the cases. The model revealed that the infection status of children was significantly influenced by their hand washing behavior, their water purification method, the classification of their home location, and whether the family had other children above the age of 5. Children who reportedly never washed their hands at the key recommended times were 6.5 times more likely to be infected (odds ratio (OR) 6.478, P=0.010), children in families with siblings above 5 years were 2.6 times more likely to be infected with a parasitic infection (OR 2.565, P=0.007), families that reported not using any water purification method were 3.6 times more likely to be infected (OR 3.602, P=0.008), and children living in rural areas were at an 8.1 times higher risk of parasitic infection (OR 8.051, P<0.001).
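A minimal Python sketch of this kind of model fit is shown below (using statsmodels, an assumption, since the analyses were run in SPSS). It uses synthetic data and a single hypothetical 0/1 predictor; the study's actual model in Table 7 includes several covariates. The point is how odds ratios and their confidence bounds come from exponentiating the logistic coefficients.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 361
never_washes = rng.integers(0, 2, n).astype(float)   # hypothetical 0/1 predictor
# Simulate infection so that the true odds ratio is about exp(1.87) ~ 6.5.
p_infect = 1.0 / (1.0 + np.exp(-(-1.5 + 1.87 * never_washes)))
infected = (rng.random(n) < p_infect).astype(float)

fit = sm.Logit(infected, sm.add_constant(never_washes)).fit(disp=0)
odds_ratio = np.exp(fit.params[1])                    # exponentiated coefficient
ci_low, ci_high = np.exp(fit.conf_int()[1])           # exponentiated CI bounds
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```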
## 4. Discussion
Parasitic infections are well known for their burden of disease, attributed mainly to their chronic and insidious impact on the health, nutrition, and quality of life of those infected rather than to the mortality they cause [21]. The study showed that 29.9% of the children were infected with various parasitic infections. The prevalence of specific parasitic infections was generally low, below 6%. However, a prevalence of 15% for Giardia lamblia, a parasite often associated with diarrhea and acquired through drinking contaminated water and consuming contaminated soil or food [22], was interesting but not surprising. This finding suggests that the parasite is most likely common in this area and a cause of ill health among children aged 5 years or less. Since there were no previous studies investigating parasite prevalence here, this study serves as a baseline survey on the status of infection in PSAC. The study was also able to demonstrate that 3.6%, 1.7%, and 0.6% of the children were severely wasted, underweight, and stunted, respectively. Based on general infection status, there was a significant difference in WAZ (P=0.000; t = 3.675; 95% CI = 0.2162–0.7175) and HAZ (P=0.001; t = 3.383; 95% CI = 0.2438–0.9210) between the infected and the noninfected. The study demonstrated significantly lower mean weight, mean weight-for-age, and mean height-for-age for children infected with Giardia lamblia, a clear indication of the impact of Giardia lamblia on the nutritional status of children [22]. Other studies have documented similar findings regarding the effects of Giardia lamblia on the weight and height of children [22], where chronic Giardia lamblia infections have been associated with clinical manifestations of malnutrition. The study, however, could not demonstrate a statistically significant association linking other specific parasitic infections to malnutrition, which could be attributed to the low prevalence of these infections. This study has also shown that hand washing behavior, drinking water source, water purification method, home location classification, and family size were strongly associated with general infection status. Similar studies have also demonstrated associations between soil-transmitted helminth infection and water supply source, hand washing behavior, and family size [23]. The results of the binary logistic regression in Table 7 show that the transmission of Schistosoma spp. and STH, among other parasitic infections, is strongly associated with sanitation and hygiene and the lack of a clean and safe water supply. Most of these conditions have been linked to poverty as the root cause and, as such, to malnutrition and many other health problems including parasitic infections [2, 16]. Of the total number of infections, 93.5% (101 children) occurred in the rural setting and only 6.5% (7 children) in the urban setting; the regression analysis also showed that the odds of infection for a child living in a rural area were up to 8.1 times higher (see Table 7) than for children in urban settlements. This presents a clear association between infection and the rural setting, which is well known to be associated with poverty and a lack of access to clean and safe water [23, 24]. The study findings have also demonstrated an association between malnutrition and family size: families with more than 3 children above the age of 5 had a lower mean weight compared to families with fewer than 3 children. Other studies have demonstrated this to be especially common in rural and poor socioeconomic communities due to the inadequate distribution of food among family members [2]. Also of note is the association between having siblings above the age of 5 and a higher risk of infection, which suggests the likelihood of infection being transmitted from older siblings to younger ones. Regardless of infection status, the study population showed a high prevalence of malnutrition, with prevalence and severity tending to increase with age, as illustrated in Table 8. This observation is consistent with findings from other studies [2] that demonstrated a significant increase in the risk of malnutrition with increasing age for children under 5. These observations could also be attributed to poverty and other health problems, not excluding parasitic infections beyond the scope of this study. Figures 1, 2, and 3 provide a graphical representation of the nutritional status of the preschool age children in Mwea Division.
Table 8. Prevalence of malnutrition by age group in PSAC in Mwea Division.

| Age (months) | Total number | Severe wasting n (%) | Moderate wasting n (%) | Severe underweight n (%) | Underweight n (%) | Severe stunting n (%) | Moderate stunting n (%) |
|---|---|---|---|---|---|---|---|
| 6–17 | | | | | | | |
| 18–29 | 34 | 1 (2.9) | 5 (14.7) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) |
| 30–41 | 71 | 3 (4.2) | 8 (11.3) | 1 (1.4) | 4 (4.2) | 1 (1.4) | 2 (2.8) |
| 42–53 | 163 | 4 (2.5) | 31 (19.0) | 6 (3.7) | 40 (20.2) | 1 (0.6) | 14 (8.6) |
| 54–59 | 93 | 5 (5.4) | 17 (18.3) | 1 (1.1) | 12 (11.8) | 0 (0.0) | 6 (5.4) |
| Total | 361 | 13 (3.6) | 61 (16.9) | 8 (2.2) | 56 (15.5) | 2 (0.8) | 22 (15.5) |

Severe wasting/underweight/stunting: <−3 Z-score; moderate wasting/underweight/stunting: ≥−3 and <−2 Z-score.
The deviation observed for the WHZ scores, showing left (negative) skewness and a leftward shift (see Figure 1), indicates that many of the children deviate negatively from the WHO standard WHZ means. Low weight-for-height Z-scores are known to result from recent nutritional deficiency, which has been associated with food availability and disease prevalence. In comparison to the WHO standards, the sampled population's HAZ distribution is platykurtic, with a lower and broader central peak (see Figure 2), indicating that the population mean is not centred on the WHO recommended standards. Height-for-age Z-scores (HAZ) are an indicator of stunting, represented by low HAZ, which has been demonstrated to result from prolonged periods of inadequate food intake, poor diet quality, morbidity from disease, or a combination of these; Figure 2 shows a distinct deviation from the WHO standard, which may be indicative of one or a combination of these factors [2]. In this instance, boys were shown to be more affected than girls. Weight-for-age, an indicator of underweight, is usually a composite of both WHZ and HAZ; it therefore also serves as an indicator of malnutrition, among whose many causes chronic parasitism cannot be ruled out. The study also showed that slightly more boys than girls were affected by malnutrition (see Table 6). Overall, the prevalence of malnutrition stood at 27.7% for wasting, 17.7% for underweight, and 6.94% for stunting, with the majority of cases occurring in rural areas. This reflects the 2008-2009 Kenya Demographic and Health Survey for children under 5 years, which showed that, nationwide, 35.3%, 6.7%, and 16.3% of children were stunted, wasted, and underweight, respectively, and further suggested that the greatest burden of malnutrition is in rural areas [2, 15]. The synergistic relationship between nutrition and infection may underlie the observed findings, whereby either exposure to infections caused the malnutrition or the malnutrition predisposed the children, making them more susceptible to infection. This is, however, a hypothetical deduction based on the study findings, and further study is needed to ascertain the underlying cause of the observations made in this population.
## 5. Conclusion
In conclusion, this study has demonstrated that the prevalence of STH and schistosomiasis in Mwea Division, Kirinyaga County, central Kenya, is relatively low, with a tendency to increase with age. While children in this age group were found to be infected with both S. mansoni and STH, prevalence was generally low (<6%) and therefore not likely to have a major public health impact in this age group; nevertheless, regular intervention will be necessary. The high prevalence of Giardia lamblia infection (15%), while interesting, was not surprising, as this infection is fairly common in environments where hygiene is poor. This finding suggests that G. lamblia is likely to be a major public health concern among children aged 5 years or less in Mwea, as they are at high risk, and it is therefore important to consider establishing an intervention program targeting this particular age group. The study further suggests the need for investigations into other parasitic infections that cause ill health in this age group in the study area. While the prevalence of schistosomiasis and STH may have been low, it is likely to increase, given the environment conducive to the transmission of these parasites in the area. This study has also shown that hand washing practices, water purification methods, rural homes, and families with siblings above 5 years are associated with infection in this age group; it is thus important to provide health education programmes for disease prevention, improved access to clean and safe water for domestic use, and appropriate sanitation. Although the study was not able to establish a firm association between infection and malnutrition, the moderate prevalence of malnutrition in this age group cannot be ignored, and the contribution of parasitic infections to it cannot be entirely ruled out. Further investigation into the nutritional status of this age group is therefore called for to identify the underlying cause(s). Inclusion of nutrition in education is also recommended, with a focus on families with preschool age children.
---
*Source: 1013802-2017-08-23.xml* | 1013802-2017-08-23_1013802-2017-08-23.md | 70,650 | Prevalence of Soil-Transmitted Helminthiases and Schistosomiasis in Preschool Age Children in Mwea Division, Kirinyaga South District, Kirinyaga County, and Their Potential Effect on Physical Growth | Stephen Sifuna Wefwafwa Sakari; Amos K. Mbugua; Gerald M. Mkoji | Journal of Tropical Medicine
(2017) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1013802 | 1013802-2017-08-23.xml | ---
## Abstract
Intestinal parasitic infections can significantly contribute to the burden of disease, may cause nutritional and energetic stress, and negatively impact the quality of life in low income countries of the world. This cross-sectional study done in Mwea irrigation scheme, in Kirinyaga, central Kenya, assessed the public health significance of soil-transmitted helminthiases (STH), schistosomiasis, and other intestinal parasitic infections, among 361 preschool age children (PSAC) through fecal examination, by measuring anthropometric indices, and through their parents/guardians, by obtaining sociodemographic information. Both intestinal helminth and protozoan infections were detected, and, among the soil-transmitted helminth parasites, there wereAscaris lumbricoides (prevalence, 3%),Ancylostoma duodenale (<1%), andTrichuris trichiura (<1%). Other intestinal helminths wereHymenolepis nana (prevalence, 3.6%) andEnterobius vermicularis (<1%).Schistosoma mansoni occurred at a prevalence of 5.5%. Interestingly, the protozoan,Giardia lamblia (prevalence, 14.7%), was the most common among the PSAC. Other protozoans wereEntamoeba coli (3.9%) andEntamoeba histolytica (<1). Anthropometric indices showed evidence of malnutrition. Intestinal parasites were associated with hand washing behavior, family size, water purification, and home location. These findings suggest thatG. lamblia infection and malnutrition may be significant causes of ill health among the PSAC in Mwea, and, therefore, an intervention plan is needed.
---
## Body
## 1. Introduction
Soil-transmitted helminthiases (STH) and schistosomiasis are listed among the many Neglected Tropical Diseases, with an established association to chronic, disabling, and disfiguring conditions occurring in settings of extreme poverty and even more so in rural poor and disadvantaged urban populations characterized by poor sanitation [1–3]. They contribute significantly to the burden of disease causing nutritional and energetic stress negatively impacting the quality of life and as such these parasitic infections have also been associated with malnutrition which contributes to more than one-third of all deaths of under-five children [2].Estimates show that, in sub-Saharan Africa (SSA), about 198 million people are infected with hookworms [4], 192 million with schistosomiasis infection [5], 173 million with ascariasis infection [4], and 162 million with trichuriasis infection [4]. Based on the initial global percentage prevalence determined over 60 years ago [6] it is believed that the prevalence of STH has remained relatively constant in sub-Saharan Africa [4] where between one-quarter and one-third of sub-Saharan Africa’s population is affected by one or more STH infections [4] with preschool age children and school age children carrying the highest prevalence and intensities [7, 8]. Available data estimates over 270 million preschool age children and over 600 million school age children live in areas characterized by intense transmission of intestinal parasites [9]. These infections have also been strongly associated with malnutrition [10] which is known to contribute to more than one-third of all deaths of under-five children [11].In Kenya, the National Multi Year Strategic Plan for the Control of Neglected Tropical Diseases has prioritized intestinal worms among other NTDs (Neglected Tropical Diseases) as diseases of great public health importance mostly affecting the poorest of the poor [12].Recent studies in Kenya estimate that about 6 million people are infected with schistosomiasis and even more are at risk [13]. The prevalence is set to range from 5% to over 65% in various communities in Kenya. It is endemic in 56 districts with the highest prevalence forSchistosoma mansoni occurring in lower Eastern and Lake Regions of Kenya and in irrigation schemes [14]. The Kenya Demographic and Health survey has also shown that 35.3% of under-five children were stunted nationwide, 6.7% were wasted, and 16.3% were underweight suggesting the significance of the burden of malnutrition particularly in rural Kenya [15]. To what extent the burden of malnutrition is contributed by intestinal parasites, in particular, helminth infections, remains to be accurately determined [16].The prevalence of intestinal schistosomiasis, STH, and other intestinal parasitic infections in preschool age children (PSAC) in the Mwea rice irrigation scheme of Kirinyaga County in Central Kenya is not well documented, but, according to research done in an endemic community in Western Kenya, the prevalence in PSAC was demonstrated to be up to 37% [17] indicating the significant risk of infection in this age group in an endemic setup. Although there is a national school deworming programme which to date is still being implemented at the national level, the control programme has no clear policy for inclusion of PSAC (≤5 years old) in the mass treatment for STH and schistosomiasis. 
This thus highlights the need for a baseline survey to determine the prevalence, intensity, and possible effects on nutritional status of schistosomiasis and STH among other intestinal parasites in PSAC.In view of the lack of information regarding the preschool age children, this study was undertaken to determine the prevalence of intestinal parasites in this age group, the risk factors favoring the spread of the parasites, and, subsequently, the possible association between the parasitic infections and the nutritional status.
## 2. Materials and Methods
### 2.1. Study Area
This study was conducted in the Mwea Division of Kirinyaga South district in Kirinyaga County, central Kenya (00°40′54′′S, 037°20′36′′E). This area is approximately 110 km North East of Nairobi and the main agricultural activity is rice farming which is grown under flood irrigation. Mwea is situated in the lower altitude zone (approx. 1150 mASL) of the district in an expansive flat land mainly characterized by black cotton and red volcanic soils. Mwea Division has a land area of approximately 542.8 sq. Km. Mwea Division has a population of 190,512 with an urban population of 7,625 (census, 2009). The specific area (survey area) of study was Thiba ward which comprises Nguka and Thiba sublocations of Kirinyaga County which is approximately 34 sq. Km with a population of 31,689. The nearest large town and administrative centre for Thiba ward is Wang’uru Town.The geography of the area is mainly flat at an altitude ranging from 1150 to 1200 mASLThe area is mainly known for its horticultural crop farming where the main cash crop is rice grown under flood irrigation followed by maize.The setting for the study site was largely a rural and peri-urban population.
### 2.2. Study Design
The study was a comparative cross-sectional study collecting both quantitative and qualitative data. In line with the objectives, the study design investigated the possible association between infection with, and intensity of, STH, schistosomiasis, and other intestinal helminth infections on the one hand and indicators of current physical growth status on the other.
### 2.3. Study Population
The target population comprised preschool age children between 2 and 5 years of age who had lived in the study area for at least the preceding 6 months. Using a random sampling technique, the study selected 13 schools within the study area: Kandongu Primary School, Kiorugari Primary School, Mbui Njeru Primary School, Mukou Primary School, Ngurubani Primary School, AIPCA Primary School, Rurumi Nursery School, Thiba Primary School, and the Midland, Sibling, St Joseph, Thiba Glory, and Vision Day Care centres. Parents and guardians of all eligible children were invited to a meeting where, out of 517 parents in attendance, 361 consented to allow their children to participate, and these 361 children were enrolled in the study.
### 2.4. Data Collection
For every child recruited, a unique identifier number was assigned, and information on the child's name, sex, age, and area of residence (i.e., rural or urban) was collected. A questionnaire administered to consenting parents and guardians was used to collect socioeconomic information on the parents/guardians and other behavioral information on the participating children considered relevant to the risk of infection.
#### 2.4.1. Questionnaire
After informed consent was obtained from the parents or guardians, questionnaires were administered to the parents of the enrolled children. The questionnaires were provided in both English and Swahili, and the study also recruited translators in the local language (Kikuyu) to help parents better understand the questions.
#### 2.4.2. Anthropometry
All children were examined by a qualified and registered community nurse/community health worker recruited by the study, who carried out a physical examination and measurements to obtain each child's weight, age, height, and mid-upper arm circumference. These parameters were collected following the guidelines in the National Health and Nutrition Examination Survey's Anthropometry Procedures Manual developed by the United States Centers for Disease Control and Prevention (CDC). For accuracy, the instruments were calibrated regularly, and random repeat measurements were taken as a quality control measure. From the measurements, Z-score values for height-for-age (HAZ), weight-for-age (WAZ), and weight-for-height (WHZ) were calculated and used as indices of nutritional status.
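The Z-scores referred to here come from the WHO child growth standards. For reference only (the paper does not spell this out), reference software such as WHO Anthro typically derives a Z-score from a raw measurement X via the LMS method, where L(t), M(t), and S(t) are the age- and sex-specific Box-Cox power, median, and coefficient of variation; a sketch of the underlying formula is:

```latex
% Sketch of the LMS Z-score formula (assumed; see the WHO growth standards).
Z =
\begin{cases}
  \dfrac{\left(X/M(t)\right)^{L(t)} - 1}{L(t)\,S(t)}, & L(t) \neq 0,\\[2ex]
  \dfrac{\ln\left(X/M(t)\right)}{S(t)},               & L(t) = 0.
\end{cases}
```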
#### 2.4.3. Stool Samples Collection and Examination
Each participant was provided with a stool sample collection container bearing a unique identifier, and, with the help of activity coordinators, approximately 4 grams of fresh stool was collected in polypots from each participating child. From each sample, Kato-Katz thick smears were prepared for examination under a compound microscope. The fecal smears were prepared in duplicate on glass microscope slides to improve detection, and the samples were processed within an hour of collection. The Kato-Katz technique was mainly used to detect eggs of Schistosoma mansoni, Ancylostoma duodenale, Ascaris lumbricoides, and Trichuris trichiura. Where infection was detected, the intensity of infection was also noted and graded as heavy, moderate, or light in accordance with the WHO proposed criteria [18, 19]. Further diagnosis using the formol concentration technique was done to detect other intestinal parasites of public health significance that may have passed undetected by the Kato-Katz technique. Following diagnosis, subjects were divided into three groups: uninfected, infected with a single species, and infected with two or more species of intestinal helminths.
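For concreteness, a minimal sketch of the intensity grading step is given below. This is not the authors' code; the eggs-per-gram cutoffs and the 24x slide multiplier are assumptions based on the commonly cited WHO criteria referenced above [18, 19] and should be checked against those sources.

```python
# Illustrative sketch: grading infection intensity from Kato-Katz egg counts.
# Cutoffs are the commonly cited WHO eggs-per-gram (epg) thresholds (assumed).
WHO_EPG_CUTOFFS = {
    # species: (light_upper_bound, moderate_upper_bound); >= moderate_upper is heavy
    "Schistosoma mansoni":  (100, 400),
    "Ascaris lumbricoides": (5000, 50000),
    "Trichuris trichiura":  (1000, 10000),
    "Hookworm":             (2000, 4000),
}

def grade_intensity(species: str, eggs_per_gram: float) -> str:
    """Return 'negative', 'light', 'moderate', or 'heavy' for one species."""
    light_max, moderate_max = WHO_EPG_CUTOFFS[species]
    if eggs_per_gram <= 0:
        return "negative"
    if eggs_per_gram < light_max:
        return "light"
    if eggs_per_gram < moderate_max:
        return "moderate"
    return "heavy"

# A 41.7 mg Kato-Katz smear is conventionally converted to epg with a
# multiplier of 24 (1000 / 41.7), here averaged over the duplicate slides.
mean_egg_count = (3 + 5) / 2  # eggs counted on the two smears (hypothetical)
print(grade_intensity("Schistosoma mansoni", mean_egg_count * 24))  # -> light
```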
### 2.5. Study Approval
The study protocol was approved by the Scientific and Ethics Review Unit of the Kenya Medical Research Institute. Approval to carry out the study in the area was also sought from the administrative authorities of the schools, the Mwea Division Health Administration, and the Kirinyaga County Health Administration. Prior to enrollment, a meeting with the parents/guardians of all eligible children was called with the help of the schools' administration, at which the study purpose, objectives, and procedures were explained, including participants' rights whether they accepted or declined to have their children participate. Written informed consent was then obtained and the children were recruited into the study. The parents/guardians were assured of the privacy and confidentiality of the information collected. All children found to be infected with intestinal parasites received the appropriate medication prescribed by a qualified and registered clinician, with albendazole (for soil-transmitted helminths) and praziquantel (for schistosomiasis) administered at their recommended doses as per the WHO recommendations [18]. Children with other infections or conditions were referred to the local health clinic.
### 2.6. Statistical Analysis
The data collected were first entered and stored in Microsoft Excel 2010, then verified and crosschecked for errors. A copy of the data was recoded and exported into the Statistical Package for the Social Sciences (SPSS) Version 20, and baseline descriptive statistics were drawn. Comparison of weight and height against infection status was done using an independent t-test to assess significant differences between the infected and the noninfected, and ANOVA was used to assess differences in height and weight between the noninfected, those with a single infection, and those with multiple infections.

Anthropometric data were exported to WHO Anthro [20], where WAZ, HAZ, and WHZ were derived and used to determine nutritional status. Where applicable, anthropometric variables are reported as mean ± standard deviation (SD) with 95% confidence intervals. Based on the Z-score values obtained for WAZ, HAZ, and WHZ, the children were categorized as normal (≥−2 and ≤2 Z-score); underweight (≥−3 and <−2 Z-score) or severely underweight (<−3 Z-score); stunted (≥−3 and <−2 Z-score) or severely stunted (<−3 Z-score); and wasted (≥−3 and <−2 Z-score) or severely wasted (<−3 Z-score).

Binary variables were compared using Student's t-test and the Chi-square test where applicable. Demographic and socioeconomic data were entered as categorical variables, and frequencies and percentages were calculated; these were then assessed using a binary logistic regression model with the baseline category set as the one least likely to result in an infection outcome. All statistical tests were evaluated for significance at P < 0.05 with 95% confidence intervals (CI).
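The Z-score banding described above is mechanical enough to state precisely. The following minimal sketch (Python, not the authors' code) applies the stated cutoffs; the label strings and example values are chosen purely for illustration.

```python
# Minimal sketch of the Z-score banding described above (not the authors' code).
# The same cutoffs apply to WAZ (underweight), HAZ (stunting), and WHZ (wasting).
def classify(z: float) -> str:
    if z < -3:
        return "severe"        # severely underweight / stunted / wasted
    if z < -2:                 # i.e., -3 <= z < -2
        return "moderate"
    if z <= 2:
        return "normal"
    return "above +2"          # e.g., possible overweight on the WHZ index

indices = {"WAZ": -2.4, "HAZ": -0.8, "WHZ": -3.1}  # hypothetical child
for name, z in indices.items():
    print(name, classify(z))   # -> WAZ moderate, HAZ normal, WHZ severe
```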
## 3. Results
### 3.1. General Characteristics of the Study Group
A total of 361 children were recruited into the study, of which 50.69% were male (n = 183) and 49.31% female (n = 178). The mean age was 46.62 ± 9.68 months (95% CI 45.62–47.62), the mean height was 101.78 ± 6.57 cm (95% CI 101.10–102.45), and the mean weight was 14.71 ± 2.08 kg (95% CI 14.49–14.92). Table 1 gives an overall summary of the study group demographics, while Table 2 provides the age and sex distribution of the population.
Table 1. Summary of anthropometric descriptive statistics of the sampled study population.

| | Mean ± SD | 95% CI |
|---|---|---|
| **Age in months** | | |
| Male (n = 183) | 46.30 ± 10.01 | 44.85–47.75 |
| Female (n = 178) | 46.93 ± 9.36 | 45.55–48.30 |
| Total (n = 361) | 46.62 ± 9.68 | 45.62–47.62 |
| **Height in cm** | | |
| Male (n = 183) | 101.34 ± 6.43 | 100.41–102.27 |
| Female (n = 178) | 102.23 ± 6.69 | 101.24–103.21 |
| Total (n = 361) | 101.78 ± 6.57 | 101.10–102.45 |
| **Weight in kg** | | |
| Male (n = 183) | 14.80 ± 2.06 | 14.50–15.10 |
| Female (n = 178) | 14.61 ± 2.11 | 14.30–14.92 |
| Total (n = 361) | 14.71 ± 2.08 | 14.49–14.92 |

n = total number of children.
Table 2. Age/sex distribution of the sampled study population (n = 361).

| Age group | Female count | Female % | Male count | Male % | Total count | Total % |
|---|---|---|---|---|---|---|
| <2.5 years | 14 | 3.88% | 20 | 5.54% | 34 | 9.42% |
| 2.5–3.0 years | 13 | 3.60% | 16 | 4.43% | 29 | 8.03% |
| 3.0–3.5 years | 24 | 6.65% | 22 | 6.09% | 46 | 12.74% |
| 3.5–4.0 years | 38 | 10.53% | 39 | 10.80% | 77 | 21.33% |
| 4.0–4.5 years | 44 | 12.19% | 46 | 12.74% | 90 | 24.93% |
| >4.5 years | 45 | 12.47% | 40 | 11.08% | 85 | 23.55% |
| Grand total | 178 | 49.31% | 183 | 50.69% | 361 | 100.00% |

The same number of families participated in the questionnaires on behavioral trends and socioeconomic status; a summary of the responses is tabulated in Table 3.
Table 3. Frequency distribution of socioeconomic characteristics of the sampled study population.

| Attribute | Response | Frequency | % frequency |
|---|---|---|---|
| Knowledge of disease transmission | No | 114 | 31.6% |
| | Yes | 247 | 68.4% |
| Geophagy (soil eating) | No | 74 | 20.5% |
| | Yes | 287 | 79.5% |
| Hand washing (child) | Never | 118 | 32.7% |
| | Sometimes | 213 | 59.0% |
| | Always | 30 | 8.3% |
| Shoe wearing | Sometimes | 325 | 90.0% |
| | Always | 36 | 10.0% |
| Water source (domestic) | River/canal | 292 | 80.9% |
| | Borehole | 43 | 11.9% |
| | Piped | 26 | 7.2% |
| River bathing (child) | No | 98 | 27.1% |
| | Yes | 263 | 72.9% |
| Water purification method | None | 71 | 19.7% |
| | Filtration | 115 | 31.9% |
| | Boiling | 79 | 21.9% |
| | Chlorination | 96 | 26.6% |
| Bathroom waste water disposal | Open ground | 275 | 76.2% |
| | Latrine | 86 | 23.8% |
| Employment status (father) | No | 75 | 20.8% |
| | Yes | 286 | 79.2% |
| Employment status (mother) | No | 236 | 65.4% |
| | Yes | 125 | 34.6% |
| Home ownership | Self-own | 208 | 57.6% |
| | Rental | 153 | 42.4% |
| Home location classification | Rural | 284 | 78.7% |
| | Urban | 77 | 21.3% |
| Family with children above 5 yrs | No | 249 | 69.0% |
| | Yes | 112 | 31.0% |
| House type | Rural | 289 | 80.1% |
| | Wooden | 8 | 2.2% |
| | Iron sheets | 12 | 3.3% |
| | Brick/stone | 52 | 14.4% |
### 3.2. Parasitological Investigations
Out of the 361 children enrolled in the study, 108 (29.9%) were found to be infected with an intestinal parasite, of whom 15 (3.9%) had multiple parasite infections. The prevalence of each parasitic infection is shown in Table 4. Combining single and multiple infections, the prevalence of Ancylostoma duodenale was 0.6%, Ascaris lumbricoides 3.3%, Entamoeba histolytica 0.3%, Enterobius vermicularis 0.83%, Entamoeba coli 3.88%, Giardia lamblia 14.68%, Hymenolepis nana 3.6%, Schistosoma mansoni 5.54%, and Trichuris trichiura 1.11%. The prevalence of most infections showed a tendency to increase with age, as illustrated in Table 5. There was a significant difference in the prevalence of Schistosoma mansoni infection between boys and girls, with boys showing a higher tendency to be infected with schistosomiasis (t = 3.308; P = 0.03; 95% CI 0.026–0.119); all other infections showed no statistically significant difference between boys and girls. Based on independent t-tests comparing the weights and heights of the infected versus the uninfected, there was no statistically significant difference by overall infection status (weight: P = 0.07482, t = 1.6520; height: P = 0.2230, t = 1.6519); there was, however, a statistically significant difference in weight between those infected with Giardia lamblia and those not infected (P = 0.0362, t = 1.8015). All other infections individually showed no significant difference in weight or height between the infected and the noninfected.
Table 4. Prevalence of parasitic infections in the sampled study population in Mwea Division.

| Parasite | Frequency | % | Boys | % | Girls | % |
|---|---|---|---|---|---|---|
| Ancylostoma duodenale | 2 | 0.55% | 1 | 0.53% | 1 | 0.58% |
| Ascaris lumbricoides | 12 | 3.05% | 6 | 3.19% | 5 | 2.89% |
| E. coli | 7 + 7∗ | 3.88% | 5 | 2.66% | 9 | 5.20% |
| E. histolytica | 1 | 0.28% | 0 | 0.00% | 1 | 0.58% |
| E. vermicularis | 3 | 0.83% | 1 | 0.53% | 2 | 1.16% |
| G. lamblia | 54 | 14.68% | 29 | 15.43% | 25 | 13.87% |
| H. nana | 9 + 4∗ | 3.60% | 4 | 2.13% | 9 | 5.20% |
| No infection | 253 | 66.48% | 123 | 65.43% | 117 | 67.63% |
| Schistosoma mansoni | 18 + 2∗ | 5.54% | 17 | 9.04% | 3 | 1.73% |
| Trichuris trichiura | 2 + 2∗ | 1.11% | 2 | 1.06% | 2 | 1.16% |
| Grand total | 361 | 100.00% | 188 | 100.00% | 173 | 100.00% |

∗Occurrence as multiple infections.
Table 5. Frequency distribution of parasitic infections per age group.

| Age group | S. mansoni | Hookworm | A. lumbricoides | T. trichiura | G. lamblia | H. nana | E. vermicularis | E. histolytica | E. coli |
|---|---|---|---|---|---|---|---|---|---|
| <2.5 yrs | 1 | 0 | 1 | 0 | 4 | 1 | 0 | 0 | 3 |
| 2.5–3.0 yrs | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 |
| 3.0–3.5 yrs | 2 | 0 | 1 | 0 | 6 | 0 | 1 | 0 | 3 |
| 3.5–4.0 yrs | 3 | 1 | 1 | 0 | 9 | 1 | 0 | 1 | 5 |
| 4.0–4.5 yrs | 5 | 1 | 3 | 4 | 15 | 7 | 2 | 0 | 0 |
| 4.5–5.0 yrs | 8 | 0 | 4 | 0 | 19 | 4 | 0 | 0 | 2 |
| Grand total | 20 | 2 | 11 | 4 | 54 | 13 | 3 | 1 | 14 |
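As a quick cross-check of how point prevalences such as those in Tables 4 and 5 pair with the confidence intervals reported throughout, the sketch below computes a proportion with a normal-approximation 95% CI. The interval formula is an assumption, since the paper does not state which method was used.

```python
# Back-of-envelope check (assumed method): point prevalence with a
# normal-approximation 95% CI for a proportion of cases out of n children.
import math

def prevalence_ci(cases: int, n: int, z: float = 1.96):
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)      # half-width of the interval
    return p, max(p - half, 0.0), p + half

p, lo, hi = prevalence_ci(54, 361)             # Giardia lamblia: 54 of 361
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # ~15.0% (11.3% to 18.6%)
```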
### 3.3. Nutritional Status
#### 3.3.1. Weight and Height
Based on the children's weight for height, the prevalence of malnutrition was determined and is presented in Table 6. The mean weight of the participants (n = 361) was 14.71 kg (95% CI 14.49–14.92) and the mean height was 101.78 cm (95% CI 101.10–102.45). The mean heights and weights of the children showed no statistically significant difference between males and females.
Table 6. Prevalence of malnutrition in PSAC in Mwea Division based on the children's Z-scores.

| Index | Sex | Mean Z-score ± SD | 95% CI | Moderately malnourished | Severely malnourished |
|---|---|---|---|---|---|
| WAZ | Male (n = 183) | −0.66 ± 1.08 | −0.82 to −0.51 | 14.2% underweight (<−2z) | 2.2% severely underweight (<−3z) |
| | Female (n = 178) | −0.64 ± 1.07 | −0.79 to −0.48 | 11.8% underweight (<−2z) | 1.1% severely underweight (<−3z) |
| HAZ | Male (n = 183) | −0.11 ± 1.37 | −0.31 to 0.09 | 8.2% stunted (<−2z) | 0.5% severely stunted (<−3z) |
| | Female (n = 178) | 0.15 ± 1.25 | −0.04 to 0.33 | 3.4% stunted (<−2z) | 0.16% severely stunted (<−3z) |
| WHZ | Male (n = 183) | −0.90 ± 1.12 | −1.07 to −0.74 | 20.8% wasted (<−2z); 0.0% obese (>2z) | 3.8% severely wasted (<−3z) |
| | Female (n = 178) | −1.10 ± 1.04 | −1.25 to −0.95 | 20.2% wasted (<−2z); 0.0% obese (>2z) | 3.4% severely wasted (<−3z) |
CI = confidence interval, n = total number of children, and z = Z-score.

The prevalence of severe stunting, severe underweight, and severe wasting was 0.6% (2; 95% CI −0.2–1.3), 1.7% (6; 95% CI 0.3–3.0), and 3.6% (13; 95% CI 2.1–6.2), respectively. Seven boys and 8 girls were found to be severely wasted, 1 boy and 1 girl were severely stunted, and 4 girls and 2 boys were severely underweight. The prevalence of wasting, underweight, and stunting was also noted to increase with age. There was also a significant difference in HAZ (P = 0.036, t = 2.108, 95% CI = −0.6486 to −0.2251) and WHZ (P = 0.022, t = 2.303, 95% CI = 0.0372–0.4738) between boys and girls. The results for height and weight and the prevalence of malnutrition are shown in Tables 6 and 7.
Table 7. Factors associated with the general prevalence of infection in preschool age children in Mwea Division: a binary logistic regression model.

| Variable | OR | P value | 95% CI |
|---|---|---|---|
| Knowledge of disease transmission | 0.862 | 0.635 | 0.629–2.137 |
| Geophagy | 0.975 | 0.947 | 0.459–2.072 |
| Hand washing: never | 6.478 | 0.010∗ | 1.553–27.015 |
| Hand washing: sometimes | 3.401 | 0.093 | 0.817–14.167 |
| Shoe wearing | 0.405 | 0.155 | 0.117–1.406 |
| Water source: borehole | 0.621 | 0.566 | 0.122–3.167 |
| Water source: river/canal | 0.194 | 0.088 | 0.029–1.278 |
| Water purification: none | 3.602 | 0.008∗ | 1.397–9.288 |
| Water purification: filtration | 0.778 | 0.537 | 0.351–1.725 |
| Water purification: boiling | 1.272 | 0.572 | 0.552–2.932 |
| Family with children above 5 years | 0.390 | 0.007∗ | 1.293–5.088 |
| Constant | 6.206 | 0.216 | |
OR = odds ratio, CI = confidence interval, and ∗ = variables with statistical significance.

Based on the general infection status of the children, there was a significant difference in WAZ (P = 0.000; t = 3.675; 95% CI = 0.2162–0.7175) and HAZ (P = 0.001; t = 3.383; 95% CI = 0.2438–0.9210) between the infected and the noninfected across all parasitic infections. With regard to specific infections, children with Giardia lamblia infections showed significantly lower mean weights (14.14 versus 14.80 kg; P = 0.031, t = 2.171; 95% CI = 0.0626–1.2669), mean weight-for-age Z-scores (−1.275 versus −0.542; P = 0.000, t = 4.728; 95% CI = 0.4285–1.0387), and mean height-for-age Z-scores (−0.7582 versus 0.2776; P = 0.000, t = 4.728; 95% CI = 0.6075–1.464) compared with noninfected children.

With regard to wasting by sex, boys showed a slightly higher degree of severe wasting, whereas girls showed a slightly higher number of moderate wasting cases; a Student's t-test showed that this slight difference was not statistically significant. Table 6 summarizes the percentages of children affected by malnutrition. Figure 1 further shows that the majority of girls, although within normal limits (Z-score values within 2 standard deviations), tended to deviate towards the negative, with a mean Z-score of −1.10. This likely results from many of the girls recording lower weight-for-height Z-score values, although still within the normal interval. By contrast, the majority of boys within the normal WHO interval recorded Z-score values closer to the WHO mean. Figure 1 also draws attention to the percentage of children falling outside the −2 standard deviation mark, that is, the percentage of children with wasting.
Figure 1. A plot of weight-for-height Z-scores by gender for the PSAC in Mwea Division against the recommended WHO standards.

With regard to height for age, Figure 2 indicates that more boys were affected by stunting, with 8.2% of boys moderately stunted compared to 3.4% of girls (the percentage of children falling outside the −2 SD WHO standard interval). For severe stunting, boys again showed a slightly higher percentage than girls. This was confirmed by the Student's t-test, which showed a statistically significant difference in HAZ (P = 0.036, t = 2.108, 95% CI = −0.6486 to −0.2251) between boys and girls.
Figure 2. A plot of height-for-age Z-scores by gender for the PSAC in Mwea Division against the WHO recommended standards.

The weight-for-age Z-score values show boys to be slightly more affected by malnutrition (14.2%) than girls (11.8%). The same trend is observed for severe malnutrition, as shown in Table 6. As Figure 3 shows, the boys' curve is somewhat skewed to the left, although centered towards the mean; this skewness translates into the slightly higher percentage of boys affected by malnutrition. This is confirmed by a Student's t-test showing a statistically significant difference in WHZ (P = 0.022, t = 2.303, 95% CI = 0.0372–0.4738) between boys and girls. The girls' curve also tends to shift slightly to the left, indicating that the girls are centred towards the negative side of the WHO mean.
Figure 3. A plot of weight-for-age Z-scores by gender for the PSAC in Mwea Division against the WHO recommended standards.

With regard to socioeconomic and demographic factors, the mean weight of the children was significantly lower among those whose parents had other children above the age of 5 years (15.02 kg versus 13.96 kg; 95% CI = 0.5931–1.5117; t = 4.507; P = 0.000). A summary of the socioeconomic and behavioral characteristics of the study population (see Table 3), focusing on factors that may influence infection and nutritional status, showed that 68.4% of the sampled population were aware of ways to prevent transmission of intestinal parasites. However, a vast majority fell short of applying preventive measures, most of them lacking the means to implement such measures.

A binary logistic regression model performed to ascertain the effects of demographic, behavioral, and socioeconomic status on the children's infection status was statistically significant (χ2 = 104.4, P = 0.000). It explained 35.6% (Nagelkerke R2) of the variance in infection and correctly classified 78.1% of cases. The model revealed that infection status was significantly influenced by hand washing behavior, water purification method, home location classification, and whether the family had other children above the age of 5. Children who reportedly never washed their hands at the key recommended times were 6.4 times more likely to be infected (odds ratio (OR) 6.478, P = 0.010), children in families with siblings above 5 years were 2.6 times more likely to be infected with a parasitic infection (OR 2.565, P = 0.007), families that reported not using any water purification method were 3.6 times more likely to be infected (OR 3.602, P = 0.008), and children living in rural areas were at an 8.1 times higher risk of parasitic infection (OR 8.051, P < 0.001).
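To make the link between Table 7 and the odds ratios quoted above concrete, the following sketch fits a binary logistic model on synthetic data (not the study data) and exponentiates the coefficients to obtain odds ratios. Variable names and effect sizes are illustrative assumptions only.

```python
# Illustrative only: how odds ratios like those in Table 7 come out of a
# binary logistic model. The data below are synthetic, not the study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 361
never_washes = rng.integers(0, 2, n)   # 1 = child never washes hands
rural = rng.integers(0, 2, n)          # 1 = rural home

# True log-odds: exp(1.86) ~ 6.4 and exp(2.09) ~ 8.1, echoing the reported ORs.
logit = -2.0 + 1.86 * never_washes + 2.09 * rural
infected = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([never_washes, rural]))
fit = sm.Logit(infected, X).fit(disp=False)
print(np.exp(fit.params[1:]))  # estimated odds ratios for the two factors
```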
## 4. Discussion
Parasitic infections are well known for a burden of disease attributed mainly to their chronic and insidious impact on the health, nutrition, and quality of life of those infected rather than to the mortality they cause [21]. This study showed that 29.9% of the children were infected with various parasitic infections. The prevalence of each specific parasitic infection was generally low, below 6%, although the prevalence of 15% for Giardia lamblia, a parasite often associated with diarrhea and acquired through drinking contaminated water or consuming contaminated soil or food [22], was notable but not surprising. This finding suggests that the parasite is common in this area and a cause of ill health among children aged 5 years or less. Since there were no previous studies investigating prevalence here, this study serves as a baseline survey on the status of infection in PSAC.

The study also demonstrated that 3.6%, 1.7%, and 0.6% of the children were severely wasted, severely underweight, and severely stunted, respectively. Based on general infection status, there was a significant difference in WAZ (P = 0.000; t = 3.675; 95% CI = 0.2162–0.7175) and HAZ (P = 0.001; t = 3.383; 95% CI = 0.2438–0.9210) between the infected and the noninfected. The study demonstrated significantly lower mean weights, mean weight-for-age Z-scores, and mean height-for-age Z-scores for children infected with Giardia lamblia, a clear indication of the impact of this parasite on the nutritional status of children [22]. Other studies have documented similar findings on the effects of Giardia lamblia on the weight and height of children [22], where chronic infections have been associated with clinical manifestations of malnutrition. The study, however, could not demonstrate a statistically significant association linking other specific parasitic infections to malnutrition, which could be attributed to the low prevalence of those infections.

This study has also shown that hand washing behavior, drinking water source, water purification method, home location classification, and family size were strongly associated with general infection status. Similar studies have likewise demonstrated associations between soil-transmitted helminth infection and water supply source, hand washing behavior, and family size [23]. The results of the binary logistic regression in Table 7 show that the transmission of Schistosoma spp. and STH, among other parasitic infections, is strongly associated with sanitation and hygiene and the lack of a clean and safe water supply. Most of these conditions have been linked to poverty as the root cause and, as such, to malnutrition and many other health problems, including parasitic infections [2, 16]. Of the total number of infections, 93.5% (101 children) occurred in the rural setting and only 6.5% (7 children) in the urban setting. The regression analysis also showed that the odds of infection for a child living in a rural area were up to 8.1 times higher than for children in urban settlements (see Table 7).
This presents a clear association of infection with the rural setting, which is well known to be associated with poverty and lack of access to clean and safe water [23, 24]. The study findings also demonstrated an association between malnutrition and family size, where families with more than 3 children above the age of 5 had a lower mean weight compared to families with fewer than 3 such children. Other studies have shown this to be especially common in rural and poor socioeconomic communities owing to inadequate distribution of food among family members [2]. Also of note, children with siblings above the age of 5 had a higher risk of infection, suggesting that infection may be transmitted from older siblings to younger ones.

Regardless of infection status, the study population showed a high prevalence of malnutrition, with prevalence and severity tending to increase with age, as illustrated in Table 8. This observation is consistent with findings from other studies [2] that demonstrated a significant increase in the risk of malnutrition with age for children under 5. These observations could also be attributed to poverty and other health problems, not excluding parasitic infections beyond the scope of this study. Figures 1, 2, and 3 provide a graphical representation of the nutritional status of the preschool age children in Mwea Division.
Table 8
Prevalence of malnutrition by age groups in PSAC in Mwea Division (cutoffs are z-scores).

| Age (months) | Total number | Severe wasting (<−3), n (%) | Moderate wasting (≥−3 and <−2), n (%) | Severe underweight (<−3), n (%) | Underweight (≥−3 and <−2), n (%) | Severe stunting (<−3), n (%) | Moderate stunting (≥−3 and <−2), n (%) |
|---|---|---|---|---|---|---|---|
| 6–17 | – | – | – | – | – | – | – |
| 18–29 | 34 | 1 (2.9) | 5 (14.7) | 0 (0.0) | 0 (0.0) | 0 (0.0) | 0 (0.0) |
| 30–41 | 71 | 3 (4.2) | 8 (11.3) | 1 (1.4) | 4 (4.2) | 1 (1.4) | 2 (2.8) |
| 42–53 | 163 | 4 (2.5) | 31 (19.0) | 6 (2.5) | 40 (20.2) | 1 (0.6) | 14 (8.6) |
| 54–59 | 93 | 5 (5.4) | 17 (18.3) | 1 (1.1) | 12 (11.8) | 0 (0.0) | 6 (5.4) |
| Total | 361 | 13 (3.6) | 61 (16.9) | 8 (2.2) | 56 (15.5) | 2 (0.8) | 22 (6.1) |

The deviation observed for the WHZ scores, showing skewness to the left (negatively skewed) and a shift to the left (see Figure 1), indicates that many of the children deviate negatively from the WHO standard WHZ means. Low weight-for-height z-scores are known to result from recent nutritional deficiency, which has been associated with food availability and disease prevalence. In comparison to the WHO standards, the sampled population's HAZ distribution is platykurtic, with a lower and broader central peak (see Figure 2). This indicates that the population mean is not centered on the WHO recommended standards. The height-for-age z-score (HAZ) is an indicator for stunting, represented by low HAZ, and has been demonstrated to result from prolonged periods of inadequate food intake, poor diet quality, morbidity from disease, or a combination of these. Figure 2 shows a distinct deviation from the WHO standard, which may be indicative of one or a combination of these factors [2]. In this instance, boys were shown to be more affected than girls. Weight-for-age, an indicator of underweight, is usually a composite of both WHZ and HAZ. It therefore also serves as an indicator of malnutrition, among whose many causes chronic parasitism cannot be ruled out. The study also showed that the number of boys affected by malnutrition was slightly higher than the number of girls affected (see Table 6). In general, the prevalence of malnutrition stood at 27.7% for wasting, 17.7% for underweight, and 6.94% for stunting, with a majority of these cases occurring in the rural areas. This reflects the 2008-2009 Kenya Demographic and Health Survey for children under 5 years, which showed that, nationwide, 35.3%, 6.7%, and 16.3% of the children were stunted, wasted, and underweight, respectively, and further suggested that the greatest burden of malnutrition was in rural areas [2, 15]. The observed findings may reflect the synergistic relationship between nutrition and infection, whereby either exposure to infections was the cause of the malnutrition or the malnutrition predisposed the children, making them more susceptible to infection. This is, however, a hypothetical deduction based on the study findings, and further study is needed to ascertain the underlying cause of the observations made in this population.
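As a worked illustration of the cutoffs used in Table 8 (our own sketch; the study published no code), the snippet below classifies an anthropometric z-score into the severe and moderate bands:

```java
// Illustrative classifier for the WHO z-score bands used in Table 8.
public class ZScoreBand {

    enum Band { SEVERE, MODERATE, WITHIN_RANGE }

    // Severe: z < -3; moderate: -3 <= z < -2; otherwise within range.
    static Band classify(double z) {
        if (z < -3.0) return Band.SEVERE;
        if (z < -2.0) return Band.MODERATE;
        return Band.WITHIN_RANGE;
    }

    public static void main(String[] args) {
        System.out.println(classify(-2.4)); // MODERATE (e.g., moderate wasting on WHZ)
        System.out.println(classify(-3.2)); // SEVERE
    }
}
```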
## 5. Conclusion
In conclusion, this study has demonstrated that the prevalence of STH and schistosomiasis in Mwea Division, Kirinyaga County, Central Kenya, is relatively low, with a tendency to increase with age. While children in this age group were found to be infected with both S. mansoni and STH, prevalence was generally low (<6%) and therefore not likely to have a major public health impact in this age group. Nevertheless, regular intervention will be necessary. A high prevalence of Giardia lamblia infection (15%), while interesting, was not surprising, as this infection is fairly common in environments where hygiene is poor. This finding in particular suggests that G. lamblia is likely to be a major public health concern among children aged 5 years or less in Mwea, as they are at high risk. It is therefore important to consider establishing an intervention program targeting this particular age group. The study further suggests the need for investigations into other parasitic infections that cause ill health in this age group in the study area. While the prevalence of schistosomiasis and STH may have been low, these are likely to increase given the conducive environment for transmission of these parasites in the area.

This study has also shown that hand washing practices, water purification methods, rural homes, and families with siblings above 5 years of age are associated with infection in this age group. It is thus important to provide health education programmes for disease prevention, improved access to clean and safe water for domestic use, and appropriate sanitation.

Although the study was not able to establish a firm association between infection and malnutrition, the moderate prevalence of malnutrition in this age group cannot be ignored, and the contribution of parasitic infections to malnutrition cannot be entirely ruled out. This calls for further investigations into the nutritional status of this age group to identify the underlying cause(s). Inclusion of nutrition in education is also recommended, with a focus on families with preschool age children.
---
*Source: 1013802-2017-08-23.xml* | 2017 |
# Agent-Oriented Privacy-Based Information Brokering Architecture for Healthcare Environments
**Authors:** Abdulmutalib Masaud-Wahaishi; Hamada Ghenniwa
**Journal:** International Journal of Telemedicine and Applications
(2009)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2009/101382
---
## Abstract
The healthcare industry is facing a major reform at all levels: locally, regionally, nationally, and internationally. Healthcare services and systems have become very complex and comprise a vast number of components (software systems, doctors, patients, etc.) that are characterized by shared, distributed, and heterogeneous information sources with a variety of clinical and other settings. The challenge now facing decision making and management of care is to operate effectively in order to meet the information needs of healthcare personnel. Currently, researchers, developers, and systems engineers are working toward achieving better efficiency and quality of service in various sectors of healthcare, such as hospital management, patient care, and treatment. This paper presents a novel information brokering architecture that supports privacy-based information gathering in healthcare. Architecturally, the brokering is viewed as a layer of services where a brokering service is modeled as an agent with a specific architecture and interaction protocol that are appropriate to serve various requests. Within the context of brokering, we model privacy in terms of an entity's ability to hide or reveal information related to its identities, requests, and/or capabilities. A prototype of the proposed architecture has been implemented to support information-gathering capabilities in healthcare environments using the FIPA-compliant platform JADE.
---
## Body
## 1. Introduction
Healthcare systems are characterized by shared and distributed
decision making and management of care. The distributed nature of the knowledge
among different healthcare locations implies that a request may not be
completely satisfied at a specific location or that one or more healthcare locations may contain information similar to, though not exactly the same as, that required by the request.

Many initiatives and programs have been established to promote
the development of less costly and more effective healthcare networks and
systems at national and international scales. The objectives of these healthcare
networks are to improve
diagnosis through online access to medical specialists, online reservation of analysis
and hospital services by practitioners on a wide, global scale, transplant
matching, and so forth. A complete electronic medical patient-case file, which might be shared between
specialists and can be interchanged between hospitals and with general practitioners
(GPs), will be crucial in diagnosing diseases correctly, avoiding duplicative, risky, and expensive tests, and developing effective treatment plans.

However, medical patient-case files may contain some sensitive
information about critical and vital topics such as abortions, emotional and
psychiatric care, sexual behaviors, sexually transmitted diseases, HIV status, and
genetic predisposition diseases. Privacy and the confidentiality of medical
records have to be especially safeguarded. Without broad trust in medical
privacy, patients may avoid crucial healthcare provision.Healthcare professionals and care providers prefer to have the
ability of controlling the collection, retention, and distribution of
information about themselves. On the other hand, healthcare service providers
need to effectively manage and prevent any abuse of the information or service they
provide in addition to the ability of protecting their identities. An important feature of the various
healthcare sectors is that they share similar problems and are faced with
challenges that can be characterized as follows.(i) In open-distributed healthcare
environments, it is no longer practical to expect healthcare clinicians, staff,
care providers, and patients to determine and keep track of the information and services relevant to their requests and demands. For example, a patient will
be ubiquitously able to access his/her medical record from anywhere at any time
or may request medical services offered by available healthcare centers in a
particular city without being aware of the distributed sources and irrespective
of their locations. In addition, an application should be able to manage
distributed data in a unified fashion. This involves several tasks, such as
maintaining consistency and data integrity among distributed data sources, and
auditing access.(ii) The distributed nature of
the knowledge among multiple healthcare locations may require collaboration for
information gathering. For example, each unit in a hospital keeps its own
information about patients’ records.(iii) The solution of specific
medical problem includes complex activities and requires collaborative effort
of different individuals who posses distinct roles and skills. For example, the
provision of care to hospitalized patients involves various procedures and
requires the coordinated interaction amongst various staff and medical members.
It is essential that all the involved medical staff and professionals must
coordinate their activities in a manner that will guarantee the best
appropriate treatment that can be offered to the patient.(iv) A recent survey shows that
67% of the American national respondents are concerned about the privacy of
their personal medical records, 52% fear that their health insurance
information might be used by employers to limit job opportunities, while only
30% are willing to share their personal health information with health
professionals not directly involved in their case. As few as 27% are willing to
share their medical records with drug companies [1].To explore such issues, distributed healthcare systems need to
have an access to a service that can enable collaboration between different
healthcare service requesters and providers. Brokering facilitates achieving
better coordination among various healthcare service requesters and providers,
and permits healthcare personnel to get access to different services managed by
various providers without having to be aware of the location, identities,
access mechanisms, or the contents of these services.

Proactive health systems have the potential to improve healthcare access and management and to significantly lower the associated costs through efficiently controlled information flow between various physicians, patients, and medical personnel, yet they threaten to facilitate data sharing beyond any privacy boundaries. The high
degree of collaborative work needed in healthcare environments implies that
developers and researchers should think of other venues that can manage and
automate this collaboration efficiently.However, privacy concerns over
inappropriate use of the information make it hard to successfully exploit and achieve
the gains from sharing such information. This dilemma restricts the willingness
of individuals and personnel to disseminate or publicize information that might
lead to adverse outcomes. This paper presents an agent privacy-based
information brokering architecture that supports ad hoc system configurations
emphasizing the strategies for achieving privacy in healthcare environments. Within
the context of brokering, we view privacy as “the ability of entities to
decide upon revealing or hiding information related to their identities, requests
and capabilities in open distributed environments.”
## 2. Related Work
Privacy concerns are key barriers to the growth of health-based systems.
Legislation to protect personal medical information was proposed and put into effect to help build mutual confidence between the various participants in the healthcare domain.Privacy-based brokering protocols have been proposed in many application domains, such as E-auctions [2], data
mining [3], and E-commerce.
Different techniques were used to enable collaboration among heterogeneous cooperative
agents in distributed systems, including brokering via middle agents. These middle agents differ in the role they play within the agent community [4–6]. The work
in [7] has proposed an agent-based mediation
approach, in which privacy has been treated as a base for classifying the
various mediation architectures only for the initial state of the system. In
another approach, agents' capabilities and preferences are assumed to be common
knowledge, which might violate the privacy requirements of the involved participants [8]. Other
approaches such as in [9–11] have proposed frameworks to facilitate coordination
between web services by providing semantic-based discovery and mediation
services that utilize semantic description languages such as OWL-S [12] and RDF [13]. Another
recent approach distinguishes a resource brokering architecture that manages
and schedules different tasks on various distributed resources on the large-scale
grid [14]. However,
none of the above-mentioned approaches has treated privacy as an architectural
element that facilitates the integration of various distributed systems of an
enterprise.Several approaches were proposed for integration of distributed
information sources in healthcare [15]. In one
approach [16], the focus was on providing management
assistance to different teams across several hospitals by coordinating their
access to distributed information. The brokering architecture is centralized
around a mediator agent, which allocates the appropriate medical team to an
available operating theatre in which the transplant operation may be performed.
Other approaches attempt to provide agent-based medical appointment scheduling [17, 18]; in these approaches, the architecture provides matchmaking mechanisms for the selection
of appropriate recipient candidates whenever organs become available through a
matchmaking agent that accesses a domain-specific ontology.Other approaches proposed the use of privacy policies along with
physical access means (such as smartcards), in which the access of private
information is granted through the presence of another trusted authority that
mediate between information requesters and information providers [19, 20]. A
European IST project [21], TelemediaCare, Lincoln, UK,
developed an agent-based framework to support patient-focused distant care and
assistance; the architecture comprises two different types of agents, namely, stationary (“static”) and mobile agents. Web service-based tools were developed to enable
patients to remotely schedule appointments, doctor visits, and to access
medical data [22].Different approaches have been suggested to protect location privacy in open distributed systems [23]. Location
privacy is a particular type of information privacy that can be defined as “the
ability to prevent other parties from learning one’s current or past location”.
These approaches range from anonymity, pseudonymity, to cryptographic techniques. Some
approaches focus on using anonymity by unlinking user personal information from
their identity. One available tool is called anonymizer [24]. The
service protects the Internet protocol (IP) address or the identity of the user
who views web pages
or submits information (including personal preferences) to a remote site. The
solution uses anonymous proxies (gateways to the Internet) to route user’s
Internet traffic through the tool. However, this technique requires a trusted
third party because the anonymizer servers (or the user’s Internet service
provider, ISP) can certainly identify the user. Other tools try not to rely on
a trusted third party to achieve complete anonymity of the user’s identity on
the Internet, such as Crowds [25], Onion routing [26], and MIX
networks [27].Various programs and initiatives have proposed a set of guidelines for
secure collection, transmission, and storage of patients’ data. Some of these
programs include the Initiative for Privacy Standardization in Europe (IPSE)
and the Health Insurance Portability and Accountability Act (HIPAA) [28, 29]. Yet,
these guidelines need the adoption of new technology for healthcare requester/provider
interaction.
## 3. Brokering Requirements for Distributed Healthcare Systems
Brokering enables collaboration between different service requesters
and providers, and allows the dynamic interpretation of requests for the
determination of relevant service providers. For service providers, the
brokering services permit dynamic creation of services’ repositories after
suitable assembly of service advertisements available from the various
providers, or other additional activities. The major functional requirements of
a brokering service include the following.(i)Provision of
registration services: the registration and naming service allows building
up a knowledge base of the environment that can be utilized to facilitate
locating and identifying the relevant existing service sources and their
contents for serving a specific request. It is crucial to be able to identify
the subset of relevant information at a source, and to combine partially
relevant information across different sources; this requires the process of
identification and retrieval of a subset of required service at any source. It
is clear that in such an environment, different sources would provide relevant information to different extents. The most obvious choice of the source from
which information will be retrieved is the one which returns most (or all) of
the relevant request. In that case, the user will have to keep track of which
source has the most relevant information.(ii)The acceptance of
providers’ service descriptions: to enable the dynamic discovery of
services, a mechanism is required to describe the capability aspects of
services, such as the functional description of a service, the conditions and
the constraints of the service, and the nature of the results.(iii)Receiving services’
requests: to enable requesters to define and describe the required
parameters that are needed to represent a request.(iv)Interaction: brokers
may engage (on behalf of requesters) in the process of negotiation with various
service providers to serve a request. The interaction requires a set of agreed
messages, rules for actions based upon reception of various messages.(v)Communication: the
communication capability allows the entities to exchange messages with the
other elements of the environment, including users, agents, and objects. In
order to perform their tasks, these entities need to depend heavily on
expressive communication with others not only to perform requests, but also to
propagate their capabilities, advertise their own services, and explicitly
delegate tasks or requests for assistance.
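Taken together, requirements (i)-(v) can be read as the outline of a brokering-layer interface. The sketch below is only our illustration of that reading; every type and method name is hypothetical, and it is not the paper's prototype.

```java
// Illustrative sketch of the brokering-layer requirements (i)-(v).
// All names are hypothetical; the paper's JADE prototype is not shown here.
public interface BrokeringService {

    // (i) Registration and naming: builds the knowledge base of the
    //     environment used to locate and identify relevant service sources.
    void register(String entityId, Role role);

    // (ii) Acceptance of providers' service descriptions for dynamic discovery.
    void advertise(String providerId, ServiceDescription capability);

    // (iii) Receiving services' requests with the parameters that represent them.
    String submitRequest(String requesterId, ServiceRequest request);

    // (iv)/(v) Interaction and communication happen behind this call: the
    //     broker negotiates with candidate providers on the requester's behalf
    //     and exchanges messages via the agent platform.
    ServiceResult collectResult(String requestId);

    enum Role { REQUESTER, PROVIDER }
    record ServiceDescription(String name, String constraints) {}
    record ServiceRequest(String name, String parameters) {}
    record ServiceResult(String payload) {}
}
```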
## 4. The Brokering Layer: Privacy-Based Agent-Oriented Architecture
Developing the brokering services comprises the automation of
privacy to enhance the overall security of the system and accordingly entities
should be able to define the desired degree of privacy. In fact, the brokering
service permits entities to participate in the environment with different roles,
and hence be capable of automating their privacy concerns and select a
particular privacy. The challenge here is how to architect a service that could
provide means and mechanisms by which entities would be able to interact with
each other and determine any privacy degree that suits a particular situation.
Such interaction is characterized by nondeterminism, in addition to the dynamic nature of the environment in which these entities exist and operate, and it requires these entities to be able to change configurations to participate in different roles. These requirements could not be accomplished using traditional
ways of manually configuring software.We strongly believe that agent orientation is an appropriate
design paradigm for providing coordination services and mechanisms in such
settings. Indeed, such a paradigm is essential to modeling open, distributed,
and heterogeneous environments in which an agent should be able to operate as a
part of a community of cooperative distributed systems, including human users. A key aspect of agent orientation
is the ability to design artifacts that are able to perceive, reason, interact,
and act in a coordinated fashion. Here, we view agent orientation as a
metaphorical conceptualization tool at a high level of abstraction (the knowledge level) that captures, supports, and implements features that are useful for distributed computation in open environments. These features include cooperation, coordination, and interaction, as well as intelligence, adaptability, and economic and logical rationality. We define an agent as an individual collection of primitive components that provide a focused and cohesive set of capabilities.

Architecturally, the brokering service is viewed as a layer of
services and is modeled as an agent with a specific architecture and
interaction protocol that are appropriate to carry the required privacy degree.
The challenge in this context is how to architect the brokering layer with the
appropriate set of services that enable cooperation across the different
degrees of privacy. The interaction protocols represent both the message
communication and the corresponding constraints on the content of messages.
They describe the sequence of messages among agents, and illustrate various
protocols that satisfy a desired privacy requirement. The focus for designing
these patterns is to provide a mechanism to reduce the costs and risks that
might be a result of violating privacy requirements. The patterns provide
mechanisms allowing users (human/agents) to adjust the privacy attributes, and allowing these users to
achieve and accomplish their tasks in addition to protecting their desired
privacy attributes.The agent interaction requires a set of agreed messages, rules
and assumption of communication channels. These rules and constraints can be
abstracted as agents’ patterns that define various protocols for every possible
privacy requirement. Using these protocols, agents would be able to protect the
privacy aspects of the most concern.
From the privacy standpoint, the brokering services are categorized into
different roles that are classified according to the participants’ (providers
and requesters) desired degree of privacy. These degrees of privacy control the
proper interaction patterns and will vary from one scenario to another. The brokering layer takes into consideration the protection of any privacy
desires required by requesters, providers, or both.Here, we define the degree of privacy in terms of three
attributes: the entity identity, capability, and goals. Therefore, an agent can
categorize its role under several privacy degrees. Formally, an agent can be
represented as a 2-tuple Ag ≡ 〈(RA: Id, G); (PA: Id, Cap)〉, where RA and PA refer to the agent's roles as requester and provider, while Id, G, and Cap refer, respectively, to the agent's identity, goals, and capabilities, any of which might have a null value. For example, an agent might participate with a privacy
degree that enables the hiding of its identity as a requester by setting the
value of Id to null.
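One literal reading of this 2-tuple is a pair of role descriptors with nullable attributes, where a null field stands for a hidden attribute. The following sketch is our own encoding of the notation (hypothetical names, not code from the prototype):

```java
// Hypothetical encoding of Ag = <(RA: Id, G); (PA: Id, Cap)>.
// A null value means the corresponding attribute is hidden.
public record Ag(RequesterRole requester, ProviderRole provider) {

    public record RequesterRole(String id, String goal) {}
    public record ProviderRole(String id, String capability) {}

    // Example: a requester that reveals its goal but hides its identity
    // sets Id to null while keeping G.
    public static Ag anonymousRequester(String goal) {
        return new Ag(new RequesterRole(null, goal), null);
    }
}
```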
Tables 1 and 2
summarize the different scenarios and
roles that might be played by the brokering layer, categorized by the possible privacy concern of the requester (RA) and provider (PA) agents.

Table 1
The brokering layer interaction categorized by the privacy concern of service requesters.

| Case | G | Id | Interaction |
|---|---|---|---|
| 1 | Revealed | Revealed | (i) Receive service request. (ii) Forward request to broker-provider side. (iii) Deliver result to requester. |
| 2 | Hidden | Revealed | (iv) Retrieve service request posted by a requester. (v) Forward request to broker-provider side. (vi) Store result to be retrieved by requester. |
| 3 | Revealed | Hidden | (vii) Post service request to service repository. (viii) Requester to search repository and request service. (ix) Retrieve a service request that was stored by a requester. (x) Forward request to available and capable providers. (xi) Store result to be retrieved by requester. |
| 4 | Hidden | Hidden | (xii) Requester to store service request. (xiii) Retrieve service request that was stored by a requester. (xiv) Forward request to available and capable providers. (xv) Store result to be retrieved by requester. |

Table 2
The brokering layer interaction categorized by the privacy concern of service providers.

| Case | Id | Cap | Interaction |
|---|---|---|---|
| 1 | Revealed | Revealed | (i) Search for capable provider. (ii) Forward request. (iii) Negotiate and assign a service request. (iv) Get service result and deliver result. |
| 2 | Hidden | Revealed | (v) Post service request to service repository. (vi) Providers to access service repository. (vii) Providers to evaluate service parameters. (viii) Store result. (ix) Brokering layer to retrieve and deliver result. |
| 3 | Revealed | Hidden | (x) Forward service request. (xi) Provider to evaluate request. (xii) Brokering layer to receive and deliver result back. |
| 4 | Hidden | Hidden | (xiii) Providers to access repository. (xiv) Provider to evaluate request. (xv) Provider to store service result. (xvi) Brokering layer to retrieve and deliver result back. |

The layer permits various entities to participate in the
environment with different roles, and hence be capable of automating their
privacy concerns and select a particular degree. Each layer role is represented
as a special broker with a specific architecture and interaction protocol that
is appropriate to serve requests from various participants while maintaining
the required privacy degree. An agent role is an abstract description of an entity
with the specified functionalities. The brokering layer has the ability to
interact, solicit help, and delegate services’ requests from other available
brokering agents who support different privacy degrees.Responsibilities are separated and defined according to the roles
played and the required degree of privacy. Within the layer two sets of
brokering agents are available to service requesters and providers. The first
set handles interactions with requesters according to the desired privacy
degree that is appropriate to their preferences while the other set supports privacy degrees
required by service providers.Figure1 shows a logical view of the brokering services and
the relevant entities that are involved in any brokering scenario. Every
brokering pattern is accomplished by the composition of the requester role,
brokering agents, and the provider role, in which the interaction scenarios are produced
automatically. A complete brokering session is divided into several stages,
starting from requester-to-brokering layer interaction, brokering layer intra-interaction, and broker
layer-to-provider interaction. Note that in the figure a negation on a specific
privacy attribute variable exemplifies that the corresponding privacy attribute
is hidden from the environment.Figure 1
Logical view of the brokering service.
## 5. The Brokering Protocols: Privacy-Based Interaction Patterns
The brokering protocols describe a cooperative multibrokering
system, which provides the solution for interaction among participants in a
dynamic and heterogeneous environment of service providers and requesters. Each
brokering entity performs basic brokering functionality, such as service
discovery, dynamic service composition, and knowledge sharing with the
community according to a required privacy degree. A brokering entity within the
layer is called a broker hereafter.Brokers within the layer might represent a set of services in
which providers can advertise their service capability. The brokering protocols
regulate and govern service knowledge discovery and sharing of acquired
knowledge by defining interaction patterns that are composed of a set of
messages that can be exchanged by other brokers within the layer or other
registered entities that might benefit from the functionalities supported by the
overall brokering service. The architecture permits the brokering agents to
have various combinations with other brokering entities which support different
privacy degrees. The following section describes the different interaction
patterns supported by the brokering layer for entities that might play either a
requester or a provider role.
### 5.1. The Requester-Brokering Layer Interaction
#### 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information wanted by a doctor about the number of patients who have hepatitis B in a specific city can be obtained through the broker agent without revealing either the doctor's or the patients' identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure 2, the broker might extend the pattern to
include interaction with various brokers associated with supporting other
privacy degrees of service providers; consequently, the broker solicits help and forwards the request to all available provider-related brokers within the layer,
incorporating various interaction compositions. Note that for every potential
composition, the provider-related brokers receive only a notification of a
service request, and each accordingly carries on its own interaction pattern to
satisfy that request without exaggerating, overstressing, or overemphasizing
any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
In the typical interaction pattern for this particular privacy degree, the layer engages in the following: (1) accepting and interpreting service requests from pertinent requesters; (2) identifying and contacting a set of available providers, forwarding service requests, and controlling appropriate transactions to fulfill any required service request (these transactions should adhere to an agreed, appropriate interaction mechanism, e.g., auction or negotiation); (3) receiving the result of a service request and delivering it back to the relevant requester.
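Since the prototype was built on JADE, a requester in this privacy case might be sketched roughly as follows; the broker's local name and the content string are illustrative assumptions, not details taken from the paper.

```java
// Minimal JADE-style sketch of step (1): a requester that reveals both
// identity and goal sends its request directly to the broker agent.
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

public class RevealedRequesterAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
        // "broker" is an assumed local agent name, not mandated by the paper.
        request.addReceiver(new AID("broker", AID.ISLOCALNAME));
        // The sender AID (identity) and the content (goal) are both revealed.
        request.setContent("count-patients?disease=hepatitis-B&city=X");
        send(request);
        // Steps (2)-(3) happen inside the brokering layer; the requester
        // simply waits for the broker to deliver the result back.
    }
}
```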
#### 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure3, requesters will be responsible of checking the
availability of the service result, which implies that requesters should be
aware of a designated result location. The interaction imposes a significant
effort on the performance and efficiency. System performance is clearly
dependent on number of parameters, including the number of providers willing to
carry out the request and the time needed by each provider to fulfill that
request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store service
requests in a predefined service repository along with preferred parameters.
(2) As shown in Figure 3, requesters are responsible for checking the availability of the service result and hence retrieving it; this implies that requesters must be able to link a service result to their own requests.
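A minimal sketch of this pattern, assuming shared request and result repositories (modeled here as in-memory maps; the real brokering layer would provide them), shows how a self-generated token lets the requester stay anonymous while still linking the result to its request:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-ins for the broker's service and result repositories.
public class HiddenIdentityRequester {
    static final Map<String, String> serviceRepo = new ConcurrentHashMap<>();
    static final Map<String, String> resultRepo = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        // A self-generated token links request and result without
        // ever exposing the requester's identity.
        String token = UUID.randomUUID().toString();
        serviceRepo.put(token, "count-patients?disease=hepatitis-B");

        // A broker would normally pick the request up and eventually
        // deposit the result under the same token; simulated here.
        resultRepo.put(token, "42");

        // The requester is responsible for checking result availability.
        String result;
        while ((result = resultRepo.get(token)) == null) {
            Thread.sleep(1_000); // poll the designated result location
        }
        System.out.println("result: " + result);
    }
}
```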
#### 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinicians might benefit from a variety of service advertisements regarding new medications, tools, medical equipment, and health-related
notifications. The
brokering service permits these clinicians to check a service repository for further information
or to browse other service offerings that have been previously posted and
accordingly identify an appropriate service of interest.Intra-Interaction
Provider-related brokers representing providers with known
capabilities will have the possibility to advertise existing service offerings
to the broker, which in turn forwards every received advertisement to the relevant requester. It is to be noted that whenever a requester decides on
a particular service offering, the inter-interaction is not restricted only to
contacting those who had offered such services, but might extend to all
available provider-related brokers supporting other privacy degrees. For
example, the same advertised service offering might be achieved by other
providers in the environment who have an interest in hiding their own capabilities.Inter-Interaction
The broker permits healthcare requesters to check a service repository for further information or to browse other service offerings that have been previously posted and accordingly identify an appropriate service of interest, as shown in Figure 4. Once a requester selects a particular service
advertisement and forwards that request to the broker, then it is the broker
responsibility to determine the most suitable service provider that fulfills
that request. Upon achieving the requester goal, the broker delivers back the
service result to the requester. In an open environment, where the number of service providers continually increases and providers compete to sell their services, requesters would be flooded with a variety of service advertisements and notifications. Requesters have to determine whether a service advertised to them is of interest or not. Clearly, this process implies that significant time is required to assess every single service notification. The broker sends the notifications along with any related
parameters required for providing the service (such as name of the service,
cost, and location).Figure 4
Interaction pattern for requesters hiding goals.
#### 5.1.4. Requesters Hiding Identities and Goals
Requesters would have the
possibility to hide their identities and goals from the entire environment; as
shown in Figure 5, they have the option either to post their want ads to the layer
service repository directly, or might check for any services that would be of
an interest. For example, patients with narcotic-related problems (such as drug
or alcohol addiction) can seek services that provide information about
rehabilitation centers, specialized psychiatrists, or programs that will help them overcome a particular critical situation, without revealing either their identities or the desired information.Figure 5
Interaction pattern for requesters hiding privacy attributes.Inter-Interaction
Requesters will have the option to either post their want ads to a
service repository directly, or might check for any service offerings that
would be of an interest. In both cases, requesters will be permitted to store
their service requests and retrieve services results. The broker identifies and
interprets the required requests, and accordingly will determine the applicable
provider which is capable of achieving and fulfilling the requester goal. Note
that, for this degree of privacy, it is the requester responsibility to check
for the availability of the service result, and hence retrieve it.
### 5.2. The Provider-Brokering Layer Interaction
#### 5.2.1. Providers Revealing Identities and Capabilities
Providers with this degree of privacy will have the
ability to register their presence along with the capability of the service
they offer. Although providers with this privacy degree are required to reveal
their privacy attributes to the relevant broker, the protocol will prevent any other entity from knowing the provider's attributes.Intra-Interaction
The interaction between the broker and other requester-related brokers is accomplished through
sending and receiving messages related to service proposals, service offerings,
and services results.Inter-Interaction
As shown in Figure 6, a service provider registers itself with the brokering service, along with the description of its service capabilities, which is stored
as an advertisement in a repository maintained by the broker and contains all
available service descriptions. Assigning requests to providers with known capabilities and identities can be based on either broadcasting or focusing; however, the interaction is neither restricted to specific service providers nor committed to a fixed number of them. This ability is particularly useful when a brokering agent acts in a dynamic environment in which entities may continually enter and leave the society unpredictably. For every received
service request, the broker matches the most applicable providers that are
appropriate to fulfill that request, and thus maintains a pertinent queue that
contains the capable providers along with their identities.Figure 6
Interaction pattern for providers revealing privacy attributes.
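In a JADE-based realization such as the paper's prototype, revealing both identity and capability corresponds naturally to registering a service description with the platform's directory facilitator; the sketch below assumes that correspondence (the service type and name strings are illustrative).

```java
// JADE sketch of a provider revealing identity and capability by
// registering with the directory facilitator, which here plays the
// role of the broker's advertisement repository.
import jade.core.Agent;
import jade.domain.DFService;
import jade.domain.FIPAException;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;

public class RevealedProviderAgent extends Agent {
    @Override
    protected void setup() {
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID()); // identity is revealed

        ServiceDescription sd = new ServiceDescription();
        sd.setType("patient-record-lookup"); // capability is revealed
        sd.setName("ward-records");
        dfd.addServices(sd);

        try {
            DFService.register(this, dfd);
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```

A requester-side broker could then locate such providers with DFService.search and build the queue of capable providers described above.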
#### 5.2.2. Providers Hiding Identities
Healthcare providers can
have the option to hide their identities from the environment and advertise
their service offerings to the relevant brokering agent. Protection for the
core identity prevents service abuses that impact availability of service and
hence improving the ability to consistently deliver reliable access. Since the
service capabilities are known to the broker, service requests that are
believed to be fulfilled by such providers will be posted to a dedicated
repository, which providers will have the possibility to browse in order to select whichever requests are of interest.Intra-Interaction
The broker interacts with other entities in the layer to engage
in receiving and sending messages related to service requests and offerings.
The broker task includes
(1) receiving service requests; (2) determining whether these requests are within the provider's capabilities; (3) storing service
requests to be browsed by authorized registered providers (providers hiding
identities); (4) retrieving and delivering back
service result. A broker supporting this privacy case will have the ability to
advertise registered provider capabilities, and hence engage in various
interaction patterns of available requester-related brokers.Inter-Interaction
A provider can participate in any interaction mechanism and may
respond to call-for-proposal requests by proposing service offerings that are
stored in a queue-structured repository. Upon assigning and delegating a
service request to a provider with this degree of privacy, it is the provider
responsibility to store pertinent service result to be retrieved by the broker,
and thus delivered to the proper destination, as shown in Figure 7.Figure 7
Interaction pattern for providers hiding identity.
#### 5.2.3. Providers Hiding Capabilities
The brokering services
allow providers that do not wish to reveal their own capabilities to
participate in fulfilling a service request. After receiving a request, the
brokering interaction protocol entails farming the request out to every registered provider with unknown capabilities. It is noteworthy that, for every
advertised request, providers have to determine whether the request is within
their capabilities and/or of an interest. Clearly, such an interaction implies
that a considerable elapsed time will be spent on evaluating every single
request. Therefore (under the assumption of an open dynamic environment),
providers would be deluged by a variety of service requests, which significantly impacts performance and efficiency. Figure 8 shows the interaction pattern.Figure 8
Interaction pattern for provider hiding capability.Intra-Interaction
The broker interacts with other entities in the layer to engage
in receiving and sending messages related to service requests and offerings.
The broker task includes (1) receiving service requests from requester-related
brokers; (2) receiving service proposals; (3) delivering back service result.Inter-Interaction
After receiving a service request, the broker sends out requests
in the form of broadcasting to every registered provider with unknown
capabilities. Figure 8 shows the interaction pattern. Once a provider
selects a particular service request, it forwards a service proposal to the
broker, who controls the remaining transaction according to the appropriate
negotiation mechanisms similar to what has been described in the former
patterns.
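The fan-out itself is simple to sketch: with no capability knowledge to match on, the broker can only address every registered provider, for example with a FIPA call-for-proposal. The helper below is a hedged illustration; it assumes the enclosing broker agent supplies the provider list and sends the message.

```java
// Sketch of the broker's fan-out in the hidden-capability case: the
// request is broadcast as a call-for-proposal to every registered
// provider, and each provider decides whether it is within its abilities.
import jade.core.AID;
import jade.lang.acl.ACLMessage;
import java.util.List;

public class HiddenCapabilityFanOut {
    // Intended to run inside the broker agent, which would send() the result.
    static ACLMessage buildBroadcast(List<AID> registeredProviders) {
        ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
        registeredProviders.forEach(cfp::addReceiver); // broadcast, not focused
        cfp.setContent("count-patients?disease=hepatitis-B"); // illustrative
        return cfp;
    }
}
```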
#### 5.2.4. Providers Hiding Identities and Capabilities
Providers will have the
ability to browse a special request repository and consequently determine the
relevant requests that might be of interest and within their capabilities.
As shown in Figure 9, the broker-provider-side agent responds back with
the service result (a result location within the layer has to be identified to
the provider upon registration within the brokering layer).Figure 9
Interaction pattern for provider hiding privacy attributes.Intra-Interaction
The broker intra-interaction comprises the following: (1)
receiving service requests from requester-related brokers; (2) storing service
requests; (3) accessing and evaluating service proposals; (4) retrieving and
delivering back service result.Inter-Interaction
In this protocol, the brokering functionality is mainly seen as a
directory service, in which the broker maintains a repository of service
requests along with any required preferences. Providers will have the ability
to browse this repository to determine applicable relevant requests that might
be fulfilled. As shown in Figure 9, providers with this degree of privacy have to take into consideration linking the result of the service to the request.
## 5.1. The Requester-Brokering Layer Interaction
### 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information about the number of patients who have
Hepatitis B in a specific city and wanted by a doctor can be assessed by the
broker agent without revealing, neither the doctor nor the patients identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure2, the broker might extend the pattern to
include interaction with various brokers associated with supporting other
privacy degrees of service providers, consequently the broker solicit help and
forward request to all available provider-related brokers within the layer
incorporating various interaction compositions. Note that for every potential
composition, the provider-related brokers receive only a notification of a
service request, and accordingly carry on its own interaction pattern to
satisfy that request without exaggerating, overstressing, or overemphasizing
any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
The typical interaction pattern for this particular privacy
degree comprises that the layer engages in performing the following: (1) accepting
and interpreting service requests from pertinent requesters; (2) identifying and
contacting a set of available providers, forwarding service requests, and controlling
appropriate transactions to fulfill any required service request. These
transactions should adhere to agreed appropriate interaction mechanism (e.g., auction,
negotiation, etc.); (3) receives result of a service request and delivers it
back to the relevant requester.
### 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure3, requesters will be responsible of checking the
availability of the service result, which implies that requesters should be
aware of a designated result location. The interaction imposes a significant
effort on the performance and efficiency. System performance is clearly
dependent on number of parameters, including the number of providers willing to
carry out the request and the time needed by each provider to fulfill that
request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store services
requests in a predefined service repository along with preferred parameters.
(2) As shown in Figure3, requesters are responsible of checking the
availability of the service result and hence retrieve it; this implies that
requesters are able to link a service result to their own requests.
### 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinician might benefit from variety of service advertisements
regarding new medications, tools, medical equipments, and health-related
notifications. The
brokering service permits these clinicians to check a service repository for further information
or to browse other service offerings that have been previously posted and
accordingly determine an appropriate and interested service.Intra-Interaction
Provider-related brokers representing providers with known
capabilities will have the possibility to advertise existing service offerings
to the broker which in turn promotes forwarding every received advertisement to
the relevant requester. It is to be noted that whenever a requester decides on
a particular service offering, the inter-interaction is not restricted only to
contacting those who had offered such services, but might extend to all
available provider-related brokers supporting other privacy degrees. For
example, the same advertised service offering might be achieved by other
providers in the environment who had the interest of hiding their own
capabilities.Inter-Interaction
They broker permits healthcare requesters to check a service
repository for further information or to browse other service offerings that
have been previously posted and accordingly determine an appropriate and
interested service as shown in Figure4. Once a requester selects a particular service
advertisement and forwards that request to the broker, then it is the broker
responsibility to determine the most suitable service provider that fulfills
that request. Upon achieving the requester goal, the broker delivers back the
service result to the requester. In an open environment, where many different
services providers are in continual increase and with a competitive manner to
sell their services, requesters would be flooded by a variety of service
advertisements and notifications. Requesters have to determine whether the
service advertised to them is of an interest or not. Clearly, this process
implies that a significant time is required to assess every single-service
notification. The broker sends the notifications along with any related
parameters required for providing the service (such as name of the service,
cost, and location).Figure 4
Interaction pattern for requesters hiding goals.
### 5.1.4. Requesters Hiding Identities and Goals
Requesters would have the
possibility to hide their identities and goals from the entire environment; as
shown in Figure5, they have the option either to post their want ads to the layer
service repository directly, or might check for any services that would be of
an interest. For example, patients with narcotic-related problems (such as drug
or alcohol addiction) can seek services that provide information about
rehabilitation centers, specialized psychiatrists, or programs that will help
overcoming a particular critical situation without revealing either their
identities nor the desired information.Figure 5
Interaction pattern for requesters hiding privacy attributes.Inter-Interaction
Requesters will have the option to either post their want ads to a
service repository directly, or might check for any service offerings that
would be of an interest. In both cases, requesters will be permitted to store
their service requests and retrieve services results. The broker identifies and
interprets the required requests, and accordingly will determine the applicable
provider which is capable of achieving and fulfilling the requester goal. Note
that, for this degree of privacy, it is the requester responsibility to check
for the availability of the service result, and hence retrieve it.
## 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information about the number of patients who have
Hepatitis B in a specific city and wanted by a doctor can be assessed by the
broker agent without revealing, neither the doctor nor the patients identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure2, the broker might extend the pattern to
include interaction with various brokers associated with supporting other
privacy degrees of service providers, consequently the broker solicit help and
forward request to all available provider-related brokers within the layer
incorporating various interaction compositions. Note that for every potential
composition, the provider-related brokers receive only a notification of a
service request, and accordingly carry on its own interaction pattern to
satisfy that request without exaggerating, overstressing, or overemphasizing
any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
The typical interaction pattern for this particular privacy
degree comprises that the layer engages in performing the following: (1) accepting
and interpreting service requests from pertinent requesters; (2) identifying and
contacting a set of available providers, forwarding service requests, and controlling
appropriate transactions to fulfill any required service request. These
transactions should adhere to agreed appropriate interaction mechanism (e.g., auction,
negotiation, etc.); (3) receives result of a service request and delivers it
back to the relevant requester.
## 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure3, requesters will be responsible of checking the
availability of the service result, which implies that requesters should be
aware of a designated result location. The interaction imposes a significant
effort on the performance and efficiency. System performance is clearly
dependent on number of parameters, including the number of providers willing to
carry out the request and the time needed by each provider to fulfill that
request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store services
requests in a predefined service repository along with preferred parameters.
(2) As shown in Figure3, requesters are responsible of checking the
availability of the service result and hence retrieve it; this implies that
requesters are able to link a service result to their own requests.
## 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinician might benefit from variety of service advertisements
regarding new medications, tools, medical equipments, and health-related
notifications. The
brokering service permits these clinicians to check a service repository for further information
or to browse other service offerings that have been previously posted and
accordingly determine an appropriate and interested service.Intra-Interaction
Provider-related brokers representing providers with known capabilities can advertise existing service offerings to the broker, which in turn forwards every received advertisement to the relevant requester. Note that whenever a requester decides on a particular service offering, the inter-interaction is not restricted to contacting only those who had offered such services, but might extend to all available provider-related brokers supporting other privacy degrees. For example, the same advertised service offering might be achievable by other providers in the environment who have an interest in hiding their own capabilities.

Inter-Interaction
The broker permits healthcare requesters to check a service repository for further information or to browse other service offerings that have been previously posted and accordingly select an appropriate service of interest, as shown in Figure 4. Once a requester selects a particular service advertisement and forwards that request to the broker, it is the broker's responsibility to determine the most suitable service provider to fulfill that request. Upon achieving the requester's goal, the broker delivers the service result back to the requester. In an open environment, where the number of competing service providers continually increases, requesters would be flooded by a variety of service advertisements and notifications. Requesters have to determine whether a service advertised to them is of interest or not; clearly, this process implies that significant time is required to assess every single service notification. The broker sends the notifications along with any related parameters required for providing the service (such as the name of the service, cost, and location).

Figure 4: Interaction pattern for requesters hiding goals.
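The forwarding behavior described above could be realized as a simple fan-out from the broker to all registered requesters; the listener interface and parameter names below are illustrative assumptions:

```java
// Hypothetical fan-out sketch: every received advertisement is pushed to
// every registered requester, since their interests are unknown to the broker.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface AdvertisementListener {
    void notifyAdvertisement(String serviceName, double cost, String location);
}

class HiddenGoalBroker {
    private final List<AdvertisementListener> registeredRequesters =
            new CopyOnWriteArrayList<>();

    void register(AdvertisementListener requester) {
        registeredRequesters.add(requester);
    }

    /** Forward an advertisement with its service parameters to all requesters. */
    void onAdvertisement(String serviceName, double cost, String location) {
        for (AdvertisementListener requester : registeredRequesters) {
            requester.notifyAdvertisement(serviceName, cost, location);
        }
    }
}
```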
### 5.1.4. Requesters Hiding Identities and Goals
Requesters can hide both their identities and their goals from the entire environment; as shown in Figure 5, they have the option either to post their want ads directly to the layer's service repository or to check for any services that would be of interest. For example, patients with narcotic-related problems (such as drug or alcohol addiction) can seek services that provide information about rehabilitation centers, specialized psychiatrists, or programs that will help them overcome a particular critical situation, without revealing either their identities or the desired information.

Figure 5: Interaction pattern for requesters hiding privacy attributes.

Inter-Interaction
Requesters have the option either to post their want ads directly to a service repository or to check for any service offerings that would be of interest. In both cases, requesters are permitted to store their service requests and retrieve service results. The broker identifies and interprets the required requests and accordingly determines the applicable provider capable of achieving and fulfilling the requester's goal. Note that, for this degree of privacy, it is the requester's responsibility to check for the availability of the service result and then retrieve it.
## 5.2. The Provider-Brokering Layer Interaction
### 5.2.1. Providers Revealing Identities and Capabilities
Providers with this degree of privacy have the ability to register their presence along with the capabilities of the services they offer. Although providers with this privacy degree are required to reveal their privacy attributes to the relevant broker, the protocol prevents any other entity from learning the provider's attributes.

Intra-Interaction
The interaction between the broker and other requester-related brokers is accomplished through sending and receiving messages related to service proposals, service offerings, and service results.

Inter-Interaction
As shown in Figure 6, a service provider registers itself with the brokering service along with a description of its service capabilities, which is stored as an advertisement in a repository maintained by the broker that contains all available service descriptions. Assigning requests to providers with known capabilities and identities can be based on either broadcasting or focusing; however, the interaction is neither restricted to specific service providers nor committed to a fixed number of them. This ability is particularly useful when a brokering agent acts in a dynamic environment in which entities may continually enter and leave the society unpredictably. For every received service request, the broker matches the most applicable providers to fulfill that request and thus maintains a queue of the capable providers along with their identities.

Figure 6: Interaction pattern for providers revealing privacy attributes.
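As one way to picture the matching step, the broker could keep a registry of advertised capabilities and build, per request, the queue of capable providers that the text mentions. The class and method names below are assumptions for illustration only:

```java
// Hypothetical matching sketch: the broker records each provider's advertised
// capabilities and builds, per request, a queue of capable providers.
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

class ProviderRegistry {
    // provider identity -> advertised capabilities (service descriptions)
    private final Map<String, Set<String>> advertisements = new HashMap<>();

    void register(String providerId, Set<String> capabilities) {
        advertisements.put(providerId, capabilities);
    }

    void deregister(String providerId) {   // entities may leave at any time
        advertisements.remove(providerId);
    }

    /** Build the queue of providers whose advertisements match the request. */
    Queue<String> matchProviders(String requestedService) {
        Queue<String> capable = new ArrayDeque<>();
        advertisements.forEach((providerId, capabilities) -> {
            if (capabilities.contains(requestedService)) {
                capable.add(providerId);
            }
        });
        return capable;
    }
}
```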
### 5.2.2. Providers Hiding Identities
Healthcare providers have the option to hide their identities from the environment and advertise their service offerings to the relevant brokering agent. Protecting the core identity prevents service abuses that impact service availability and hence improves the ability to consistently deliver reliable access. Since the service capabilities are known to the broker, service requests believed to be within the reach of such providers are posted to a dedicated repository, so that providers can browse these requests and select those of interest.

Intra-Interaction
The broker interacts with other entities in the layer by receiving and sending messages related to service requests and offerings. The broker's tasks include (1) receiving service requests; (2) determining whether these requests are within the provider's capabilities; (3) storing service requests to be browsed by authorized registered providers (providers hiding identities); (4) retrieving and delivering back the service result. A broker supporting this privacy case can advertise registered provider capabilities and hence engage in various interaction patterns with available requester-related brokers.

Inter-Interaction
A provider can participate in any interaction mechanism and may respond to call-for-proposal requests by proposing service offerings that are stored in a queue-structured repository. Upon assigning and delegating a service request to a provider with this degree of privacy, it is the provider's responsibility to store the pertinent service result to be retrieved by the broker and thus delivered to the proper destination, as shown in Figure 7.

Figure 7: Interaction pattern for providers hiding identity.
### 5.2.3. Providers Hiding Capabilities
The brokering services allow providers that do not wish to reveal their own capabilities to participate in fulfilling a service request. After receiving a request, the brokering interaction protocol entails forwarding the request to every registered provider with unknown capabilities. It is noteworthy that, for every advertised request, providers have to determine whether the request is within their capabilities and/or of interest. Clearly, such an interaction implies that considerable time will be spent evaluating every single request. Therefore (under the assumption of an open dynamic environment), providers would be deluged by a variety of service requests, which significantly impacts performance and efficiency. Figure 8 shows the interaction pattern.

Figure 8: Interaction pattern for provider hiding capability.

Intra-Interaction
The broker interacts with other entities in the layer by receiving and sending messages related to service requests and offerings. The broker's tasks include (1) receiving service requests from requester-related brokers; (2) receiving service proposals; (3) delivering back the service result.

Inter-Interaction
After receiving a service request, the broker broadcasts the request to every registered provider with unknown capabilities; Figure 8 shows the interaction pattern. Once a provider selects a particular service request, it forwards a service proposal to the broker, which controls the remaining transaction according to the appropriate negotiation mechanisms, similar to what has been described in the former patterns.
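On the provider side, this pattern implies a filter over every broadcast request. A minimal sketch, assuming hypothetical capability and interest predicates, might look like this:

```java
// Hypothetical provider-side sketch for the hidden-capability pattern:
// every broadcast request must be screened against local capabilities
// and interest before a proposal is returned to the broker.
import java.util.Optional;
import java.util.function.Predicate;

class HiddenCapabilityProvider {
    private final Predicate<String> withinCapabilities; // assumed capability test
    private final Predicate<String> ofInterest;         // assumed interest test

    HiddenCapabilityProvider(Predicate<String> withinCapabilities,
                             Predicate<String> ofInterest) {
        this.withinCapabilities = withinCapabilities;
        this.ofInterest = ofInterest;
    }

    /** Evaluate one broadcast request; return a proposal only if it qualifies. */
    Optional<String> evaluate(String request) {
        if (withinCapabilities.test(request) && ofInterest.test(request)) {
            return Optional.of("PROPOSE: " + request);
        }
        return Optional.empty(); // silently ignore; capabilities stay hidden
    }
}
```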
### 5.2.4. Providers Hiding Identities and Capabilities
Providers have the ability to browse a special request repository and consequently determine the relevant requests that might be of interest and within their capabilities. As shown in Figure 9, the broker-provider side agent responds with the service result (a result location within the layer has to be identified to the provider upon registration within the brokering layer).

Figure 9: Interaction pattern for provider hiding privacy attributes.

Intra-Interaction
The broker intra-interaction comprises the following: (1) receiving service requests from requester-related brokers; (2) storing service requests; (3) accessing and evaluating service proposals; (4) retrieving and delivering back the service result.

Inter-Interaction
In this protocol, the brokering functionality is mainly seen as a directory service, in which the broker maintains a repository of service requests along with any required preferences. Providers can browse this repository to determine applicable requests that might be fulfilled. As shown in Figure 9, providers with this degree of privacy have to take into consideration linking the result of the service to the request.
## 6. Design and Implementation
### 6.1. Modelling Healthcare-Distributed Systems
It is clear that the development of coordination solutions in open and distributed healthcare environments requires a new design paradigm and improved integration architectures and services. A cooperative distributed systems (CDS) approach is an ideal and appropriate design paradigm, which allows the various healthcare entities to exercise some degree of authority in sharing their information and capabilities.

The architecture must describe the organization and the interconnection among the software entities. In this architecture, the environment can be envisioned as a cooperative distributed system (CDS) comprised of a collection of economically motivated software agents that interact competitively or cooperatively, find and process information, and disseminate it to humans and other agents. It also enables common services that facilitate coordination and cooperation activities amongst various domain entities and supports ad hoc and automated configurations.

In our proposed model, a CDS is conceptualized as a dynamic community of agent and nonagent entities that contribute different services. Based on this view, an agent might play different roles and be able to coordinate cooperatively or competitively with other agents, including humans. Therefore, healthcare CDS entities are mapped as follows.

(i) Service requester: a domain-specific entity that can interact with the environment and request services.
(ii) Service provider: a domain entity that provides application-specific services.
(iii) Brokering entity: an agent that provides common coordination services and facilities for the generic cooperative distributed systems environment.
### 6.2. The Coordinated Intelligent Rational Agent (CIR-Agent) Model
The representative agents of domain and brokering entities within the context of healthcare-based CDS are built on the foundation of the CIR-agent architecture, with a focus on utilizing the model to capture each participant's individual behavior toward achieving a desirable goal while maintaining a required privacy degree.

The CIR-agent is an individual collection of primitive components that provide a focused and cohesive set of capabilities. The basic components include problem-solving, interaction, and communication components, as shown in Figure 10(b). A particular arrangement (or interconnection) of components is required to constitute an agent. This arrangement reflects the pattern of the agent's mental state as related to its reasoning about achieving a goal. However, no specific assumptions need to be made on the detailed design of the agent components. Therefore, the internal structure of the components can be designed and implemented using object-oriented or other technology, provided that the developer conceptualizes the specified architecture of the agent as described in Figure 10.

Figure 10: The CIR-agent architecture. (a) Detailed architecture of the CIR-agent. (b) Logical architecture of the CIR-agent.

Basically, each agent consists of knowledge and capability components, each of which is tailored according to the agent's specific role. The agent
knowledge contains information about the environment and the expected world. The knowledge includes the agent's self-model, the other agents' models, goals that need to be satisfied, possible solutions generated to satisfy each goal, and the local history of the world, which consists of all possible local views for an agent at any given time. The agent knowledge also includes the agent's desires, commitments, and intentions toward achieving each goal. The capability package includes the reasoning component; the domain-actions component, which contains the possible set of domain actions that change the state of the world when executed; and the communication component, through which the agent sends and receives messages to and from other agents and the outside world.

The problem-solver component represents the particular role of
the agent and provides the agent with the capability of reasoning about its
knowledge to generate appropriate solutions directed to satisfy its goal. During
the interaction processes, the agents engage with each other while resolving
problems that are related to different types of interdependencies. The
coordination mechanisms are meant to reduce and resolve the problems associated
with interdependencies. Interdependencies are goal-relevant interrelationships
between actions performed by various agents.

As argued in [30], the
agent interaction module identifies the type of interdependencies that may
exist in a particular domain. Consequently, agents select an appropriate
interaction device that is suitable to resolve a particular interdependency. (An interaction device is an agent component by which the agent interacts with the other elements of the environment through a communication device; a device is a software component designed to serve a special purpose or perform a special function.) These devices are categorized as follows.

(i) Contract based: includes the assignment device.
(ii) Negotiation based: includes the resource-scheduling, conflict-resolution, synchronization, and redundancy-avoidance devices.

Within the context of brokering, the interdependency problem is
classified as a capability interdependency, and the interaction device is the “assignment” device. The basic characteristics of the assignment device are its problem specifications, evaluation parameters, and subprocesses. The problem specifications might include, for example, the request, the desired satisfying time, and the expiration time. A collection of basic components comprises the structure of the agent model and represents its capabilities. The agents' architectures are based on the CIR-agent model, as shown in Figure 11. A brokering session mainly recognizes two types of agents, namely, the domain agent (requester or provider) and the brokering agent (ReqBroker or ProvBroker). The architecture of each agent type is described in detail below.

Figure 11: The overall system model.
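A direct transcription of the assignment device's problem specification into code might look as follows; the field names mirror the text, while the expiry helper is an assumed convenience, not part of the paper's model:

```java
// Illustrative encoding of the assignment device's problem specification:
// the request itself, the desired satisfying time, and the expiration time.
import java.time.Instant;

record AssignmentProblemSpec(String request,
                             Instant desiredSatisfyingTime,
                             Instant expirationTime) {

    /** Assumed helper: a specification is void once its expiration time passes. */
    boolean isExpired(Instant now) {
        return now.isAfter(expirationTime);
    }
}
```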
#### 6.2.1. The Domain Agent: Service Providers and Requesters
Service providers and requesters are modeled as domain agents, as shown in Figure 12. The requester agent can participate with various privacy degrees and request services from the brokering layer. A requester delegates the service request(s) to the relevant brokering agent according to the interaction protocol of the selected privacy degree. The domain agent possesses knowledge and capability. The knowledge includes the model of the brokering agents in terms of the supported privacy degrees, the self-model, and the local history. The capability is categorized into three components: reasoning, which includes problem-solving and coordination; communication; and a set of domain actions.

Figure 12: The domain agent architecture.

A domain agent playing the role of a service provider can select
the appropriate privacy degree and thus participate in providing the capability that meets the needs of another domain entity. The problem solver of a domain agent hiding any of the privacy attributes encompasses access to different storage repositories. For example, the problem solver of a requester includes functionalities related to formulating service requests, checking for available service offerings, and accessing various storage repositories to store requests or to retrieve service results. On the other hand, the problem solver of a provider hiding its identity and capability attributes consists of modules related to accessing storage repositories to check for stored service requests that might be fulfilled, and hence participating in storing service proposals and service results.

The coordination component of a requester comprises the interaction device, which entails soliciting service from the relevant ReqBroker agent. The interaction device of the provider agent manages the coordination activities, which involve proposing services in response to specific CFP messages and engaging in bidding processes.
#### 6.2.2. The Brokering Agents: ReqBrokers and ProvBrokers
A brokering agent is composed of two components, namely, knowledge and capability. The knowledge component contains the information in the agent's memory about the environment and the expected world. As shown in Figure 13, this includes the agent's self-model, models of the domain agents in terms of their roles (requester/provider) and/or capabilities, and the local history of the world. The knowledge includes all possible local views for an agent at any given time (such as knowledge of the physical repositories, available service requests, service offerings, and service results).

Figure 13: The brokering agent architecture.
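The knowledge component just described could be held in a structure along these lines; the concrete fields are our interpretation of the text, not the prototype's actual classes:

```java
// Illustrative container for the brokering agent's knowledge component:
// self-model, models of domain agents, and the local history of the world.
import java.util.List;
import java.util.Map;

record DomainAgentModel(String agentId,
                        String role /* requester or provider */,
                        List<String> capabilities) {}

class BrokeringAgentKnowledge {
    String selfModel;                           // supported privacy degree, etc.
    Map<String, DomainAgentModel> domainAgents; // models of requesters/providers
    List<String> localHistory;                  // local views over time
    Map<String, String> repositories;           // known physical repositories
    List<String> pendingRequests;               // available service requests
    List<String> serviceOfferings;              // advertised offerings
    Map<String, String> serviceResults;         // results awaiting retrieval
}
```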
#### 6.2.3. Implementation Example: Agent-Oriented Privacy Brokering for Healthcare CDS
In this section, we show an example of our proposed model applied to healthcare environments to support information-gathering capabilities. We describe the implementation of one pattern associated with an information requester hiding identities and goals and with three information providers: one reveals its privacy attributes, the second hides its identity, and the third hides both of its privacy attributes (identity and capabilities). The broker agent (called ReqBroker henceforth) protects the privacy of requesters, understands their preferences, routes requests, and replies appropriately. All the inter-interactions utilize the FIPA Contract Net Protocol [13] as a negotiation mechanism. Consider three online information providers, E-VirtualMedInfo Inc., E-VirtualDiagnosis Inc., and FutureDocAssistants Inc. (the names are fictitious), each represented by an agent.

The three providers offer medical-related information, healthcare
guidelines, and clinical diagnosis procedures that can be supplied to various medical students, clinicians, staff, doctors, and physicians in various formats (online delivery, hard copies, or access to online medical repositories). All three companies decided to register and subscribe to the brokering service and make use of the various privacy degrees.

E-VirtualMedInfo registered with the brokering service while revealing its privacy attributes; E-VirtualDiagnosis, whose diagnosis capabilities are jointly derived from retired medical doctors, selected hiding its identity; whereas FutureDocAssistants, a company that can also provide various online samples of medical exams and virtual evaluation assessments for different medical specialties, decided to hide both its identity and its capabilities. Upon registration, a dedicated brokering agent (ProvBroker) is assigned to each company.

Alice, a fourth-year medical student, is conducting research on
the top fatal diseases in Canada, the mortality rates of each, and the possible diagnosis and prevention procedures that would help a trainee student in examining and diagnosing patients with such diseases. Deciding to hide her own identity, Alice anonymously requests this information by posting the request in a special repository dedicated to this privacy degree.

After the request is stored, Alice's assigned broker (ReqBroker) interacts with the various ProvBrokers associated with supporting other privacy degrees of service providers (including those of the three mentioned companies) and consequently, acting as a manager, issues and announces a call for proposals (CFP) to those ProvBrokers (acting as potential contractors), informing them of Alice's request specifications (note that Alice's identity is anonymous to each participant, including her own supporting ReqBroker).

The announcement includes a task abstraction, a brief description
of the required information; a bid specification, a description of the expected format of the information; and an expiration time, a specified time interval during which the required information is valid.

Each ProvBroker working on behalf of a company contacts the registered company agent and sends the request. Note that for the FutureDocAssistants company, the request is dispatched to a special dedicated storage repository, allowing the company's own agent to browse this repository and retrieve the request (if interested).

Every company (through its representing agent) determines the
evaluation parameters (such as information quality, expiration time, and cost) and accordingly submits a bid along with the offer parameters (such as quality, cost, and availability). The E-VirtualMedInfo and E-VirtualDiagnosis agents send their bids directly to their assigned ProvBrokers, while the FutureDocAssistants agent stores its bid in a repository from which it is retrieved by the relevant ProvBroker.

Alice's dedicated ReqBroker receives the bids from every ProvBroker, carries out the evaluation process, and accordingly determines the best bid (or bids) that fulfills Alice's request; it then sends back an acceptance-proposal message to the winning companies and a rejection message for the bids that do not meet the evaluation parameters. After receiving the information that Alice requested, the ReqBroker stores it in a special repository to which she has valid access, so that she can retrieve it without having to reveal her own identity or being exposed to the identities and capabilities of the three companies that participated in fulfilling her request.

A web-based prototype of the proposed system has been implemented using JADE [31], an FIPA-compliant [32] platform, and the Java Web Services Development Pack (JWSDP) to support and provide information-gathering capabilities to different participants in healthcare environments, where controlled accessibility of private information is a desirable feature for various categories of healthcare personnel, patients, and clinicians.

The proposed architecture has been implemented using the coordinated
intelligent rational agent (CIR-agent) model. As shown in Figure 14, three relational databases represent various medical data at three distributed locations, each managed by a dedicated agent that can play the roles of both an information requester and a provider.

Figure 14: Privacy-based brokering prototype for information gathering in healthcare.

A Web interface is available for healthcare participants to select and register their desired privacy degree along with any information capability they might possess (medical data, patient diagnosis and treatment reports, pharmaceutical data reports, etc.). Based on the privacy degrees required by both the requester and the information provider, a dedicated broker agent within the brokering layer handles all the interactions required to fulfill an information request.
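For illustration, the sketch below shows how Alice's ReqBroker might announce the CFP using JADE's ACL messaging. This is a minimal sketch, not the prototype's actual code: the agent local names and the content string are assumed stand-ins for the real ProvBroker identifiers, task abstraction, and bid specification, and it presumes the JADE libraries are on the classpath.

```java
// Illustrative JADE sketch (assumed agent names and content): the ReqBroker
// announces a FIPA Contract Net call-for-proposals to the three ProvBrokers.
import jade.core.AID;
import jade.core.Agent;
import jade.domain.FIPANames;
import jade.lang.acl.ACLMessage;

import java.util.Date;

public class ReqBrokerAgent extends Agent {

    @Override
    protected void setup() {
        ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
        cfp.setProtocol(FIPANames.InteractionProtocol.FIPA_CONTRACT_NET);

        // ProvBrokers acting as potential contractors (hypothetical local names).
        cfp.addReceiver(new AID("ProvBroker-EVirtualMedInfo", AID.ISLOCALNAME));
        cfp.addReceiver(new AID("ProvBroker-EVirtualDiagnosis", AID.ISLOCALNAME));
        cfp.addReceiver(new AID("ProvBroker-FutureDocAssistants", AID.ISLOCALNAME));

        // Task abstraction and bid specification; Alice's identity is absent.
        cfp.setContent("top-fatal-diseases-report; format=online; quality=high");

        // Expiration time: bids are valid for one hour.
        cfp.setReplyByDate(new Date(System.currentTimeMillis() + 60 * 60 * 1000));

        send(cfp);
    }
}
```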
## 7. Discussion and Conclusion
Current advances in technology, coupled with rapidly evolving healthcare paradigms, allow us to foresee novel applications and services for improving the quality of daily health activities. The increasing demand for, and dependency on, information in healthcare organizations has brought issues of privacy to every aspect of healthcare environments. There is no doubt that medical data such as genome information, medical records, and other critical personal information must be respected and treated with great care. As the threats that organizations face become better understood, the need for additional privacy specifications for open, distributed, and heterogeneous systems grows clear. Tremendous efforts have been devoted to privacy and security issues in distributed systems over the last few decades to find technological means of guaranteeing privacy by employing state-of-the-art encryption and anonymization technology. The proposed architecture provides a feasible solution to privacy protection in open environments and presents a myriad of additional privacy and security opportunities without negatively impacting the utilization of these services.

Architecturally,
the proposed model is viewed as a layer of services, where different roles can
be played by the various entities (requesters, brokers, and providers). While
existing approaches provide traditional information brokering by incorporating
agent-based solutions to make healthcare information more accessible to
individuals, the proposed architecture classifies the brokering role into
several subroles based on the attributes designated to describe the privacy-desired
degree of both the information provider and the information requester. Each
role is modeled as an agent with a specific architecture and interaction
protocol that are appropriate to support a required privacy degree.Within the layer, two sets of brokering entities are available to
service requesters and providers. The first set handles interactions with requesters
according to the desired privacy degree that is appropriate to their
preferences, while the other set supports privacy degrees required by service
providers. A brokering pattern is realized by the different roles played by the
domain entities and their corresponding brokering agent. A complete brokering scenario
is accomplished by performing different levels of interaction, namely, (1) requester-to-broker
interaction, (2) broker-to-broker interaction, and (3) broker-to-provider interaction.
Different combinations within
the layer can take place to support the interbrokering interactions. The
proposed layered architecture provides an appropriate separation of
responsibilities, allowing developers and programmers to focus on modeling
solutions and solving their particular application problems in a manner and
semantics most suitable to the local perspective. Agent technology has been
viewed as one of the key technologies for supporting information brokering in
heterogeneous open environments. The use of agent technology provides a high degree of decentralization of capabilities, which is the key to system scalability and extensibility. Another important aspect of the model is that it treats
privacy as a design issue that has to be taken into consideration in developing
healthcare information brokering systems. In healthcare environments, the
proposed model provides a feasible solution to the problem of information overload and privacy concerns. It enables transparent integration amongst different participants of healthcare CDS and provides querying and coordination solutions that enhance the overall connectivity of distributed, autonomous, and possibly heterogeneous information sources (databases) across different healthcare sectors. It can efficiently govern different types of health-oriented information and keep critical medical data such as genetic, HIV, mental health, and pharmacy records from being improperly distributed, disseminated, or abused. Based on the level and the amount of information that can be released, patients, clinicians, service providers, and medical staff members can securely translate their privacy policies into an applicable privacy case in the proposed model. The proposed approach is innovative in the sense that it treats privacy as a design issue for information brokering systems, and it
supports ad hoc and automated configurations among distributed, possibly
autonomous, and heterogeneous entities with various degrees of privacy
requirements. The multilayer architecture reduces the complexity encountered in direct-interaction architectures (where interactions between agents often require more complex processes encompassing long series of message exchanges and can form a single point of failure) and makes the system less vulnerable to failures. It also provides an appropriate separation of responsibilities, letting developers and programmers focus on solving their particular application problems in the manner and semantics most suitable to the local perspective.
---
*Source: 101382-2009-03-23.xml*

# Agent-Oriented Privacy-Based Information Brokering Architecture for Healthcare Environments

**Authors:** Abdulmutalib Masaud-Wahaishi; Hamada Ghenniwa

**Journal:** International Journal of Telemedicine and Applications

(2009)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2009/101382

---
## Abstract
The healthcare industry is facing major reform at all levels: locally, regionally, nationally, and internationally. Healthcare services and systems have become very complex, comprising a vast number of components (software systems, doctors, patients, etc.) that are characterized by shared, distributed, and heterogeneous information sources across a variety of clinical and other settings. The challenge now facing decision making and management of care is to operate effectively in order to meet the information needs of healthcare personnel. Currently, researchers, developers, and systems engineers are working toward achieving better efficiency and quality of service in various sectors of healthcare, such as hospital management, patient care, and treatment. This paper presents a novel information brokering architecture that supports privacy-based information gathering in healthcare. Architecturally, the brokering is viewed as a layer of services where a brokering service is modeled as an agent with a specific architecture and interaction protocol that are appropriate to serve various requests. Within the context of brokering, we model privacy in terms of an entity's ability to hide or reveal information related to its identities, requests, and/or capabilities. A prototype of the proposed architecture has been implemented to support information-gathering capabilities in healthcare environments using the FIPA-compliant platform JADE.
---
## Body
## 1. Introduction
Healthcare systems are characterized by shared and distributed
decision making and management of care. The distributed nature of the knowledge
among different healthcare locations implies that a request may not be
completely satisfied at a specific location or that one or more healthcare
location may contain information similar to, though not exactly the same as, that required by
the request.Many initiatives and programs have been established to promote
the development of less costly and more effective healthcare networks and
systems at national and international scales. The objectives of these healthcare
networks are to improve
diagnosis through online access to medical specialists, online reservation of analysis and hospital services by practitioners on a wide, global scale, transplant matching, and so forth. A complete electronic medical patient-case file, which might be shared between
specialists and can be interchanged between hospitals and with general practitioners
(GPs), will be crucial in diagnosing diseases correctly, avoiding duplicative, risky, and expensive tests, and developing effective treatment plans. However, medical patient-case files may contain some sensitive
information about critical and vital topics such as abortions, emotional and
psychiatric care, sexual behaviors, sexually transmitted diseases, HIV status, and
genetic predisposition diseases. Privacy and the confidentiality of medical
records have to be especially safeguarded. Without broad trust in medical
privacy, patients may avoid crucial healthcare provision. Healthcare professionals and care providers prefer to have the ability to control the collection, retention, and distribution of information about themselves. On the other hand, healthcare service providers need to effectively manage, and prevent any abuse of, the information or services they provide, in addition to being able to protect their identities. An important feature of the various
healthcare sectors is that they share similar problems and are faced with
challenges that can be characterized as follows.(i) In open-distributed healthcare
environments, it is no longer practical to expect healthcare clinicians, staff,
care providers, and patients to determine and keep track of the information and services relevant to their requests and demands. For example, a patient will
be ubiquitously able to access his/her medical record from anywhere at any time
or may request medical services offered by available healthcare centers in a
particular city without being aware of the distributed sources and irrespective
of their locations. In addition, an application should be able to manage
distributed data in a unified fashion. This involves several tasks, such as
maintaining consistency and data integrity among distributed data sources, and
auditing access.(ii) The distributed nature of
the knowledge among multiple healthcare locations may require collaboration for
information gathering. For example, each unit in a hospital keeps its own
information about patients’ records.(iii) The solution of a specific medical problem includes complex activities and requires the collaborative effort of different individuals who possess distinct roles and skills. For example, the
provision of care to hospitalized patients involves various procedures and
requires the coordinated interaction amongst various staff and medical members.
It is essential that all the involved medical staff and professionals must
coordinate their activities in a manner that will guarantee the best
appropriate treatment that can be offered to the patient.(iv) A recent survey shows that 67% of American respondents are concerned about the privacy of
their personal medical records, 52% fear that their health insurance
information might be used by employers to limit job opportunities, while only
30% are willing to share their personal health information with health
professionals not directly involved in their case. As few as 27% are willing to
share their medical records with drug companies [1]. To explore such issues, distributed healthcare systems need to have access to a service that can enable collaboration between different
healthcare service requesters and providers. Brokering facilitates achieving
better coordination among various healthcare service requesters and providers,
and permits healthcare personnel to get access to different services managed by
various providers without having to be aware of the location, identities,
access mechanisms, or the contents of these services. Proactive health systems have the potential to improve healthcare access and management and to significantly lower the associated costs through efficiently controlled information flow between physicians, patients, and medical personnel, yet they also threaten to facilitate data sharing beyond any privacy boundaries. The high
degree of collaborative work needed in healthcare environments implies that
developers and researchers should think of other venues that can manage and
automate this collaboration efficiently. However, privacy concerns over
inappropriate use of the information make it hard to successfully exploit and achieve
the gains from sharing such information. This dilemma restricts the willingness
of individuals and personnel to disseminate or publicize information that might
lead to adverse outcomes. This paper presents an agent privacy-based
information brokering architecture that supports ad hoc system configurations
emphasizing the strategies for achieving privacy in healthcare environments. Within
the context of brokering, we view privacy as “the ability of entities to
decide upon revealing or hiding information related to their identities, requests
and capabilities in open distributed environments.”
## 2. Related Work
Privacy concerns are key barriers to the growth of health-based systems.
Legislation to protect personal medical information was proposed and put into effect to help build mutual confidence between the various participants in the healthcare domain. Privacy-based brokering protocols have been proposed in many application domains, such as E-auctions [2], data
Different techniques were used to enable collaboration among heterogeneous cooperative
agents in distributed systems including brokering via middle agents. These
middle agents differ from the role they play within the agent community [4–6]. The work
in [7] has proposed an agent-based mediation
approach, in which privacy has been treated as a base for classifying the
various mediation architectures only for the initial state of the system. In
another approach, agents' capabilities and preferences are assumed to be common
knowledge, which might violate the privacy requirements of the involved participants [8]. Other
approaches such as in [9–11] have proposed frameworks to facilitate coordination
between web services by providing semantic-based discovery and mediation
services that utilize semantic description languages such as OWL-S [12] and RDF [13]. Another
recent approach distinguishes a resource brokering architecture that manages
and schedules different tasks on various distributed resources on the large-scale
grid [14]. However,
none of the above-mentioned approaches has treated privacy as an architectural
element that facilitates the integration of various distributed systems of an
enterprise. Several approaches have been proposed for the integration of distributed
information sources in healthcare [15]. In one
approach [16], the focus was on providing management
assistance to different teams across several hospitals by coordinating their
access to distributed information. The brokering architecture is centralized
around a mediator agent, which allocates the appropriate medical team to an
available operating theatre in which the transplant operation may be performed.
Other approaches attempt to provide agent-based medical appointment scheduling [17, 18]; in these approaches the architecture provides matchmaking mechanisms for the selection
of appropriate recipient candidates whenever organs become available through a
matchmaking agent that accesses a domain-specific ontology.Other approaches proposed the use of privacy policies along with
physical access means (such as smartcards), in which the access of private
information is granted through the presence of another trusted authority that
mediate between information requesters and information providers [19, 20]. A
European IST project [21], TelemediaCare, Lincoln, UK, developed an agent-based framework to support patient-focused distant care and assistance; the architecture comprises two different types of agents, namely, stationary (“static”) and mobile agents. Web service-based tools were developed to enable
patients to remotely schedule appointments, doctor visits, and to access
medical data [22]. Different approaches have been suggested to protect location privacy in open distributed systems [23]. Location
privacy is a particular type of information privacy that can be defined as “the
ability to prevent other parties from learning one’s current or past location”.
These approaches range from anonymity and pseudonymity to cryptographic techniques. Some
approaches focus on using anonymity by unlinking user personal information from
their identity. One available tool is called anonymizer [24]. The
service protects the Internet protocol (IP) address or the identity of the user
who views web pages
or submits information (including personal preferences) to a remote site. The
solution uses anonymous proxies (gateways to the Internet) to route user’s
Internet traffic through the tool. However, this technique requires a trusted
third party because the anonymizer servers (or the user’s Internet service
provider, ISP) can certainly identify the user. Other tools try not to rely on
a trusted third party to achieve complete anonymity of the user’s identity on
the Internet, such as Crowds [25], Onion routing [26], and MIX
networks [27]. Various programs and initiatives have proposed a set of guidelines for
secure collection, transmission, and storage of patients’ data. Some of these
programs include the Initiative for Privacy Standardization in Europe (IPSE)
and the Health Insurance Portability and Accountability Act (HIPAA) [28, 29]. Yet,
these guidelines need the adoption of new technology for healthcare requester/provider
interaction.
## 3. Brokering Requirements for Distributed Healthcare Systems
Brokering enables collaboration between different service requesters
and providers, and allows the dynamic interpretation of requests for the
determination of relevant service providers. For service providers, the
brokering services permit dynamic creation of services’ repositories after
suitable assembly of service advertisements available from the various
providers, or other additional activities. The major functional requirements of
a brokering service include the following.(i)Provision of
registration services: the registration and naming service allows building
up a knowledge base of the environment that can be utilized to facilitate
locating and identifying the relevant existing service sources and their
contents for serving a specific request. It is crucial to be able to identify
the subset of relevant information at a source, and to combine partially
relevant information across different sources; this requires the process of
identification and retrieval of a subset of required service at any source. It
is clear that in such environment, different sources would provide relevant
information to a different extent. The most obvious choice of the source from
which information will be retrieved is the one which returns most (or all) of
the relevant request. In that case, the user will have to keep track of which
source has the most relevant information.(ii)The acceptance of
providers’ service descriptions: to enable the dynamic discovery of
services, a mechanism is required to describe the capability aspects of
services, such as the functional description of a service, the conditions and
the constraints of the service, and the nature of the results.(iii)Receiving services’
requests: to enable requesters to define and describe the required
parameters that are needed to represent a request.(iv)Interaction: brokers
may engage (on behalf of requesters) in the process of negotiation with various
service providers to serve a request. The interaction requires a set of agreed messages and rules for the actions to take upon reception of the various messages.(v)Communication: the
communication capability allows the entities to exchange messages with the
other elements of the environment, including users, agents, and objects. In
order to perform their tasks, these entities need to depend heavily on
expressive communication with others not only to perform requests, but also to
propagate their capabilities, advertise their own services, and explicitly
delegate tasks or requests for assistance.
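As a compact summary, the five requirements above can be captured in a single service interface. This is a minimal Java sketch under our own naming assumptions; none of these types or methods come from the paper.

```java
// Placeholder types for the sketch.
record PrivacyDegree(boolean identityHidden, boolean goalHidden, boolean capabilityHidden) {}
record ServiceDescription(String serviceName, String constraints) {}
record ServiceRequest(String goal, PrivacyDegree degree) {}

// One possible shape of the brokering service, mirroring requirements (i)-(v).
interface BrokeringService {
    // (i) registration and naming: build up a knowledge base of the environment
    void register(String participantName, PrivacyDegree degree);

    // (ii) acceptance of providers' service descriptions for dynamic discovery
    void advertise(String providerName, ServiceDescription capability);

    // (iii) receiving services' requests with the parameters that describe them
    String submitRequest(ServiceRequest request); // returns an opaque request id

    // (iv) interaction: negotiate with providers on behalf of a requester
    void negotiate(String requestId);

    // (v) communication: deliver messages/results to the other entities
    void deliver(String requestId, Object result);
}
```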
## 4. The Brokering Layer: Privacy-Based Agent-Oriented Architecture
Developing the brokering services involves automating privacy to enhance the overall security of the system; accordingly, entities should be able to define their desired degree of privacy. In fact, the brokering service permits entities to participate in the environment in different roles, and hence to automate their privacy concerns and select a particular privacy degree. The challenge here is how to architect a service that provides the means and mechanisms by which entities can interact with each other and determine whatever privacy degree suits a particular situation. Such interaction is characterized by nondeterminism, in addition to the dynamic nature of the environment in which these entities exist and operate, and requires entities to be able to change configurations so as to participate in different roles. These requirements cannot be accomplished through the traditional approach of manually configuring software. We strongly believe that agent orientation is an appropriate
design paradigm for providing coordination services and mechanisms in such
settings. Indeed, such a paradigm is essential to modeling open, distributed,
and heterogeneous environments in which an agent should be able to operate as a
part of a community of cooperatively
distributed systems environments, including human users. A key aspect of agent orientation
is the ability to design artifacts that are able to perceive, reason, interact,
and act in a coordinated fashion. Here, we view agent orientation as a metaphorical conceptualization tool at a high level of abstraction (the knowledge level) that captures, supports, and implements features that are useful for distributed computation in open environments. These features include cooperation, coordination, and interaction, as well as intelligence, adaptability, and economic and logical rationality. We define an agent as an individual collection of primitive components that provide a focused and cohesive set of capabilities. Architecturally, the brokering service is viewed as a layer of
services and is modeled as an agent with a specific architecture and
interaction protocol that are appropriate to carry the required privacy degree.
The challenge in this context is how to architect the brokering layer with the
appropriate set of services that enable cooperation across the different
degrees of privacy. The interaction protocols represent both the message
communication and the corresponding constraints on the content of messages.
They describe the sequence of messages among agents, and illustrate various
protocols that satisfy a desired privacy requirement. The focus for designing
these patterns is to provide a mechanism to reduce the costs and risks that
might result from violating privacy requirements. The patterns provide mechanisms allowing users (humans/agents) to adjust their privacy attributes and to accomplish their tasks while protecting those attributes. Agent interaction requires a set of agreed messages, rules, and assumptions about communication channels. These rules and constraints can be
abstracted as agents’ patterns that define various protocols for every possible
privacy requirement. Using these protocols, agents would be able to protect the
privacy aspects of the most concern.
From the privacy standpoint, the brokering services are categorized into
different roles that are classified according to the participants’ (providers
and requesters) desired degree of privacy. These degrees of privacy control the
proper interaction patterns and will vary from one scenario to another. The brokering layer takes into consideration the protection of any privacy desires of requesters, providers, or both. Here, we define the degree of privacy in terms of three
attributes: the entity identity, capability, and goals. Therefore, an agent can
categorize its role under several privacy degrees. Formally, an agent can be
represented as a 2-tuple Ag ≡ 〈(RA: Id, G); (PA: Id, Cap)〉, where RA and PA refer to the agent's role as requester and
provider while Id, G,
and Cap,
respectively, refer to the agent identity, goals, and capabilities, which might
have a null value. For example, an agent might participate with a privacy
degree that enables the hiding of its identity as a requester by setting the
value of Id to null.
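This representation maps naturally onto a small data type in which a hidden attribute is simply an absent value. The following is a minimal Java sketch; the type and method names are ours, not the paper's, and an empty Optional plays the role of the null value in the formula.

```java
import java.util.Optional;

// Sketch of Ag = <(RA: Id, G); (PA: Id, Cap)>: an empty Optional stands for
// the null value used to hide the corresponding privacy attribute.
record RequesterRole(Optional<String> id, Optional<String> goal) {}
record ProviderRole(Optional<String> id, Optional<String> capability) {}
record AgentDescriptor(RequesterRole requester, ProviderRole provider) {
    // Example: a requester that hides its identity but reveals its goal,
    // and does not act as a provider at all.
    static AgentDescriptor anonymousRequester(String goal) {
        return new AgentDescriptor(
                new RequesterRole(Optional.empty(), Optional.of(goal)),
                new ProviderRole(Optional.empty(), Optional.empty()));
    }
}
```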
Tables 1 and 2 summarize the different scenarios and roles that might be played by the brokering layer, categorized by the possible privacy concerns of the requester (RA) and provider (PA) agents.

Table 1
The brokering layer interaction categorized by the privacy concern of service requesters.

| Case | G (goal) | Id (identity) | Interaction |
| --- | --- | --- | --- |
| 1 | Revealed | Revealed | (i) Receive service request. (ii) Forward request to broker-provider side. (iii) Deliver result to requester. |
| 2 | Hidden | Revealed | (iv) Retrieve service request posted by a requester. (v) Forward request to broker-provider side. (vi) Store result to be retrieved by requester. |
| 3 | Revealed | Hidden | (vii) Post service request to service repository. (viii) Requester to search repository and request service. (ix) Retrieve a service request that was stored by a requester. (x) Forward request to available and capable providers. (xi) Store result to be retrieved by requester. |
| 4 | Hidden | Hidden | (xii) Requester to store service request. (xiii) Retrieve service request that was stored by a requester. (xiv) Forward request to available and capable providers. (xv) Store result to be retrieved by requester. |

Table 2
The brokering layer interaction categorized by the privacy concern of service providers.

| Case | Id (identity) | Cap (capability) | Interaction |
| --- | --- | --- | --- |
| 1 | Revealed | Revealed | (i) Search for capable provider. (ii) Forward request. (iii) Negotiate and assign a service request. (iv) Get service result and deliver result. |
| 2 | Hidden | Revealed | (v) Post service request to service repository. (vi) Providers to access service repository. (vii) Providers to evaluate service parameters. (viii) Store result. (ix) Brokering layer to retrieve and deliver result. |
| 3 | Revealed | Hidden | (x) Forward service request. (xi) Provider to evaluate request. (xii) Brokering layer to receive and deliver result back. |
| 4 | Hidden | Hidden | (xiii) Providers to access repository. (xiv) Provider to evaluate request. (xv) Provider to store service result. (xvi) Brokering layer to retrieve and deliver result back. |

The layer permits various entities to participate in the
environment with different roles, and hence be capable of automating their
privacy concerns and select a particular degree. Each layer role is represented
as a special broker with a specific architecture and interaction protocol that
is appropriate to serve requests from various participants while maintaining
the required privacy degree. An agent role is an abstract description of an entity
with the specified functionalities. The brokering layer has the ability to
interact, solicit help, and delegate services’ requests from other available
brokering agents who support different privacy degrees. Responsibilities are separated and defined according to the roles played and the required degree of privacy. Within the layer, two sets of
brokering agents are available to service requesters and providers. The first
set handles interactions with requesters according to the desired privacy
degree that is appropriate to their preferences while the other set supports privacy degrees
required by service providers. Figure 1 shows a logical view of the brokering services and
the relevant entities that are involved in any brokering scenario. Every
brokering pattern is accomplished by the composition of the requester role,
brokering agents, and the provider role, in which the interaction scenarios are produced
automatically. A complete brokering session is divided into several stages,
starting from requester-to-brokering layer interaction, brokering layer intra-interaction, and broker
layer-to-provider interaction. Note that in the figure a negation on a specific privacy attribute variable indicates that the corresponding privacy attribute is hidden from the environment.Figure 1
Logical view of the brokering service.
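Read operationally, the first thing the layer must do with a participant is map its declared privacy attributes to one of the four cases in each of Tables 1 and 2. The following is an illustrative Java dispatch under that reading; the class and method names are ours.

```java
// Dispatch from declared privacy attributes to the interaction cases of
// Tables 1 and 2; the selected case determines which broker agent and
// interaction pattern will serve the participant.
final class BrokerDispatch {
    // Table 1: requester side, attributes (G, Id).
    static int requesterCase(boolean goalRevealed, boolean idRevealed) {
        if (goalRevealed && idRevealed)   return 1;
        if (!goalRevealed && idRevealed)  return 2;
        if (goalRevealed)                 return 3; // identity hidden
        return 4;                                   // both hidden
    }

    // Table 2: provider side, attributes (Id, Cap), following the same scheme.
    static int providerCase(boolean idRevealed, boolean capRevealed) {
        if (idRevealed && capRevealed)   return 1;
        if (!idRevealed && capRevealed)  return 2;
        if (idRevealed)                  return 3; // capability hidden
        return 4;                                  // both hidden
    }
}
```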
## 5. The Brokering Protocols: Privacy-Based Interaction Patterns
The brokering protocols describe a cooperative multibrokering system, which provides a solution for interaction among participants in a
dynamic and heterogeneous environment of service providers and requesters. Each
brokering entity performs basic brokering functionality, such as service
discovery, dynamic service composition, and knowledge sharing with the
community according to a required privacy degree. A brokering entity within the
layer is called a broker hereafter.Brokers within the layer might represent a set of services in
which providers can advertise their service capability. The brokering protocols
regulate and govern service knowledge discovery and sharing of acquired
knowledge by defining interaction patterns that are composed of a set of
messages that can be exchanged by other brokers within the layer or other registered entities that might benefit from the functionalities supported by the overall brokering service. The architecture permits the brokering agents to
have various combinations with other brokering entities which support different
privacy degrees. The following section describes the different interaction
patterns supported by the brokering layer for entities that might play either a
requester or a provider role.
### 5.1. The Requester-Brokering Layer Interaction
#### 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information about the number of patients who have Hepatitis B in a specific city, wanted by a doctor, can be obtained by the broker agent without revealing either the doctor's or the patients' identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure 2, the broker might extend the pattern to include interaction with the various brokers associated with supporting other privacy degrees of service providers; consequently, the broker solicits help and forwards the request to all available provider-related brokers within the layer, incorporating various interaction compositions. Note that for every potential composition, each provider-related broker receives only a notification of a service request and accordingly carries on its own interaction pattern to satisfy that request without overstating any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
In the typical interaction pattern for this particular privacy degree, the layer engages in the following: (1) accepting and interpreting service requests from pertinent requesters; (2) identifying and contacting a set of available providers, forwarding service requests, and controlling the appropriate transactions to fulfill any required service request (these transactions should adhere to an agreed interaction mechanism, e.g., auction, negotiation, etc.); (3) receiving the result of a service request and delivering it back to the relevant requester.
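As a concrete illustration, here is a minimal JADE-style sketch of the broker side of this pattern; the paper's prototype is JADE-based, but the agent names, content handling, and use of the reply-with field are our own assumptions.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Broker for requesters revealing identities and goals: (1) accept and
// interpret the request, (2) forward it to a provider-side broker, (3) the
// result is later delivered back to the known requester.
public class RevealingRequesterBroker extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage request = myAgent.receive();
                if (request == null) { block(); return; }
                ACLMessage forward = new ACLMessage(ACLMessage.REQUEST);
                forward.addReceiver(new AID("provider-side-broker", AID.ISLOCALNAME));
                forward.setContent(request.getContent());
                // remember whom to answer when the result comes back
                forward.setReplyWith(request.getSender().getName());
                myAgent.send(forward);
            }
        });
    }
}
```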
#### 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure 3, requesters will be responsible for checking the availability of the service result, which implies that requesters should be aware of a designated result location. The interaction imposes a significant cost on performance and efficiency. System performance is clearly dependent on a number of parameters, including the number of providers willing to carry out the request and the time needed by each provider to fulfill that request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store service requests in a predefined service repository along with their preferred parameters; (2) as shown in Figure 3, requesters are responsible for checking the availability of the service result and then retrieving it, which implies that requesters must be able to link a service result to their own requests.
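For the repository-mediated pattern this subsection describes, the link between an anonymous request and its result can be carried by an opaque ticket that only the requester holds. The following is a plain-Java sketch under our own naming assumptions.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Repository mediating between anonymous requesters and the broker: the
// requester keeps the ticket, so no identity ever reaches the broker.
final class AnonymousRequestRepository {
    private final Map<String, String> pendingRequests = new ConcurrentHashMap<>();
    private final Map<String, Object> results = new ConcurrentHashMap<>();

    // (1) the requester stores its request and keeps the ticket for itself
    String store(String serviceRequest) {
        String ticket = UUID.randomUUID().toString();
        pendingRequests.put(ticket, serviceRequest);
        return ticket;
    }

    // broker side: take a posted request (ticket -> request) to work on
    Map.Entry<String, String> takeRequest() {
        Iterator<Map.Entry<String, String>> it = pendingRequests.entrySet().iterator();
        if (!it.hasNext()) return null;
        Map.Entry<String, String> entry = it.next();
        it.remove();
        return entry;
    }

    // broker side: store the result under the same ticket once it is served
    void storeResult(String ticket, Object result) { results.put(ticket, result); }

    // (2) the requester polls by ticket; only it can link result to request
    Object poll(String ticket) { return results.remove(ticket); }
}
```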
#### 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinicians might benefit from a variety of service advertisements regarding new medications, tools, medical equipment, and health-related notifications. The brokering service permits these clinicians to check a service repository for further information, or to browse other service offerings that have been previously posted, and accordingly identify an appropriate service of interest.Intra-Interaction
Provider-related brokers representing providers with known
capabilities will have the possibility to advertise existing service offerings
to the broker which in turn promotes forwarding every received advertisement to
the relevant requester. It is to be noted that whenever a requester decides on
a particular service offering, the inter-interaction is not restricted only to
contacting those who had offered such services, but might extend to all
available provider-related brokers supporting other privacy degrees. For
example, the same advertised service offering might be achieved by other
providers in the environment who had the interest of hiding their own
capabilities.Inter-Interaction
The broker permits healthcare requesters to check a service
repository for further information or to browse other service offerings that
have been previously posted and accordingly identify an appropriate service of interest, as shown in Figure 4. Once a requester selects a particular service
advertisement and forwards that request to the broker, then it is the broker
responsibility to determine the most suitable service provider that fulfills
that request. Upon achieving the requester goal, the broker delivers back the
service result to the requester. In an open environment, where the number of service providers continually increases and providers compete to sell their services, requesters would be flooded by a variety of service advertisements and notifications. Requesters have to determine whether a service advertised to them is of interest or not. Clearly, this process implies that significant time is required to assess every single service notification. The broker sends the notifications along with any related
parameters required for providing the service (such as name of the service,
cost, and location).Figure 4
Interaction pattern for requesters hiding goals.
#### 5.1.4. Requesters Hiding Identities and Goals
Requesters would have the
possibility to hide their identities and goals from the entire environment; as
shown in Figure 5, they have the option either to post their want ads directly to the layer's service repository or to check for any services that would be of interest. For example, patients with narcotic-related problems (such as drug or alcohol addiction) can seek services that provide information about rehabilitation centers, specialized psychiatrists, or programs that will help them overcome a particular critical situation, without revealing either their identities or the desired information.Figure 5
Interaction pattern for requesters hiding privacy attributes.Inter-Interaction
Requesters have the option either to post their want ads directly to a service repository or to check for any service offerings that would be of interest. In both cases, requesters are permitted to store their service requests and retrieve service results. The broker identifies and interprets the requests and accordingly determines the applicable provider capable of achieving and fulfilling the requester's goal. Note that, for this degree of privacy, it is the requester's responsibility to check for the availability of the service result and then retrieve it.
### 5.2. The Provider-Brokering Layer Interaction
#### 5.2.1. Providers Revealing Identities and Capabilities
Providers with this degree of privacy will have the
ability to register their presence along with the capability of the service
they offer. Although providers with this privacy degree are required to reveal their privacy attributes to the relevant broker, the protocol prevents any other entity from learning the provider's attributes.Intra-Interaction
The interaction between the broker and other requester-related brokers is accomplished through
sending and receiving messages related to service proposals, service offerings,
and services results.Inter-Interaction
As shown in Figure 6, a service provider registers itself with the brokering service along with a description of its service capabilities, which is stored as an advertisement in a repository maintained by the broker that contains all available service descriptions. Assigning requests to providers with known capabilities and identities can be based on either broadcasting or focusing; however, the interaction is neither restricted to specific service providers nor committed to a fixed number of them. This ability is particularly useful when a brokering agent acts in a dynamic environment in which entities may continually enter and leave the society unpredictably. For every received
service request, the broker matches the most applicable providers that are
appropriate to fulfill that request, and thus maintains a pertinent queue that
contains the capable providers along with their identities.Figure 6
Interaction pattern for providers revealing privacy attributes.
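A sketch of the registry and matching queue this pattern implies, with a deliberately naive keyword match standing in for the broker's interpretation of service descriptions; all names are ours.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Registry for providers revealing identity and capability; for each request
// the broker builds the queue of capable providers it will negotiate with.
final class ProviderRegistry {
    private final Map<String, String> capabilities = new HashMap<>(); // provider id -> capability

    void register(String providerId, String capability) {
        capabilities.put(providerId, capability);
    }

    // Naive keyword match; a real broker would interpret full descriptions.
    Queue<String> capableProviders(String requestedCapability) {
        Queue<String> matches = new ArrayDeque<>();
        for (Map.Entry<String, String> entry : capabilities.entrySet()) {
            if (entry.getValue().contains(requestedCapability)) {
                matches.add(entry.getKey()); // identity is known, so it can be queued
            }
        }
        return matches;
    }
}
```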
#### 5.2.2. Providers Hiding Identities
Healthcare providers can
have the option to hide their identities from the environment and advertise
their service offerings to the relevant brokering agent. Protecting the core identity prevents service abuses that impact service availability and hence improves the ability to consistently deliver reliable access. Since the service capabilities are known to the broker, service requests that are believed to be fulfillable by such providers will be posted to a dedicated repository, which providers can browse to select whichever requests are of interest.Intra-Interaction
The broker interacts with other entities in the layer to engage
in receiving and sending messages related to service requests and offerings.
The broker task includes
(1) receiving service requests; (2) determining whether these requests are within the provider capabilities; (3) storing service
requests to be browsed by authorized registered providers (providers hiding
identities); (4) retrieving and delivering back the service result. A broker supporting this privacy case will have the ability to
advertise registered provider capabilities, and hence engage in various
interaction patterns of available requester-related brokers.Inter-Interaction
A provider can participate in any interaction mechanism and may
respond to call-for-proposal requests by proposing service offerings that are
stored in a queue-structured repository. Upon assignment and delegation of a service request to a provider with this degree of privacy, it is the provider's responsibility to store the pertinent service result to be retrieved by the broker and thus delivered to the proper destination, as shown in Figure 7.Figure 7
Interaction pattern for providers hiding identity.
#### 5.2.3. Providers Hiding Capabilities
The brokering services
allow providers that do not wish to reveal their own capabilities to
participate in fulfilling a service request. After receiving a request, the brokering interaction protocol fans the request out to every registered provider with unknown capabilities. It is noteworthy that, for every advertised request, providers have to determine whether the request is within their capabilities and/or of interest. Clearly, such an interaction implies that considerable time will be spent evaluating every single request. Therefore (under the assumption of an open dynamic environment), providers would be deluged by a variety of service requests, which significantly impacts performance and efficiency. Figure 8 shows the interaction pattern.Figure 8
Interaction pattern for provider hiding capability.Intra-Interaction
The broker interacts with other entities in the layer to engage
in receiving and sending messages related to service requests and offerings.
The broker task includes (1) receiving service requests from requester-related
brokers; (2) receiving service proposals; (3) delivering back service result.Inter-Interaction
After receiving a service request, the broker broadcasts it to every registered provider with unknown capabilities (Figure 8). Once a provider selects a particular service request, it forwards a service proposal to the broker, who controls the remaining transaction according to the appropriate negotiation mechanisms, similar to what has been described in the former patterns.
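Since capabilities are hidden, the broker can only fan the request out and let each provider self-evaluate. The following is a minimal functional sketch of that broadcast, modeling a provider as a function that may or may not return a proposal; the names are ours.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Fan-out for providers hiding capabilities: broadcast the request to every
// registered provider and collect the first proposal, if any.
final class FanOutBroker {
    static Optional<String> broadcast(String request,
            List<Function<String, Optional<String>>> providers) {
        for (Function<String, Optional<String>> provider : providers) {
            // each provider decides locally whether the request is within
            // its capabilities and/or of interest
            Optional<String> proposal = provider.apply(request);
            if (proposal.isPresent()) {
                return proposal; // negotiation then proceeds as in earlier patterns
            }
        }
        return Optional.empty();
    }
}
```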
#### 5.2.4. Providers Hiding Identities and Capabilities
Providers will have the
ability to browse a special request repository and consequently determine the
relevant requests that might be of interest and within their capabilities. As shown in Figure 9, the broker-provider side agent responds with the service result (a result location within the layer has to be identified to the provider upon registration with the brokering layer).Figure 9
Interaction pattern for provider hiding privacy attributes.Intra-Interaction
The broker intra-interaction comprises the following: (1)
receiving service requests from requester-related brokers; (2) storing service
requests; (3) accessing and evaluating service proposals; (4) retrieving and
delivering back service result.Inter-Interaction
In this protocol, the brokering functionality is mainly seen as a
directory service, in which the broker maintains a repository of service
requests along with any required preferences. Providers will have the ability
to browse this repository to determine applicable relevant requests that might
be fulfilled. As shown in Figure 9, providers with this degree of privacy have to take into consideration linking the result of the service to the request.
## 5.1. The Requester-Brokering Layer Interaction
### 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information about the number of patients who have
Hepatitis B in a specific city and wanted by a doctor can be assessed by the
broker agent without revealing, neither the doctor nor the patients identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure2, the broker might extend the pattern to
include interaction with various brokers associated with supporting other
privacy degrees of service providers, consequently the broker solicit help and
forward request to all available provider-related brokers within the layer
incorporating various interaction compositions. Note that for every potential
composition, the provider-related brokers receive only a notification of a
service request, and accordingly carry on its own interaction pattern to
satisfy that request without exaggerating, overstressing, or overemphasizing
any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
The typical interaction pattern for this particular privacy
degree comprises that the layer engages in performing the following: (1) accepting
and interpreting service requests from pertinent requesters; (2) identifying and
contacting a set of available providers, forwarding service requests, and controlling
appropriate transactions to fulfill any required service request. These
transactions should adhere to agreed appropriate interaction mechanism (e.g., auction,
negotiation, etc.); (3) receives result of a service request and delivers it
back to the relevant requester.
### 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure3, requesters will be responsible of checking the
availability of the service result, which implies that requesters should be
aware of a designated result location. The interaction imposes a significant
effort on the performance and efficiency. System performance is clearly
dependent on number of parameters, including the number of providers willing to
carry out the request and the time needed by each provider to fulfill that
request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store services
requests in a predefined service repository along with preferred parameters.
(2) As shown in Figure3, requesters are responsible of checking the
availability of the service result and hence retrieve it; this implies that
requesters are able to link a service result to their own requests.
### 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinician might benefit from variety of service advertisements
regarding new medications, tools, medical equipments, and health-related
notifications. The
brokering service permits these clinicians to check a service repository for further information
or to browse other service offerings that have been previously posted and
accordingly determine an appropriate and interested service.Intra-Interaction
Provider-related brokers representing providers with known
capabilities will have the possibility to advertise existing service offerings
to the broker which in turn promotes forwarding every received advertisement to
the relevant requester. It is to be noted that whenever a requester decides on
a particular service offering, the inter-interaction is not restricted only to
contacting those who had offered such services, but might extend to all
available provider-related brokers supporting other privacy degrees. For
example, the same advertised service offering might be achieved by other
providers in the environment who had the interest of hiding their own
capabilities.Inter-Interaction
They broker permits healthcare requesters to check a service
repository for further information or to browse other service offerings that
have been previously posted and accordingly determine an appropriate and
interested service as shown in Figure4. Once a requester selects a particular service
advertisement and forwards that request to the broker, then it is the broker
responsibility to determine the most suitable service provider that fulfills
that request. Upon achieving the requester goal, the broker delivers back the
service result to the requester. In an open environment, where many different
services providers are in continual increase and with a competitive manner to
sell their services, requesters would be flooded by a variety of service
advertisements and notifications. Requesters have to determine whether the
service advertised to them is of an interest or not. Clearly, this process
implies that a significant time is required to assess every single-service
notification. The broker sends the notifications along with any related
parameters required for providing the service (such as name of the service,
cost, and location).Figure 4
Interaction pattern for requesters hiding goals.
### 5.1.4. Requesters Hiding Identities and Goals
Requesters would have the
possibility to hide their identities and goals from the entire environment; as
shown in Figure5, they have the option either to post their want ads to the layer
service repository directly, or might check for any services that would be of
an interest. For example, patients with narcotic-related problems (such as drug
or alcohol addiction) can seek services that provide information about
rehabilitation centers, specialized psychiatrists, or programs that will help
overcoming a particular critical situation without revealing either their
identities nor the desired information.Figure 5
Interaction pattern for requesters hiding privacy attributes.Inter-Interaction
Requesters will have the option to either post their want ads to a
service repository directly, or might check for any service offerings that
would be of an interest. In both cases, requesters will be permitted to store
their service requests and retrieve services results. The broker identifies and
interprets the required requests, and accordingly will determine the applicable
provider which is capable of achieving and fulfilling the requester goal. Note
that, for this degree of privacy, it is the requester responsibility to check
for the availability of the service result, and hence retrieve it.
## 5.1.1. Requesters Revealing Identities and Goals
The
broker protects the privacy of healthcare personnel, patients, or staff. It
assists service requesters to achieve their goals without exposing their
identities to the environment. For example, information about the number of patients who have
Hepatitis B in a specific city and wanted by a doctor can be assessed by the
broker agent without revealing, neither the doctor nor the patients identities. However, agents playing the role of requesters and wanting to benefit
from such a service are required to reveal their identities and goals to the
relevant broker within the layer. Note that each privacy degree is described in
terms of two main interactions: an interaction amongst the various brokers within
the brokering layer (intra-interaction) and the interaction between the domain
(i.e., a requester or a provider) with the relevant broker that supports a
particular privacy degree (inter-interaction).Intra-Interaction
As shown in Figure2, the broker might extend the pattern to
include interaction with various brokers associated with supporting other
privacy degrees of service providers, consequently the broker solicit help and
forward request to all available provider-related brokers within the layer
incorporating various interaction compositions. Note that for every potential
composition, the provider-related brokers receive only a notification of a
service request, and accordingly carry on its own interaction pattern to
satisfy that request without exaggerating, overstressing, or overemphasizing
any incurred rights or privileges (e.g., cost).Figure 2
Interaction pattern for requesters revealing privacy attributes.Inter-Interaction
The typical interaction pattern for this particular privacy
degree comprises that the layer engages in performing the following: (1) accepting
and interpreting service requests from pertinent requesters; (2) identifying and
contacting a set of available providers, forwarding service requests, and controlling
appropriate transactions to fulfill any required service request. These
transactions should adhere to agreed appropriate interaction mechanism (e.g., auction,
negotiation, etc.); (3) receives result of a service request and delivers it
back to the relevant requester.
## 5.1.2. Requesters Hiding Identities
Requesters such as
patients with fatal diseases may wish to access services or seek further
assistance without revealing their identities. The brokering service
dynamically identifies relevant service providers, and acts on behalf of those
requesters to fulfill their goal(s). As shown in Figure3, requesters will be responsible of checking the
availability of the service result, which implies that requesters should be
aware of a designated result location. The interaction imposes a significant
effort on the performance and efficiency. System performance is clearly
dependent on number of parameters, including the number of providers willing to
carry out the request and the time needed by each provider to fulfill that
request.Figure 3
Interaction pattern for requesters hiding identity.Intra-Interaction
As described in the previous case, the broker might extend its
pattern to include an interaction composition with various brokers associated
with supporting other privacy degrees for service providers. Upon receiving a
service result, the broker stores the result in a dedicated repository (result
repository) to be retrieved by the relevant requester.Inter-Interaction
Requesters may wish to access services or seek further assistance
without revealing their identities. The interaction pattern for this particular
privacy degree is as follows: (1) requesters are required to store services
requests in a predefined service repository along with preferred parameters.
(2) As shown in Figure3, requesters are responsible of checking the
availability of the service result and hence retrieve it; this implies that
requesters are able to link a service result to their own requests.
## 5.1.3. Requesters Hiding Goals
There might be certain
situations where requesters prefer to hide their goals from the environment;
the layer functionality entails the forwarding of every advertised service out
to every registered requester with unknown preferences or interests. For
example, clinician might benefit from variety of service advertisements
regarding new medications, tools, medical equipments, and health-related
notifications. The
brokering service permits these clinicians to check a service repository for further information
or to browse other service offerings that have been previously posted and
accordingly determine an appropriate and interested service.Intra-Interaction
Provider-related brokers representing providers with known
capabilities will have the possibility to advertise existing service offerings
to the broker which in turn promotes forwarding every received advertisement to
the relevant requester. It is to be noted that whenever a requester decides on
a particular service offering, the inter-interaction is not restricted only to
contacting those who had offered such services, but might extend to all
available provider-related brokers supporting other privacy degrees. For
example, the same advertised service offering might be achieved by other
providers in the environment who had the interest of hiding their own
capabilities.Inter-Interaction
They broker permits healthcare requesters to check a service
repository for further information or to browse other service offerings that
have been previously posted and accordingly determine an appropriate and
interested service as shown in Figure4. Once a requester selects a particular service
advertisement and forwards that request to the broker, then it is the broker
responsibility to determine the most suitable service provider that fulfills
that request. Upon achieving the requester goal, the broker delivers back the
service result to the requester. In an open environment, where many different
services providers are in continual increase and with a competitive manner to
sell their services, requesters would be flooded by a variety of service
advertisements and notifications. Requesters have to determine whether the
service advertised to them is of an interest or not. Clearly, this process
implies that a significant time is required to assess every single-service
notification. The broker sends the notifications along with any related
parameters required for providing the service (such as name of the service,
cost, and location).Figure 4
Interaction pattern for requesters hiding goals.
## 5.1.4. Requesters Hiding Identities and Goals
Requesters also have the possibility to hide both their identities and goals from the entire environment; as shown in Figure 5, they have the option either to post their want ads to the layer's service repository directly or to check for any services that would be of interest. For example, patients with narcotic-related problems (such as drug or alcohol addiction) can seek services that provide information about rehabilitation centers, specialized psychiatrists, or programs that will help them overcome a particular critical situation without revealing either their identities or the desired information.

Figure 5
Interaction pattern for requesters hiding privacy attributes.

Inter-Interaction
Requesters have the option either to post their want ads to a service repository directly or to check for any service offerings that would be of interest. In both cases, requesters are permitted to store their service requests and retrieve service results. The broker identifies and interprets the required requests and accordingly determines the applicable provider that is capable of fulfilling the requester's goal. Note that, for this degree of privacy, it is the requester's responsibility to check for the availability of the service result and hence retrieve it.
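Under this privacy degree, the interaction is pull-based: the requester deposits a want ad under an opaque token and later polls for the result. A minimal sketch, assuming a simple in-memory repository; all names here are hypothetical, not taken from the prototype.

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the pull-style pattern of Section 5.1.4: a requester hiding
// identity and goals posts a want ad under an opaque token and later
// polls the repository for the result.
class AnonymousRequestRepository {
    private final Map<UUID, String> requests = new ConcurrentHashMap<>();
    private final Map<UUID, String> results  = new ConcurrentHashMap<>();

    // Posting returns only an opaque token; no identity is stored.
    UUID post(String wantAd) {
        UUID token = UUID.randomUUID();
        requests.put(token, wantAd);
        return token;
    }

    // Broker side: fulfill a stored request by depositing its result.
    void deposit(UUID token, String result) { results.put(token, result); }

    // It is the requester's responsibility to check for and retrieve
    // the result, as noted in the text.
    Optional<String> retrieve(UUID token) {
        return Optional.ofNullable(results.remove(token));
    }
}
```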
## 5.2. The Provider-Brokering Layer Interaction
### 5.2.1. Providers Revealing Identities and Capabilities
Providers with this degree of privacy have the ability to register their presence along with the capability of the service they offer. Although providers with this privacy degree are required to reveal their privacy attributes to the relevant broker, the protocol prevents any other entity from knowing the provider's attributes.

Intra-Interaction
The interaction between the broker and other requester-related brokers is accomplished by sending and receiving messages related to service proposals, service offerings, and service results.

Inter-Interaction
As shown in Figure 6, a service provider registers itself with the brokering service, along with a description of its service capabilities, which is stored as an advertisement in a repository maintained by the broker that contains all available service descriptions. Assigning requests to providers with known capabilities and identities can be based on either broadcasting or focusing; the interaction is neither restricted to specific service providers nor committed to a fixed number of them. This ability is particularly useful when a brokering agent acts in a dynamic environment in which entities may enter and leave the society unpredictably. For every received service request, the broker matches the most applicable providers for that request and maintains a pertinent queue containing the capable providers along with their identities.

Figure 6
Interaction pattern for providers revealing privacy attributes.
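Since both identity and capability are revealed here, the broker can match requests directly against its registry and keep the queue of capable providers that the text mentions. A hypothetical sketch, with illustrative type names:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of Section 5.2.1: providers reveal identity and capability at
// registration, and for each request the broker builds a queue of
// capable providers.
record Provider(String identity, List<String> capabilities) {}

class ProvBrokerRegistry {
    private final List<Provider> registered = new ArrayList<>();

    void register(Provider p) { registered.add(p); }

    // Matching is open-ended: neither restricted to specific providers
    // nor committed to a fixed number of them.
    Queue<Provider> match(String requestedCapability) {
        Queue<Provider> capable = new ArrayDeque<>();
        for (Provider p : registered)
            if (p.capabilities().contains(requestedCapability)) capable.add(p);
        return capable;
    }
}
```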
### 5.2.2. Providers Hiding Identities
Healthcare providers have the option to hide their identities from the environment while advertising their service offerings to the relevant brokering agent. Protecting the core identity prevents service abuses that impact service availability and hence improves the ability to consistently deliver reliable access. Since the service capabilities are known to the broker, service requests that are believed to be fulfillable by such providers are posted to a dedicated repository that the providers can browse to select whichever requests are of interest.

Intra-Interaction
The broker interacts with other entities in the layer by receiving and sending messages related to service requests and offerings. The broker's task includes (1) receiving service requests; (2) determining whether these requests are within the provider's capabilities; (3) storing service requests to be browsed by authorized registered providers (providers hiding identities); and (4) retrieving and delivering back the service result. A broker supporting this privacy case is able to advertise registered provider capabilities and hence engage in the various interaction patterns of available requester-related brokers.

Inter-Interaction
A provider can participate in any interaction mechanism and may respond to call-for-proposal requests by proposing service offerings that are stored in a queue-structured repository. Upon assignment and delegation of a service request to a provider with this degree of privacy, it is the provider's responsibility to store the pertinent service result to be retrieved by the broker and thus delivered to the proper destination, as shown in Figure 7.

Figure 7
Interaction pattern for providers hiding identity.
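The queue-structured proposal repository and the provider-deposited result can be pictured as follows. This is an illustrative sketch only, simplified to a single result slot; the class name and string payloads are hypothetical.

```java
import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of Section 5.2.2: proposals from identity-hiding providers sit
// in a queue-structured repository; the provider later deposits the
// service result for the broker to retrieve and forward, so the
// provider's identity never leaves the layer.
class ProposalQueue {
    private final Queue<String> proposals = new ConcurrentLinkedQueue<>();
    private volatile String pendingResult;

    void propose(String offer) { proposals.add(offer); }        // provider side
    Optional<String> nextProposal() {                           // broker side
        return Optional.ofNullable(proposals.poll());
    }
    void storeResult(String result) { pendingResult = result; } // provider side
    Optional<String> collectResult() {                          // broker side
        return Optional.ofNullable(pendingResult);
    }
}
```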
### 5.2.3. Providers Hiding Capabilities
The brokering services allow providers that do not wish to reveal their own capabilities to participate in fulfilling a service request. After receiving a request, the brokering interaction protocol entails broadcasting the request to every registered provider with unknown capabilities. It is noteworthy that, for every advertised request, providers have to determine whether the request is within their capabilities and/or of interest. Clearly, such an interaction implies that considerable time will be spent evaluating every single request. Therefore (under the assumption of an open, dynamic environment), providers would be deluged by a variety of service requests, which significantly impacts performance and efficiency. Figure 8 shows the interaction pattern.

Figure 8
Interaction pattern for provider hiding capability.

Intra-Interaction
The broker interacts with other entities in the layer by receiving and sending messages related to service requests and offerings. The broker's task includes (1) receiving service requests from requester-related brokers; (2) receiving service proposals; and (3) delivering back the service result.

Inter-Interaction
After receiving a service request, the broker broadcasts it to every registered provider with unknown capabilities. Once a provider selects a particular service request, it forwards a service proposal to the broker, which controls the remaining transaction according to the appropriate negotiation mechanisms, similar to what has been described in the former patterns.
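Because capabilities are unknown, the broker can do no better than broadcast, and the filtering burden shifts to the providers. A hypothetical sketch of that call-for-proposals loop (the interface and class names are ours):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Sketch of Section 5.2.3: since capabilities are unknown, the broker
// broadcasts each request; every provider must then evaluate the
// request itself, which is the source of the overhead described in
// the text.
interface HiddenCapabilityProvider {
    // Returns a proposal if the request is within the provider's
    // capabilities and of interest, empty otherwise.
    Optional<String> evaluate(String request);
}

class BroadcastBroker {
    private final List<HiddenCapabilityProvider> registered = new ArrayList<>();

    void register(HiddenCapabilityProvider p) { registered.add(p); }

    List<String> callForProposals(String request) {
        List<String> proposals = new ArrayList<>();
        for (HiddenCapabilityProvider p : registered)   // O(#providers) per request
            p.evaluate(request).ifPresent(proposals::add);
        return proposals;
    }
}
```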
### 5.2.4. Providers Hiding Identities and Capabilities
Providers have the ability to browse a special request repository and consequently determine the relevant requests that might be of interest and within their capabilities. As shown in Figure 9, the broker on the provider side responds back with the service result (a result location within the layer has to be identified to the provider upon registration within the brokering layer).

Figure 9
Interaction pattern for provider hiding privacy attributes.

Intra-Interaction
The broker's intra-interaction comprises the following: (1) receiving service requests from requester-related brokers; (2) storing service requests; (3) accessing and evaluating service proposals; and (4) retrieving and delivering back the service result.

Inter-Interaction
In this protocol, the brokering functionality is mainly seen as a directory service, in which the broker maintains a repository of service requests along with any required preferences. Providers have the ability to browse this repository to determine applicable requests that might be fulfilled. As shown in Figure 9, providers with this degree of privacy have to take into consideration linking the result of the service to the request.
## 6. Design and Implementation
### 6.1. Modelling Healthcare-Distributed Systems
The development of coordination solutions in open, distributed healthcare environments clearly requires a new design paradigm and improved integration architectures and services. A cooperative distributed systems (CDS) approach is an appropriate design paradigm that allows the various healthcare entities to exercise some degree of authority in sharing their information and capabilities.

The architecture must describe the organization of and interconnection among the software entities. In this architecture, the environment can be envisioned as a cooperative distributed system (CDS) comprised of a collection of economically motivated software agents that interact competitively or cooperatively, find and process information, and disseminate it to humans and other agents. It also enables common services that facilitate coordination and cooperation activities among the various domain entities and supports ad hoc and automated configurations.

In our proposed model, a CDS is conceptualized as a dynamic community of agent and nonagent entities that contribute different services. On this view, an agent might play different roles and be able to coordinate cooperatively or competitively with other agents, including humans. Healthcare CDS entities are therefore mapped as follows (a minimal sketch of these three roles is given after the list).

(i) Service requester: a domain-specific entity that can interact with the environment and request services.
(ii) Service provider: a domain entity that provides application-specific services.
(iii) Brokering entity: an agent that provides common coordination services and facilities for the generic cooperative distributed systems environment.
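A minimal sketch of the three roles, assuming plain string payloads for requests and results; the interface names are ours, not the prototype's.

```java
// Hypothetical sketch of the three healthcare CDS roles listed above;
// the names mirror the mapping in Section 6.1.
interface ServiceRequester {
    void requestService(String description);   // interacts with the environment
}

interface ServiceProvider {
    String provide(String request);            // application-specific service
}

interface BrokeringEntity {
    // Common coordination services between requesters and providers.
    void coordinate(ServiceRequester requester, ServiceProvider provider);
}
```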
### 6.2. The Coordinated Intelligent Rational Agent (CIR-Agent) Model
The representative agents of domain and brokering entities within the context of a healthcare-based CDS are built on the foundation of the CIR-agent architecture, with a focus on utilizing the model to capture the participants' individual behavior toward achieving a desirable goal while maintaining a required privacy degree.

The CIR-agent is an individual collection of primitive components that provide a focused and cohesive set of capabilities. The basic components include problem-solving, interaction, and communication components, as shown in Figure 10(b). A particular arrangement (or interconnection) of components is required to constitute an agent. This arrangement reflects the pattern of the agent's mental state as related to its reasoning about achieving a goal. However, no specific assumptions need to be made on the detailed design of the agent components; their internal structure can be designed and implemented using object-oriented or other technology, provided that the developer conceptualizes the specified architecture of the agent as described in Figure 10.

Figure 10
The CIR agent architecture. (a) Detailed architecture of the CIR agent. (b) Logical architecture of the CIR agent.

Basically, each agent consists of knowledge and capability components, each of which is tailored according to the agent's specific role. The agent knowledge contains information about the environment and the expected world. The knowledge includes the agent's self-model, models of other agents, goals that need to be satisfied, possible solutions generated to satisfy each goal, and the local history of the world, which consists of all possible local views for an agent at any given time. The agent knowledge also includes the agent's desires, commitments, and intentions toward achieving each goal. The capability package includes the reasoning component; the domain actions component, which contains the possible set of domain actions that, when executed, change the state of the world; and the communication component, through which the agent sends and receives messages to and from other agents and the outside world.

The problem solver component represents the particular role of the agent and provides the agent with the capability of reasoning about its knowledge to generate appropriate solutions directed at satisfying its goal. During the interaction processes, the agents engage with each other while resolving problems that are related to different types of interdependencies. The coordination mechanisms are meant to reduce and resolve the problems associated with interdependencies. Interdependencies are goal-relevant interrelationships between actions performed by various agents.

As argued in [30], the agent interaction module identifies the type of interdependencies that may exist in a particular domain. Consequently, agents select an appropriate interaction device that is suitable to resolve a particular interdependency. (An interaction device is an agent component by which the agent interacts with the other elements of the environment through a communication device; a device is a software component designed to serve a special purpose or perform a special function.) These devices are categorized as follows.

(i) Contract based: includes the assignment device.
(ii) Negotiation based: includes the resource scheduling, conflict resolution, synchronization, and redundancy avoidance devices.

Within the context of brokering, the interdependency problem is classified as a capability interdependency, and the interaction device is the “assignment” (a sketch of its problem specification is given below). The basic characteristics of the assignment device are the problem specifications, the evaluation parameters, and the subprocesses. The problem specifications might include, for example, the request, the desired satisfying time, and the expiration time. A collection of basic components comprises the structure of the agent model and represents its capabilities. The agents' architectures are based on the CIR-agent model, as shown in Figure 11. A brokering session mainly recognizes two types of agents, namely, the domain agent (requester or provider) and the brokering agent (ReqBroker or ProvBroker). The architecture of each agent type is described in detail below.

Figure 11
The overall system model.
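The assignment device's problem specification can be captured as a small value type. The field names below are an illustrative reading of the three example characteristics just listed, not classes from the prototype.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical encoding of the assignment device's problem
// specification: the request, the desired satisfying time, and the
// expiration time that bounds the brokering session.
record AssignmentSpec(String request,
                      Duration desiredSatisfyingTime,
                      Instant expirationTime) {

    // A request past its expiration time should no longer be assigned.
    boolean expired(Instant now) {
        return now.isAfter(expirationTime);
    }
}
```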
#### 6.2.1. The Domain Agent: Service Providers and Requesters
Service providers and requesters are modeled as domain agents, as shown in Figure 12. The requester agent can participate with various privacy degrees and request services from the brokering layer. A requester delegates the service request(s) to the relevant brokering agent according to the interaction protocol of the selected privacy degree. The domain agent possesses knowledge and capability. The knowledge includes the models of the brokering agents in terms of the supported privacy degree, the self-model, and the local history. The capability is categorized into three components: reasoning, which includes problem-solving and coordination; communication; and a set of domain actions.

Figure 12
The domain agent architecture.

A domain agent playing the role of a service provider can select the appropriate privacy degree and thus participate in providing the capability that meets the needs of another domain entity. The problem solver of a domain agent hiding any of the privacy attributes encompasses access to different storage repositories. For example, the problem solver of a requester includes functionalities related to formulating service requests, checking for available service offerings, and accessing various storage repositories to store requests or to retrieve service results. On the other hand, the problem solver of a provider hiding its identity and capability attributes consists of modules related to accessing storage repositories to check for stored service requests that might be fulfilled, and hence participating by storing service proposals and service results.

The coordination component of a requester comprises the interaction device, which entails soliciting service from the relevant ReqBroker agent. The interaction device of the provider agent manages the coordination activities, which involve proposing services in response to specific call-for-proposal (CFP) messages and engaging in bidding processes.
#### 6.2.2. The Brokering Agents: ReqBrokers and ProvBrokers
A brokering agent is composed of two components, namely, knowledge and capability. The knowledge component contains the information in the agent's memory about the environment and the expected world. As shown in Figure 13, this includes the agent's self-model, models of the domain agents in terms of their roles (requester/provider) and/or capabilities, and the local history of the world. The knowledge includes all possible local views for an agent at any given time (such as knowledge of the physical repositories, available service requests, service offerings, and service results).

Figure 13
The brokering agent architecture.
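The knowledge component just described can be sketched as a simple data structure. The field layout below is our illustrative reading of Figure 13, not the prototype's actual classes.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the brokering agent's knowledge component:
// a self-model, models of known domain agents by role and capability,
// and a local history of the world (requests, offerings, and results
// observed so far).
class BrokerKnowledge {
    enum Role { REQUESTER, PROVIDER }
    record DomainAgentModel(Role role, String capabilities) {}

    String selfModel = "ReqBroker";                         // agent self-model
    final Map<String, DomainAgentModel> domainAgents = new HashMap<>();
    final Deque<String> localHistory = new ArrayDeque<>();  // local views over time

    void observe(String event) { localHistory.push(event); }
}
```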
#### 6.2.3. Implementation Example: Agent-Oriented Privacy Brokering for Healthcare CDS
In this section, we show an example of our proposed model applied to healthcare environments to support information-gathering capabilities. We describe the implementation of one pattern associated with an information requester hiding identity and goals and with three information providers: the first reveals its privacy attributes, the second hides its identity, and the third hides both of its privacy attributes (identity and capabilities). The broker agent (called ReqBroker henceforth) protects the privacy of requesters, understands their preferences, routes requests, and replies appropriately. All the inter-interactions utilize the FIPA Contract Net Protocol [13] as a negotiation mechanism. Consider three online information providers, E-VirtualMedInfo Inc., E-VirtualDiagnosis Inc., and FutureDocAssistants Inc. (the names are fictitious), each represented by an agent.

The three providers offer medical-related information, healthcare guidelines, and clinical diagnosis procedures that can be supplied to various medical students, clinicians, staff, doctors, and physicians in various formats (online delivery, hard copies, or access to online medical repositories). All three companies decided to register and subscribe to the brokering service and make use of the various privacy degrees. E-VirtualMedInfo registered with the brokering service while revealing its privacy attributes; E-VirtualDiagnosis, whose diagnosis capabilities are jointly derived from retired medical doctors, selected hiding its identity; whereas FutureDocAssistants, a company that can also provide various online samples of medical exams and virtual evaluation assessments for different medical specialties, decided to hide both its identity and its capabilities. Upon registration, a dedicated brokering agent (ProvBroker) is assigned to each company.

Alice, a fourth-year medical student, is conducting research on the most fatal diseases in Canada, the mortality rate of each, and the possible diagnosis and prevention procedures that would help a trainee student in examining and diagnosing patients with such diseases. Deciding to hide her own identity, Alice anonymously requests this information by posting the request in the special repository dedicated to this privacy degree.

After the request is stored, Alice's assigned broker (ReqBroker) interacts with the various ProvBrokers supporting the other privacy degrees of service providers (including those of the three mentioned companies) and consequently, acting as a manager, issues and announces a call for proposals (CFP) to those ProvBrokers (acting as potential contractors), informing them of Alice's request specifications (note that Alice's identity is anonymous to every participant, including her own supporting ReqBroker).

The announcement includes the task abstraction, a brief description of the required information; the bid specification, a description of the expected format of the information; and the expiration time, a specified time interval during which the required information is valid.

Each ProvBroker, working on behalf of its company, contacts the registered company agent and sends the request. Note that for the FutureDocAssistants company, the request is dispatched to a special dedicated storage repository, allowing the company's agent to browse this repository and retrieve the request (if interested).

Every company (through its representing agent) determines the evaluation parameters (such as information quality, expiration time, and cost) and accordingly submits a bid along with the offer parameters (such as quality, cost, and availability). The E-VirtualMedInfo and E-VirtualDiagnosis agents send their bids directly to their assigned ProvBrokers, while the FutureDocAssistants agent stores its bid in a repository from which it is retrieved by the relevant ProvBroker.

Alice's dedicated ReqBroker receives the bids from every ProvBroker, carries out the evaluation process, and accordingly determines the bid (or bids) that best fulfills Alice's request; it then sends back an acceptance-proposal message to the winning companies and a rejection message for the bids that do not meet the evaluation parameters. After receiving the information that Alice requested, the ReqBroker stores it in a special repository to which she has valid access, so she can retrieve it without having to reveal her own identity or being exposed to the identities and capabilities of the three companies that participated in fulfilling her request.

A web-based prototype of the proposed system has been implemented using JADE [31], a FIPA-compliant [32] agent platform, and the Java Web Services Development Pack (JWSDP) to support and provide information-gathering capabilities to different participants in healthcare environments, where controlled accessibility of private information is a desirable feature for various categories of healthcare personnel, patients, and clinicians. The proposed architecture has been implemented using the coordinated intelligent rational agent (CIR-agent) model. As shown in Figure 14, three relational databases represent medical data at three distributed locations, each managed by a dedicated agent that can play the roles of both an information requester and a provider.

Figure 14
Privacy-based brokering prototype for information gathering in healthcare.

A Web interface is available for healthcare participants to select and register their desired privacy degree along with any information capability they might possess (medical data, patient diagnosis and treatment reports, pharmaceutical data reports, etc.). Based on the privacy degree required by both the requester and the information provider, a dedicated broker agent within the brokering layer handles all the interaction required to fulfill an information request.
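To make the prototype's negotiation step concrete, the sketch below shows how a ReqBroker built on JADE might issue the CFP of the scenario above. It assumes the JADE libraries are on the classpath; the receiver name `ProvBroker1`, the content string, and the 60-second deadline are illustrative assumptions, not details taken from the paper.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.domain.FIPANames;
import jade.lang.acl.ACLMessage;

import java.util.Date;

// Hypothetical sketch of Alice's ReqBroker announcing a Contract Net
// CFP via JADE; the real prototype's message handling is not shown.
public class ReqBrokerAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage cfp = new ACLMessage(ACLMessage.CFP);
        cfp.setProtocol(FIPANames.InteractionProtocol.FIPA_CONTRACT_NET);
        cfp.addReceiver(new AID("ProvBroker1", AID.ISLOCALNAME)); // a ProvBroker
        cfp.setContent("task: top fatal diseases in Canada; format: online report");
        cfp.setReplyByDate(new Date(System.currentTimeMillis() + 60_000)); // expiration
        send(cfp); // Alice's identity is never placed in the message
    }
}
```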
## 7. Discussion and Conclusion
Current advances in technology, coupled with rapidly evolving healthcare paradigms, allow us to foresee novel applications and services for improving the quality of daily health activities. The increasing demand for and dependency on information in healthcare organizations has brought privacy issues into every aspect of healthcare environments. Medical data such as genome information, medical records, and other critical personal information must undoubtedly be respected and treated with great care. As the threats that organizations face become better understood, the need for additional privacy specifications for open, distributed, and heterogeneous systems grows clear. Tremendous efforts have been devoted to privacy and security issues in distributed systems over the last few decades to find technological means of guaranteeing privacy by employing state-of-the-art encryption and anonymization technology. The proposed architecture provides a feasible solution to privacy protection in open environments and presents a myriad of additional privacy and security opportunities without negatively impacting the utilization of these services.

Architecturally, the proposed model is viewed as a layer of services, where different roles can be played by the various entities (requesters, brokers, and providers). While existing approaches provide traditional information brokering by incorporating agent-based solutions to make healthcare information more accessible to individuals, the proposed architecture classifies the brokering role into several subroles based on the attributes designated to describe the desired privacy degree of both the information provider and the information requester. Each role is modeled as an agent with a specific architecture and interaction protocol appropriate to support the required privacy degree.

Within the layer, two sets of brokering entities are available to service requesters and providers. The first set handles interactions with requesters according to the desired privacy degree appropriate to their preferences, while the other set supports the privacy degrees required by service providers. A brokering pattern is realized by the different roles played by the domain entities and their corresponding brokering agents. A complete brokering scenario is accomplished by performing different levels of interaction, namely, (1) requester-to-broker interaction, (2) broker-to-broker interaction, and (3) broker-to-provider interaction. Different combinations within the layer can take place to support the interbrokering interactions. The proposed layered architecture provides an appropriate separation of responsibilities, allowing developers and programmers to focus on modeling solutions and solving their particular application problems in the manner and semantics most suitable to the local perspective. Agent technology has been viewed as one of the key technologies for supporting information brokering in heterogeneous open environments; it provides a high degree of decentralization of capabilities, which is the key to system scalability and extensibility.

Another important aspect of the model is that it treats privacy as a design issue that has to be taken into consideration when developing healthcare information brokering systems. In healthcare environments, the proposed model provides a feasible solution to the problems of information overload and privacy concerns. It enables transparent integration among the different participants of a healthcare CDS and provides querying ability and coordination solutions that enhance the overall connectivity of distributed, autonomous, and possibly heterogeneous information sources (databases) across different healthcare sectors. It can efficiently keep different types of health-oriented information and critical medical data, such as genetic, HIV, mental health, and pharmacy records, from being distributed, disseminated, or abused. Based on the level and amount of information that can be released, patients, clinicians, service providers, and medical staff members can securely translate their privacy policies to the applicable privacy case in the proposed model.

The proposed approach is innovative in the sense that it treats privacy as a design issue for information brokering systems and supports ad hoc and automated configurations among distributed, possibly autonomous, and heterogeneous entities with various degrees of privacy requirements. The multilayer architecture minimizes the complexity encountered in direct-interaction architectures (where interactions between agents often require more complex processes encompassing series of message exchanges and form a single point of failure) and makes the system less vulnerable to failures.
---
*Source: 101382-2009-03-23.xml* | 2009 |
# Corrigendum to “Intraspecific and Intracolonial Variation in the Profile of Venom Alkaloids and Cuticular Hydrocarbons of the Fire AntSolenopsis saevissima Smith (Hymenoptera: Formicidae)”
**Authors:** Eduardo Gonçalves Paterson Fox; Adriana Pianaro; Daniel Russ Solis; Jacques Hubert Charles Delabie; Bruno Cunha Vairo; Ednildo de Alcântara Machado; Odair Correa Bueno
**Journal:** Psyche
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1013859
---
## Body
---
*Source: 1013859-2016-11-21.xml* | 2016 |
# The Dynamics of a Predator-Prey System with State-Dependent Feedback Control
**Authors:** Hunki Baek
**Journal:** Abstract and Applied Analysis
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101386
---
## Abstract
A Lotka-Volterra-type predator-prey system with state-dependent feedback control is investigated in both theoretical and numerical ways. Using the Poincaré map and the analogue of the Poincaré criterion, the sufficient conditions for the existence and stability of semitrivial periodic solutions and positive periodic solutions are obtained. In addition, we show that there is no positive periodic solution with period greater than or equal to three under some conditions. The qualitative analysis shows that the positive period-one solution bifurcates from the semitrivial solution through a fold bifurcation. Numerical simulations to substantiate our theoretical results are provided. Also, the bifurcation diagrams of solutions are illustrated by using the Poincaré map, and it is shown that chaotic solutions take place via a cascade of period-doubling bifurcations.
---
## Body
## 1. Introduction
In recent decades, various impulsive systems have been studied in population dynamics, such as impulsive birth [1, 2], impulsive vaccination [3, 4], and chemotherapeutic treatment of disease [5, 6]. In particular, impulsively controlled prey-predator population systems have been investigated by a number of researchers [7–15]. Thus, the field of impulsive differential equations has become a growing area of interest in recent years. Many authors in the articles cited above have shown, theoretically and numerically, that prey-predator systems with impulsive control are more efficient and economical than classical ones for controlling the prey (pest) population. However, the majority of these studies only consider impulsive control at fixed time intervals to eradicate the prey (pest) population. Such a control measure of prey (pest) management is called a fixed-time control strategy and is modeled by impulsive differential equations. Although this control measure is better than the classical one, it has shortcomings, since it is applied regardless of the growth status of the prey (pest) population and the cost of management. In recent years, in order to overcome such drawbacks, several researchers have started paying attention to another control measure based on the state feedback control strategy, in which control is applied only when the amount of the monitored prey (pest) population reaches a threshold value [2, 16–19]. Obviously, the latter control measure is more reasonable and suitable for prey (pest) control. In order to investigate the dynamic behaviors of a population model with the state feedback control strategy, an autonomous Lotka-Volterra system, one of the most basic and important models, is considered. Indeed, the principles of Lotka-Volterra models have remained valid until today, and many theoretical ecologists adhere to them (cf. [8, 20–22]). Thus, in this paper, we consider the following Lotka-Volterra-type prey-predator system with impulsive state feedback control:
$$\begin{aligned} x'(t)&=x(t)\bigl(a-bx(t)-cy(t)\bigr),\\ y'(t)&=y(t)\bigl(-D+ex(t)\bigr), \end{aligned}\quad x\neq h,\qquad \begin{aligned} \Delta x(t)&=-px(t),\\ \Delta y(t)&=qy(t)+r, \end{aligned}\quad x=h, \tag{1.1}$$
where all parameters except q and r are positive constants. Here, x(t) and y(t) are functions of time representing the population densities of the prey and the predator, respectively, a is the inherent net birth rate per unit of population per unit time of the prey, b is the self-inhibition coefficient, c is the per capita rate of predation of the predator, D denotes the death rate of the predator, e is the rate of conversion of consumed prey to predator, 0<p<1 represents the fraction of the prey which dies due to harvesting, pesticide application, and so forth, and q>-1 and r≥0 represent the amount of immigration or stock of the predator. We denote by h the economic threshold and set Δx(t)=x(t+)-x(t) and Δy(t)=y(t+)-y(t). When the amount of the prey reaches the threshold h at time th, controlling measures are taken and hence the amounts of the prey and predator immediately become (1-p)h and (1+q)y(th)+r, respectively. The main purpose of this research is to investigate theoretically and numerically the dynamical behaviors of system (1.1). This paper is organized as follows. In the next section, we present a useful lemma and notations and construct a Poincaré map to discuss the dynamics of the system. In Section 3, the sufficient conditions for the existence of a semitrivial periodic solution of system (1.1) with r=0 are established via the Poincaré criterion. In Section 4, we establish conditions for the existence and stability of stable positive period-one solutions of system (1.1). Further, under some conditions, we show that there exists a stable positive periodic solution of period 1 or 2, but there are no positive periodic solutions with period greater than or equal to three. In order to verify our theoretical results by numerical simulations, in Section 5 we give some numerical examples and the bifurcation diagrams of solutions that show the existence of a chaotic solution of system (1.1). Finally, a discussion is given in Section 6.
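The impulse mechanism in (1.1) is easy to explore numerically. Below is a minimal Python sketch (ours, not the author's code) that integrates the system with SciPy and applies the state-dependent jump whenever the prey density x reaches the threshold h; the parameter values are those of Example 5.1 in Section 5, with q=0.42 as in Figure 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Example 5.1 (Section 5), with q = 0.42 as in Figure 1
a, b, c, D, e = 0.4, 0.8, 0.8, 0.4, 0.8
h, p, q, r = 0.1, 0.35, 0.42, 0.0

def rhs(t, z):
    x, y = z
    return [x * (a - b * x - c * y), y * (-D + e * x)]

def hit_threshold(t, z):
    return z[0] - h            # zero when the prey density reaches h
hit_threshold.terminal = True
hit_threshold.direction = 1    # trigger only when x increases through h

def simulate(z0, t_max=200.0, max_impulses=500):
    """Integrate (1.1), applying x -> (1-p)h, y -> (1+q)y + r at each hit of x = h."""
    t0, z = 0.0, list(z0)
    ts, xs, ys = [np.array([t0])], [np.array([z[0]])], [np.array([z[1]])]
    for _ in range(max_impulses):
        sol = solve_ivp(rhs, (t0, t_max), z, events=hit_threshold, max_step=0.05)
        ts.append(sol.t); xs.append(sol.y[0]); ys.append(sol.y[1])
        if sol.status != 1:    # no event fired: reached t_max
            break
        t0 = sol.t[-1]
        z = [(1 - p) * h, (1 + q) * sol.y[1][-1] + r]   # the impulsive jump
    return np.concatenate(ts), np.concatenate(xs), np.concatenate(ys)

t, x, y = simulate((0.05, 0.1))
print(f"simulated to t = {t[-1]:.1f}; final (x, y) = ({x[-1]:.4f}, {y[-1]:.4f})")
```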
## 2. Preliminaries
Many investigators have studied the dynamic behaviors of system (1.1) without the state feedback control (cf. [23, 24]). The system has a saddle (0,0), one locally stable focus (D/e,(ae-bD)/ce), and a saddle (a/b,0) if the condition D/e<a/b holds. Since the carrying capacity of the prey population x(t) is a/b, it is meaningful for the economic threshold h to be less than a/b. Thus, throughout this paper, we set up the following two assumptions:
$$\text{(A1)}\ \frac{D}{e}<\frac{a}{b},\qquad \text{(A2)}\ h\le \frac{a}{b}. \tag{2.1}$$
From the biological point of view, it is reasonable to consider system (1.1) for controlling the prey population in the biologically meaningful space {(x,y):x≥0,y≥0}. The smoothness properties of f, which denotes the right-hand side of (1.1), guarantee the global existence and uniqueness of solutions of system (1.1) (see [25, 26] for the details). Let R=(-∞,∞) and R+2={(x,y)∣x≥0,y≥0}. Firstly, we denote the distance between the point p and the set S by d(p,S)=inf_{p0∈S}|p-p0| and define, for any solution z(t)=(x(t),y(t)) of system (1.1), the positive orbit of z(t) through the point z0∈R+2 as
$$O^+(z_0,t_0)=\bigl\{z\in \mathbb{R}^2_+ \mid z=z(t),\ t\ge t_0,\ z(t_0)=z_0\bigr\}. \tag{2.2}$$
Now, we introduce some definitions (cf. [27]).
Definition 2.1 (orbital stability).
z*(t) is said to be orbitally stable if, given ϵ>0, there exists δ=δ(ϵ)>0 such that, for any other solution z(t) of system (1.1) satisfying |z*(t0)-z(t0)|<δ, we have d(z(t),O+(z0,t0))<ϵ for t>t0.
Definition 2.2 (asymptotic orbital stability).
z*(t) is said to be asymptotically orbitally stable if it is orbitally stable and there exists a constant η>0 such that, for any other solution z(t) of system (1.1) with |z*(t0)-z(t0)|<η, limt→∞d(z(t),O+(z0,t0))=0. In order to discuss the orbital asymptotic stability of a positive periodic solution of system (1.1), we use the following lemma, which follows from Corollary 2 of Theorem 1 given in Simeonov and Bainov [28].
Lemma 2.3 (analogue of the Poincaré criterion).
The T-periodic solution x=φ(t), y=ζ(t) of the system
$$x'(t)=P(x,y),\quad y'(t)=Q(x,y),\quad \text{if } \phi(x,y)\neq 0,$$
$$\Delta x=\alpha(x,y),\quad \Delta y=\beta(x,y),\quad \text{if } \phi(x,y)=0, \tag{2.3}$$
is orbitally asymptotically stable if the multiplier μ2 satisfies the condition |μ2|<1, where
$$\mu_2=\prod_{k=1}^{q}\Delta_k\exp\!\left[\int_0^T\!\left(\frac{\partial P}{\partial x}\bigl(\varphi(t),\zeta(t)\bigr)+\frac{\partial Q}{\partial y}\bigl(\varphi(t),\zeta(t)\bigr)\right)dt\right],$$
$$\Delta_k=\frac{P_+\!\left(\dfrac{\partial\beta}{\partial y}\varpi-\dfrac{\partial\beta}{\partial x}\varrho+\varpi\right)+Q_+\!\left(\dfrac{\partial\alpha}{\partial x}\varrho-\dfrac{\partial\alpha}{\partial y}\varpi+\varrho\right)}{P\varpi+Q\varrho}, \tag{2.4}$$
where ϖ denotes ∂ϕ/∂x, ϱ denotes ∂ϕ/∂y, and P, Q, ∂α/∂x, ∂α/∂y, ∂β/∂x, ∂β/∂y, ∂ϕ/∂x, and ∂ϕ/∂y are calculated at the point (φ(τk),ζ(τk)), P+=P(φ(τk+),ζ(τk+)), and Q+=Q(φ(τk+),ζ(τk+)). Also, ϕ(x,y) is a sufficiently smooth function on a neighborhood of the points (φ(τk),ζ(τk)) such that grad ϕ(x,y)≠0, and τk is the moment of the kth jump, k=1,2,…,q. From now on, we construct two Poincaré maps to discuss the dynamics of system (1.1). For this, we introduce two cross-sections ∑1={(x,y):x=(1-p)h,y≥0} and ∑2={(x,y):x=h,y≥0}. In order to establish the Poincaré map of ∑2 via an approximate formula, suppose that system (1.1) has a positive period-1 solution z(t)=(φ(t),ζ(t)) with period T and the initial condition z0=A+((1-p)h,y0)∈∑1, where y(0)≡y0>0. Then the periodic trajectory intersects the Poincaré section ∑2 at the point A(h,y1) and then jumps to the point A+ due to the impulsive effects Δx(t)=-px(t) and Δy(t)=qy(t)+r. Thus
$$\varphi(0)=(1-p)h,\quad \zeta(0)=y_0,\qquad \varphi(T)=h,\quad \zeta(T)=y_1=\frac{y_0-r}{1+q}. \tag{2.5}$$
Now, we consider another solution z̅(t)=(φ̅(t),ζ̅(t)) with the initial condition z̅0=A0((1-p)h,y0+δy0). Suppose that this trajectory, which starts from A0, first intersects ∑2 at the point A1(h,y̅1) when t=T+δt and then jumps to the point A1+((1-p)h,y̅2) on ∑1. Then we have
$$\bar\varphi(0)=(1-p)h,\quad \bar\zeta(0)=y_0+\delta y_0,\qquad \bar\varphi(T+\delta t)=h,\quad \bar\zeta(T+\delta t)=\bar y_1. \tag{2.6}$$
Set u(t)=φ̅(t)-φ(t) and v(t)=ζ̅(t)-ζ(t); then u0=u(0)=φ̅(0)-φ(0)=0 and v0=v(0)=ζ̅(0)-ζ(0). Let v1=y̅2-y0 and v0*=y̅1-y1. It is well known that, for 0<t<T, the variables u(t) and v(t) are described by the relation
$$\begin{pmatrix}u(t)\\ v(t)\end{pmatrix}=\Phi(t)\begin{pmatrix}u_0\\ v_0\end{pmatrix}+o\bigl(u_0^2+v_0^2\bigr)=\Phi(t)\begin{pmatrix}0\\ v_0\end{pmatrix}+o\bigl(v_0^2\bigr), \tag{2.7}$$
where the fundamental solution matrix Φ(t) satisfies the matrix equation
$$\frac{d\Phi(t)}{dt}=\begin{pmatrix} a-2b\varphi(t)-c\zeta(t) & -c\varphi(t)\\ e\zeta(t) & -D+e\varphi(t)\end{pmatrix}\Phi(t) \tag{2.8}$$
with Φ(0)=I (the identity matrix). Set g1(t)=φ(t)(a-bφ(t)-cζ(t)) and g2(t)=ζ(t)(-D+eφ(t)). We can express the perturbed trajectory in a first-order Taylor expansion:
$$\bar\varphi(T+\delta t)\approx \varphi(T)+u(T)+g_1(T)\,\delta t,\qquad \bar\zeta(T+\delta t)\approx \zeta(T)+v(T)+g_2(T)\,\delta t. \tag{2.9}$$
It follows from φ̅(T+δt)=φ(T)=h that
$$\delta t=-\frac{u(T)}{g_1(T)}\quad\text{and hence}\quad v_0^*=\bar y_1-y_1=v(T)-g_2(T)\,\frac{u(T)}{g_1(T)}. \tag{2.10}$$
Since y̅2=(1+q)y̅1+r and y̅2-y0=(1+q)(y̅1-y1), we obtain v1=(1+q)v0*. So, we can construct a Poincaré map F of ∑1 as follows:
$$v_1=F_q(v_0)=(1+q)\left[v(T)-g_2(T)\,\frac{u(T)}{g_1(T)}\right], \tag{2.11}$$
where u(T) and v(T) are calculated according to (2.7). Now we construct another type of Poincaré map. Suppose that the point Bk(h,yk) is on the section ∑2. Then Bk+((1-p)h,(1+q)yk+r) is on ∑1 due to the impulsive effects, and the trajectory with the initial point Bk+ intersects ∑2 at the point Bk+1(h,yk+1), where yk+1 is determined by yk and the parameters q and r. Thus we can define a Poincaré map F as follows:
$$y_{k+1}=F(q,r,y_k). \tag{2.12}$$
The function F is continuous in q, r, and yk because of the continuous dependence of the solutions on the initial conditions.
Definition 2.4.
A trajectory O+(z0,t0) of system (1.1) is said to be order k-periodic if there exists a positive integer k≥1 such that k is the smallest integer with y0=yk.
Definition 2.5.
A solution z(t)=(x(t),y(t)) of system (1.1) is called a semitrivial solution if one of its components is zero and the other is nonzero. Note that, for each fixed point of the map F in (2.12), there is an associated periodic solution of system (1.1), and vice versa.
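The map F in (2.12) can also be evaluated numerically. Continuing the sketch from the introduction (and reusing its rhs, hit_threshold, and parameter definitions), one application of F integrates from the post-jump point on ∑1 until the trajectory returns to ∑2; iterating F then exposes fixed points, which by the note above correspond to periodic solutions. This is an illustrative sketch, not code from the paper.

```python
from scipy.integrate import solve_ivp  # rhs, hit_threshold, h, p defined in the earlier sketch

def F(y_k, q_, r_):
    """Numerical Poincaré map (2.12): start on Sigma_1 at ((1-p)h, (1+q)y_k + r)
    and integrate system (1.1) until the trajectory returns to Sigma_2 (x = h)."""
    z0 = [(1 - p) * h, (1 + q_) * y_k + r_]
    sol = solve_ivp(rhs, (0.0, 1e4), z0, events=hit_threshold, max_step=0.05)
    if sol.status != 1:
        raise RuntimeError("trajectory did not return to Sigma_2")
    return sol.y[1][-1]

y = 0.2
for _ in range(50):
    y = F(y, 0.42, 0.0)        # q = 0.42 < q0 (per the paper): iterates decay
print(f"F^50(0.2) = {y:.6f}")  # toward y = 0, the semitrivial fixed point
```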
## 3. The Existence and Stability of a Periodic Solution When r=0
In this section, we consider system (1.1) with r=0, namely
$$x'(t)=x(t)\bigl(a-bx(t)-cy(t)\bigr),\quad y'(t)=y(t)\bigl(-D+ex(t)\bigr),\quad x\neq h,$$
$$\Delta x(t)=-px(t),\quad \Delta y(t)=qy(t),\quad x=h. \tag{3.1}$$
First, let y(t)=0 to calculate a semitrivial periodic solution of system (3.1). Then system (3.1) reduces to the following impulsive differential equation:
$$x'(t)=x(t)\bigl(a-bx(t)\bigr),\quad x(t)\neq h,\qquad \Delta x(t)=-px(t),\quad x(t)=h. \tag{3.2}$$
Under the initial value x(0)=(1-p)h≡x0, the solution of the equation x'(t)=x(t)(a-bx(t)) can be obtained as $x(t)=\frac{a\exp(at)}{\beta+b\exp(at)}$, where $\beta=\frac{a-bh(1-p)}{(1-p)h}$. Assume that x(T)=h and x(T+)=x0 in order to get a periodic solution of (3.2). Then we have the period $T=\frac{1}{a}\ln\frac{a-bh(1-p)}{(a-bh)(1-p)}$ of a semitrivial periodic solution of (3.1). Thus system (1.1) with r=0 has a semitrivial periodic solution with period T given by
$$\varphi(t)=\frac{a\exp\bigl(a(t-(k-1)T)\bigr)}{\beta+b\exp\bigl(a(t-(k-1)T)\bigr)},\qquad \zeta(t)=0, \tag{3.3}$$
where (k-1)T<t<kT. Using the Poincaré map F defined in (2.12), we obtain a criterion for the stability of this semitrivial periodic solution (φ(t),ζ(t)).
Theorem 3.1.
The semitrivial periodic solution of system (1.1) with r=0 is locally stable if the condition
$$-1<q<q_0 \tag{3.4}$$
holds, where $q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1$.
Proof.
We already discussed the existence of the semitrivial periodic solution (φ(t),0). It follows from (2.8) that
$$\frac{d\Phi(t)}{dt}=\begin{pmatrix} a-2b\varphi(t) & -c\varphi(t)\\ 0 & -D+e\varphi(t)\end{pmatrix}\Phi(t),\qquad \Phi(0)=I_2. \tag{3.5}$$
Let $\Phi(t)=\begin{pmatrix} w_1(t) & w_2(t)\\ w_3(t) & w_4(t)\end{pmatrix}$. Then we can infer from (3.5) that, for $0<t<T=\frac{1}{a}\ln\frac{a-bh(1-p)}{(a-bh)(1-p)}$,
$$\begin{aligned} w_1'(t)&=(a-2b\varphi(t))w_1(t)-c\varphi(t)w_3(t), & w_1(0)&=1,\\ w_2'(t)&=(a-2b\varphi(t))w_2(t)-c\varphi(t)w_4(t), & w_2(0)&=0,\\ w_3'(t)&=(-D+e\varphi(t))w_3(t), & w_3(0)&=0,\\ w_4'(t)&=(-D+e\varphi(t))w_4(t), & w_4(0)&=1. \end{aligned} \tag{3.6}$$
Since u0=u(0)=0 and g2(t)=0, we obtain that v1=Fq(v0)=(1+q)[v(T)-g2(T)u(T)/g1(T)]=(1+q)w4(T)v0. Thus it is only necessary to calculate w4(t). From the fourth equation of (3.6), we obtain $w_4(t)=\bar w\exp\bigl(\int(-D+e\varphi(t))\,dt\bigr)$. Since $\int\varphi(t)\,dt=\frac{1}{b}\ln\bigl(\beta+b\exp(at)\bigr)$ and $w_4(0)=1$, we obtain $w_4(T)=\left(\frac{\beta+b\exp(aT)}{\beta+b}\right)^{e/b}\exp(-DT)$. Therefore,
$$v_1=(1+q)(1-p)^{D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{e/b-D/a}v_0. \tag{3.7}$$
Note that v0=0 is a fixed point of Fq(v0) and
$$D_{v_0}F_q(0)=(1+q)(1-p)^{D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{e/b-D/a}. \tag{3.8}$$
Under condition (3.4), we get 0<Dv0Fq(0)<1, so system (1.1) with r=0 has a stable semitrivial periodic solution.
Remark 3.2.
From the proof of Theorem 3.1, we note that Dv0Fq(0)>1 if q>q0. This means that the semitrivial periodic solution of system (1.1) with r=0 is unstable if q>q0. Now, we discuss the existence of a positive periodic solution of system (3.1), that is, of system (1.1) with r=0.
Theorem 3.3.
System (1.1) with r=0 has a positive period-one solution if the condition
$$q>q_0 \tag{3.9}$$
holds, where $q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1$.
Proof.
It follows from Theorem 3.1 that the semitrivial periodic solution passing through the points A((1-p)h,0) and B(h,0) is stable if -1<q<q0, where $q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1$. Now, define G(x)=F(q,0,x)-x, where F is the Poincaré map. We will show that there exist two positive numbers ϵ1 and ω0 such that G(ϵ1)>0 and G(ω0)≤0 in the following two steps.
Step 1.
We will show that G(ϵ1)>0 for some ϵ1>0. First, consider the trajectory starting from the point A1=((1-p)h,ϵ) for a sufficiently small number ϵ>0. This trajectory meets the Poincaré section ∑2 at the point B1=(h,ϵ1), then jumps to the point A2=((1-p)h,(1+q)ϵ1), and reaches the point B2=(h,ϵ2). Since q>q0, the semitrivial solution is unstable by Remark 3.2. So we can choose an ϵ̅ such that (1+q)ϵ1>ϵ for q>q0+ϵ̅. Thus the point B2 is above the point B1, so we have ϵ1<ϵ2. From (2.12), we know that
$$\epsilon_1-F(q,0,\epsilon_1)=\epsilon_1-\epsilon_2<0. \tag{3.10}$$
Thus we know that G(ϵ1)>0.
Step 2.
We will show that G(ω0)≤0 for some ω0>0. To do this, suppose that the line bx+cy-a=0 meets ∑1 at A3=((1-p)h,(a-b(1-p)h)/c). The trajectory of system (1.1) with the initial point A3 meets the line ∑2 at B3=(h,ω0), then jumps to the point A3+=((1-p)h,(1+q)ω0), and then reaches the point B4=(h,ω̅0) on the Poincaré section ∑2 again. However, for any q>0, the point B4 is not above the point B3 in view of the vector field of system (1.1). Thus ω̅0≤ω0, so we have only to consider the following two cases.
Case (i): If ω̅0=ω0, that is, G(ω0)=0, then system (1.1) has a positive period-one solution.
Case (ii): If ω̅0<ω0, then
$$\omega_0-F(q,0,\omega_0)=\omega_0-\bar\omega_0>0, \tag{3.11}$$
that is, G(ω0)<0.
Thus, it follows from (3.10) and (3.11) that the Poincaré map F has a fixed point, which corresponds to a positive period-one solution of system (1.1) with r=0. This completes the proof.
Remark 3.4.
Under the condition r=0, we have shown that the semitrivial periodic solution of system (1.1) is stable when -1<q<q0 and that there exists a positive period-one solution of system (1.1) when q>q0. Since Dv0Fq0(0)=1, a fold bifurcation takes place at q=q0. Furthermore, from the proof of Theorem 3.3, we know that system (1.1) with r=0 has a positive period-one solution (θ(t),ψ(t)) passing through the points L+=((1-p)h,(1+q)ψ(0)) and L=(h,ψ(0)) and satisfying the condition (a-b(1-p)h)/c=(1+q1)ψ(0) for some q1>q0. Now we discuss the stability of the positive periodic solution of system (1.1).
Theorem 3.5.
Assume that r=0. Let (θ(t),ψ(t)) be the positive period-one solution of system (1.1) with period τ passing through the points M+=((1-p)h,(1+q)ψ(0)) and M=(h,ψ(0)). Then the positive periodic solution is orbitally asymptotically stable if the condition
$$q_0<q<q_2 \tag{3.12}$$
holds, where g(q2)=-1 and $g(u)=\frac{a-b(1-p)h-c(1+u)\psi(0)}{a-bh-c\psi(0)}\exp\Bigl(\int_0^\tau\bigl(-b\theta(t)\bigr)dt\Bigr)$.
Proof.
In order to discuss the stability of the positive periodic solution (θ(t),ψ(t)) of system (1.1), we will use Lemma 2.3. First, we note that
$$P(x,y)=x(a-bx-cy),\quad Q(x,y)=y(-D+ex),\quad \alpha(x,y)=-px,\quad \beta(x,y)=qy,\quad \phi(x,y)=x-h,$$
$$(\theta(\tau),\psi(\tau))=(h,\psi(0)),\qquad (\theta(\tau^+),\psi(\tau^+))=\bigl((1-p)h,(1+q)\psi(0)\bigr). \tag{3.13}$$
Since
$$\frac{\partial P}{\partial x}=a-2bx-cy,\quad \frac{\partial Q}{\partial y}=-D+ex,\quad \frac{\partial\phi}{\partial x}=1,\quad \frac{\partial\phi}{\partial y}=0,\quad \frac{\partial\alpha}{\partial x}=-p,\quad \frac{\partial\alpha}{\partial y}=0,\quad \frac{\partial\beta}{\partial x}=0,\quad \frac{\partial\beta}{\partial y}=q, \tag{3.14}$$
we obtain that
$$\Delta_1=\frac{(1+q)\,P_+\bigl(\theta(\tau^+),\psi(\tau^+)\bigr)}{P\bigl(\theta(\tau),\psi(\tau)\bigr)}=\frac{(1-p)\bigl(a-b(1-p)h-c(1+q)\psi(0)\bigr)(1+q)}{a-bh-c\psi(0)},$$
$$\int_0^\tau\!\left(\frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}\right)dt=\int_0^\tau\!\bigl(a-2b\theta(t)-c\psi(t)-D+e\theta(t)\bigr)\,dt=\int_0^\tau\!\left(\frac{\dot\theta(t)}{\theta(t)}+\frac{\dot\psi(t)}{\psi(t)}-b\theta(t)\right)dt=\int_0^\tau d\ln\bigl(\theta(t)\psi(t)\bigr)+\int_0^\tau\bigl(-b\theta(t)\bigr)\,dt=\ln\!\left(\frac{1}{(1-p)(1+q)}\right)+\int_0^\tau\bigl(-b\theta(t)\bigr)\,dt. \tag{3.15}$$
Thus we have $\mu_2=\frac{a-b(1-p)h-c(1+q)\psi(0)}{a-bh-c\psi(0)}\exp\bigl(\int_0^\tau(-b\theta(t))\,dt\bigr)\equiv g(q)$. By Remark 3.4, for q=q1 we have (1+q1)ψ(0)=(a-b(1-p)h)/c, and so we get μ2=0 when q=q1, which means that this periodic solution is stable. In addition, for q=q0 we know that μ2=1, due to ψ(0)=0 and $\tau=\frac{1}{a}\ln\frac{a-bh(1-p)}{(a-bh)(1-p)}$. Since the derivative dμ2/dq is negative, we know that 0<μ2<1 when q0<q<q1. Further, we can find q2>q1 such that μ2=g(q2)=-1. Therefore, if condition (3.12) holds, then we obtain -1<μ2<1, which implies by Lemma 2.3 that the positive periodic solution (θ(t),ψ(t)) is orbitally asymptotically stable.
Remark 3.6.
System (1.1) has a stable semitrivial periodic solution and a stable positive period-1 solution if -1<q<q0 and q0<q<q2, respectively. We already know from Remark 3.4 that a fold bifurcation occurs at q=q0. Thus, from these facts, we can suppose that a flip (period-doubling) bifurcation occurs at q=q2. Moreover, system (1.1) might have a chaotic solution via a cascade of period doublings.
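For reference, the threshold q0 in Theorems 3.1 and 3.3 is exactly where the multiplier (3.8) crosses one; spelling out the rearrangement (a routine step, added here for convenience):

$$(1+q_0)(1-p)^{D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{e/b-D/a}=1 \;\Longrightarrow\; q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1.$$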
## 4. The Existence and Stability of a Positive Periodic Solution When r>0
In this section, we consider the existence and stability of positive periodic solutions in the two cases h<D/e and D/e<h. In fact, under the condition h<D/e, the trajectories starting from any initial point (x0,y0) with x0<h intersect the section ∑2 infinitely many times. However, under the condition D/e<h, the trajectories starting from initial points (x0,y0) with x0<h need not intersect the section ∑2.
### 4.1. The Case of h<D/e
Theorem 4.1.
Assume that h≤D/e, q>-1, and r>0. Then system (1.1) has a positive period-one solution. Moreover, if this periodic solution (φ(t),ζ(t)) has period λ and passes through the points M+=((1-p)h,(1+q)ζ(0)+r) and M=(h,ζ(0)), then it is asymptotically orbitally stable provided that
$$q^*<q<q^{**}, \tag{4.1}$$
where γ(q*)=1 and γ(q**)=-1 and γ(q)=((a-b(1-p)h-c((1+q)ζ(0)+r))/(a-bh-cζ(0)))exp(∫0λbζ(t)dt).
Proof.
We will use a method similar to that of Theorem 3.3 to prove the existence of a periodic solution of system (1.1).
Firstly, in order to show F(q,r,r̅1)>r̅1 for some r̅1>0, let U1=((1-p)h,r1) be in the Poincaré section ∑1, where r1 is small enough that 0<r1<r. The trajectory of system (1.1) with the initial point U1 intersects the Poincaré section ∑2 at the point V1=(h,r̅1), then jumps to the point U2=((1-p)h,(1+q)r̅1+r), and then reaches the point V2=(h,r2) on ∑2 again. From the choice of the value r1, we know that (1+q)r̅1+r>r1 and hence the points U2 and V2 are above the points U1 and V1, respectively. Thus we have r̅1<r2. It follows from (2.12) that
$$\bar r_1-F(q,r,\bar r_1)=\bar r_1-r_2<0. \tag{4.2}$$
Secondly, to find a positive number m0 such that m0-F(q,r,m0)≥0, suppose that the line bx+cy-a=0 meets ∑1 at A=((1-p)h,(a-b(1-p)h)/c). The trajectory of system (1.1) with the initial point A meets the line ∑2 at B=(h,m0), then jumps to the point A+=((1-p)h,(1+q)m0+r), and then reaches the point B1=(h,m̅0) on the line ∑2 again. Suppose that there exists a q0>0 such that (1+q0)m0+r=(a-b(1-p)h)/c. Then the point A+ is exactly the point A if q=q0. The point A+ lies above the point A if q>q0, while it lies under A if q<q0. However, for any q>0, the point B1 is not above the point B in view of the vector field of system (1.1). Thus m0≥m̅0 and hence m0-F(q,r,m0)≥0.
Therefore, we obtain a periodic solution by the method of Theorem 3.3. Further, the stability condition for this period-one solution can be obtained by the same method as in the proof of Theorem 3.5. This completes the proof.
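As a numerical illustration of Theorem 4.1 (again reusing F and rhs from the sketches in Sections 1 and 2; the parameter values are those of Example 5.2 in Section 5, and the choice q=1 inside the stable range is our assumption), iterating F with r>0 settles on the fixed point realizing the period-one solution:

```python
# Example 5.2 parameters; here h = 0.3 < D/e = 0.5, the case of Theorem 4.1
a, b, c, D, e = 1.0, 0.6, 0.8, 0.4, 0.8
h, p, q, r = 0.3, 0.2, 1.0, 0.1    # q = 1.0 is an assumed value inside (q*, q**)

y = 0.5
for _ in range(60):
    y = F(y, q, r)                  # repeated returns to the section x = h
print(f"fixed point y* ≈ {y:.6f}: period-one solution through (h, y*)")
```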
### 4.2. The Case of D/e<h
Theorem 4.2.
Assume that D/e<h, q>-1, and r>0. Then there exists r0>0 such that system (1.1) has a stable positive solution of period 1 or 2 if r>r0, where r0 depends on the value h. Moreover, system (1.1) has no periodic solutions of period k (k≥3).
Proof.
First, assume that the orbit which just touches ∑2 at the point B0=(h,y̅1) with y̅1=(a-bh)/c meets ∑1 at the two points B=((1-p)h,y̅2) and B1=((1-p)h,y̅3), where y̅3<(a-b(1-p)h)/c<y̅2. We will prove this theorem in the following five steps.
Step 1.
We will show that if r>y̅2, then any trajectory of system (1.1) intersects ∑2 infinitely many times. Note that every trajectory passing through a point ((1-p)h,y) with y∈(y̅3,y̅2) cannot intersect ∑2 as time goes to infinity and eventually tends to the focus (D/e,(ae-bD)/ce). Therefore, if all trajectories of system (1.1) pass through points ((1-p)h,y) with y∈(y̅3,y̅2) after finitely many impulsive effects on ∑2, they all tend to the focus and there are no positive periodic solutions. From this fact, we know that the condition r>y̅2, in which y̅2 depends on the value h as a function g(h), is a sufficient condition for a trajectory of system (1.1) to intersect ∑2 infinitely many times, in view of the impulsive effects Δx=-px and Δy=qy+r. From now on, let the condition r>y̅2 hold.
Step 2.
Next, we will show that yj+1<ym+1 for ym<yj, where (h,yk+1) denotes the next point at which the trajectory from (h,yk) touches ∑2. Note that for any point (h,y) with 0<y<(a-b(1-p)h)/c, the point ((1-p)h,(1+q)y+r) is above the point B. Thus, for any two points Em(h,ym) and Ej(h,yj), where 0<ym<yj<(a-bh)/c, the points Em+((1-p)h,(1+q)ym+r) and Ej+((1-p)h,(1+q)yj+r) lie above the point B and, further, it follows from the vector field of system (1.1) that 0<yj+1<ym+1<(a-bh)/c, that is,
$$y_{j+1}<y_{m+1}\quad\text{for } y_m<y_j. \tag{4.3}$$
Thus, from the Poincaré map and r>y̅2, we obtain y1=F(q,r,y0), y2=F(q,r,y1), and yn+1=F(q,r,yn) (n=3,4,…) for given y0∈(0,(a-bh)/c). Therefore, we have only to consider the following three cases:
Case (i): y0=y1;
Case (ii): y0≠y1;
Case (iii): yi≠yj (0≤i<j≤k-1, k≥3).
Step 3.
In order to show the existence of a positive solution of period 1 or 2, consider Cases (i) and (ii). First, if Case (i) is satisfied, then it is easy to see that system (1.1) has a positive period-one solution. Now, suppose that Case (ii) is satisfied. Then, without loss of generality, we can assume that y1<y0. It follows from (4.3) that y2>y1. Furthermore, if y2=y0, then there exists a positive period-two solution of system (1.1).
Step 4.
Now, we will prove that system (1.1) cannot have periodic solutions of period k (k≥3) if Case (iii) holds. For this, assume that y0=yk, which means that system (1.1) has a positive period-k solution. However, we will show that this is impossible. If y0<y1, then from (4.3) we obtain y2<y1, and then y2<y0<y1 or y0<y2<y1. If y0>y1, then from (4.3) we have y1<y2, and then y1<y2<y0 or y1<y0<y2. So the relation of y0, y1, and y2 is one of the following:
$$\text{(a) } y_2<y_0<y_1,\qquad \text{(b) } y_0<y_2<y_1,\qquad \text{(c) } y_1<y_2<y_0,\qquad \text{(d) } y_1<y_0<y_2. \tag{4.4}$$
(a) If y2<y0<y1, then from (4.3) we have y2<y1<y3, and it is also true that y2<y0<y1<y3. We again obtain y4<y2<y1<y3 and then y4<y2<y0<y1<y3. By means of induction, we have
$$0<\cdots<y_{2k}<\cdots<y_4<y_2<y_0<y_1<y_3<y_5<\cdots<y_{2k+1}<\cdots<1. \tag{4.5}$$
Similarly to (a), for Cases (b), (c), and (d), we obtain
$$\text{(b) } 0<y_0<y_2<y_4<\cdots<y_{2k}<\cdots<y_{2k+1}<\cdots<y_5<y_3<y_1<1,$$
$$\text{(c) } 0<y_1<y_3<y_5<\cdots<y_{2k+1}<\cdots<y_{2k}<\cdots<y_4<y_2<y_0<1,$$
$$\text{(d) } 0<\cdots<y_{2k+1}<\cdots<y_5<y_3<y_1<y_2<y_4<y_6<\cdots<y_{2k}<\cdots<1, \tag{4.6}$$
respectively. If there exists a positive period-k solution (k≥3) of system (1.1), then yi≠yj (0≤i<j≤k-1) and yk=y0, which contradicts (4.5)–(4.6). Thus there is no positive period-k solution (k≥3) if r>y̅2.
Step 5.
From Step 4, we can show that there exists a stable period-1 or period-2 solution in these cases. In fact, it follows from (4.5) that
$$\lim_{k\to\infty}y_{2k}=y_0^*,\qquad \lim_{k\to\infty}y_{2k+1}=y_1^*, \tag{4.7}$$
where 0<y0*<y1*<(a-bh)/c. Therefore, y1*=F(q,r,y0*) and y0*=F(q,r,y1*). Thus system (1.1) has a positive period-2 solution in case (a). Moreover, it is easily proven from (4.5) and (4.7) that this positive period-2 solution is locally stable. Similarly, system (1.1) has a stable positive period-1 solution in cases (b) and (c) and a stable positive period-2 solution in case (d).
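Theorem 4.2 predicts that, for D/e<h and r large enough, the orbit of F collapses onto a cycle of length at most two. A quick numerical probe (same machinery as the earlier sketches; h and r taken from Example 5.3 in Section 5, and q=5 as used later in that example) makes this visible:

```python
# Example 5.3 parameters: D/e = 0.5 < h = 0.52, the case of Theorem 4.2
a, b, c, D, e = 1.0, 0.6, 0.8, 0.4, 0.8
h, p, q, r = 0.52, 0.2, 5.0, 1.2

y = 0.3
tail = []
for k in range(200):
    y = F(y, q, r)
    if k >= 194:
        tail.append(round(y, 5))
print("last iterates of F:", tail)   # expected: at most two repeating values
```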
## 5. Numerical Examples
In this section, we present some numerical examples to discuss the various dynamical aspects of system (1.1) and to verify the validity of the theoretical results obtained in the previous sections.
Example 5.1.
In order to exhibit the dynamical complexity as q varies, let r=0 and fix the other parameters as follows:
$$a=0.4,\quad b=0.8,\quad c=0.8,\quad D=0.4,\quad e=0.8,\quad h=0.1,\quad p=0.35. \tag{5.1}$$
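With these values, the period of the semitrivial solution from Section 3 evaluates to (a quick arithmetic check added here for orientation):

$$T=\frac{1}{a}\ln\frac{a-bh(1-p)}{(a-bh)(1-p)}=\frac{1}{0.4}\ln\frac{0.348}{0.208}\approx 1.29.$$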
In this example, we set the initial value as (0.05,0.1). It follows from Theorem 3.1 that the periodic semitrivial solution is stable if $-1<q<q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1\approx 0.4286$ (see Figures 1 and 2). We display the bifurcation diagram in Figure 2(a). From Remark 3.4, we know that a fold bifurcation takes place at q=q0. Figure 2(a) shows that a positive period-one solution bifurcates from the periodic semitrivial solution at q=q0≈0.4286 and a positive period-two solution bifurcates from the positive period-one solution via a flip bifurcation at q=q2≈6.25, which leads to a period-doubling cascade and then chaos (see Figures 2(b) and 3). It follows from Theorem 4.2 that the system with r>0 cannot have a positive period-3 solution under some conditions. However, if r=0, a period-3 solution can exist (see Figure 4).
Figure 1: (a) The trajectory of system (1.1) with r=0 when q=0.42. (b-c) Time series.
Figure 2: (a) The bifurcation diagram of system (1.1) with r=0. (b) A chaotic solution of system (1.1) with r=0 when q=20.
Figure 3: (a) A period-4 solution when r=0 and q=14. (b) A period-8 solution when r=0 and q=15.5.
Figure 4: (a) A period-3 solution when r=0 and q=26.5. (b) The enlarged part of (a) for 0.063≤x≤1.
Example 5.2.
Under the condition r>0, we know that there is no semitrivial solution of system (1.1). In this case, set the parameters as follows:
$$a=1.0,\quad b=0.6,\quad c=0.8,\quad D=0.4,\quad e=0.8,\quad p=0.2,\quad r=0.1. \tag{5.2}$$
Throughout this example, we take the point (0.1,0.2) as the initial value. Figure 5(a) shows the bifurcation diagram of system (1.1) with q as a bifurcation parameter when h=0.3<D/e. It follows from Theorem 4.1 that there exists a period-one solution for any q>-1, and this solution is stable when q*<q<q**≈4.35, as shown in Figure 5(a). It is easy to see that there are no fold bifurcations. However, at q=q**≈4.35, a flip bifurcation occurs, and the cascade of flip bifurcations leads to chaotic solutions as in the previous example. From Figure 5(a), we see that system (1.1) undergoes complex dynamical behaviors, including period doubling, chaotic behaviors, and periodic windows.
Figure 5: (a) The bifurcation diagram of system (1.1) with h=0.3. (b) The bifurcation diagram of system (1.1) with h=0.52.
Example 5.3.
It follows from Theorem 4.2 that if the value h satisfies the condition D/e<h<a/b, then there exists some r0>0 such that, for all q>0, system (1.1) has a stable positive period-one or period-two solution if r>r0, but does not have period-k (k≥3) solutions. To substantiate these theoretical results by numerical simulation, let h=0.52 and r=1.2, and let the other parameters be the same as in Example 5.2. Then we obtain D/e<h<a/b. Figure 5(b), the bifurcation diagram of system (1.1), numerically displays that there exist no period-k solutions (k≥3), only stable positive period-1 or period-2 solutions. Thus the value r is also an important parameter in the dynamical aspects of system (1.1). For this reason, we investigate the effects of the parameter r on system (1.1). To this end, let q=5 and h=0.52, and let r be a bifurcation parameter. It is easy to see from Figure 6 that the parameter r causes various dynamical behaviors of system (1.1), such as a cascade of reverse period-doubling bifurcations (also called period halving), periodic windows, chaotic regions, stable period-2 solutions, and so forth.
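Figure-6-style diagrams can be reproduced with the same machinery (a sketch assuming matplotlib is available, with the parameters left as set for Example 5.3 and q=5): sweep r, discard a transient, and plot the surviving iterates of F.

```python
import numpy as np
import matplotlib.pyplot as plt

rs, ys = [], []
for r_ in np.linspace(0.05, 2.0, 150):   # sweep the release amount r
    y = 0.3
    for k in range(120):
        y = F(y, 5.0, r_)
        if k >= 100:                     # keep only post-transient iterates
            rs.append(r_); ys.append(y)
plt.plot(rs, ys, ",k")
plt.xlabel("r"); plt.ylabel("y on the section x = h")
plt.show()
```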
From a biological point of view, as mentioned in Section 1, the value r represents the amount of immigration or release of the predator. In particular, from Figure 6, one can see that the number of predators cannot be easily estimated when r is small, due to the chaotic behavior of solutions of the system; on the contrary, if a sufficient amount of the predator is released impulsively, then the number of predators (and, eventually, the number of prey) becomes predictable, owing to the periodic behavior of solutions.
Figure 6: The bifurcation diagram of system (1.1) with h=0.3 and q=5 with respect to r>0.
Remark 5.4.
Now, we will demonstrate the superiority of the state-dependent feedback control in comparison with the fixed-time control via an example. For this, assume that a=1.0, b=0.6, c=0.8, D=0.4, e=0.8, h=0.3, p=0.6, q=4, and r=0.1 in system (1.1) with an initial value (0.05,4.1). Figure 7(b) shows that the prey population cannot be controlled below the threshold value if we take the impulsive control measure at the fixed times t=6k (k=1,2,…). However, it is seen from Figure 7(a) that only after several attempts of control does the solution approach the periodic solution. Thus this example shows that the impulsive state feedback measure is more effective in real biological control.
Figure 7: The trajectories of system (1.1) with h=0.3 (a) under the state feedback control and (b) under the fixed-time control with t=6k (k=1,2,…).
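For comparison with Remark 5.4, a fixed-time variant is obtained by dropping the event detection and applying the same jump at t=6k regardless of the prey level (a sketch reusing rhs from the earlier code; the parameters are those stated in Remark 5.4):

```python
from scipy.integrate import solve_ivp

# Remark 5.4 parameters
a, b, c, D, e = 1.0, 0.6, 0.8, 0.4, 0.8
h, p, q, r = 0.3, 0.6, 4.0, 0.1

def peaks_fixed_time(z0, period=6.0, n_pulses=20):
    """Apply the jump at the fixed times t = 6k and record the peak prey level
    within each window (peaks above h mean the threshold is violated)."""
    z, peaks = list(z0), []
    for _ in range(n_pulses):
        sol = solve_ivp(rhs, (0.0, period), z, max_step=0.05)
        peaks.append(float(sol.y[0].max()))
        z = [(1 - p) * sol.y[0][-1], (1 + q) * sol.y[1][-1] + r]
    return peaks

print("prey peaks under fixed-time control:", peaks_fixed_time((0.05, 4.1))[:5])
```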
## 6. Conclusion
In this paper, a state-dependent impulsive dynamical system concerning a control strategy has been proposed and analyzed. In particular, a state feedback measure for controlling the prey population is taken when the amount of the prey reaches a threshold value. The dynamical behaviors have been investigated, including the existence of periodic solutions with period 1 and 2 and their stability. In addition, we have numerically shown that system (1.1) exhibits various dynamical aspects, including chaotic behavior. Based on the main theorems of this paper, the prey population can be kept below the threshold value by applying one, two, or at most a finite number of impulsive effects. From a biological point of view, this is very helpful and useful for controlling the prey population.
---
*Source: 101386-2012-02-13.xml* | 101386-2012-02-13_101386-2012-02-13.md | 37,900 | The Dynamics of a Predator-Prey System with State-Dependent Feedback Control | Hunki Baek | Abstract and Applied Analysis
(2012) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2012/101386 | 101386-2012-02-13.xml | ---
## Abstract
A Lotka-Volterra-type predator-prey system with state-dependent feedback control is investigated in both theoretical and numerical ways. Using the Poincaré map and the analogue of the Poincaré criterion, the sufficient conditions for the existence and stability of semitrivial periodic solutions and positive periodic solutions are obtained. In addition, we show that there is no positive periodic solution with period greater than and equal to three under some conditions. The qualitative analysis shows that the positive period-one solution bifurcates from the semitrivial solution through a fold bifurcation. Numerical simulations to substantiate our theoretical results are provided. Also, the bifurcation diagrams of solutions are illustrated by using the Poincaré map, and it is shown that the chaotic solutions take place via a cascade of period-doubling bifurcations.
---
## Body
## 1. Introduction
In the last decades, some impulsive systems have been studied in population dynamics such as impulsive birth [1, 2], impulsive vaccination [3, 4], and chemotherapeutic treatment of disease [5, 6]. In particular, the impulsively controlled prey-predator population systems have been investigated by a number of researchers [7–15]. Thus the field of research of impulsive differential equations seems to be a new growing interesting area in recent years. Many authors in the articles cited above have shown theoretically and numerically that prey-predator systems with impulsive control are more efficient and economical than classical ones to control the prey (pest) population. However, the majority of these studies only consider impulsive control at fixed time intervals to eradiate the prey (pest) population. Such control measure of prey (pest) management is called fixed-time control strategy, modeled by impulsive differential equations. Although this control measure is better than classical one, it has shortcomings, regardless of the growth rules of the prey (pest) and the cost of management. In recent years, in order to overcome such drawbacks, several researchers have started paying attention to another control measure based on the state feedback control strategy, which is taken only when the amount of the monitored prey (pest) population reaches a threshold value [2, 16–19]. Obviously, the latter control measure is more reasonable and suitable for prey (pest) control.In order to investigate the dynamic behaviors of a population model with the state feedback control strategy, an autonomous Lotka-Volterra system, which is one of the most basic and important models, is considered. Actually, the principles of Lotka-Volterra models have remained valid until today and many theoretical ecologists adhere to their principles (cf. [8, 20–22]).Thus, in this paper, we consider the following Lotka-Volterra type prey-predator system with impulsive state feedback control:(1.1)x′(t)=x(t)(a-bx(t)-cy(t)),y′(t)=y(t)(-D+ex(t)),x≠h,Δx(t)=-px(t),Δy(t)=qy(t)+r,x=h,
where all parameters except q and r are positive constants. Here, x(t) and y(t) are functions of the time representing population densities of the prey and the predator, respectively, a is the inherent net birth rate per unit of population per unit time of the prey, b is the self-inhibition coefficient, c is the per capita rate of predation of the predator, D denotes the death rate of the predator, e is the rate of conversion of a consumed prey to a predator, 0<p<1 presents the fraction of the prey which die due to the harvesting or pesticide, and so forth, and q>-1 and r≥0 represent the amount of immigration or stock of the predator. We denote by h the economic threshold and Δx(t)=x(t+)-x(t) and Δy(t)=y(t+)-y(t). When the amount of the prey reaches the threshold h at time th, controlling measures are taken and hence the amounts of the prey and predator immediately become (1-p)h and (1+q)y(th)+r, respectively.The main purpose of this research is to investigate theoretically and numerically the dynamical behaviors of system (1.1).This paper is organized as follows. In the next section, we present a useful lemma and notations and construct a Poincaré map to discuss the dynamics of the system. In Section3, the sufficient conditions for the existence of a semi-periodic solution of system (1.1) with r=0 are established via the Poincaré criterion. On the other hand, in Section 4, we find out some conditions for the existence and stability of stable positive period-one solutions of system (1.1). Further, under some conditions, we show that there exists a stable positive periodic solution of period 1 or 2; however, there is no positive periodic solutions with period greater than and equal to three. In order to testify our theoretical results by numerical simulations, in Section 5, we give some numerical examples and the bifurcation diagrams of solutions that show the existence of a chaotic solution of system (1.1). Finally, we have a discussion in Section 6.
## 2. Preliminaries
Many considerable investigators have studied the dynamic behaviors of system (1.1) without the state feedback control. (cf. [23, 24].) It has a saddle (0,0), one locally stable focus (D/e,(ae-bD)/ce) and a saddle (a/b,0) if the condition D/e<a/b holds. Since the carrying capacity of the prey population x(t) is b/a, so it is meaningful that the economical threshold h is less than b/a. Thus, throughout this paper, we set up the following two assumptions:(2.1)(A1)De<ab,(A2)h≤ba.
From the biological point of view, it is reasonable that system (1.1) is considered to control the prey population in the biological meaning space {(x,y):x≥0,y≥0}.The smoothness properties off, which denotes the right hand of (1.1), guarantee the global existence and uniqueness of a solution of system (1.1) (see [25, 26] for the details).LetR=(-∞,∞) and R+2={(x,y)∣x≥0,y≥0}. Firstly, we denote the distance between the point p and the set S by d(p,S)=infp0∈S|p-p0| and define, for any solution z(t)=(x(t),y(t)) of system (1.1), the positive orbit of z(t) through the point z0∈R+2 as(2.2)O+(z0,t0)={z∈R+2∣z=z(t),t≥t0,z(t0)=z0}.
Now, we introduce some definitions (cf. [27]).Definition 2.1 (orbital stability).
z*(t) is said to be orbitally stable if, given ϵ>0,there exists δ=δ(ϵ)>0 such that, for any other solution z(t) of system (1.1) satisfying |z*(t0)-z(t0)|<δ,then d(z(t),O+(z0,t0))<ϵ for t>t0.Definition 2.2 (asymptotic orbital stability).
z*(t) is said to be asymptotically orbitally stable if it is orbitally stable and for any other solution z(t) of system (1.1), there exists a constant η>0 such that, if |z*(t0)-z(t0)|<η,then limt→∞d(z(t),O+(z0,t0))=0.In order to discuss the orbital asymptotical stability of a positive periodic solution of system (1.1), a useful lemma, which follows from Corollary 2 of Theorem 1 given in Simeonov and Bainov [28], is considered as follows.Lemma 2.3 (analogue of the Poincaré criterion).
TheT-periodic solution x=φ(t),y=ζ(t) of system
(2.3)x′(t)=P(x,y),y′(t)=Q(x,y),ifϕ(x,y)≠0,Δx=α(x,y),Δy=β(x,y),ifϕ(x,y)=0,
is orbitally asymptotically stable if the multiplier μ2 satisfies the condition |μ2|<1, where
(2.4)μ2=∏k=1qΔkexp[∫0T∂P∂x(ζ(t),η(t))+∂Q∂y(ζ(t),η(t))dt],Δk=P+((∂β/∂y)ϖ-(∂β/∂x)ϱ+ϖ)+Q+((∂α/∂x)ϱ-(∂α/∂y)ϖ+ϱ)Pϖ+Qϱ,
where ϖ denotes (∂ϕ/∂x) and ϱ denotes (∂ϕ/∂y) and P, Q, ∂α/∂x, ∂α/∂y, ∂β/∂x, ∂β/∂y, ∂ϕ/∂x, and ∂ϕ/∂y are calculated at the point (φ(τk),ζ(τk)), P+=P(φ(τk+),ζ(τk+)), and Q+=Q(φ(τk+),ζ(τk+)). Also ϕ(x,y) is a sufficiently smooth function on a neighborhood of the points (φ(τk),ζ(τk)) such that gradϕ(x,y)≠0 and τk is the moment of the kth jump, where k=1,2,…,q.From now on, we construct two Poincaré maps to discuss the dynamics of system (1.1). For this, we introduce two cross-sections ∑1={(x,y):x=(1-p)h,y≥0} and ∑2={(x,y):x=h,y≥0}. In order to establish the Poincaré map of ∑2 via an approximate formula, suppose that system (1.1) has a positive period-1 solution z(t)=(φ(t),ζ(t)) with period T and the initial condition z0=A+((1-p)h,y0)∈∑1, where y(0)≡y0>0. Then the periodic trajectory intersects the Poincaré section ∑2 at the point A(h,y1) and then jumps to the point A+ due to the impulsive effects with Δx(t)=-px(t) and Δy(t)=qy(t)+r. Thus(2.5)φ(0)=(1-p)h,ζ(0)=y0,φ(T)=h,ζ(T)=y1=y01+q-r.Now, we consider another solutionz̅(t)=(φ̅(t),ζ̅(t)) with the initial condition z̅0=A0((1-p)h,y0+δy0). Suppose that this trajectory which starts form A0 first intersects ∑2 at the point A1(h,y̅1) when t=T+δt and then jumps to the point A1+((1-p)h,y̅2) on ∑1. Then we have(2.6)φ̅(0)=(1-p)h,ζ̅(0)=y0+δy0,φ̅(T+δt)=h,ζ̅(T+δt)=y̅1.
Set u(t)=φ̅(t)-φ(t) and v(t)=ζ̅(t)-ζ(t), then u0=u(0)=φ̅(0)-φ(0)=0 and v0=v(0)=ζ̅(0)-ζ(0). Let v1=y̅2-y0 and v0*=y̅1-y1. It is well known that, for 0<t<T, the variables u(t) and v(t) are described by the relation(2.7)(u(t)v(t))=Φ(t)(u0v0)+o(u02+v02)=Φ(t)(0v0)+o(0v02),
where the fundamental solution matrix Φ(t) satisfies the matrix equation(2.8)dΦ(t)dt=(a-2bφ(t)-cζ(t)-cφ(t)eζ(t)-D+eφ(t))Φ(t)
with Φ(0)=I(the identity matrix). Set g1(t)=φ(t)(a-bφ(t)-cζ(t)) and g2(t)=ζ(t)(-D+eφ(t)). We can express the perturbed trajectory in a first-order Taylor expansion(2.9)φ̅(T+δt)≈φ(T)+u(T)+g1(T)δt,ζ̅(T+δt)≈ζ(T)+v(T)+g2(T)δt.
It follows from φ̅(T+δt)=φ(T)=h that(2.10)δt=-u(T)g1(T)andhencev0*=y̅1-y1=v(T)-g2(T)u(T)g1(T).
Since y̅2=(1+q)y̅1+r and y̅2-y0=(1+q)(y̅1-y1), we obtain v1=(1+q)v0*. So, we can construct a Poincaré map F of ∑1 as follows:(2.11)v1=Fq(v0)=(1+q)[v(T)-g2(T)u(T)g1(T)],
where u(T) and v(T) are calculated according to (2.7).Now we construct another type of Poincaré maps. Suppose that the pointBk(h,yk) is on the section ∑2. Then Bk+((1-p)h,(1+q)yk+r) is on ∑1 due to the impulsive effects, and the trajectory with the initial point Bk+ intersects ∑2 at the point Bk+1(h,yk+1), where yk+1 is determined by yk and the parameters q and r. Thus we can define a Poincaré map F as follows:(2.12)yk+1=F(q,r,yk).
The function F is continuous on q,r, and ykbecause of the dependence of the solutions on the initial conditions.Definition 2.4.
A trajectoryO+(z0,t0) of system (1.1) is said to be order k-periodic if there exists a positive integer k≥1 such that k is the smallest integer for y0=yk.Definition 2.5.
A solutionz(t)=(x(t),y(t)) of system (1.1) is called a semitrivial solution if its one component is zero and another is nonzero.Note that, for each fixed point of the mapF in (2.12), there is an associated periodic solution of system (1.1), and vice versa.
## 3. The Existence and Stability of a Periodic Solution Whenr=0
In this section, we consider system (1.1) with r=0 as follows:(3.1)x′(t)=x(t)(a-bx(t)-cy(t)),y′(t)=y(t)(-D+ex(t)),x≠h,Δx(t)=-px(t),Δy(t)=qy(t),x=h.First, lety(t)=0 to calculate a semitrivial periodic solution of system (3.1). Then system (3.1) can be changed into the following impulsive differential equation:(3.2)x′(t)=x(t)(a-bx(t)),x(t)≠h,Δx(t)=-px(t),x(t)=h.
Under the initial value x(0)=(1-p)h≡x0, the solution of the equation x′(t)=x(t)(a-bx(t)) can be obtained as x(t)=aexp(at)/(β+bexp(at)), where β=(a-bh(1-p))/(1-p)h. Assume that x(T)=h and x(T+)=x0 in order to get a periodic solution of (3.2). Then we have the period T=(1/a)ln((a-bh(1-p))/(a-bh)(1-p)) of a semitrivial periodic solution of (3.1). Thus system (1.1) with r=0 has a semitrivial periodic solution with the period T as follows:(3.3)φ(t)=aexp(a(t-(k-1)T))β+bexp(a(t-(k-1)T)),ζ(t)=0,
where (k-1)T<t<kT.Using the Poincaré mapF defined in (2.12), we will have a criterion for the stability of this semitrivial periodic solution (φ(t),ζ(t)).Theorem 3.1.
The semitrivial periodic solution of system (1.1) with r=0 is locally stable if the condition
(3.4)-1<q<q0
holds, where q0=(1-p)-D/a((a-bh(1-p))/(a-bh))D/a-e/b-1.Proof.
We already discussed the existence of the semitrivial periodic solution(φ(t),0). It follows from (2.8) that
(3.5)dΦ(t)dt=(a-2bφ(t)-cφ(t)0-D+eφ(t))Φ(t),Φ(0)=I2.
Let Φ(t)=(w1(t)w2(t)w3(t)w4(t)). Then we can infer from (3.5) that, for 0<t<T=(1/a)ln((1-(1-p)h)/(1-h)(1-p)),
(3.6)w1′(t)=(a-2bφ(t))w1(t)-cφ(t)w3(t),w1(0)=1,w2′(t)=(a-2bφ(t))w2(t)-cφ(t)w4(t),w2(0)=0,w3′(t)=(-D+eφ(t))w3(t),w3(0)=0,w4′(t)=(-D+eφ(t))w4(t),w4(0)=1.
Since u0=u(0)=0 and g2(t)=0, we obtain that v1=Fq(v0)=(1+q)[v(T)-g2(T)u(T)/g1(T)]=(1+q)w4(T)v0. Thus it is only necessary to calculate w4(t). From the fourth equation of (3.6), we obtain w4(t)=w̅exp(∫-D+eφ(t)dt). Since ∫φ(t)dt=(1/b)ln(β+bexp(at)) and w4(0)=1, so we obtain w4(T)=((β+bexp(aT))/(β+b))e/bexp(-DT). Therefore,
(3.7)v1=(1+q)(1-p)D/a(a-bh(1-p)a-bh)e/b-D/av0.
Note that v0 is a fixed point of Fq(v0) and
(3.8)Dv0Fq(0)=(1+q)(1-p)D/a(a-bh(1-p)a-bh)e/b-D/a.
Under condition (3.4), we get 0<Dv0Fq(0)<1. So system (1.1) with r=0 has a stable semitrivial periodic solution.Remark 3.2.
From the proof of Theorem3.1, we note that Dv0Fq(0)>1 if q>q0.It means that the semitrivial periodic solution system (1.1) with r=0 is unstable if q>q0.Now, we discuss the existence of a positive periodic solution of the system (3.1) with r=0.Theorem 3.3.
System (1.1) with r=0 has a positive period-one solution if the condition
(3.9)q>q0
holds, where q0=(1-p)-D/a((a-bh(1-p))/(a-bh))D/a-e/b-1.Proof.
It follows from Theorem3.1 that the semitrivial periodic solution passing through the points A((1-p)h,0) and B(h,0) is stable if -1<q<q0, where q0=(1-p)-D/a((a-bh(1-p))/(a-bh))D/a-e/b-1. Now, define G(x)=F(q,0,x)-x, where F is the Poincaré map. From now on, we will show that there exist two positive numbers ϵ1 and ω0 such that G(ϵ1)>0 and G(ω0)≤0 by following two steps.
Step 1.
We will show thatG(ϵ1)>0 for some ϵ1>0. First, consider the trajectory starting with the point A1=((1-p)h,ϵ) for a sufficiently small number ϵ>0. This trajectory meets the Poincaré section ∑2 at the point B1=(h,ϵ1) and then jumps to the point A2=((1-p)h,(1+q)ϵ1) and reaches the point B2=(h,ϵ2). Since q>q0, the semitrivial solution is unstable by Remark 5.4. So we can choose an ϵ̅ such that (1+q)ϵ1>ϵ for q>q0+ϵ̅. Thus the point B2 is above the point B1. So we have ϵ1<ϵ2. From (2.12), we know that
(3.10)ϵ1-F(q,0,ϵ1)=ϵ1-ϵ2<0.
Thus we know that G(ϵ1)>0.Step 2.
We will show thatG(ω0)≤0 for some ω0>0. To do this, suppose that the line bx+cy-a=0 meets ∑1 at A3=((1-p)h,(a-b(1-p)h)/c). The trajectory of system (1.1) with the initial point A3 meets the line ∑2 at B3=(h,ω0) then jumps to the point A3+=((1-p)h,(1+q)ω0) and then reaches the point B4=(h,ω̅0) on the Poincaré section ∑2 again. However, for any q>0, the point B4 is not above the point B3 in view of the vector field of system (1.1). Thus ω̅0≤ω0. So we have only to consider the following two cases.Case (i): Ifω̅0=ω0, that is, G(ω0)=0, then system (1.1) has a positive period-one solution.Case (ii): Ifω̅0<ω0, then(3.11)ω0-F(q,0,ω0)=ω0-ω̅0>0,that is,G(ω0)<0.
Thus, it follows from (3.10) and (3.11) that the Poincaré map F has a fixed point, which corresponds to a positive period-one solution for system (1.1) with r=0. Thus we complete the proof.Remark 3.4.
Under the conditionr=0,we show that the semitrivial periodic solution of system (1.1) is stable when -1<q<q0 and there exists a positive period-one solution of system (1.1). Since Dv0Fq0(0)=1, a fold bifurcation takes place at q=q0.Furthermore, from the proof of Theorem 3.3, we know that system (1.1) with r=0 has a positive period-one solution (θ(t),ψ(t)) passing through the points L+=((1-p)h,(1+q)ψ(0)) and L=(h,ψ(0)) and satisfying the condition (a-b(1-p)h)/c=(1+q1)ψ(0) for some q1>q0.Now we discuss the stability of the positive periodic solution of system (1.1).Theorem 3.5.
Assume thatr=0. Let (θ(t),ψ(t)) be the positive period-one solution of system (1.1) with period τ passing through the points M+=((1-p)h,(1+q)ψ(0)) and M=(h,ψ(0)). Then the positive periodic solution is orbitally asymptotically stable if the condition
(3.12)q0<q<q2
holds, where g(q2)=-1 and g(u)=((a-b(1-p)h-c((1+u)ψ(0)))/(a-bh-cψ(0)))exp(∫0τ-bθ(t)dt).Proof.
In order to discuss the stability of the positive periodic solution(θ(t),ψ(t)) of system (1.1), we will use the Lemma 2.3. First, we note that
(3.13)P(x,y)=x(t)(a-bx(t)-cy(t)),Q(x,y)=y(t)(-D+ex(t)),α(x,y)=-px(t),β(x,y)=qy(t),ϕ(x,y)=x(t)-h,(θ(τ),ψ(τ))=(h,ψ(0)),(θ(τ+),ψ(τ+))=((1-p)h,(1+q)ψ(0)).
Since
(3.14)∂P∂x=a-2bx(t)-cy(t),∂Q∂y=-D+ex(t),∂ϕ∂x=1,∂ϕ∂y=0,∂α∂x=p,∂α∂y=0,∂β∂x=0,∂β∂y=q,
we obtain that
(3.15)Δ1=P+(θ(τ+),ψ(τ+))(1+q)P(θ(τ),ψ(τ))=(1-p)(a-b(1-p)h-c((1+q)ψ(0)))(1+q)a-bh-cψ(0),∫0τ∂P∂x+∂Q∂ydt=∫0τa-2bθ(t)-cψ(t)-D+eθ(t)dt=∫0τθ̇(t)θ(t)+ψ̇(t)ψ(t)(-bθ(t))dt=∫0τdln(θ(t)ψ(t))+∫0τ(-bθ(t))dt=ln(1(1-p)(1+q))+∫0τ(-bθ(t))dt.
Thus we have μ2=((a-b(1-p)h-c((1+q)ψ(0)))/(a-bh-cψ(0)))exp(∫0τ(-bθ(t))dt)≡g(q). By Remark 3.4, for q=q1, we have (1+q1)ψ(0)=(a-b(1-p)h)/c, and so we get μ2=0 when q=q1 which means that this periodic solution is stable. In addition, for q=q0, we know μ2=1 due to ψ(0)=0 and τ=(1/a)ln((a-bh(1-p))/(a-bh)(1-p)). Since the derivative dμ2/dq with respect to q is negative, so we know that 0<μ2<1 when q0<q<q1. Further, we can find q2>q1 such that μ2=g(q2)=-1. Therefore, if the condition (3.12) holds, then we obtain -1<μ2<1, which implies from Lemma 2.3 that the positive periodic solution (θ(t),ψ(t)) is orbitally asymptotically stable.Remark 3.6.
System (1.1) has a stable periodic semitrivial solution and a stable positive period-1 solution if 0<q<q0 and q0<q<q2, respectively. We already know from Remark 3.4 that a fold bifurcation occurs at q=q0. Thus, from the facts, we can suppose that a flip (period-doubling) bifurcation occurs at q=q2. Moreover, we can figure out that system (1.1) might have a chaotic solution via a cascade of period doubling.
## 4. The Existence and Stability of a Positive Periodic Solution Whenr>0
In this section we will take into account the existence and stability of positive periodic solutions in the two cases ofh<D/e and D/e<h. In fact, under the condition h<D/e, the trajectories starting from any initial point (x0,y0) with x0<h intersects the section ∑2 infinite times. However, under the condition D/e<h, the trajectories starting from any initial point (x0,y0) with x0<h do not intersect the section ∑2.
### 4.1. The Case ofh<D/e
Theorem 4.1.
Assume thath≤D/e, q>-1, and r>0. Then the system (1.1) has a positive period-one solution. Moreover, if this periodic solution (φ(t),ζ(t)) has a period λ and passes through the points M+=((1-p)h,(1+q)ζ(0)+r) and M=(h,ζ(0)), then it is asymptotically orbitally stable provided with
(4.1)q*<q<q**,
where γ(q*)=1 and γ(q**)=-1 and γ(q)=(a-b(1-p)h-c((1+q)ζ(0)+r))/(a-bh-cζ(0))exp(∫0λbζ(t)dt).Proof.
We will use the similar method to Theorem3.3 to prove the existence of a periodic solution of system (1.1).
Firstly, in order to showF(q,r,r̅1)>r̅1 for some r̅1>0, let U1=((1-p)h,r1) be in the Poincaré section ∑1, where r1 is small enough such that 0<r1<r. The trajectory of system (1.1) with the initial point U1 intersects the point V1=(h,r̅1) on the Poincaré section ∑2, then jumps to the point U2=((1-p)h,(1+q)r̅1+r), and then reaches the point V2=(h,r2) on ∑2 again. From the choice of the value r1, we know that (1+q)r̅1+r>r1 and hence the points U2 and V2 are above the points U1 and V1, respectively. Thus we have r̅1<r2. It follows from (2.12) that
$$\bar r_1 - F(q,r,\bar r_1) = \bar r_1 - r_2 < 0. \tag{4.2}$$
Secondly, to find a positive number m₀ such that m₀-F(q,r,m₀)≥0, suppose that the line bx+cy-a=0 meets Σ₁ at A=((1-p)h,(a-b(1-p)h)/c). The trajectory of system (1.1) with initial point A meets the line Σ₂ at B=(h,m₀), then jumps to the point A⁺=((1-p)h,(1+q)m₀+r), and then reaches the point B₁=(h,m̄₀) on Σ₂ again. Suppose that there exists a q₀>0 such that (1+q₀)m₀+r=(a-b(1-p)h)/c. Then the point A⁺ coincides with A if q=q₀; it lies above A if q>q₀, and below A if q<q₀. However, for any q>0, the point B₁ is not above the point B in view of the vector field of system (1.1). Thus m₀≥m̄₀, and hence m₀-F(q,r,m₀)≥0.
Therefore, a periodic solution exists by the same argument as in Theorem 3.3. Further, the stability condition for this period-one solution can be obtained by the method used in the proof of Theorem 3.5. This completes the proof.
### 4.2. The Case of D/e<h
Theorem 4.2.
Assume that D/e<h, q>-1, and r>0. Then there exists r₀>0, depending on the value h, such that system (1.1) has a stable positive solution of period 1 or 2 if r>r₀. Moreover, system (1.1) has no periodic solutions of period k (k≥3).

Proof.
First, assume that the orbit which just touches Σ₂ at the point B₀=(h,ȳ₁), with ȳ₁=(a-bh)/c, meets Σ₁ at the two points B=((1-p)h,ȳ₂) and B₁=((1-p)h,ȳ₃), where ȳ₃<(a-b(1-p)h)/c<ȳ₂. We prove this theorem in the following five steps.
Step 1.
We will show that if r>ȳ₂, then any trajectory of system (1.1) intersects Σ₂ infinitely many times. Note that every trajectory passing through a point ((1-p)h,y) with y∈(ȳ₃,ȳ₂) cannot intersect Σ₁ as time goes to infinity and eventually tends to the focus (D/e,(ae-bD)/(ce)). Therefore, if all trajectories of system (1.1) pass through points ((1-p)h,y) with y∈(ȳ₃,ȳ₂) after finitely many impulsive effects on Σ₂, they all tend to the focus and there are no positive periodic solutions. From this fact, we know that the condition r>ȳ₂, in which ȳ₂ depends on the value h as a function g(h), is sufficient for a trajectory of system (1.1) to intersect Σ₂ infinitely many times, in view of the impulsive effects Δx=-px and Δy=qy+r. From now on, let the condition r>ȳ₂ hold.

Step 2.
Next, we will show that y_{j+1}<y_{m+1} for y_m<y_j, where (h,y_{k+1}) is the next point after (h,y_k) that touches Σ₂. Note that for any point (h,y) with 0<y<(a-b(1-p)h)/c, the point ((1-p)h,(1+q)y+r) is above the point B. Thus, for any two points E_m(h,y_m) and E_j(h,y_j), where 0<y_m<y_j<(a-bh)/c, the points E_m⁺((1-p)h,(1+q)y_m+r) and E_j⁺((1-p)h,(1+q)y_j+r) lie above the point B and, further, it follows from the vector field of system (1.1) that 0<y_{j+1}<y_{m+1}<(a-bh)/c; that is,
$$y_{j+1} < y_{m+1} \quad \text{for } y_m < y_j. \tag{4.3}$$
Thus, from the Poincaré map and r>ȳ₂, we obtain y₁=F(q,r,y₀), y₂=F(q,r,y₁), and y_{n+1}=F(q,r,y_n) (n=2,3,…) for a given y₀∈(0,(a-bh)/c). Therefore, we only have to consider the following three cases:

Case (i): y₀=y₁.
Case (ii): y₀≠y₁.
Case (iii): y_i≠y_j (0≤i<j≤k-1, k≥3).

Step 3.
In order to show the existence of a positive solution of period 1 or 2, consider Cases (i) and (ii). First, if Case (i) holds, then it is easy to see that system (1.1) has a positive period-one solution. Now suppose that Case (ii) holds. Without loss of generality, we may assume y₁<y₀. It follows from (4.3) that y₂>y₁. Furthermore, if y₂=y₀, then there exists a positive period-two solution of system (1.1).

Step 4.
Now we prove that system (1.1) cannot have periodic solutions of period k (k≥3) if Case (iii) holds. For this, assume that y₀=y_k, which would mean that system (1.1) has a positive period-k solution; we show that this is impossible. If y₀<y₁, then from (4.3) we obtain y₂<y₁, and then either y₂<y₀<y₁ or y₀<y₂<y₁. If y₀>y₁, then from (4.3) we have y₁<y₂, and then either y₁<y₂<y₀ or y₁<y₀<y₂. So the relation among y₀, y₁, and y₂ is one of the following:
$$\text{(a) } y_2<y_0<y_1,\qquad \text{(b) } y_0<y_2<y_1,\qquad \text{(c) } y_1<y_2<y_0,\qquad \text{(d) } y_1<y_0<y_2. \tag{4.4}$$

(a) If y₂<y₀<y₁, then from (4.3) we have y₂<y₁<y₃; it is also true that y₂<y₀<y₁<y₃. Applying (4.3) again gives y₄<y₂ and y₃<y₅, and then y₄<y₂<y₀<y₁<y₃<y₅. By means of induction, we have
$$0<\cdots<y_{2k}<\cdots<y_4<y_2<y_0<y_1<y_3<y_5<\cdots<y_{2k+1}<\cdots<\frac{a-bh}{c}. \tag{4.5}$$
Similar to (a), for Cases (b), (c), and (d), we obtain
$$\text{(b) } 0<y_0<y_2<y_4<\cdots<y_{2k}<\cdots<y_{2k+1}<\cdots<y_5<y_3<y_1<\frac{a-bh}{c},$$
$$\text{(c) } 0<y_1<y_3<y_5<\cdots<y_{2k+1}<\cdots<y_{2k}<\cdots<y_4<y_2<y_0<\frac{a-bh}{c},$$
$$\text{(d) } 0<\cdots<y_{2k+1}<\cdots<y_5<y_3<y_1<y_0<y_2<y_4<y_6<\cdots<y_{2k}<\cdots<\frac{a-bh}{c}, \tag{4.6}$$
respectively. If there exists a positive period-k solution (k≥3) of system (1.1), then y_i≠y_j (0≤i<j≤k-1) and y_k=y₀, which contradicts (4.5)–(4.6). Thus there is no positive period-k solution (k≥3) if r>ȳ₂.

Step 5.
From Step 4, we can show that there exists a stable period-1 or period-2 solution in these cases. In fact, it follows from (4.5) that
$$\lim_{k\to\infty} y_{2k}=y_0^{*},\qquad \lim_{k\to\infty} y_{2k+1}=y_1^{*}, \tag{4.7}$$
where 0<y₀*<y₁*<(a-bh)/c. Therefore, y₁*=F(q,r,y₀*) and y₀*=F(q,r,y₁*). Thus system (1.1) has a positive period-2 solution in case (a). Moreover, it is easily proven from (4.5) and (4.7) that this positive period-2 solution is locally stable. Similarly, system (1.1) has a stable positive period-1 solution in cases (b) and (c) and a stable positive period-2 solution in case (d).
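The argument in Steps 4 and 5 uses only the order-reversing property (4.3) of the successor map F: once the map reverses order, the even-indexed and odd-indexed iterates are separately monotone, so only period-1 or period-2 limits can occur. A minimal Python sketch of this mechanism, using a hypothetical strictly decreasing map as a stand-in for F (the actual Poincaré map of system (1.1) has no closed form):

```python
# Illustration of Steps 4-5: for any strictly decreasing map F
# (y_m < y_j implies F(y_j) < F(y_m)), the iterates split into two
# monotone subsequences (even and odd indices), so the limit set is
# a fixed point or a 2-cycle. F below is a hypothetical stand-in.
def F(y):
    return 1.0 / (1.0 + y)          # strictly decreasing on y > 0

ys = [0.2]                           # arbitrary starting value y_0
for _ in range(20):
    ys.append(F(ys[-1]))

evens, odds = ys[0::2], ys[1::2]
print("even iterates:", [round(v, 5) for v in evens])  # monotone increasing here
print("odd  iterates:", [round(v, 5) for v in odds])   # monotone decreasing here
```

Here the even and odd subsequences converge toward each other, the period-1 outcome of cases (b) and (c); a map with a steeper slope at its fixed point would leave them separated, the period-2 outcome of cases (a) and (d).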
## 5. Numerical Examples
In this section, we will present some numerical examples to illustrate the various dynamical aspects of system (1.1) and to verify the validity of the theoretical results obtained in the previous sections.

Example 5.1.
In order to exhibit the dynamical complexity as q varies, let r=0 and fix the other parameters as follows:
$$a=0.4,\quad b=0.8,\quad c=0.8,\quad D=0.4,\quad e=0.8,\quad h=0.1,\quad p=0.35. \tag{5.1}$$
In this example, we set the initial value to (0.05,0.1). It follows from Theorem 3.1 that the periodic semitrivial solution is stable if

$$-1<q<q_0=(1-p)^{-D/a}\left(\frac{a-bh(1-p)}{a-bh}\right)^{D/a-e/b}-1\approx 0.4286$$

(see Figures 1 and 2). We display the bifurcation diagram in Figure 2(a). From Remark 3.4, we know that a fold bifurcation takes place at q=q₀. Figure 2(a) shows that a positive period-one solution bifurcates from the periodic semitrivial solution at q=q₀≈0.4286 and that a positive period-two solution bifurcates from the positive period-one solution via a flip bifurcation at q=q₂≈6.25, which leads to a period-doubling cascade and then chaos (see Figures 2(b) and 3). It follows from Theorem 4.2 that the system with r>0 cannot have a positive period-3 solution under some conditions. However, if r=0, a period-3 solution can exist (see Figure 4).

Figure 1
(a) The trajectory of system (1.1) with r=0 when q=0.42. (b, c) Time series.

Figure 2
(a) The bifurcation diagram of system (1.1) with r=0. (b) A chaotic solution of system (1.1) with r=0 when q=20.

Figure 3
(a) A period-4 solution when r=0 and q=14. (b) A period-8 solution when r=0 and q=15.5.

Figure 4
(a) A period-3 solution when r=0 and q=26.5. (b) The enlarged part of (a) for 0.063≤x≤1.
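Trajectories like those in Figures 1–4 can be reproduced by integrating system (1.1) between impulses and applying the jump whenever the prey reaches the threshold h. A minimal sketch, assuming SciPy's event-detection interface (`y_events` requires SciPy ≥ 1.4); the parameter values follow (5.1) with q=0.42:

```python
# Sketch (not the authors' code): x' = x(a - bx - cy), y' = y(-D + ex),
# with the state-dependent impulse x -> (1-p)x, y -> (1+q)y + r at x = h.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, D, e = 0.4, 0.8, 0.8, 0.4, 0.8
h, p, q, r = 0.1, 0.35, 0.42, 0.0

def rhs(t, z):
    x, y = z
    return [x * (a - b * x - c * y), y * (-D + e * x)]

def hit_threshold(t, z):            # event: prey reaches the threshold h
    return z[0] - h
hit_threshold.terminal = True
hit_threshold.direction = 1         # trigger only while x is increasing

def simulate(z0, t_end=200.0):
    t, z = 0.0, list(z0)
    ts, xs, ys = [], [], []
    while t < t_end:
        sol = solve_ivp(rhs, (t, t_end), z, events=hit_threshold,
                        max_step=0.05, rtol=1e-9)
        ts.extend(sol.t); xs.extend(sol.y[0]); ys.extend(sol.y[1])
        if sol.status != 1:         # no further impulse before t_end
            break
        t = sol.t_events[0][0]
        x_hit, y_hit = sol.y_events[0][0]
        z = [(1 - p) * x_hit, (1 + q) * y_hit + r]   # impulsive jump
    return np.array(ts), np.array(xs), np.array(ys)

ts, xs, ys = simulate([0.05, 0.1])
print("final state:", xs[-1], ys[-1])
```

Sweeping q and recording the post-impulse predator values would give a bifurcation diagram of the kind shown in Figure 2(a).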
Example 5.2.
Under the condition r>0, we know that there is no semitrivial solution of system (1.1). In this case, set the parameters as follows:
$$a=1.0,\quad b=0.6,\quad c=0.8,\quad D=0.4,\quad e=0.8,\quad p=0.2,\quad r=0.1. \tag{5.2}$$
Throughout this example, we take the point (0.1,0.2) as the initial value. Figure 5(a) shows the bifurcation diagram of system (1.1) with q as the bifurcation parameter when h=0.3<D/e. It follows from Theorem 4.1 that there exists a period-one solution for any q>-1, and this solution is stable when q*<q<q**≈4.35, as shown in Figure 5(a). It is easy to see that there are no fold bifurcations. However, at q=q**≈4.35 a flip bifurcation occurs, and the cascade of flip bifurcations leads to chaotic solutions as in the previous example. From Figure 5(a), we see that system (1.1) undergoes complex dynamical behaviors, including period doubling, chaotic behavior, and periodic windows.

Figure 5
(a) The bifurcation diagram of system (1.1) with h=0.3. (b) The bifurcation diagram of system (1.1) with h=0.52.
Example 5.3.
It follows from Theorem 4.2 that if the value h satisfies the condition D/e<h<a/b, there exists some r₀>0 such that, for all q>0, system (1.1) has a stable positive period-one or period-two solution if r>r₀, but does not have period-k (k≥3) solutions. To substantiate these theoretical results by numerical simulation, let h=0.52 and r=1.2, and let the other parameters be the same as in Example 5.2. Then D/e<h<a/b. The bifurcation diagram of system (1.1) in Figure 5(b) numerically shows that there exist no period-k solutions (k≥3), only stable positive period-1 or period-2 solutions. Thus the value r is also an important parameter in the dynamical aspects of system (1.1). For this reason, we investigate the effects of the parameter r on system (1.1): let q=5 and h=0.52, and let r be the bifurcation parameter. It is easy to see from Figure 6 that the parameter r induces various dynamical behaviors in system (1.1), such as a cascade of reverse period-doubling bifurcations (also called period halving), periodic windows, chaotic regions, stable period-2 solutions, and so forth.
From a biological point of view, as mentioned in Section 1, the value r represents the amount of immigration or release of the predator. In particular, Figure 6 shows that the number of predators cannot be easily estimated when r is small, owing to the chaotic behavior of solutions of the system; on the contrary, if a sufficient amount of the predator is released impulsively, then the number of predators (and eventually the number of prey) is predictable, owing to the periodic behavior of solutions.

Figure 6
The bifurcation diagram of system (1.1) with h=0.52 and q=5 with respect to r>0.

Remark 5.4.
Now we will demonstrate the superiority of state-dependent feedback control over fixed-time control via an example. For this, assume that a=1.0, b=0.6, c=0.8, D=0.4, e=0.8, h=0.3, p=0.6, q=4, and r=0.1 in system (1.1), with initial value (0.05,4.1). Figure 7(b) shows that the prey population cannot be controlled below the threshold value if we take the impulsive control measure at the fixed times t=6k (k=1,2,…). However, Figure 7(a) shows that after only several control actions the solution approaches the periodic solution. Thus this example shows that the impulsive state feedback measure is more effective in real biological control.

Figure 7
The trajectories of system (1.1) with h=0.3 (a) under the state feedback control and (b) under the fixed-time control with t=6k (k=1,2,…).
## 6. Conclusion
In this paper, a state-dependent impulsive dynamical system concerning a control strategy has been proposed and analyzed. In particular, a state feedback measure for controlling the prey population is taken when the amount of prey reaches a threshold value. The dynamical behaviors have been investigated, including the existence of periodic solutions of periods 1 and 2 and their stability. In addition, we have shown numerically that system (1.1) exhibits various dynamical aspects, including chaotic behavior. Based on the main theorems of this paper, the prey population can be kept below the threshold value by one, two, or at most a finite number of impulsive effects. From a biological point of view, this is very helpful and useful for controlling the prey population.
---
*Source: 101386-2012-02-13.xml* | 2012 |
# Considering the Carbon Penalty Rates to Optimize the Urban Distribution Model in Time-Varying Network
**Authors:** Yuanyuan Ji; Shoufeng Ji; Tingting Ji
**Journal:** Complexity
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013861
---
## Abstract
The existing urban distribution models do not consider both environmental influence and time-varying factors. This paper studies the urban distribution optimization model considering both time-varying network factors and carbon emission factors and constructs a single-period time-varying network urban distribution optimization model and a cross-period time-varying network urban distribution optimization model. In this paper, the time-varying network in urban distribution is mathematically described and calculated. By considering the carbon penalty rate, the distribution optimization model is analyzed to formulate the optimal urban distribution scheme for enterprises. An improved variable neighborhood search (VNS) heuristic algorithm is designed to solve the model. The numerical example demonstrates that the higher the carbon emission cost, the greater the carbon penalty rate. When the carbon emission cost increases by 50%, the total cost of the distribution scheme considering carbon emission is reduced by 1.21%. The numerical analysis also finds that only when the unit carbon emission cost is higher than 2.937 will the carbon penalty rate stimulate enterprises to reduce emissions.
---
## Body
## 1. Introduction
The rapid development of the global economy and the rapid growth of the population have brought many adverse effects on the Earth's ecological environment, and the rapid development of urban distribution has likewise had serious impacts on the urban environment. Government policies restricting carbon emissions are multiplying. According to the fourth assessment report of the United Nations Intergovernmental Panel on Climate Change, the concentration of carbon dioxide in the atmosphere increased from 280 ppm (parts per million) before the industrial revolution to 379 ppm in 2005, exceeding the natural range of variation of the past 650,000 years, and the global average surface temperature rose by 0.74°C over the past century. The report also predicts that the global average surface temperature may rise by 1.1–6.4°C in the next 100 years. It is therefore urgent to deal with climate change caused by carbon emissions. In 2021, the outlines of the 14th Five-Year Plan (2021–2025) for national economic and social development and Vision 2035 of the People's Republic of China proposed to achieve the objectives of China's Intended Nationally Determined Contributions 2030 and to formulate an action plan to reach peak carbon emissions by 2030. A system focusing on carbon intensity control, supplemented by total carbon emission control, will be implemented to achieve carbon neutralization by 2060. To achieve the overall emission reduction target, both macropolicy support and operable implementation schemes are needed. The pressure of emission reduction in practice confirms the necessity of studying the urban distribution problem considering carbon emission.

The carbon generated by logistics activities accounts for 5.5% of human activities and 5–15% of the emissions in the whole product life cycle. Within logistics activities, the carbon emissions from the fossil fuels required for transportation and distribution account for more than 87% of the total. Previous studies have shown that, by reasonably planning roads and distribution routes, carbon emissions in transportation can be reduced by 5% while still meeting the economic objectives of enterprises. Therefore, research on urban distribution in the operation of a low-carbon supply chain is of great significance in reducing carbon emissions. At the same time, how to let enterprises reduce the carbon emissions of urban distribution by optimizing operations at little or no additional cost is a challenging topic.

With the year-by-year increase in urban car ownership, the time-varying characteristics of the urban road network have become increasingly significant, and the speed on urban roads shows periodic characteristics over time. The vehicle routing problem in a time-varying road network refers to the reasonable arrangement of the number of distribution vehicles and their routes under the condition that vehicle travel speed in the road network changes across time periods. Combining these time-varying characteristics with the specific urban road network better reflects the dynamics of urban distribution. At the same time, the time-varying characteristics of urban distribution are closely related to urban congestion, and the time-varying periods can be divided according to the characteristics of congestion. Thus, the time-varying network has become an important factor in urban distribution optimization problems.
Considering time-varying factors in urban distribution optimization has also drawn the attention of many scholars. Meanwhile, a time-varying network also has a significant impact on carbon emissions in urban distribution. Therefore, the urban distribution optimization problem considering carbon emission under a time-varying network has become a research hotspot.
## 2. Literature Review
Existing research has considered time-varying factors and carbon emission factors in the urban distribution optimization problem. Considering carbon emissions, Pradenas et al. study the carbon emission in urban distribution considering distance, vehicle load, and backhaul time windows; the paper shows that the carbon emission in urban distribution is jointly affected by vehicle load and distribution distance and explains the relationship between carbon emission and vehicle load [1]. Different from Pradenas, this paper considers the carbon emission under a time-varying network in urban distribution and introduces a carbon penalty rate into the distribution model to study the influence of the time-varying network on the enterprise's decision-making. Eskandarpour et al. propose a biobjective mixed-integer linear programming model that minimizes total costs as well as the carbon emissions caused by the vehicles in the fleet for a heterogeneous vehicle routing problem with multiple loading capacities and driving ranges, and they develop an enhanced variant of multidirectional local search to solve the problem [2]. Yu et al. propose an improved branch-and-price algorithm to solve the heterogeneous fleet green vehicle routing problem with time windows exactly [3]. Li et al. study the impact of carbon tax and carbon quota policies on distribution costs and carbon dioxide emissions and develop a genetic algorithm-tabu search to solve the model [4]. Zeng et al. study a routing algorithm that finds a path consuming the minimum amount of gasoline while the travel time satisfies a specified travel time budget and an on-time arrival probability [5]. Xiao et al. present an ε-accurate approach to conduct continuous optimization on the pollution routing problem [6]. Liao et al. study a green distribution routing problem integrating distribution and vehicle routing problems and propose a multiobjective scheduling model to maximize customer satisfaction and minimize the carbon footprint [7]. Yan et al. establish an open vehicle routing model for urban distribution aiming to minimize the total cost; a genetic algorithm supporting the implementation of smart contracts is developed to verify their effectiveness [8]. These studies propose that carbon emissions can be reduced by optimizing routing, including urban distribution in green supply chain operations, but they do not discuss the impact of carbon emissions on enterprises' urban distribution decision-making from the perspective of carbon trading and carbon restriction. This paper studies the optimization of the urban distribution path by considering the carbon tax and the carbon penalty rate in a time-varying network.

The carbon penalty rate has already been taken into consideration in previous supply chain research. Moghimi et al. studied a power supply chain that reduces carbon emissions, discussed its impact on operational decisions through carbon punishment, and showed that carbon punishment can reduce carbon emissions in the supply chain by affecting the operation mode of enterprises [9]. Erel et al. studied the impact of incentive-based and punishment-based emission reduction frameworks on reducing carbon emissions in transportation; their data confirm the significance of studying carbon punishment in low-carbon transportation operations [10]. Tseng et al. developed a mixed-integer nonlinear programming model to realize a sustainable supply chain network [11]. Zhalechian et al. introduced a new sustainable closed-loop location-routing-inventory model under mixed uncertainty [12]. Instead of studying the routing problem in a closed-loop supply chain, this paper studies the vehicle routing problem in a forward logistics network by establishing a multiobjective optimization model and using an improved variable neighborhood search (VNS) heuristic algorithm to solve it. Tang et al. integrated consumers' environmental behavior into a joint location-routing-inventory model and used a multiobjective particle swarm optimization algorithm to solve the problem [13]. Wang et al. studied a green location-routing problem considering carbon emission in cold chain logistics [14]. Alhaj et al. focused on the joint location-inventory problem with one factory, multiple DCs, and retailers, and on using carbon penalties to reduce carbon emissions [15]. Bazan et al. proposed a two-level supply chain model with a coordination mechanism considering a carbon tax, an emission penalty, cap-and-trade, and their combination [16]. Wang et al. studied an improved revenue-sharing contract to explore the decision-making of product wholesale and sales prices under a differential-pricing closed-loop supply chain coordination model considering government carbon emission rewards and punishments [17]. Samuel developed a robust model for the closed-loop supply chain considering carbon emissions and used carbon punishment to limit carbon emissions [18]. Wang et al. studied procurement and manufacturing/remanufacturing problems with a random core return rate and random yield based on the dual mechanism of carbon cap-and-trade and carbon subsidy/punishment [19]. Zhang et al. established four decision-making models to analyze the impact of government reward and punishment policies on dual-channel closed-loop supply chains [20]. These studies mainly examine the joint location-routing problem under a carbon penalty rate and focus on distance and routing in the green supply chain. Different from them, this paper focuses on the urban distribution problem with carbon emissions and takes the speed in different time periods into consideration.

Existing research also studies the measurement of carbon emissions in urban distribution. Akcelik and Besley studied the measurement and calculation of carbon emissions in distribution, mainly calculating the fuel consumption in each stage through the instantaneous or average speed of vehicle operation and then multiplying fuel consumption by the carbon emission factor to obtain the carbon emissions of each stage [21]. Based on the distribution characteristics of European countries, Panis et al. estimate the relationships between carbon emissions, speed, and distance with the VeTESS software [22]. Abdallah et al. quantify the emission factors of traffic-related gaseous and particulate pollutants inside the Salim Slam urban tunnel in Beirut, Lebanon, and measure fuel-based emission factors of pollutants with the carbon mass balance model [23]. Lee et al. propose a new rapid method for estimating carbon emission factors by using a mobile laboratory as a supplementary tool to traditional tunnel research [24]. In this paper, the environmental cost consists of the carbon emission cost and the carbon tax; the carbon emission is calculated from the fuel consumption and the diesel carbon emission factor of the vehicles.

The study of time-varying networks builds on dynamic-network vehicle routing problems and time window problems. Existing research mainly focuses on two aspects: customer time windows and time-dependent rates. Considering carbon emissions in distribution, Wygonik and Goodchild employ ArcGIS software to combine urban distribution with a time-varying network and customer time windows in order to solve an emissions-minimization vehicle routing problem with time windows [25]; the time window is modeled to represent customer density. Different from that work, this paper divides the day into four time periods to represent different vehicle speed characteristics and minimizes both the economic and environmental costs. Berman et al. study distribution network design, mainly considering road congestion and elastic demand and taking time as a core variable [26]. Moshe et al. study a dynamic model of highly congested urban distribution and represent the congestion situation in the form of time nodes [27]. Transchel et al. present a solution approach to the joint assortment and inventory planning problem for vertically differentiated products considering dynamic consumer-driven substitution [28]. This paper introduces a time-varying network based on the different speeds within a day and establishes a multiobjective programming optimization problem that takes the carbon penalty rate into consideration.

The existing literature mainly studies three aspects: (1) distribution strategies that reduce carbon emissions through operation optimization; (2) carbon emission measurement in urban distribution; (3) the effects of urban congestion and time-varying networks on urban distribution strategy. Previous studies mainly combine the first two aspects, and research on urban distribution considering both carbon emission and time-varying factors merely uses a time window to represent different speed characteristics. Cost is also the major consideration in urban distribution optimization studies, and environmental factors such as carbon emissions and carbon penalty rates are far less considered in decision-making. This leaves deficiencies in the research on the operational optimization of emission reduction in urban distribution. A time-varying network not only affects the time of urban distribution but also significantly affects its carbon emissions. Therefore, we focus on urban distribution optimization considering carbon emission under a time-varying network, construct a single-period time-varying network urban distribution optimization model as well as a cross-period time-varying network urban distribution optimization model, and introduce carbon penalty rates to analyze the distribution model. The model analysis shows that when the carbon penalty rate is positive, enterprises will actively choose the distribution model with the lowest carbon emission. The model is solved by the VNS algorithm, and numerical analysis is carried out on a real example.
## 3. Descriptions and Assumptions
### 3.1. Problem Description
Due to the influence of traffic factors such as congestion, urban distribution presents different speed characteristics in different time periods. Usually, the day is divided into several time periods according to these factors; the characteristics of vehicle operation within each period are similar, and these periods constitute a time-varying network. According to the speed characteristics, the paper divides the 24-hour day into four segments T1, T2, T3, T4, namely T1=6:00−9:00, T2=9:00−16:00, T3=16:00−19:00, and T4=19:00−6:00. According to the characteristics of a time-varying network, vehicles delivering on the same path but departing at different times have different running times, as shown in Figure 1.

Figure 1
The vehicle uptime in a time-varying network.

There are two conditions in time-varying network urban distribution: single-period time-varying network distribution and cross-period time-varying network distribution. Single-period time-varying network distribution means that n distribution centers deliver in the same period; for instance, DCi services customers in time segment Ti. Cross-period time-varying network distribution means that n distribution centers deliver in different time segments; for instance, DC1 services customers in segment T1, while DC2 and DC3 service customers in segments T2 and T3. The main contribution of the paper is the identification of an urban distribution model that operates in the time-varying network. The model is a multiobjective optimization problem, covering the shortest delivery time and the lowest carbon emissions, and it outperforms models that do not take time-varying factors into consideration in reducing carbon emissions. The optimal network structure of urban distribution considering the carbon penalty rate under a time-varying network is shown in Figure 2.

Figure 2
Urban distribution optimization network considering the carbon penalty rate.
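For reference, the segment boundaries above can be encoded directly. The helper below is illustrative only; the function name is an assumption, not something from the paper:

```python
# Map a clock hour to the paper's four time segments:
# T1 = 6:00-9:00, T2 = 9:00-16:00, T3 = 16:00-19:00,
# T4 = 19:00-6:00 (wrapping past midnight).
def segment_of(hour: float) -> str:
    if 6 <= hour < 9:
        return "T1"
    if 9 <= hour < 16:
        return "T2"
    if 16 <= hour < 19:
        return "T3"
    return "T4"   # covers 19:00-24:00 and 0:00-6:00

print(segment_of(8.5), segment_of(17.0), segment_of(2.0))  # T1 T3 T4
```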
### 3.2. Assumptions
(1)
Each customer is serviced by only one distribution center and by the same vehicle; distribution center’s capacity can meet customer demands.(2)
The maximum load of each vehicle is the same, and the starting and ending points of distribution are in the same distribution center.(3)
The delivery needs to be completed within the customer demand time window. The paper assumes that the customer time window for every day is 0:00−24:00.(4)
The distribution process of each distribution center is completed in the same time segment.(5)
The average speed of the road network in each time segment is related to the time-varying characteristics such as road congestion and road type in this segment.(6)
Vehicles run on the same route, and the vehicle runs in different time segments at different speeds.(7)
In the time-varying network, the distance and speeds from each distribution center to the demand point and between the demand points are known.
### 3.3. Symbols and Parameters’ Description
#### 3.3.1. Urban Delivery Costs’ Parameters
I: customers' set, i=1,2,…,n
J: distribution center set, j=1,2,…,m
V: vehicle set, l=1,2,…,n
R: vehicle routing set
v: speed matrix of the vehicle
Fc: fixed costs in distribution
Vc: variable costs in distribution
Tijt: total delivery time in a certain time segment
vit: speed of the vehicle in a time segment
C: total economic cost
C*: optimal total cost
P: operating costs per unit time
qij: the quantity of product from distribution center j to retailer i
q: distribution center capacity
Q: vehicle capacity
#### 3.3.2. Carbon Emissions’ Parameters
Fijt: the fuel consumption of vehicle operation in a certain period of time
CER: carbon emissions in distribution
Ce: the cost of carbon emissions
Pc: carbon tax per unit carbon emission
θ: carbon penalty rate
#### 3.3.3. Time-Varying Network Parameters
ta, tb: time window
N: the number of periods, n=1,2,…,k
Xijt: the time period in which distribution center j services customer i
Ti: time-varying period set, i=1,2,3,4
#### 3.3.4. Decision Variables
dijl: the route between customers i and j by vehicle l
Yij = 1 if vehicle l drives to distribution center j, and 0 if it drives to another distribution center
Zij = 1 if customer i is serviced by distribution center j, and 0 if it is serviced by another distribution center
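One possible in-code container for this notation, useful when implementing the models of Section 4; all field names are assumptions for illustration, not part of the paper:

```python
# Illustrative container for the instance data described above.
from dataclasses import dataclass, field

@dataclass
class DistributionInstance:
    n_customers: int                  # |I|
    n_centers: int                    # |J|
    fixed_costs: list                 # f_j per distribution center
    unit_time_cost: float             # P, operating cost per unit time
    vehicle_capacity: float           # Q
    center_capacity: float            # q
    carbon_tax: float                 # Pc, tax per unit carbon emission
    speeds: dict = field(default_factory=dict)   # (i, j, "Tk") -> v_ijt
```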
### 3.4. Mathematical Description and Calculation of Time-Varying Network
In building the distribution model, the time-varying speed is one of the core variables of this paper. Therefore, the paper first constructs the speed model of a time-varying network. Based on the characteristics of the daily distribution time-varying network, one day is divided into the four segments T1, T2, T3, T4 defined in Section 3.1.
#### 3.4.1. Time-dependent Velocity
In urban distribution, we first use a mathematical expression to represent the speed on each path in each period. If each day is divided into four periods, a speed-path matrix can be formed to represent the characteristics of the road network; the speed expressions corresponding to path di are shown in Table 1.

Table 1
Time-varying road speed matrix.

|    | T1  | T2  | T3  | T4  |
|----|-----|-----|-----|-----|
| di | vi1 | vi2 | vi3 | vi4 |

A mathematical modeling method is used to establish the model for each time-varying network speed v_{ijl}, in which the total distribution time is obtained from the time-dependent flow rate equation in [21]. The factor in the equation comes from a wide range of speed curves, and the expression is as follows:

$$v_{ijl}=\frac{\sum_{i\in I}\sum_{j\in J} d_{ijl}}{t_{con}+0.25\,t_{int}\left[(x-1)+\left((x-1)^2+\dfrac{8 J_a x}{Q\, t_{int}}\right)^{0.5}\right]}. \tag{1}$$

t_{con} changes with the congestion conditions. Based on the characteristics of traffic congestion, the weight indicator t_{con} can be expressed as follows:

$$t_{con}=\frac{d_{con1}}{V_{con1}}+\frac{d_{con2}}{V_{con2}}+\frac{d_{con3}}{V_{con3}}+\frac{d_{con4}}{V_{con4}}, \tag{2}$$

where d_{coni} represents the distance of each type of road, and road types are classified as severe congestion, moderate congestion, mild congestion, and smooth. The four corresponding speeds are determined according to the congestion level evaluation and average speed values specified in the data of the National Bureau of Statistics of China (Table 2). V_{conk}, k=1,2,3,4, represents the overall average speed of the road network under each circumstance.

Table 2
Road congestion rating (km/h).

| Congestion level | Smooth   | Mild congestion | Moderate congestion | Severe congestion |
|------------------|----------|-----------------|---------------------|-------------------|
| Congestion index | (0, 4]   | (4, 6]          | (6, 8]              | (8, 9)            |
| Average speed    | (30, 37] | (25, 30]        | (23, 25]            | (19, 23]          |
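A minimal sketch of equation (2), with representative class speeds taken as the midpoints of the ranges in Table 2 (an assumption for illustration; the paper only specifies the ranges):

```python
# t_con: travel time summed over the four congestion classes of Table 2.
V_CON = {"smooth": 33.5, "mild": 27.5, "moderate": 24.0, "severe": 21.0}  # km/h

def t_con(dist_by_class: dict) -> float:
    """dist_by_class maps a congestion class to kilometres of that road type."""
    return sum(d / V_CON[cls] for cls, d in dist_by_class.items())

print(t_con({"smooth": 4.0, "moderate": 2.5}))  # travel time in hours
```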
#### 3.4.2. The Average Speed of the Road Network
Equation (2) establishes the velocity model of each path. Through the speed matrix, we can then obtain the average velocity of each path in the distribution program; the average speed vi is expressed as follows:

$$v_i=\frac{\sum_{i\in I}\sum_{j\in J} d_{ij}}{\sum_{i\in I}\sum_{j\in J}\sum_{l=1}^{n} d_{ijl}/v_{ijl}}. \tag{3}$$

The average speed v_{it} varies with the path and with the time period of the road network; the same distribution route has different average speeds in different time-varying networks.
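Equation (3) is a distance-weighted harmonic mean: total route distance divided by the total travel time accumulated leg by leg. A minimal sketch with hypothetical inputs:

```python
# Average route speed per equation (3): total distance / total travel time.
def average_speed(legs):
    """legs: iterable of (distance_km, speed_kmh) pairs for one route."""
    total_d = sum(d for d, _ in legs)
    total_t = sum(d / v for d, v in legs)
    return total_d / total_t

print(average_speed([(5.0, 30.0), (3.0, 22.0)]))  # slower legs pull the mean down
```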
## 4. Model Formulation
### 4.1. Single Period of Time-Varying Network Model Construction
#### 4.1.1. Urban Distribution Model considering Cost
The single-period time-varying network delivery model refers to all distribution centers choosing to deliver in the same period. Generally, the urban distribution model is expressed in the form of a cost function. The delivery model considering cost is expressed as follows:

$$\min C_0=F_c+V_c=\sum_{j\in J} f_j+\sum_{i\in I}\sum_{j\in J} P\,T_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}. \tag{4}$$

The first part of equation (4) represents the fixed costs of the distribution vehicles, and the second part represents the variable cost related to vehicle operation.
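A minimal sketch of equation (4) under the stated cost structure; the inputs are hypothetical, and the route travel times T_ijt would in practice come from the speed model of Section 3.4:

```python
# Economic cost per equation (4): fixed vehicle/center costs plus a
# time-proportional variable cost.
def economic_cost(fixed_costs, travel_times, unit_time_cost):
    Fc = sum(fixed_costs)                       # sum_j f_j
    Vc = unit_time_cost * sum(travel_times)     # P * sum_ij T_ijt
    return Fc + Vc

print(economic_cost([100.0, 120.0], [1.5, 2.0, 0.8], 40.0))
```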
#### 4.1.2. Urban Distribution Model considering Carbon Emissions’ Cost
(1) Measurement of carbon emissions in urban distribution: carbon emission measurement is an important part of building the model. In this study, carbon emissions are measured mainly through the conversion of the vehicle's fuel consumption. Numerous empirical studies have demonstrated that below the optimal speed (72.1 km/h) the vehicle's fuel consumption per unit distance decreases as speed increases, while above the optimal speed it increases with speed [19].

We use the fuel consumption rate proposed by Akcelik [21] to calculate the fuel consumption on each path:

$$F=3.6\left[k_1\left(1+\frac{v^3}{2 v_m^3}\right)+k_2 v\right]\frac{d}{v}, \tag{5}$$

where F represents fuel consumption, v the vehicle speed, v_m the maximum vehicle speed, and k_1 and k_2 fuel consumption factors based on historical data.

Because the carbon emission from a given fuel is directly proportional to the fuel consumption (ICF, 2006), the carbon emission factor of the fuel can be obtained by experimental calculation and model calculation. Letting τ represent the carbon emission factor of fuel consumption, the carbon emission in distribution can be expressed as

$$CER=\tau F, \tag{6}$$

where the carbon emission factor τ is based on the carbon emission factors of fossil fuels in various countries issued by the International Energy Organization in 2009. Through unit conversion, it can be calculated that 1000 L of fuel consumption emits 7.369 tons of carbon. China's carbon emission factor is much higher than that of other countries (the countries selected for comparison all have carbon tax policies).

(2) Carbon costs:

$$\min C_e=P_c\,CER=P_c\tau\sum_{i\in I}\sum_{j\in J} F_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}=P_c\tau\sum_{i\in I}\sum_{j\in J}\frac{3.6}{v_{it}}\left[k_1\left(1+\frac{v_{it}^3}{2 v_m^3}\right)+k_2 v_{it}\right]X_{ijt}\cdot\sum_{i\in I}\sum_{j\in J} d_{ij} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}. \tag{7}$$

The cost of carbon emissions is mainly composed of the amount of carbon emissions and the carbon tax that companies need to pay.

(3) Multiobjective optimization model: equation (7) expresses the lowest carbon cost, and equation (4) expresses the lowest economic cost of distribution. A linear weighting method is used to transform the double objective function into a single objective function, giving the objective UC1:

$$UC_1=\lambda_1 C_e+\lambda_2 C_0=\frac{1}{C_e^{*}} P_c\tau\sum_{i\in I}\sum_{j\in J} F_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}+\frac{1}{C_0^{*}}\left(\sum_{j\in J} f_j+\sum_{i\in I}\sum_{j\in J} P\,T_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}\right), \tag{8}$$

subject to

$$\sum_{i\in I, j\in J} q_{ij}\sum_{i\in M} Y_{ilv}\le Q, \tag{9}$$
$$\sum_{i\in I}\sum_{j\in J} q_{ij}\le q, \tag{10}$$
$$t_a<\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} T_{ijt}<t_b, \tag{11}$$
$$v_0\le v_m,\quad v\le v_m, \tag{12}$$
$$\sum_{l\in M} q_{ilv}-\sum_{l\in M} q_{ikv}=0,\quad \forall l\in M,\ \forall v\in V, \tag{13}$$
$$d_{ijl}>0, \tag{14}$$
$$N=1, \tag{15}$$
$$\sum_{v\in V}\sum_{l\in M} Y_{ilv}=1,\quad \forall i\in I, \tag{16}$$
$$Y_{ij}=\begin{cases}1, & q_{ij}\le Q,\\ 0, & q_{ij}>Q,\end{cases} \tag{17}$$
$$Z_{ij}\in\{0,1\}, \tag{18}$$
$$X_{ijt}\in\{X_{ijT_1},\ldots,X_{ijT_n}\},\quad \forall n\in N. \tag{19}$$

λ₁ represents the reciprocal of the minimum environmental cost of the single objective, and λ₂ represents the reciprocal of the minimum economic cost of the single objective.
Equation (9) represents the vehicle capacity limit; equation (10) represents the capacity constraint of the distribution center; equation (11) requires delivery within the customer's time window; equation (12) represents the vehicle speed limit; equation (13) ensures the continuity of the distribution process, that is, the vehicle must leave after entering a distribution center or node; equation (14) indicates that the demand of each customer i must be met; equation (15) restricts the distribution network to a single time-varying period; equation (16) states that each customer is served by one vehicle; equation (17) states that when the vehicle capacity cannot meet the customer's needs, the vehicle must return to the distribution center for replenishment; equation (18) represents the selection of the distribution center; equation (19) represents the period of distribution.
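A minimal sketch of equations (5)–(7) as reconstructed above. The factor τ = 0.007369 t CO2 per litre follows from the 7.369 t per 1000 L figure quoted in the text; k1, k2, and the use of litres are assumed calibration choices for illustration:

```python
# Fuel consumption (eq. (5)), carbon emission CER = tau * F (eq. (6)),
# and carbon cost Ce = Pc * CER (eq. (7)) for one leg of a route.
def fuel_consumption(d_km, v_kmh, k1=0.2, k2=0.0005, vm=72.1):
    # consumption falls toward v_m and rises again beyond it
    return 3.6 * (k1 * (1 + v_kmh**3 / (2 * vm**3)) + k2 * v_kmh) * d_km / v_kmh

TAU = 0.007369   # t CO2 per litre, from 7.369 t per 1000 L quoted in the text

def carbon_cost(d_km, v_kmh, Pc):
    F = fuel_consumption(d_km, v_kmh)   # litres, equation (5)
    CER = TAU * F                       # tonnes CO2, equation (6)
    return Pc * CER                     # carbon tax paid, equation (7)

print(carbon_cost(12.0, 25.0, Pc=50.0))
```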
### 4.2. Cross-Period Time-Varying Network Model Construction
To construct the cross-period time-varying urban distribution network optimization model, we introduce the period decision variable X_{ijt} into the model. The distribution process is divided into two or more stages according to the period, and the fuel consumption, distance, and distribution time in each interval are represented separately. The linear weighting method is again adopted to solve the model, which is expressed as follows:

$$UC_2=\lambda_1 C_e+\lambda_2 C_0=\frac{1}{C_e^{*}} P_c\tau\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} F_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}+\frac{1}{C_0^{*}}\left(\sum_{j\in J} f_j+\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} P\,T_{ijt} X_{ijt} Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}\right), \tag{20}$$

$$\text{s.t. } (9), (10), (12)\text{–}(14), (16)\text{–}(19),\quad N>1. \tag{21}$$

The first part of model (20) represents the environmental costs, and the second part represents the economic costs. Model (20) also obeys constraints (9), (10), (12)–(14), and (16)–(19); equation (21) indicates that the distribution is carried out across periods of the time-varying network.
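The cross-period model adds the period assignment X_ijt as a decision. For small instances this assignment can be enumerated outright, which is useful for checking heuristics such as the paper's VNS. A brute-force sketch with a hypothetical cost function:

```python
# Enumerate all period assignments for a handful of route legs and keep the
# one with the smallest weighted objective UC2. Illustrative only.
from itertools import product

SEGMENTS = ["T1", "T2", "T3", "T4"]

def best_assignment(legs, cost_fn):
    """legs: list of leg ids; cost_fn(assignment dict) -> UC2 value."""
    best = None
    for choice in product(SEGMENTS, repeat=len(legs)):
        assignment = dict(zip(legs, choice))
        uc2 = cost_fn(assignment)
        if best is None or uc2 < best[0]:
            best = (uc2, assignment)
    return best

# toy cost: pretend the peak segments T2/T3 are costlier per leg
toy = lambda a: sum({"T1": 1.0, "T2": 1.6, "T3": 1.8, "T4": 0.9}[s]
                    for s in a.values())
print(best_assignment(["leg1", "leg2"], toy))
```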
### 4.3. Distribution Optimization Model Considering the Carbon Penalty Rate
#### 4.3.1. Carbon Penalty Rates
The carbon penalty rate refers to the cost loss rate incurred when the government levies a carbon tax on carbon emissions and the enterprise still adopts the distribution scheme that does not consider carbon emissions; it is treated as an endogenous variable. C denotes the optimal total cost of urban distribution when the enterprise does not consider carbon emission, and C* denotes the optimal total cost when the enterprise considers both the economic cost and the carbon emission cost. The carbon penalty rate is expressed as

$$\theta=\frac{C-C^{*}}{C^{*}}. \tag{22}$$

The carbon penalty rate is of great significance to the enterprise's choice of urban distribution model. Therefore, several scenarios of the carbon penalty rate are discussed before constructing the distribution model. An abstract cost function of urban distribution is constructed to analyze the different cases of the carbon penalty rate. The total cost is expressed as

$$C=\psi(R,v)=F_c+V_c+C_e. \tag{23}$$

The carbon penalty rate is discussed in the following propositions. We assume Ce≥Ce*, which means that the minimum carbon emission must be considered in the optimal solution.

Proposition 1.
When Vc>Vc*, θ>0.

Proof.
Given Vc>Vc*, substituting into equation (22) yields

$$\theta=\frac{C-C^{*}}{C^{*}}=\frac{(F_c+V_c+C_e)-(F_c+V_c^{*}+C_e^{*})}{F_c+V_c^{*}+C_e^{*}}=\frac{(V_c-V_c^{*})+(C_e-C_e^{*})}{F_c+V_c^{*}+C_e^{*}}>0. \tag{24}$$
Proposition 1 shows that when the model considering both economic cost and carbon emission cost reduces the enterprise's carbon emission and its variable economic cost, the enterprise will weigh the cost balance implied by the carbon penalty rate and adopt the low-carbon urban distribution model on its own initiative.

Proposition 2.
When $(V_c - V_c^*)/(C_e - C_e^*) < 1$, $\theta > 0$.

Proof.
Given $C_e \ge C_e^*$ with $C_e - C_e^* > 0$, the condition $(V_c - V_c^*)/(C_e - C_e^*) < 1$ implies $C_e - C_e^* > V_c - V_c^*$.
Substituting these conditions into equation (22), we obtain $\theta > 0$.
Proposition 2 shows that a model considering both economic cost and carbon emission cost may increase the variable cost; however, when this increase is smaller than the saving in carbon emission cost, the enterprise will weigh the cost balance implied by the carbon penalty rate and adopt the low-carbon urban distribution model on its own initiative.
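A small numeric check of equation (22) under the conditions of Propositions 1 and 2 makes the argument tangible; all cost components below are illustrative, with the fixed cost $F_c$ identical across both plans.

```python
# Numeric check of the carbon penalty rate (eq. (22)) under Propositions 1-2.
# All cost components are illustrative; Fc is the same for both plans.
def penalty_rate(fc, vc, ce, vc_star, ce_star):
    c = fc + vc + ce                  # plan that ignores carbon emissions
    c_star = fc + vc_star + ce_star   # plan minimising economic + carbon cost
    return (c - c_star) / c_star

# Proposition 1: Vc > Vc* (with Ce >= Ce*) gives theta > 0.
print(penalty_rate(fc=142.0, vc=210.0, ce=41.0, vc_star=200.0, ce_star=38.0))
# Proposition 2: the green plan's variable cost rises (Vc* > Vc), but since
# (Vc - Vc*)/(Ce - Ce*) < 1 the emission saving dominates and theta > 0.
print(penalty_rate(fc=142.0, vc=200.0, ce=45.0, vc_star=203.0, ce_star=38.0))
```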
#### 4.3.2. Distribution Optimization Model
In urban distribution, the carbon penalty rate helps enterprises choose a distribution scheme according to their own situation, and it also reflects how carbon emissions influence enterprise decision-making. In this section, the mathematical description and derivation of the carbon penalty rate follow Section 4.3.1, and urban distribution scheme selection under a time-varying network based on the carbon penalty rate is studied. When all distribution centers choose the same period for distribution, the carbon penalty rate of the cross-period time-varying network coincides with that of the single-period time-varying network. A decision variable $X_{ij}^t$ determines the selected time period of each distribution path. To establish the carbon penalty rate $\theta$ of a cross-period time-varying network, models of $C$ and $C^*$ are needed, which give the minimum total cost when both cost factors and environmental factors are considered:

$$\min C^* = \sum_{j\in J} f_j + P_c\tau \sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} F_{ij}^{t*} X_{ij}^t Y_{ij} + \sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} P\,T_{ij}^{t*} X_{ij}^t Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}, \qquad C = \sum_{j\in J} f_j + \sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} P\,T_{ij}^t X_{ij}^t Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij}. \tag{25}$$

The carbon penalty rate model of the cross-period time-varying network is then

$$\theta = \frac{\sum_{j\in J} f_j + \sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}} P\,T_{ij}^t X_{ij}^t Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J} Z_{ij} - \min C^*}{\min C^*}. \tag{26}$$

The rate $\theta$ in equation (26) compares the cost loss of different distribution schemes when the carbon emission cost is ignored; it may be negative, positive, or zero. Under a time-varying network, solving for the carbon penalty rate of urban distribution becomes more complex, and the resulting rate directly determines the enterprise's choice of distribution scheme: if the carbon penalty rate of the emission-reducing scheme is positive, the enterprise will actively choose the scheme with the lowest carbon emissions.
## 5. Solving Approach
### 5.1. VNS Algorithm Design
The urban low-carbon distribution optimization problem based on a time-varying network is a complex problem that adds time-varying vehicle-operation factors and carbon emission factors to the general VRP; it is a typical NP-hard problem. The variable neighborhood search (VNS) algorithm has been applied to TSP, CVRP, and VRPTW problems, and its effectiveness has been demonstrated. The vehicle path planning studied here can be regarded as a double-layer iterative process. The algorithm flow is shown in Figure 3.
Figure 3: VNS algorithm flowchart.

We use an improved VNS algorithm to solve the model. The improved VNS algorithm uses the PSO algorithm to raise search efficiency in the initial-solution generation phase. The flow is as follows: first, in the initial-solution phase, the assignment between customers and distribution centers is determined by the PSO algorithm; second, the initial solution is recoded and passed to the VNS neighborhood search, which determines the time period of each distribution center and then the route arrangement between the customers and the distribution center; third, the solution obtained by the VNS algorithm is substituted back into PSO to verify the rationality of the initial solution: if it is reasonable, the search stops; if not, the search is repeated. Combined with the flowchart, this section describes the specific steps of the algorithm.
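The sketch below is a runnable toy version of this two-layer flow, assuming synthetic coordinates; a fixed initial assignment stands in for the PSO upper layer, and the VNS lower layer relocates customers between routes. It illustrates the control flow only, not the full model of Section 4.

```python
import random

# Toy two-layer scheme: an upper-layer assignment (stand-in for PSO) feeds a
# lower-layer VNS that relocates customers between the two centres' routes.
random.seed(1)
CUSTOMERS = {i: (random.uniform(0, 10), random.uniform(0, 10)) for i in range(10)}
CENTRES = {"A": (2.0, 2.0), "B": (8.0, 8.0)}

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def route_cost(centre, route):
    pts = [CENTRES[centre]] + [CUSTOMERS[c] for c in route] + [CENTRES[centre]]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def total_cost(sol):
    return sum(route_cost(c, r) for c, r in sol.items())

def shake(sol, k):
    """Perturb the solution with k random customer relocations (neighbourhood N_k)."""
    new = {c: r[:] for c, r in sol.items()}
    for _ in range(k):
        src, dst = random.sample(list(new), 2)
        if new[src]:
            cust = new[src].pop(random.randrange(len(new[src])))
            new[dst].insert(random.randrange(len(new[dst]) + 1), cust)
    return new

def vns(sol, k_max=3, iters=200):
    best = sol
    for _ in range(iters):
        k = 1
        while k <= k_max:
            cand = shake(best, k)
            if total_cost(cand) < total_cost(best):
                best, k = cand, 1        # improvement: restart from N_1
            else:
                k += 1                   # escalate to the next neighbourhood
    return best

init = {"A": list(range(5)), "B": list(range(5, 10))}  # stand-in for PSO assignment
best = vns(init)
print(round(total_cost(init), 2), "->", round(total_cost(best), 2))
```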
#### 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper-layer coding of the algorithm is realized by the PSO algorithm, which establishes the assignment between distribution centers and customers. With $m$ distribution centers and $n$ demand points, each particle corresponds to an $n$-dimensional vector whose elements take values in $[1, m]$; the vector encodes which distribution center serves each demand point. Through the upper-layer solution, the customer set $\{n_1, n_2, n_3, \ldots, n_i\}$ served by distribution center $i$ is determined. We then perform secondary coding and set the candidate time-varying period set to $T_i$, $i = 1, 2, 3, 4$. The time period of each distribution center is chosen by variable random selection (when studying single-period time-varying urban distribution, the same time period parameters are used in this step). Let the set of candidate paths be $I_i$, $i = 1, 2, \ldots, M$, the set of paths selected by the algorithm $J_i$, and the set of running times corresponding to each path $T_i$; $m$ denotes the distribution center set and $n$ the customer demand set. The distribution routes are arranged according to the initial results of the upper-layer code, based on the following equation:

$$S_{ij} = T(O_i, D_a) + T(O_j, D_a) - T(O_i, O_j),\quad i \ne j. \tag{27}$$

$S_{ij}$ represents the distribution time difference between customers after allocation. Nodes are connected by the minimum-difference method: the smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
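A minimal sketch of this upper-layer encoding: with $m$ centers and $n$ demand points, a particle is an $n$-dimensional vector whose $i$th element names the center serving demand point $i$. The values generated here are illustrative.

```python
import random

# Upper-layer particle encoding: element i of the particle is the centre
# (1..m) that serves demand point i. Values here are randomly generated.
m, n = 2, 10
particle = [random.randint(1, m) for _ in range(n)]

# Decode the particle into the customer set served by each centre.
served = {c: [i for i, x in enumerate(particle, 1) if x == c] for c in range(1, m + 1)}
print(particle, served)
```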
#### 5.1.2. Neighborhood Search
Neighborhood structure is the core of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance, and searching over them yields more high-quality solutions. The neighborhood structures are constructed as follows (a minimal sketch of these moves follows the list):(1)
Node insertion. Any customer of one distribution center is inserted into the distribution route of another distribution center, which changes the route arrangement of both centers.(2)
Node exchange. Two customers, one selected from each of two distribution centers, are exchanged; this improves the neighborhood structure.(3)
Cross interchange. Two distribution centers each select two nodes together with the path segment containing them; the two segments are then exchanged to obtain a new neighborhood, which helps prevent the search from falling into a local optimum.
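The following sketches show the three moves on two routes represented as lists of customer ids; the cross interchange is simplified to a tail-segment swap.

```python
import random

def node_insertion(r1, r2):
    """Move one customer of route r1 into a random position of route r2."""
    r1, r2 = r1[:], r2[:]
    r2.insert(random.randrange(len(r2) + 1), r1.pop(random.randrange(len(r1))))
    return r1, r2

def node_exchange(r1, r2):
    """Swap one customer of r1 with one customer of r2."""
    r1, r2 = r1[:], r2[:]
    i, j = random.randrange(len(r1)), random.randrange(len(r2))
    r1[i], r2[j] = r2[j], r1[i]
    return r1, r2

def cross_interchange(r1, r2):
    """Exchange the tail segments of the two routes (simplified version)."""
    i, j = random.randrange(len(r1)), random.randrange(len(r2))
    return r1[:i] + r2[j:], r2[:j] + r1[i:]

print(node_insertion([1, 2, 3], [4, 5]))
print(node_exchange([1, 2, 3], [4, 5]))
print(cross_interchange([1, 2, 3], [4, 5]))
```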
#### 5.1.3. Solution Selection
By repeatedly changing the neighborhood structure, a series of solutions is obtained, and a rule is needed to select among them. Following Hansen and Mladenovic [29] and Kirkpatrick et al. [30], we select solutions as follows: if the neighborhood solution $x'$ outperforms the current solution $x$, replace $x$ with $x'$; if no better solution is found after a certain number of iterations, a new neighborhood solution is accepted with some probability to prevent the search from stalling in a local optimum. The probability of becoming the alternative solution is

$$P(x') = \begin{cases} 1, & f(x') < f(x),\\ e^{-(f(x') - f(x))/T_k}, & f(x') \ge f(x),\end{cases} \tag{28}$$

where $f(x)$ is the fitness function, substituted with the objective functions of equations (4), (7), and (8) during the solution process, $k$ is the iteration number, and $T_k$ is the control parameter (analogous to the temperature in simulated annealing) at the $k$th iteration. According to equation (28), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution center and distribution route.
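A sketch of this acceptance rule, under the simulated-annealing reading of the Kirkpatrick et al. [30] citation: improving solutions are always accepted, and worse ones with a probability that decays with the cost gap and the control parameter $T_k$.

```python
import math
import random

def accept(f_new, f_old, t_k):
    """Acceptance rule of eq. (28): always accept improvements; otherwise
    accept with probability exp(-(f_new - f_old) / t_k)."""
    if f_new < f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / t_k)

# A solution that is 2 cost units worse is accepted ~82% of the time at T_k = 10.
print(math.exp(-2 / 10))
```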
#### 5.1.4. Adjustment and Verification
Since the upper-layer solution is obtained by the PSO algorithm, after the VNS algorithm finds the optimal solution it must be brought back to PSO to verify whether the initial distribution center arrangement is optimal. If not, the initial parameters are adjusted and the problem is solved again.
### 5.2. A Case Study of Vegetable Distribution to Shenyang Hospitals
A logistics enterprise in Shenyang is taken as the example background. The enterprise uses two distribution centers to carry out comprehensive vegetable distribution to the canteens of ten hospitals in Shenyang. The study data are real data collected through investigation and a GIS system and are shown in Table 3.

Table 3
Basic parameters of each demand point.
| No. | Demand point | Staff | Beds | Quantity (tons) |
|---|---|---|---|---|
| 1 | Shengjing Hospital of China Medical University (Nanhu district) | 3400 | 2300 | 5 |
| 2 | General Hospital of Northern Theater Command | 1700 | 1200 | 3 |
| 3 | The People's Hospital of Liaoning Province | 1242 | 888 | 2 |
| 4 | Shenyang Women's and Children's Hospital | 630 | 400 | 1 |
| 5 | The First People's Hospital of Shenyang | 1200 | 600 | 2 |
| 6 | The Fourth Affiliated Hospital of China Medical University | 1415 | 1000 | 3 |
| 7 | JiuZhou Hospital | 400 | 200 | 1 |
| 8 | The First Hospital of China Medical University | 3043 | 2249 | 5 |
| 9 | The Fifth People's Hospital of Shenyang | 1300 | 700 | 2 |
| 10 | Shenyang 202 Hospital | 1000 | 850 | 2 |

Time-varying factors are considered in urban distribution, so each path has a different speed in each period; that is, each path has four speeds (one per period). We also consider the directionality of the distribution path: the two directions of the same path have independent speeds and distances. There is therefore a speed and path matrix between demand points and distribution centers and between demand points, respectively. The basic speed and path matrices are given in Tables 4 and 5. Shenyang Sitong vegetable distribution center (distribution center A) and Shenyang Shuangrui distribution center (distribution center B) are the distribution starting points; the path lengths between demand points and distribution centers and the path running speeds (taking period 2 as an example) are shown in the following tables, where $v_{Am}^2$ is the speed from distribution center A to each demand point in period 2 (Table 4) and $v_{mA}^2$ is the speed from each demand point to distribution center A in period 2 (Table 5).

Table 4
Speed-distance matrix from distribution center to demand point.
|  | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| A (km) | 4 | 4.3 | 2 | 1.1 | 4.6 | 5.9 | 3.1 | 4.7 | 9.9 | 4.2 |
| B (km) | 5.5 | 6.8 | 5.8 | 5.2 | 4.5 | 5.8 | 1.7 | 2.8 | 9 | 3.9 |
| $v_{Am}^2$ (km/h) | 20 | 23 | 23 | 9 | 23 | 30 | 20 | 16 | 25 | 18 |
| $v_{Bm}^2$ (km/h) | 22 | 22 | 18 | 23 | 16 | 30 | 12 | 14 | 24 | 20 |

Table 5
Speed-distance matrix from demand point to demand point.
| Demand point | A (km) | B (km) | $v_{mA}^2$ (km/h) | $v_{mB}^2$ (km/h) |
|---|---|---|---|---|
| 1 | 3.5 | 5.3 | 19 | 21 |
| 2 | 2.9 | 6.5 | 25 | 19 |
| 3 | 2.4 | 6.9 | 22 | 22 |
| 4 | 1 | 5.9 | 15 | 25 |
| 5 | 3.8 | 4.3 | 24 | 25 |
| 6 | 6.2 | 5.6 | 24 | 16 |
| 7 | 3.2 | 1.9 | 16 | 23 |
| 8 | 5 | 2.8 | 17 | 28 |
| 9 | 10.1 | 8.4 | 28 | 24 |
| 10 | 3.5 | 4.7 | 23 | 24 |

The basic parameters of vehicle distribution cost in the distribution center are as follows. The 24-hour day is divided into four periods according to road conditions: $T_1$ = 6:00-9:00, $T_2$ = 9:00-16:00, $T_3$ = 16:00-19:00, and $T_4$ = 19:00-6:00. Given the actual situation of the example, mainly daytime distribution is considered in the calculation, so distribution in $T_4$ is not considered. The selected distribution vehicle is a BAIC flag bell 5-ton load-carrying container truck, diesel powered, with a displacement of 3.168 L and a maximum speed of 95 km/h. The fixed cost of each vehicle is 142 CNY/day, and the diesel price is 6.94 CNY/L. The initial carbon tax rate is 57.69%, that is, 4.195 CNY per liter of diesel. Substituting the basic data into the VNS algorithm, we obtain the optimal distribution scheme, total cost, carbon emission, and carbon penalty rate of the distribution center. In this section, the example is analyzed from two aspects: single-period time-varying network distribution and cross-period time-varying network distribution.
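As an illustration of how the Table 4 data feed equations (5) and (6), the sketch below prices one leg (A to demand point 1 in period $T_2$: 4 km at 20 km/h). The factors `k1` and `k2` are placeholders, so the absolute numbers are illustrative only; the paper's calibrated values differ.

```python
# One leg from Table 4: A -> demand point 1 in period T2 (d = 4 km, v = 20 km/h).
# k1 and k2 are placeholder factors; the paper's calibrated values differ.
def fuel(v, d, v_max=95.0, k1=0.2, k2=0.01):
    return 3.6 * (k1 * (1.0 + v ** 1.5 / v_max ** 3) + k2 * v) / v * d  # eq. (5)

litres = fuel(v=20.0, d=4.0)
co2_kg = 7.369 * litres  # 7.369 kg CO2 per litre, from the unit conversion above
print(f"fuel {litres:.2f} L, CO2 {co2_kg:.2f} kg")
```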
#### 5.2.1. Single-Period Time-Varying Network Distribution Schemes
The distribution schemes of $T_1$, $T_2$, and $T_3$ are shown in Table 6.

Table 6
Distribution scheme of single period time-varying network.
| Period | Model | Distribution scheme | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
|---|---|---|---|---|---|
| $T_1$ | $C_e$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 41.44 | 76.35 | 41.44 |
| $T_1$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 339.80 | -- | -- |
| $T_1$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 381.24 | 76.35 | 41.44 |
| $T_2$ | $C_e$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 36.10 | 66.51 | 36.10 |
| $T_2$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 339 | -- | -- |
| $T_2$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 375.90 | 66.51 | 36.10 |
| $T_3$ | $C_e$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 39.27 | 72.34 | 39.27 |
| $T_3$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 350.80 | -- | -- |
| $T_3$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 379.07 | 72.34 | 39.27 |

Under single-period time-varying network distribution, delivering in period $T_3$ is the best scheme, with the lowest total cost and the lowest carbon emission, because the average vehicle speed in this period is high and road conditions are smoother than in other periods. In the single-period time-varying network distribution scheme, a carbon penalty rate (cost loss rate) exists (Table 7). Its existence shows that, under policy constraints on carbon emissions, economic factors can influence enterprises' decision-making and lead them to adopt a low-carbon distribution scheme on their own initiative.

Table 7
Comparison of carbon penalty rates in single period time-varying network.
| Period | $C$ | $C^*$ | $\theta$ |
|---|---|---|---|
| $T_1$ | 383.27 | 381 | 5.32E-03 |
| $T_2$ | 378.18 | 375 | 6.07E-03 |
| $T_3$ | 379.45 | 379.07 | 1.03E-03 |
#### 5.2.2. Cross Period Time-Varying Network Distribution Schemes
A cross-period time-varying network arises when there are several distribution centers and each needs to distribute in a different period (owing to customer time-window demands or road-condition restrictions). Through this model, we study how enterprises choose distribution schemes in different situations. The distribution schemes of the cross-period time-varying network for $T_1$ and $T_2$ are shown in Table 8, and those for $T_2$ and $T_3$ in Table 9.

Table 8
The distribution schemes of the cross-period time-varying network $T_1$ and $T_2$.
| Model | Period | Speed | Distribution route | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
|---|---|---|---|---|---|---|
| $C_e$ | $T_1$ | $v_{Am}^1$ | A→1→2→3→4→A | 38.24 | 70.45 | 38.24 |
| $C_e$ | $T_2$ | $v_{Bm}^2$ | B→8→6→5→7→10→9→B | | | |
| $C_0$ | $T_2$ | $v_{Am}^2$ | A→1→2→3→4→5→A | 339 | -- | -- |
| $C_0$ | $T_1$ | $v_{Bm}^1$ | B→8→6→7→10→9→B | | | |
| $UC$ | $T_1$ | $v_{Am}^1$ | A→1→2→3→4→A | 378.04 | 70.45 | 38.24 |
| $UC$ | $T_2$ | $v_{Bm}^2$ | B→8→6→5→7→10→9→B | | | |

Table 9
The distribution schemes of the cross-period time-varying network $T_2$ and $T_3$.
| Model | Period | Speed | Distribution route | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
|---|---|---|---|---|---|---|
| $C_e$ | $T_3$ | $v_{Am}^3$ | A→1→2→3→4→A | 37.14 | 68.41 | 37.13 |
| $C_e$ | $T_2$ | $v_{Bm}^2$ | B→8→6→5→7→10→9→B | | | |
| $C_0$ | $T_2$ | $v_{Am}^2$ | A→1→2→3→4→5→A | 339 | -- | -- |
| $C_0$ | $T_3$ | $v_{Bm}^3$ | B→8→6→7→10→9→B | | | |
| $UC$ | $T_3$ | $v_{Am}^3$ | A→1→2→3→4→A | 376.93 | 68.41 | 37.13 |
| $UC$ | $T_2$ | $v_{Bm}^2$ | B→8→6→5→7→10→9→B | | | |

Here the route period denotes the service period of the distribution center. Because the periods differ, route speeds have different time-varying characteristics, and fuel consumption and carbon emission change with the period. The numerical analysis shows that the total distribution cost and carbon emission of the cross-period time-varying network over $T_2$ and $T_3$ are better than those over $T_1$ and $T_2$. Comparing Tables 8 and 9 with Table 6, carbon emission and total cost both decrease when the time-varying network is considered; the cross-period distribution scheme outperforms some single-period distribution schemes. Moreover, the carbon penalty rates of the cross-period schemes (Table 10) are positive, which again shows that, under policy constraints on carbon emissions, economic factors will lead enterprises to adopt low-carbon distribution schemes on their own initiative.

Table 10
Comparison of carbon penalty rates in cross-period time-varying network.
| Periods | $C$ | $C^*$ | $\theta$ |
|---|---|---|---|
| $T_1$, $T_2$ | 380.39 | 378.04 | 6.20E-03 |
| $T_2$, $T_3$ | 377.46 | 376.93 | 1.40E-03 |
#### 5.2.3. Carbon Emission Factors’ Analysis
Speed and distance in urban distribution are both related to carbon emissions. The longer the distance, the greater the distribution speed and the smaller the carbon emission per unit distance; at the same time, the longer the delivery distance, the greater the total carbon emission. Total carbon emission is positively correlated with distance, as shown in Figure 4.

Figure 4: Relationship between distance and carbon emission.

The characteristics of the urban distribution network show that the longer the distribution distance, the greater the distribution speed, as shown in Figure 5. The main reason for this trend is that shorter urban distribution routes have a higher probability of congestion, whereas longer routes are less likely to be congested. Distribution speed (below the optimal speed) is negatively correlated with carbon emissions, as shown in Figure 6.

Figure 5: Relationship between distance and speed.

Figure 6: The impact of vehicle speed on carbon emission.
### 5.3. Sensitivity Analysis
In this section, the impact of the time-varying network and carbon penalty rate on the distribution scheme will be discussed according to the data of the above example.
#### 5.3.1. Analysis of the Impact of Time-Varying Network on Urban Distribution
The impact of a time-varying network on carbon emission is mainly the impact of speed, which depends on road conditions; the change of speed is the embodiment of time variation, and it also affects the actual amount of carbon emission. Taking the data of period $T_2$ as the benchmark, we study the impact of the period on carbon emissions, environmental costs, and total costs by changing the vehicle running speed in the time-varying network period. The speed is increased in steps of 30% of the base speed, and the results are shown in Table 11.

Table 11
Analysis of interval time-varying sensitivity.
| Speed (km/h) | Carbon emission (kg) | Environmental cost (¥) | Change | Total cost (¥) | Change |
|---|---|---|---|---|---|
| 15.0 | 76.35 | 41.44 | -- | 381.24 | -- |
| 19.5 | 68.13 | 36.98 | −10.76% | 376.78 | −1.17% |
| 24.0 | 62.67 | 34.02 | −8.02% | 373.82 | −0.79% |
| 28.5 | 59.06 | 32.06 | −5.76% | 371.86 | −0.52% |
| 33.0 | 56.56 | 30.70 | −4.23% | 370.50 | −0.36% |

According to Table 11, the increase in speed reduces carbon emissions, environmental costs, and total costs. The total cost changes most gently, because the carbon price is not particularly high and the environmental cost therefore accounts for a small share of the total cost. Carbon emissions and environmental costs follow the same trend, and their rate of decline is higher than that of the total cost. The more pronounced the period division in the time-varying network, the more pronounced the speed changes and the more significant the impact on the distribution scheme that considers carbon emissions.
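The step-on-step percentage changes in Table 11 can be reproduced directly from the reported values; the recomputed figures agree with the table up to rounding.

```python
# Recomputing the percentage columns of Table 11 from the reported values.
env_costs = [41.44, 36.98, 34.02, 32.06, 30.70]     # CNY, speeds 15.0 ... 33.0 km/h
totals = [381.24, 376.78, 373.82, 371.86, 370.50]   # CNY

for prev, cur in zip(env_costs, env_costs[1:]):
    print(f"environmental-cost change: {(cur - prev) / prev:+.2%}")
for prev, cur in zip(totals, totals[1:]):
    print(f"total-cost change: {(cur - prev) / prev:+.2%}")
```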
#### 5.3.2. Analysis of the Impact of Carbon Penalty Rate on Urban Distribution
The sensitivity of the carbon penalty rate and the enterprise's distribution plan is analyzed by decreasing the unit carbon emission cost by 10%, 30%, and 50% and increasing it by 10%, 30%, and 50%, respectively. The results are shown in Figure 7.
Figure 7: Sensitivity analysis of carbon penalty rate.

Figure 7 shows that when the unit carbon emission cost changes, the carbon penalty rates of both single-period and cross-period distribution rise; that is, the higher the carbon emission cost, the greater the carbon penalty rate. This is because, with a higher carbon cost, the enterprise must pay more for its carbon emissions, so it loses more when it does not choose the low-emission scheme. At the same unit carbon emission cost, the carbon penalty rate changes differently in different periods: the range of change in single periods $T_1$ and $T_2$ and in cross period $T_{12}$ is greater than in single period $T_3$ and cross period $T_{23}$. A large range of change in the carbon penalty rate can alter the optimization scheme. Tables 8 and 9 show that, without considering carbon emission cost, $T_1$ and $T_{12}$ are the suboptimal schemes in the single-period and cross-period cases, respectively. However, after considering the carbon emission cost and the carbon penalty rate, because of the higher carbon emission cost and the larger range of penalty-rate change, the optimal cross-period scheme is $T_{23}$ and the suboptimal single-period scheme is $T_3$: the loss that a high carbon penalty rate inflicts on the enterprise exceeds the benefit of the other schemes, so the enterprise switches to a low-emission scheme instead of the original one. Figure 7 also shows that when the carbon cost is below 2.937, the carbon penalty rate is negative, meaning that a scheme considering carbon emission cost would increase the total cost; the enterprise will then not actively reduce emissions, so a low unit carbon emission cost does not stimulate emission reduction. However, since the carbon emission cost raises the total cost, there are limits on increasing the unit carbon emission cost: it cannot keep growing at the expense of economic development.
#### 5.3.3. Comparing the Improved VNS with the HGVNS Algorithm
Variable neighborhood search is a well-known metaheuristic for complex optimization problems, and different VNS variants have been applied to various VRPs. In particular, de Freitas and Penna proposed a VNS-based heuristic for urban distribution named hybrid general VNS (HGVNS). We adapt the HGVNS to the urban distribution model and compare its results with the improved VNS (IVNS) in Table 12. The data in Table 12 come from the case study, and the Gap column reports the improvement in total cost of the IVNS over the HGVNS, calculated as

$$\mathrm{GAP} = \frac{C_{\mathrm{IVNS}} - C_{\mathrm{HGVNS}}}{C_{\mathrm{HGVNS}}} \times 100\%. \tag{29}$$

Table 12
Comparing the IVNS with HGVNS.
| Period | HGVNS total cost | HGVNS carbon emission cost | IVNS total cost | IVNS carbon emission cost | Gap (%) |
|---|---|---|---|---|---|
| $T_1$ | 393.54 | 48.56 | 383.27 | 41.44 | −2.63 |
| $T_2$ | 385.58 | 40.21 | 378.18 | 36.10 | −1.92 |
| $T_3$ | 393.31 | 48.08 | 379.45 | 39.27 | −3.52 |
| $T_{12}$ | 391.11 | 46.63 | 380.39 | 38.24 | −2.74 |
| $T_{23}$ | 384.67 | 42.45 | 377.46 | 37.13 | −1.88 |

Table 12 shows that the improved VNS outperforms the HGVNS in terms of both total cost and carbon emission cost.
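The Gap column can be checked from the reported totals via equation (29); the recomputed values agree with Table 12 up to rounding.

```python
# Recomputing the Gap column of Table 12 from the reported total costs (eq. (29)).
results = {  # period: (HGVNS total cost, IVNS total cost)
    "T1": (393.54, 383.27), "T2": (385.58, 378.18), "T3": (393.31, 379.45),
    "T12": (391.11, 380.39), "T23": (384.67, 377.46),
}
for period, (hgvns, ivns) in results.items():
    print(f"{period}: Gap = {(ivns - hgvns) / hgvns * 100:+.2f}%")
```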
## 5.1. VNS Algorithm Design
The urban low-carbon distribution optimization problem based on the time-varying network is a complex problem considering the time-varying factors of vehicle operation and carbon emission factors on the basis of the general VRP problem. It is a typical NP-hard problem. The variable neighborhood search algorithm has been applied to solve some TSP, CVRP, and VRPTW problems, and the effectiveness of the algorithm has been proved. The vehicle path planning studied can be regarded as a double-layer iterative process. The algorithm flow is shown in Figure3.Figure 3
VNS algorithm flowchart.We use the improved VNS algorithm to solve the model. The improved VNS algorithm uses the PSO algorithm to improve the search efficiency in the initial solution generation part. The algorithm flow is as follows: first, in the initial solution part, the relationship between customers and distribution centers is determined through the PSO algorithm; second, the initial VNS algorithm is recoded and substituted into the VNS neighborhood search, the time period of the distribution center is determined, and then the path arrangement between the customer and the distribution center is determined; third, the solution obtained by VNS algorithm is substituted into PSO to verify the rationality of the initial solution. If it is reasonable, stop the search, and if it is unreasonable, search again.Combined with the algorithm flowchart, this section describes the specific steps of the algorithm in this paper.
### 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper coding of this algorithm is realized by the PSO algorithm, and the research problem needs to be coded. Upper layer coding refers to the establishment of the relationship between the distribution center and customers through the PSO algorithm. We designm distribution centers and n demand points, and then each particle corresponds to an N-dimension vector, and the value range of each element is 1,m, in which the vector coding is used to represent which distribution center each demand point is served by.Through the upper layer solution, the customer setn1,n2,n3,…,ni served by the distribution center i can be determined. At this time, we conduct secondary coding and set the candidate time-varying period set as Ti=i,1,2,3,4. The time period of each distribution center is determined by variable random selection (when studying single period time-varying network urban distribution, the same time period parameters are substituted in this operation). Let the set of candidate paths be Ii,i=1,2,…,M, the set of paths selected by the algorithm is Ji, and the set of running time corresponding to each path is Ti, m represents the distribution center set, and n represents the customer demand set. The distribution routes are arranged according to the initial results of the upper layer code, which is based on the following equation:(27)Sij=TOi,Da+TOj,Da−TOi,Oj,i≠j.Sij represents the distribution time difference between customers after allocation. Each node is connected by the method of minimum difference. The smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
### 5.1.2. Neighborhood Search
Neighborhood structure is the core part of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance. Through the search operation, we can get more high-quality solutions. The neighborhood structure is constructed as follows:(1)
Node insertion. Node insertion is to insert any customer of a distribution center into the distribution path of another distribution center, which changes the route arrangement of the two distribution centers.(2)
Node exchange. Node exchange is the exchange of two customers that are selected by two distribution centers respectively. It realizes the improvement of neighborhood structure.(3)
Cross interchange. Two distribution centers respectively select two nodes and the path contained therein. Then exchanging, the two paths and nodes to obtain a new neighborhood in order to prevent the search from falling into local optimization.
### 5.1.3. Solution Selection
By constantly changing the neighborhood structure, a series of solutions can be obtained, and rules need to be used to select the optimal solution. According to the research by Hansen and Mladenovic [29], Kirkpatrick et al. [30], we select the optimal solution. If the neighborhood solution x′ outperforms the solution x, then replace x with x′. If no better solution is found in the search, after certain iterations, the new neighborhood solution is chosen to prevent the search from falling into local optimization. The probability of becoming an alternative solution is expressed as follows:(28)Px′=1,fx′<fx,efx′−fx/Tk,fx′≥fx,where in fx represents the fitness function, which is substituted into the objective function equations (4), (7), and (8) in the solution process, k represents the number of iterations, Tk represents the total cost reduced for the kth iteration. According to equation (29), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution center and distribution route.
### 5.1.4. Adjustment and Verification
Since the upper layer solution is solved by the PSO algorithm, after the VNS algorithm obtains the optimal solution, it needs to be brought back to PSO to verify whether the initial distribution center arrangement is the optimal scheme. If not, it needs to adjust the initial parameter and solves again.
## 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper coding of this algorithm is realized by the PSO algorithm, and the research problem needs to be coded. Upper layer coding refers to the establishment of the relationship between the distribution center and customers through the PSO algorithm. We designm distribution centers and n demand points, and then each particle corresponds to an N-dimension vector, and the value range of each element is 1,m, in which the vector coding is used to represent which distribution center each demand point is served by.Through the upper layer solution, the customer setn1,n2,n3,…,ni served by the distribution center i can be determined. At this time, we conduct secondary coding and set the candidate time-varying period set as Ti=i,1,2,3,4. The time period of each distribution center is determined by variable random selection (when studying single period time-varying network urban distribution, the same time period parameters are substituted in this operation). Let the set of candidate paths be Ii,i=1,2,…,M, the set of paths selected by the algorithm is Ji, and the set of running time corresponding to each path is Ti, m represents the distribution center set, and n represents the customer demand set. The distribution routes are arranged according to the initial results of the upper layer code, which is based on the following equation:(27)Sij=TOi,Da+TOj,Da−TOi,Oj,i≠j.Sij represents the distribution time difference between customers after allocation. Each node is connected by the method of minimum difference. The smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
## 5.1.2. Neighborhood Search
Neighborhood structure is the core part of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance. Through the search operation, we can get more high-quality solutions. The neighborhood structure is constructed as follows:(1)
Node insertion. Node insertion is to insert any customer of a distribution center into the distribution path of another distribution center, which changes the route arrangement of the two distribution centers.(2)
Node exchange. Node exchange is the exchange of two customers that are selected by two distribution centers respectively. It realizes the improvement of neighborhood structure.(3)
Cross interchange. Two distribution centers respectively select two nodes and the path contained therein. Then exchanging, the two paths and nodes to obtain a new neighborhood in order to prevent the search from falling into local optimization.
## 5.1.3. Solution Selection
By constantly changing the neighborhood structure, a series of solutions can be obtained, and rules need to be used to select the optimal solution. According to the research by Hansen and Mladenovic [29], Kirkpatrick et al. [30], we select the optimal solution. If the neighborhood solution x′ outperforms the solution x, then replace x with x′. If no better solution is found in the search, after certain iterations, the new neighborhood solution is chosen to prevent the search from falling into local optimization. The probability of becoming an alternative solution is expressed as follows:(28)Px′=1,fx′<fx,efx′−fx/Tk,fx′≥fx,where in fx represents the fitness function, which is substituted into the objective function equations (4), (7), and (8) in the solution process, k represents the number of iterations, Tk represents the total cost reduced for the kth iteration. According to equation (29), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution center and distribution route.
## 5.1.4. Adjustment and Verification
Since the upper layer solution is solved by the PSO algorithm, after the VNS algorithm obtains the optimal solution, it needs to be brought back to PSO to verify whether the initial distribution center arrangement is the optimal scheme. If not, it needs to adjust the initial parameter and solves again.
## 5.2. A Case Study of Vegetable Distribution in Shenyang Hospital
In this paper, a logistics enterprise in Shenyang is taken as the example background of this paper. The example data are the real data collected through investigation or a GIS system. See Table2 for the data.We select a logistics enterprise in Shenyang as an example. The enterprise takes two distribution centers as the canteens of ten hospitals in Shenyang for comprehensive vegetable distribution. Study data are the real data collected through investigation or the GIS system which are shown in Table3.Table 3
Basic parameters of each demand point.
NO.Demand pointStaffBedQuantity (ton)1Shengjing Hospital of China Medical University (Nanhu district)3400230052General Hospital of Northern Theater Command1700120033The People’s Hospital of Liaoning Province124288824Shenyang Women’s and Children’s Hospital63040015The first People’s Hospital of Shenyang120060026The Forth Affiliated Hospital of China Medical University1415100037JiuZhou Hospital40020018The first Hospital of China Medical University3043224959The Fifth People’s Hospital of Shenyang1300700210Shenyang 202 Hospital10008502Time-varying factors are considered in urban distribution, so each path has different speeds in different periods. That is, there are four speeds (depending on the number of periods) on each path. At the same time, we consider the directionality of the distribution path, and the round-trip on the same path has independent speed and distance. Therefore, there is a speed and path matrix between demand points and distribution centers and between demand points and demand points, respectively.The basic speed and path matrixes are given in Tables3 and 4 respectively. Shenyang Sitong vegetable distribution center (No. A distribution center) and Shenyang Shuangrui distribution center (No. B distribution center) are selected as the distribution starting points, and the path length between the demand point and the distribution center and the path running speed (taking period 2 as an example) are shown in the following tables. Wherein vAm2 represents the speed from distribution center A to each demand point in period 2 (Table 4), and vmA2 represents the speed from the demand point to distribution center A in period 2 (Table 5).Table 4
Speed-distance matrix from distribution center to demand point.
12345678910A (km)44.321.14.65.93.14.79.94.2B (km)5.56.85.85.24.55.81.72.893.9vAm2 (km/h)2023239233020162518vBm2 (km/h)22221823163012142420Table 5
Speed-distance matrix from demand point to demand point.
A (km)B (km)vAm2 (km/h)vBm2 (km/h)13.55.3192122.96.5251932.46.92222415.9152553.84.3242566.25.6241673.21.91623852.81728910.18.42824103.54.72324The basic parameters of vehicle distribution cost in the distribution center are given as follows. The 24-hour day is divided into four periods according to the road conditions, namely,T1=6:00−9:00, T2=9:00−16:00, T3=16:00−19:00 and T4=19:00−6:00. Due to the actual situation of the example, the daytime distribution is mainly considered in the calculation, so the distribution in T4 is not considered. The selected distribution vehicle is a BAIC flag bell 5-ton load-carrying container, which is diesel powered, with a displacement of 3.168 L and a maximum speed of 95 km/h. The fixed cost of each vehicle is 142 CNY/day, and the diesel price is 6.94 CNY/L. The initial carbon tax rate is 57.69%, 4.195 CNY per liter of diesel.Substituting the basic data into the VNS algorithm, we obtain the optimal distribution scheme, total cost, carbon emission, and carbon penalty rate of the distribution center. In this section, the example will be analyzed from two aspects: single period time-varying network distribution and cross-period time-varying network distribution.
### 5.2.1. Single Time-Varying Network Distribution Schemes
The distribution schemes of $T_1$, $T_2$, and $T_3$ are shown in Table 6.

Table 6: Distribution scheme of the single-period time-varying network.

| Time | Model | Distribution scheme | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- |
| T1 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 41.44 | 76.35 | 41.44 |
| T1 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 339.80 | — | — |
| T1 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 381.24 | 76.35 | 41.44 |
| T2 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 36.10 | 66.51 | 36.10 |
| T2 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 339 | — | — |
| T2 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 375.90 | 66.51 | 36.10 |
| T3 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 39.27 | 72.34 | 39.27 |
| T3 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 350.80 | — | — |
| T3 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 379.07 | 72.34 | 39.27 |

Under single-period time-varying network distribution, delivering in period $T_3$ is the best scheme, with the lowest total cost and the lowest carbon emission, because during this period the average vehicle speed is high and the road conditions are relatively smooth compared with the other periods. In the single-period time-varying network distribution scheme, there exists a carbon penalty rate (cost loss rate), reported in Table 7. The existence of a carbon penalty rate shows that, when carbon emission is subject to policy constraints, economic factors can influence enterprises' decision-making and lead them to adopt a low-carbon distribution scheme on their own initiative.

Table 7: Comparison of carbon penalty rates in the single-period time-varying network.

| Period | C | C∗ | θ |
| --- | --- | --- | --- |
| T1 | 383.27 | 381 | 5.32E−03 |
| T2 | 378.18 | 375 | 6.07E−03 |
| T3 | 379.45 | 379.07 | 1.03E−03 |
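The θ column of Table 7 follows directly from equation (22); a minimal check, using only the C and C∗ values above:

```python
# Recomputing theta = (C - C*) / C* (equation (22)) from Table 7.
table7 = {"T1": (383.27, 381.00), "T2": (378.18, 375.00), "T3": (379.45, 379.07)}

for period, (c, c_star) in table7.items():
    print(f"{period}: theta = {(c - c_star) / c_star:.2E}")
# T3 reproduces the published 1.03E-03 almost exactly; T1 and T2 come out
# slightly different (5.96E-03 and 8.48E-03), which suggests the published
# C and C* values were rounded before theta was computed.
```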
### 5.2.2. Cross-Period Time-Varying Network Distribution Schemes
Cross-period time-varying network distribution arises when there are several distribution centers and each distribution center has to deliver in a different period (because of customer time-window demands or road-condition restrictions). Through this model, we study how enterprises choose distribution schemes in different situations. The distribution schemes of the cross-period time-varying network $T_1$ and $T_2$ are shown in Table 8, and those of the cross-period time-varying network $T_2$ and $T_3$ are shown in Table 9.
Table 8: Distribution schemes of the cross-period time-varying network $T_1$ and $T_2$.

| Model | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| Ce | T1 | vAm1 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 38.24 | 70.45 | 38.24 |
| Ce | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |
| C0 | T2 | vAm2 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A | 339 | — | — |
| C0 | T1 | vBm1 | B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |
| UC | T1 | vAm1 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 378.04 | 70.45 | 38.24 |
| UC | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |

Table 9: Distribution schemes of the cross-period time-varying network $T_2$ and $T_3$.

| Model | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| Ce | T3 | vAm3 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 37.14 | 68.41 | 37.13 |
| Ce | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |
| C0 | T2 | vAm2 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A | 339 | — | — |
| C0 | T3 | vBm3 | B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |
| UC | T3 | vAm3 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 376.93 | 68.41 | 37.13 |
| UC | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B |  |  |  |

Here, the route period denotes the service period of the distribution center. Because the periods differ, the route speeds have different time-varying characteristics, and the fuel consumption and carbon emission change with the period.

The numerical analysis of the example shows that the total distribution cost and carbon emission of the cross-period time-varying network $T_2$ and $T_3$ are better than those of the cross-period time-varying network $T_1$ and $T_2$. Comparing Tables 8 and 9 with Table 6, it can be noted that carbon emission and total cost decrease when the time-varying network is considered: the cross-period schemes outperform some of the single-period schemes. Moreover, in the cross-period time-varying network distribution schemes, the carbon penalty rates (Table 10) are positive, which again indicates that, under policy constraints on carbon emission, economic factors push enterprises to adopt low-carbon distribution schemes on their own initiative.
Table 10: Comparison of carbon penalty rates in the cross-period time-varying network.

| Period | C | C∗ | θ |
| --- | --- | --- | --- |
| T1–T2 | 380.39 | 378.04 | 6.20E−03 |
| T2–T3 | 377.46 | 376.93 | 1.40E−03 |
### 5.2.3. Carbon Emission Factors’ Analysis
The speed and distance of urban distribution are both related to carbon emissions. The longer a route, the higher its typical speed and the smaller the carbon emission per unit distance; at the same time, the longer the delivery distance, the greater the total carbon emission. The total carbon emission is therefore positively correlated with distance, as shown in Figure 4.

Figure 4: Relationship between distance and carbon emission.

The characteristics of the urban distribution network show that the longer the distribution distance, the greater the distribution speed, as shown in Figure 5. The main reason for this trend is that short urban routes have a high probability of congestion, whereas longer routes are less likely to be congested. The distribution speed (below the optimal speed) is negatively correlated with carbon emission, as shown in Figure 6.

Figure 5: Relationship between distance and speed.

Figure 6: The impact of vehicle speed on carbon emission.
## 5.3. Sensitivity Analysis
In this section, the impact of the time-varying network and the carbon penalty rate on the distribution scheme is discussed using the data of the above example.
### 5.3.1. Analysis of the Impact of Time-Varying Network on Urban Distribution
The time-varying network affects carbon emission mainly through speed, which depends on road conditions; the change of speed is the embodiment of the time-varying property, and it also changes the actual amount of carbon emitted. Taking the data of period $T_2$ as the benchmark, we study the impact of the period on carbon emissions, environmental costs, and total costs by changing the vehicle running speed within the period. The speed is increased step by step in increments of 30% of the base speed, and the results are shown in Table 11.

Table 11: Analysis of interval time-varying sensitivity.

| Speed (km/h) | Carbon emission (kg) | Environmental cost (¥) | Change | Total cost (¥) | Change |
| --- | --- | --- | --- | --- | --- |
| 15.0 | 76.35 | 41.44 | — | 381.24 | — |
| 19.5 | 68.13 | 36.98 | −10.76% | 376.78 | −1.17% |
| 24.0 | 62.67 | 34.02 | −8.02% | 373.82 | −0.79% |
| 28.5 | 59.06 | 32.06 | −5.76% | 371.86 | −0.52% |
| 33.0 | 56.56 | 30.70 | −4.23% | 370.50 | −0.36% |

According to Table 11, increasing the speed reduces carbon emissions, environmental costs, and total costs. The total cost changes most gently because the carbon price is not particularly high, so the environmental cost accounts for only a small share of the total. Carbon emissions and environmental costs follow the same trend, and their rates of decline are higher than that of the total cost. The more pronounced the division into periods in the time-varying network, the larger the speed changes, and the more significant the impact on the distribution scheme when carbon emissions are considered.
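The two change columns of Table 11 are step-over-step percentages; they can be reproduced directly from the table's values:

```python
# Reproducing the change columns of Table 11: each percentage is the change
# of environmental cost and total cost relative to the previous speed level.
rows = [  # (speed km/h, environmental cost CNY, total cost CNY)
    (15.0, 41.44, 381.24),
    (19.5, 36.98, 376.78),
    (24.0, 34.02, 373.82),
    (28.5, 32.06, 371.86),
    (33.0, 30.70, 370.50),
]
for (_, e0, t0), (s1, e1, t1) in zip(rows, rows[1:]):
    print(f"{s1} km/h: env {100 * (e1 - e0) / e0:+.2f}%, "
          f"total {100 * (t1 - t0) / t0:+.2f}%")
# Output matches the table up to rounding, e.g. -10.76% / -1.17% at 19.5 km/h.
```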
### 5.3.2. Analysis of the Impact of Carbon Penalty Rate on Urban Distribution
The sensitivity of the enterprise's distribution plan to the carbon penalty rate is analyzed by decreasing the unit carbon emission cost by 10%, 30%, and 50% and increasing it by 10%, 30%, and 50%, respectively. The results are shown in Figure 7.

Figure 7: Sensitivity analysis of the carbon penalty rate.

Figure 7 shows that when the unit carbon emission cost rises, the carbon penalty rates of both single-period and cross-period distribution trend upward; that is, the higher the carbon emission cost, the greater the carbon penalty rate. The reason is that with a higher carbon cost the enterprise must pay more for its carbon emissions, so it loses more when it does not choose the low-carbon-emission scheme.

Under the same unit carbon emission cost, the carbon penalty rate changes differently in different periods. The range of change in single period $T_1$, single period $T_2$, and cross period $T_{12}$ is greater than that in single period $T_3$ and cross period $T_{23}$. A large change in the carbon penalty rate can alter the optimization scheme. Tables 8 and 9 show that, without considering the carbon emission cost, $T_1$ and $T_{12}$ are the suboptimal schemes for the enterprise in the single-period and cross-period cases, respectively. After considering the carbon emission cost and the carbon penalty rate, however, because of the higher carbon emission cost and the larger range of penalty-rate change, the optimal cross-period scheme is $T_{23}$ and the suboptimal single-period scheme is $T_3$: the loss that a high carbon penalty rate inflicts on the enterprise exceeds the benefit of keeping the original scheme, so the enterprise switches to a scheme with lower carbon emissions. Figure 7 also shows that when the unit carbon cost is below 2.937, the carbon penalty rate is negative, meaning that the scheme considering carbon emission cost would increase the total cost; the enterprise will then not pursue active emission reduction, so a low unit carbon emission cost does not stimulate emission reduction. On the other hand, because the carbon emission cost raises the total cost, there are limits on how far the unit carbon emission cost can be increased: it cannot grow without regard to economic development.
### 5.3.3. Comparing the Improved VNS with the HGVNS Algorithm
Variable neighborhood search (VNS) is a well-known metaheuristic for solving complex optimization problems, and different variants of VNS have been applied to various VRPs. In particular, de Freitas and Penna proposed a VNS-based heuristic for urban distribution, named hybrid general VNS (HGVNS). We adapt the HGVNS to the urban distribution model and compare its results with those of the improved VNS (IVNS) in Table 12. The data in Table 12 come from the case study, and the Gap column reports the improvement in total cost achieved by the IVNS relative to the HGVNS, calculated as follows:

$$\mathrm{GAP} = \frac{C_{\mathrm{IVNS}} - C_{\mathrm{HGVNS}}}{C_{\mathrm{HGVNS}}} \times 100\%. \tag{29}$$

Table 12: Comparing the IVNS with the HGVNS.

| Time | HGVNS total cost | HGVNS carbon emission cost | IVNS total cost | IVNS carbon emission cost | Gap (%) |
| --- | --- | --- | --- | --- | --- |
| T1 | 393.54 | 48.56 | 383.27 | 41.44 | −2.63 |
| T2 | 385.58 | 40.21 | 378.18 | 36.10 | −1.92 |
| T3 | 393.31 | 48.08 | 379.45 | 39.27 | −3.52 |
| T12 | 391.11 | 46.63 | 380.39 | 38.24 | −2.74 |
| T23 | 384.67 | 42.45 | 377.46 | 37.13 | −1.88 |

Table 12 shows that the improved VNS outperforms the HGVNS in terms of both total cost and carbon emission cost.
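The Gap column of Table 12 can be verified against the published totals with equation (29):

```python
# Gap of equation (29): relative total-cost improvement of IVNS over HGVNS.
table12 = {  # period: (HGVNS total, IVNS total)
    "T1": (393.54, 383.27), "T2": (385.58, 378.18), "T3": (393.31, 379.45),
    "T12": (391.11, 380.39), "T23": (384.67, 377.46),
}
for period, (hgvns, ivns) in table12.items():
    print(f"{period}: Gap = {100 * (ivns - hgvns) / hgvns:.2f}%")
# e.g. T3: 100 * (379.45 - 393.31) / 393.31 = -3.52%, as reported.
```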
## 6. Conclusion
Taking urban distribution as the research object, we study how enterprises should choose the optimal distribution scheme under the influence of carbon emissions and time-varying networks. We construct urban distribution optimization models under a single-period time-varying network and a cross-period time-varying network, yielding different distribution schemes for enterprises. Numerical example analysis and sensitivity analysis verify that, through the adjustment of the carbon tax level, the carbon penalty rate determines the enterprise's choice of distribution scheme under a time-varying network: when the carbon tax rate exceeds a certain level, enterprises will actively choose the distribution scheme with low carbon emissions to reduce operating costs. There is a positive correlation between distance and speed in urban distribution, since routes with long distribution distances usually run on a smooth road network with fewer vehicles, and the higher the vehicle speed, the lower the carbon emission per unit distance. The total carbon emission in the distribution process is determined by speed and vehicle running time: speed is negatively correlated with carbon emission, while total running time is positively correlated with it.
---

*Source: 1013861-2022-06-03.xml*

---

# Considering the Carbon Penalty Rates to Optimize the Urban Distribution Model in Time-Varying Network

**Authors:** Yuanyuan Ji; Shoufeng Ji; Tingting Ji

**Journal:** Complexity
(2022)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2022/1013861

---
## Abstract
Existing urban distribution models do not consider environmental influence and time-varying factors simultaneously. This paper studies an urban distribution optimization model that considers both time-varying network factors and carbon emission factors and constructs a single-period time-varying network urban distribution optimization model and a cross-period time-varying network urban distribution optimization model. The time-varying network in urban distribution is described and calculated mathematically. By considering the carbon penalty rate, the distribution optimization model is analyzed to derive the optimal urban distribution scheme for enterprises, and an improved variable neighborhood search (VNS) heuristic algorithm is designed to solve the model. The numerical example demonstrates that the higher the carbon emission cost, the greater the carbon penalty rate: when the carbon emission cost increases by 50%, the total cost of the distribution scheme considering carbon emission is reduced by 1.21%. The numerical analysis also finds that only when the unit carbon emission cost is higher than 2.937 will the carbon penalty rate stimulate enterprises to reduce emissions.
---
## Body
## 1. Introduction
The rapid development of the global economy and the rapid growth of the population have had many adverse effects on the Earth's ecological environment, and the rapid development of urban distribution has likewise had a serious impact on the urban environment. Government policies restricting carbon emissions are multiplying. According to the fourth assessment report of the United Nations Intergovernmental Panel on Climate Change, the concentration of carbon dioxide in the atmosphere increased from 280 ppm (parts per million) before the industrial revolution to 379 ppm in 2005, exceeding the natural range of variation of nearly the last 650,000 years. Over the past century, the global average surface temperature has risen by 0.74°C, and the report predicts that it may rise by a further 1.1–6.4°C over the next 100 years. Dealing with climate change caused by carbon emissions is urgent. In 2021, the outlines of the 14th Five-Year Plan (2021–2025) for national economic and social development and Vision 2035 of the People's Republic of China proposed to achieve the objectives of China's Intended Nationally Determined Contributions 2030 and to formulate an action plan for peaking carbon emissions by 2030; a system focusing on carbon intensity control, supplemented by total carbon emission control, will be implemented to achieve carbon neutrality by 2060. Achieving the overall emission reduction target requires both macropolicy support and operable implementation schemes, and the pressure of emission reduction in practice confirms the necessity of studying the urban distribution problem with carbon emission in mind.

The carbon generated by logistics activities accounts for 5.5% of the emissions of human activities and for 5–15% of the emissions over the whole product life cycle. Within logistics, the carbon emissions from the fossil fuels required for transportation and distribution account for more than 87% of the total. Previous studies have shown that, by reasonably planning roads and distribution routes, carbon emissions in transportation can be reduced by 5% while still meeting the economic objectives of enterprises. Research on urban distribution in low-carbon supply chain operations is therefore of great significance for reducing carbon emissions. At the same time, how enterprises can reduce the carbon emissions of urban distribution through operational optimization at little or no additional cost is a challenging topic.

With urban car ownership increasing year by year, the time-varying characteristics of the urban road network have become increasingly significant, and the speed on urban roads shows periodic characteristics over time. The vehicle routing problem in a time-varying road network concerns the reasonable arrangement of the number of distribution vehicles and of the vehicle routes under the condition that the travel speed in the road network changes across time periods.

Combining the time-varying characteristics with the specific urban road network better reflects the dynamics of urban distribution. The time-varying characteristics of urban distribution are also closely related to urban congestion, and the time-varying periods can be divided according to the characteristics of congestion. The time-varying network has thus become an important factor in urban distribution optimization problems.
Considering time-varying factors in urban distribution optimization has also attracted the attention of many scholars. Meanwhile, a time-varying network has a significant impact on the carbon emissions of urban distribution. Therefore, the urban distribution optimization problem considering carbon emission under a time-varying network has become a research hotspot.
## 2. Literature Review
Existing research has considered both time-varying factors and carbon emission factors in the urban distribution optimization problem. Considering carbon emissions, Pradenas et al. study the carbon emission of urban distribution with distance, vehicle load, and backhauls with time windows; the paper shows that carbon emission in urban distribution is jointly affected by vehicle load and distribution distance and explains the relationship between carbon emission and vehicle load [1]. Different from Pradenas, this paper considers the carbon emission under the time-varying network in urban distribution and introduces a carbon penalty rate into the analysis of the distribution model to study the influence of the time-varying network on the enterprise's decision-making. Eskandarpour et al. propose a biobjective mixed-integer linear programming model that minimizes total costs as well as the carbon emissions of the fleet for a heterogeneous vehicle routing problem with multiple loading capacities and driving ranges, and develop an enhanced variant of multidirectional local search to solve it [2]. Yu et al. propose an improved branch-and-price algorithm to solve the heterogeneous fleet green vehicle routing problem with time windows exactly [3]. Li et al. study the impact of carbon tax and carbon quota policies on distribution costs and carbon dioxide emissions and develop a genetic algorithm-tabu search to solve the model [4]. Zeng et al. study a routing algorithm that finds a path consuming the minimum amount of gasoline while the travel time satisfies a specified travel time budget and an on-time arrival probability [5]. Xiao et al. present an ε-accurate approach for continuous optimization of the pollution routing problem [6]. Liao et al. study a green distribution routing problem integrating distribution and vehicle routing and propose a multiobjective scheduling model to maximize customer satisfaction and minimize the carbon footprint [7]. Yan et al. establish an open vehicle routing model for urban distribution aiming to minimize the total cost, with a genetic algorithm supporting the implementation of smart contracts developed to verify their effectiveness [8]. The above studies propose that carbon emissions can be reduced by optimizing routing, including urban distribution in green supply chain operations, but they do not discuss the impact of carbon emissions on enterprises' urban distribution decisions from the perspective of carbon trading and carbon restriction. This paper studies the optimization of the urban distribution path by considering the carbon tax and the carbon penalty rate in a time-varying network.

For the carbon penalty rate, previous supply chain research has already taken it into consideration. Moghimi et al. studied a power supply chain that reduces carbon emission and discussed the impact of carbon punishment on operational decisions, showing that carbon punishment can reduce carbon emission in the supply chain by affecting the operation mode of enterprises [9]. Erel et al. studied the impact of incentive-based and punishment-based emission reduction frameworks on reducing carbon emissions in transportation; their data confirm the significance of studying carbon punishment in low-carbon transportation operations [10]. Tseng et al.
developed a mixed-integer nonlinear programming model to realize a sustainable supply chain network [11]. Zhaleqian et al. introduced a new sustainable closed-loop location-routing-inventory model under mixed uncertainty [12]. Instead of studying the routing problem in a closed-loop supply chain, this paper studies the vehicle routing problem in a forward logistics network by establishing a multiobjective optimization model and using an improved variable neighborhood search (VNS) heuristic algorithm to solve it. Tang et al. integrated consumers' environmental behavior into a joint location-routing-inventory model and used a multiobjective particle swarm optimization algorithm to solve the problem [13]. Wang et al. studied a green location-routing problem considering carbon emission in cold chain logistics [14]. Alhaj et al. focused on the joint location-inventory problem with one factory, multiple DCs, and retailers, considering carbon penalties to reduce carbon emissions [15]. Bazan et al. proposed a two-level supply chain model with a coordination mechanism considering a carbon tax, an emission penalty, cap-and-trade, and their combination [16]. Wang et al. studied an improved revenue-sharing contract to explore the decision-making on product wholesale and sales prices under a differential-pricing closed-loop supply chain coordination model with government carbon emission rewards and punishments [17]. Samuel developed a robust model for the closed-loop supply chain considering carbon emissions, using carbon punishment to limit them [18]. Wang et al. studied procurement and manufacturing/remanufacturing problems with random core return rate and random yield under the dual mechanism of carbon cap-and-trade and carbon subsidy/punishment [19]. Zhang et al. established four decision-making models to analyze the impact of government reward and punishment policies on dual-channel closed-loop supply chains [20]. The above studies mainly address the joint location-routing problem with consideration of the carbon penalty rate and focus on distance and routing in the green supply chain. Different from these studies, this paper focuses on the urban distribution problem with carbon emissions and takes the speeds in different time periods into consideration.

Existing research has also studied the measurement of carbon emissions in urban distribution. Akcelik and Besley studied the measurement and calculation of carbon emissions in distribution, mainly calculating the fuel consumption in each stage from the instantaneous or average vehicle speed and then multiplying the fuel consumption by the carbon emission factor to obtain the emissions of each stage [21]. Based on the distribution characteristics of European countries, Panis et al. estimate the relationships between carbon emissions, speed, and distance with the VeTESS software [22]. Abdallah et al. quantify the emission factors of traffic-related gaseous and particulate pollutants inside the Salim Slam urban tunnel in Beirut, Lebanon, measuring fuel-based emission factors with a carbon mass balance model [23]. Lee et al. propose a new rapid method for estimating carbon emission factors, using a mobile laboratory as a supplementary tool to traditional tunnel research [24]. In this paper, the environmental cost consists of the carbon emission cost and the carbon tax.
Among them, the carbon emission is calculated from the fuel consumption and the diesel carbon emission factor of the vehicles.

The study of time-varying networks builds on dynamic-network urban distribution vehicle routing problems and time window problems. Existing research mainly focuses on two aspects: customer time windows and time-dependent rates. Considering carbon emissions in distribution, Wygonik and Goodchild employ ArcGIS software to combine urban distribution with a time-varying network and customer time windows in order to solve an emissions-minimization vehicle routing problem with time windows [25]; the time window is modeled to represent customer density. Different from Wygonik and Goodchild, this paper divides the day into four time periods that represent different vehicle speed characteristics and minimizes both the economic and the environmental costs. Berman et al. study distribution network design, mainly considering road congestion and elastic demand, with time as the core variable [26]. Moshe et al. study a dynamic model of highly congested urban distribution and represent the congestion in the form of time nodes [27]. Transchel et al. present a solution approach to the joint assortment and inventory planning problem for vertically differentiated products with dynamic consumer-driven substitution [28]. This paper introduces a time-varying network based on the different speeds within a day and establishes a multiobjective programming optimization problem that takes the carbon penalty rate into consideration.

The existing literature thus mainly covers three aspects: (1) distribution strategies that reduce carbon emissions through operational optimization; (2) the measurement of carbon emissions in urban distribution; and (3) the effects of urban congestion and time-varying networks on urban distribution strategy. Previous studies mainly combine the first two aspects, and research on urban distribution that considers both carbon emission and time-varying factors merely uses time windows to represent different speed characteristics. Cost is also the major consideration in urban distribution optimization studies, while environmental factors such as carbon emissions and carbon penalty rates are far less considered in decision-making, leaving a gap in the research on operational emission reduction in urban distribution. A time-varying network affects not only the time of urban distribution but also, significantly, its carbon emissions. Therefore, we focus on urban distribution optimization considering carbon emission under a time-varying network, construct a single-period and a cross-period time-varying network urban distribution optimization model, and introduce carbon penalty rates into the analysis of the distribution model. The model analysis shows that when the carbon penalty rate is positive, enterprises will actively choose the distribution model with the lowest carbon emission. The model is solved by the VNS algorithm, and a numerical analysis is carried out on a real example.
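For readers unfamiliar with the solution method referenced throughout, the sketch below shows the generic variable neighborhood search loop on which both the HGVNS baseline and the improved VNS of this paper are built. It is an illustrative skeleton only: the shaking operator, the 2-opt local search, and the cost function are placeholders, not the authors' actual IVNS design.

```python
import random

def two_opt(route, cost):
    """Local search: reverse route segments while doing so improves the cost."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if cost(cand) < cost(route):
                    route, improved = cand, True
    return route

def swap_shake(route):
    """Example shaking operator: swap two random customers (depot ends fixed)."""
    r = route[:]
    i, j = random.sample(range(1, len(r) - 1), 2)
    r[i], r[j] = r[j], r[i]
    return r

def vns(initial, shakes, cost, max_iter=200):
    """Generic VNS: shake in neighborhood k, descend, accept improvements."""
    best = two_opt(initial, cost)
    for _ in range(max_iter):
        k = 0
        while k < len(shakes):
            candidate = two_opt(shakes[k](best), cost)
            if cost(candidate) < cost(best):
                best, k = candidate, 0   # improvement: restart from the first neighborhood
            else:
                k += 1                   # no improvement: try the next neighborhood
    return best
```

A route here would be a list such as `["A", 1, 2, 3, 4, "A"]`, `shakes` a list of operators like `[swap_shake]`, and `cost` an evaluation of the weighted economic-plus-carbon objective developed in Section 4.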
## 3. Descriptions and Assumptions
### 3.1. Problem Description
Due to the influence of traffic factors such as congestion, urban distribution exhibits different speed characteristics in different time periods, so the day is usually divided into several periods according to these factors. The characteristics of vehicle operation within each period are similar, and together these periods constitute a time-varying network. According to the speed characteristics, the paper segments the 24-hour day into four sections $T_1, T_2, T_3, T_4$, namely $T_1 = 6{:}00$–$9{:}00$, $T_2 = 9{:}00$–$16{:}00$, $T_3 = 16{:}00$–$19{:}00$, and $T_4 = 19{:}00$–$6{:}00$. Because of the time-varying network, vehicles dispatched on the same path at different departure times have different running times, as shown in Figure 1.

Figure 1: Vehicle running time in a time-varying network.

There are two settings for time-varying network urban distribution: single-period distribution and cross-period distribution. Single-period time-varying network distribution means that the $\eta$ distribution centers all deliver in the same period; for instance, each DC services its customers in one common time segment $T_i$. Cross-period time-varying network distribution means that the distribution centers deliver in different time segments; for instance, DC$_1$ services customers in segment $T_1$, while DC$_2$ and DC$_3$ service customers in segments $T_2$ and $T_3$. The main contribution of the paper is an urban distribution model defined on the time-varying network. The model is a multiobjective optimization problem, covering the shortest delivery time and the lowest carbon emissions, and it outperforms models that do not take time-varying factors into consideration in reducing carbon emissions. The optimal network structure of urban distribution considering the carbon penalty rate under a time-varying network is shown in Figure 2.

Figure 2: Urban distribution optimization network considering the carbon penalty rate.
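The period boundaries above translate into a simple lookup, used whenever a leg has to be priced with the speed matrix of the period in which it is driven (a sketch; the hour is a decimal clock hour):

```python
# Mapping a departure hour to the time segments T1-T4 defined above.
def period_of(hour: float) -> str:
    if 6 <= hour < 9:
        return "T1"
    if 9 <= hour < 16:
        return "T2"
    if 16 <= hour < 19:
        return "T3"
    return "T4"  # 19:00-6:00, the night period

assert period_of(8.5) == "T1" and period_of(16.0) == "T3"
```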
### 3.2. Assumptions
1. Each customer is serviced by only one distribution center and by the same vehicle; the distribution center's capacity can meet customer demands.
2. The maximum load of each vehicle is the same, and the starting and ending points of distribution are in the same distribution center.
3. The delivery needs to be completed within the customer demand time window. The paper assumes that the customer time window for every day is 0:00–24:00.
4. The distribution process of each distribution center is completed within the same time segment.
5. The average speed of the road network in each time segment is related to the time-varying characteristics of that segment, such as road congestion and road type.
6. Vehicles running on the same route in different time segments travel at different speeds.
7. In the time-varying network, the distances and speeds from each distribution center to the demand points and between the demand points are known.
### 3.3. Symbols and Parameters’ Description
#### 3.3.1. Urban Delivery Costs’ Parameters
- $I$: customer set, $i = 1, 2, \ldots, n$
- $J$: distribution center set, $j = 1, 2, \ldots, m$
- $V$: vehicle set, $l = 1, 2, \ldots, n$
- $R$: vehicle routing set
- $v$: speed matrix of the vehicles
- $F_c$: fixed costs in distribution
- $V_c$: variable costs in distribution
- $T_{ijt}$: total delivery time in a given time segment
- $v_{it}$: speed of the vehicle in a time segment
- $C$: total economic cost
- $C^{*}$: optimal total cost
- $P$: operating cost per unit time
- $q_{ij}$: quantity shipped from distribution center $j$ to retailer $i$
- $q$: distribution center capacity
- $Q$: vehicle capacity
#### 3.3.2. Carbon Emissions’ Parameters
- $F_{ijt}$: fuel consumption of vehicle operation in a given period of time
- $\mathrm{CER}$: carbon emissions in distribution
- $C_e$: cost of carbon emissions
- $P_c$: carbon tax per unit of carbon emission
- $\theta$: carbon penalty rate
#### 3.3.3. Time-Varying Network Parameters
- $t_a, t_b$: time window
- $N$: number of periods, $n = 1, 2, \ldots, k$
- $X_{ijt}$: the time period in which distribution center $j$ services customer $i$
- $T_i$: time-varying period set, $i = 1, 2, 3, 4$
#### 3.3.4. Decision Variables
- $d_{ijl}$: the route between customers $i$ and $j$ by vehicle $l$

$$Y_{ij} = \begin{cases} 1, & \text{if vehicle } l \text{ drives to distribution center } j, \\ 0, & \text{if vehicle } l \text{ drives to another distribution center,} \end{cases}$$

$$Z_{ij} = \begin{cases} 1, & \text{if customer } i \text{ is serviced by distribution center } j, \\ 0, & \text{if customer } i \text{ is serviced by another distribution center.} \end{cases}$$
### 3.4. Mathematical Description and Calculation of Time-Varying Network
In building the distribution model, the time-varying speed is one of the core variables of this paper; therefore, we first construct the speed model of the time-varying network. Based on the characteristics of the daily distribution time-varying network, one day is divided into the four segments $T_1, T_2, T_3, T_4$ defined in Section 3.1.
#### 3.4.1. Time-dependent Velocity
In urban distribution, we first use a mathematical expression to represent the speed on each path in each period. With each day divided into four periods, a speed-path matrix can be formed to represent the characteristics of the road network; the speed entries corresponding to a path $d_i$ are shown in Table 1.

Table 1: Time-varying road speed matrix.

| Path | $T_1$ | $T_2$ | $T_3$ | $T_4$ |
| --- | --- | --- | --- | --- |
| $d_i$ | $v_{i1}$ | $v_{i2}$ | $v_{i3}$ | $v_{i4}$ |

A mathematical model is established for each time-varying network speed $v_{ijl}$, in which the total distribution time is obtained from the time-dependent flow rate equation in [21]; the factor in the equation comes from a wide range of speed curves. The speed is expressed as follows:

$$v_{ijl} = \frac{\sum_{i \in I} \sum_{j \in J} d_{ijl}}{t_{con} + 0.25\, t_{int} \left[ (x - 1) + \sqrt{(x - 1)^2 + \dfrac{8 J_A x}{Q\, t_{int}}} \right]}. \tag{1}$$

$t_{con}$ changes with the congestion conditions. Based on the characteristics of traffic congestion, the weight indicator $t_{con}$ can be expressed as follows:

$$t_{con} = \frac{d_{con1}}{V_{con1}} + \frac{d_{con2}}{V_{con2}} + \frac{d_{con3}}{V_{con3}} + \frac{d_{con4}}{V_{con4}}, \tag{2}$$

where $d_{coni}$ represents the distance covered on each type of road; the road types are classified as severe congestion, moderate congestion, mild congestion, and smooth. The four corresponding speeds are determined according to the congestion level evaluation and average speed values specified in the data of the National Bureau of Statistics of China (Table 2), and $V_{conk}$, $k = 1, 2, 3, 4$, represents the overall average speed of the road network under each circumstance.
Table 2: Road congestion rating (speeds in km/h).

| Congestion level | Smooth | Mild congestion | Moderate congestion | Severe congestion |
| --- | --- | --- | --- | --- |
| Congestion index | (0, 4] | (4, 6] | (6, 8] | (8, 9) |
| Average speed | (30, 37] | (25, 30] | (23, 25] | (19, 23] |
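Equation (2) is a congestion-weighted travel time: a path is split into stretches of the four road types, and each stretch is driven at that type's average speed. A minimal sketch, with representative speeds taken as midpoints of the Table 2 bands (an assumption made for illustration):

```python
# Congestion-weighted travel time t_con of equation (2).
V_CON = {"smooth": 33.5, "mild": 27.5, "moderate": 24.0, "severe": 21.0}  # km/h, assumed band midpoints

def t_con(segments):
    """segments: road type -> distance in km; returns travel time in hours."""
    return sum(dist / V_CON[road] for road, dist in segments.items())

# A 10 km path: 6 km smooth, 3 km mildly congested, 1 km severely congested.
print(round(t_con({"smooth": 6, "mild": 3, "severe": 1}), 3), "hours")
```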
#### 3.4.2. The Average Speed of the Road Network
Equations (1) and (2) establish the velocity model of each path. Through the speed matrix, we can obtain the average velocity of each path in the distribution program; the average speed $v_i$ is expressed as follows:

$$v_i = \frac{\sum_{i \in I} \sum_{j \in J} d_{ij}}{\sum_{i \in I} \sum_{j \in J} \sum_{l=1}^{n} d_{ijl} / v_{ijl}}. \tag{3}$$

The average speed $v_{it}$ varies with the path and with the time period of the road network: the same distribution route has different average speeds in different periods of the time-varying network.
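In code, equation (3) says the average route speed is total distance over total travel time, i.e., a distance-weighted harmonic mean of the leg speeds rather than an arithmetic mean:

```python
# Average route speed per equation (3).
def average_speed(legs):
    """legs: (distance_km, speed_kmh) pairs for one route in one period."""
    total_distance = sum(d for d, _ in legs)
    total_time = sum(d / v for d, v in legs)
    return total_distance / total_time

# 4 km at 20 km/h plus 2 km at 30 km/h averages 22.5 km/h, not 25 km/h.
print(average_speed([(4, 20), (2, 30)]))
```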
## 4. Model Formulation
### 4.1. Single Period of Time-Varying Network Model Construction
#### 4.1.1. Urban Distribution Model considering Cost
A single time-varying network delivery model means that all distribution centers choose to distribute at the same time. The urban distribution model is generally expressed as a cost function; the delivery model considering cost is expressed as follows:

$$\min C_0 = F_c + V_c = \sum_{j \in J} f_j + \sum_{i \in I} \sum_{j \in J} P\, T_{ijt} X_{ijt} Y_{ij} \cdot \sum_{i \in I} \sum_{j \in J} Z_{ij}. \tag{4}$$

The first part of equation (4) represents the fixed costs of the distribution vehicles, and the second part represents the variable cost related to vehicle operation.
#### 4.1.2. Urban Distribution Model considering Carbon Emissions’ Cost
(1) Measurement of carbon emissions in urban distribution: carbon emission measurement is an important part of building the model. In this study, carbon emissions are measured mainly through the conversion of the vehicle's fuel consumption. Numerous empirical studies have demonstrated that, below the optimal speed (72.1 km/h), the vehicle's fuel consumption per unit distance decreases as speed increases, while above the optimal speed it increases with speed [19].

We use the fuel consumption model proposed by Akcelik [21] to calculate the fuel consumption on each path. The equation is expressed as follows:

$$F = 3.6 \left[ k_1 \left( 1 + \frac{v^3}{2 v_m^3} \right) + k_2 v \right] \frac{d_v}{v}, \tag{5}$$

where $F$ represents fuel consumption, $v$ the vehicle speed, $v_m$ the maximum vehicle speed, and $k_1$ and $k_2$ are fuel consumption factors estimated from historical data.

Because the carbon emission from a given fuel is directly proportional to the fuel consumption (ICF, 2006), the carbon emission factor of the fuel can be obtained by experiment or by model calculation. Letting $\tau$ represent the carbon emission factor of fuel consumption, the carbon emission in distribution is

$$\mathrm{CER} = \tau F, \tag{6}$$

where the carbon emission factor $\tau$ is based on the fossil fuel carbon emission factors issued by the International Energy Organization in 2009. Through unit conversion, the carbon emission from 1000 L of fuel consumption is calculated to be 7.369 tons. China's carbon emission factor is much higher than that of the other countries considered (all of which have carbon tax policies).

(2) Carbon costs: the cost of carbon emissions is composed of the amount of carbon emitted and the carbon tax that companies must pay:

$$\min C_e = P_c\, \mathrm{CER} = P_c \tau \sum_{i \in I} \sum_{j \in J} F_{ijt} X_{ijt} Y_{ij} \cdot \sum_{i \in I} \sum_{j \in J} Z_{ij} = P_c \tau \sum_{i \in I} \sum_{j \in J} 3.6 \left[ k_1 \left( 1 + \frac{v_{it}^3}{2 v_m^3} \right) + k_2 v_{it} \right] \frac{X_{ijt}}{v_{it}} \cdot \sum_{i \in I} \sum_{j \in J} d_{ij} Y_{ij} \cdot \sum_{i \in I} \sum_{j \in J} Z_{ij}. \tag{7}$$

(3) Optimization model based on multiple objectives: equation (7) gives the lowest carbon cost, and equation (4) gives the lowest economic cost of distribution. A linear weighting method is used to transform the double objective function into a single objective function, and the objective function $UC_1$ is obtained as follows:

$$UC_1 = \lambda_1 C_e + \lambda_2 C_0 = \frac{1}{C_e^*} P_c \tau \sum_{i \in I} \sum_{j \in J} F_{ijt} X_{ijt} Y_{ij} \cdot \sum_{i \in I} \sum_{j \in J} Z_{ij} + \frac{1}{C_0^*} \left[ \sum_{j \in J} f_j + \sum_{i \in I} \sum_{j \in J} P\, T_{ijt} X_{ijt} Y_{ij} \cdot \sum_{i \in I} \sum_{j \in J} Z_{ij} \right], \tag{8}$$

subject to

$$\sum_{i \in I, j \in J} q_{ij} \sum_{i \in M} Y_{ilv} \le Q, \tag{9}$$

$$\sum_{i \in I} \sum_{j \in J} q_{ij} \le q, \tag{10}$$

$$t_a < \sum_{i \in I} \sum_{j \in J} \sum_{t \in \{T_1, T_2, T_3, T_4\}} T_{ijt} < t_b, \tag{11}$$

$$v_0 \le v_m, \quad v \le v_m, \tag{12}$$

$$\sum_{l \in M} q_{ilv} - \sum_{l \in M} q_{ikv} = 0, \quad \forall l \in M, \forall v \in V, \tag{13}$$

$$d_{ijl} > 0, \tag{14}$$

$$N = 1, \tag{15}$$

$$\sum_{v \in V} \sum_{l \in M} Y_{ilv} = 1, \quad \forall i \in I, \tag{16}$$

$$Y_{ij} = \begin{cases} 1, & q_{ij} \le Q, \\ 0, & q_{ij} > Q, \end{cases} \tag{17}$$

$$Z_{ij} \in \{0, 1\}, \tag{18}$$

$$X_{ijt} \in \{X_{ijT_1}, \ldots, X_{ijT_n}\}, \quad \forall n \in N. \tag{19}$$

$\lambda_1$ is the reciprocal of the minimum environmental cost of the single-objective problem, and $\lambda_2$ is the reciprocal of the minimum economic cost of the single-objective problem.
Constraint (9) represents the vehicle capacity limit; constraint (10) the capacity limit of the distribution center; constraint (11) the time window that distribution must meet for customer requirements; constraint (12) the vehicle speed limit; constraint (13) ensures the continuity of the distribution process, that is, a vehicle must leave after entering a distribution center or node; constraint (14) indicates that the demand of each customer $i$ must be met; constraint (15) restricts distribution to a single time-varying period; constraint (16) assigns exactly one vehicle to each customer; constraint (17) states that when vehicle capacity cannot meet a customer's demand, the vehicle must return to the distribution center for replenishment; constraint (18) governs the selection of distribution centers; and constraint (19) defines the period of distribution.
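The sketch below strings equations (5)–(7) together for a single path. Only $v_m$ = 95 km/h and the 7.369 kg/L emission factor come from the case study later in the paper; the $k_1$, $k_2$, and carbon-price values are illustrative placeholders, not the paper's calibrated parameters.

```python
# Sketch of eqs. (5)-(7): fuel use from an Akcelik-style speed model,
# converted to a carbon cost via the emission factor tau.

def fuel_consumption(distance_km, v, v_max, k1, k2):
    """Eq. (5): litres of fuel for one path travelled at average speed v."""
    per_km = 3.6 * (k1 * (1.0 + v ** 1.5 / v_max ** 3) + k2 * v) / v
    return per_km * distance_km

TAU = 7.369        # kg CO2 per litre (source: 7.369 t per 1000 L)
CARBON_PRICE = 0.57  # CNY per kg CO2 -- illustrative placeholder

def carbon_cost(distance_km, v, v_max=95.0, k1=0.2, k2=0.002):
    fuel = fuel_consumption(distance_km, v, v_max, k1, k2)  # eq. (5)
    emission = TAU * fuel                                   # eq. (6): CER = tau * F
    return CARBON_PRICE * emission                          # eq. (7): Ce = Pc * CER

# Distance and speed borrowed from the case study's Table 4 (path A->2, period 2).
print(round(carbon_cost(4.3, v=23.0), 3))
```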
### 4.2. Cross-Period Time-Varying Network Model Construction
To construct the cross-period time-varying urban distribution network optimization model, we introduce the period decision variable $X_{ij}^{t}$. The distribution process is divided into two or more stages according to the period, and the fuel consumption, distance, and distribution time in each interval are represented separately. The linear weighting method is again adopted to solve the model:

$$UC_2=\lambda_1 C_e+\lambda_2 C_0=\frac{1}{C_e^{*}}P_c\tau\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}F_{ij}^{t}X_{ij}^{t}Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J}Z_{ij}+\frac{1}{C_0^{*}}\left(\sum_{j\in J}f_j+\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}P\,T_{ij}^{t}X_{ij}^{t}Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J}Z_{ij}\right), \tag{20}$$

$$\text{s.t.}\quad (9),(10),(12)\text{–}(14),(16)\text{–}(19),\quad N>1. \tag{21}$$

The first part of model (20) represents the environmental costs, and the second part the economic costs. The model also obeys constraints (9), (10), (12)–(14), and (16)–(19); constraint (21) states that distribution is carried out across periods of the time-varying network.
### 4.3. Distribution Optimization Model considering the Carbon Penalty Rate
#### 4.3.1. Carbon Penalty Rates
The carbon penalty rate is the cost loss rate incurred when the government levies a carbon tax on carbon emissions but the enterprise still adopts a distribution scheme that ignores carbon emissions; it is treated as an endogenous variable. Let $C$ denote the optimal total cost of urban distribution when the enterprise does not consider carbon emissions, and $C^{*}$ the optimal total cost when the enterprise considers both economic cost and carbon emission cost. The carbon penalty rate is

$$\theta=\frac{C-C^{*}}{C^{*}}. \tag{22}$$

The carbon penalty rate is of great significance to the enterprise's choice of urban distribution model, so several scenarios for it are discussed before constructing the distribution model. An abstract cost function of urban distribution is constructed to treat the different cases; the total cost is

$$C=\psi(R,v)=F_c+V_c+C_e. \tag{23}$$

The carbon penalty rate is discussed in the following propositions. We assume $C_e\ge C_e^{*}$, meaning that the optimal solution must attain the minimum carbon emission.

Proposition 1.
When $V_c>V_c^{*}$, $\theta>0$.

Proof.
Given $V_c>V_c^{*}$, substitute into equation (22):

$$\theta=\frac{C-C^{*}}{C^{*}}=\frac{(F_c+V_c+C_e)-(F_c+V_c^{*}+C_e^{*})}{F_c^{*}+V_c^{*}+C_e^{*}}=\frac{V_c-V_c^{*}+C_e-C_e^{*}}{F_c^{*}+V_c^{*}+C_e^{*}}>0. \tag{24}$$
Proposition 1 shows that when the model considering economic cost and carbon emission cost reduces both the enterprise's carbon emission and its variable economic cost, the enterprise weighs the cost balance implied by the carbon penalty rate and adopts, on its own initiative, the urban distribution model that reduces carbon emission.

Proposition 2.
When $(V_c-V_c^{*})/(C_e-C_e^{*})<1$, $\theta>0$.

Proof.
Given $C_e\ge C_e^{*}$ with $C_e-C_e^{*}>0$, and $(V_c-V_c^{*})/(C_e-C_e^{*})<1$, it follows that $C_e-C_e^{*}>V_c-V_c^{*}$. Substituting these conditions into equation (24), we obtain $\theta>0$.
Proposition 2 shows that a model considering both economic cost and carbon emission cost may increase the variable cost; however, when that increase is smaller than the cost of the carbon emissions avoided, the enterprise will weigh the cost balance implied by the carbon penalty rate and adopt, on its own initiative, the urban distribution model that reduces carbon emission.
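For concreteness, equation (22) can be checked numerically against the single-period $T_1$ figures reported later in Table 7:

```python
# Worked check of eq. (22) with the T1 figures from Table 7 below.
C, C_star = 383.27, 381.24
theta = (C - C_star) / C_star
print(f"{theta:.2E}")  # 5.32E-03 -> positive, so the low-carbon scheme pays off
```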
#### 4.3.2. Distribution Optimization Model
In urban distribution, the carbon penalty rate helps enterprises choose a distribution scheme according to their own situation; it also reflects the impact of carbon emissions on enterprise decision-making. In this section, the mathematical description and derivation of the carbon penalty rate follow Section 4.3.1, and the choice of urban distribution scheme under a time-varying network based on the carbon penalty rate is studied. When all distribution centers choose the same period for distribution, the carbon penalty rate of the cross-period time-varying network coincides with that of the single-period time-varying network. A decision variable $X_{ij}^{t}$ determines the selected time period of each distribution path. To establish the cross-period time-varying network $\theta$, models of $C$ and $C^{*}$ are needed that yield the minimum total cost when both cost and environmental factors are considered:

$$\min C^{*}=\sum_{j\in J}f_j+\left(P_c\tau\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}F_{ij}^{t*}X_{ij}^{t}Y_{ij}+\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}P\,T_{ij}^{t*}X_{ij}^{t}Y_{ij}\right)\cdot\sum_{i\in I}\sum_{j\in J}Z_{ij},\qquad C=\sum_{j\in J}f_j+\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}P\,T_{ij}^{t}X_{ij}^{t}Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J}Z_{ij}. \tag{25}$$

The carbon penalty rate model of the cross-period time-varying network is then

$$\theta=\frac{\displaystyle\sum_{j\in J}f_j+\sum_{i\in I}\sum_{j\in J}\sum_{t\in\{T_1,T_2,T_3,T_4\}}P\,T_{ij}^{t}X_{ij}^{t}Y_{ij}\cdot\sum_{i\in I}\sum_{j\in J}Z_{ij}-\min C^{*}}{\min C^{*}}. \tag{26}$$

In equation (26), $\theta$ compares the cost loss rates of different distribution schemes when the carbon emission cost is ignored; it may be negative, positive, or zero. Under a time-varying network, solving for the carbon penalty rate of urban distribution becomes more complex, and its value directly determines the enterprise's choice of distribution scheme. If the carbon penalty rate of the scheme that reduces carbon emissions is positive, the enterprise will take the initiative to choose the distribution scheme with the lowest carbon emissions.
## 5. Solving Approach
### 5.1. VNS Algorithm Design
The urban low-carbon distribution optimization problem under a time-varying network extends the general VRP with time-varying vehicle-operation factors and carbon emission factors; it is a typical NP-hard problem. The variable neighborhood search (VNS) algorithm has been applied to TSP, CVRP, and VRPTW problems, and its effectiveness has been proved. The vehicle path planning studied here can be regarded as a double-layer iterative process. The algorithm flow is shown in Figure 3.

Figure 3
VNS algorithm flowchart.

We use an improved VNS algorithm to solve the model; it uses the PSO algorithm to improve search efficiency when generating the initial solution. The algorithm flow is as follows: first, in the initial-solution step, the relationship between customers and distribution centers is determined through the PSO algorithm; second, the initial solution is recoded and fed into the VNS neighborhood search, which determines the time period of each distribution center and then the path arrangement between customers and distribution centers; third, the solution obtained by the VNS is substituted back into PSO to verify the rationality of the initial solution. If it is reasonable, the search stops; if not, the search is repeated. Combined with the flowchart, this section describes the specific steps of the algorithm.
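A minimal runnable skeleton of this two-layer loop is sketched below. The three helpers are deliberately trivial stand-ins (hypothetical) for the PSO assignment, the VNS search, and the verification step, not the paper's actual operators.

```python
import random

def pso_assign(customers, depots):
    # Upper layer: PSO would optimize this mapping; here we assign randomly.
    return {c: random.choice(depots) for c in customers}

def vns_search(assignment):
    # Lower layer: VNS would pick periods and order routes; here, identity grouping.
    routes = {}
    for cust, depot in assignment.items():
        routes.setdefault(depot, []).append(cust)
    return routes

def feasible(routes, capacity=7):
    # Verification step: e.g., no depot serves more than `capacity` customers.
    return all(len(r) <= capacity for r in routes.values())

def solve(customers, depots, max_rounds=10):
    for _ in range(max_rounds):
        assignment = pso_assign(customers, depots)  # step 1: customer-depot mapping
        routes = vns_search(assignment)             # step 2: periods and routes
        if feasible(routes):                        # step 3: verify via PSO
            return assignment, routes               # reasonable: stop searching
    return None                                     # unreasonable after all retries

print(solve(customers=list(range(1, 11)), depots=["A", "B"]))
```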
#### 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper-layer coding of the algorithm is realized by the PSO algorithm, which establishes the relationship between distribution centers and customers. With $m$ distribution centers and $n$ demand points, each particle corresponds to an $n$-dimensional vector whose elements take values in $[1,m]$; the vector coding thus records which distribution center serves each demand point. Through the upper-layer solution, the customer set $\{n_1,n_2,n_3,\ldots,n_i\}$ served by distribution center $i$ is determined. We then conduct secondary coding and set the candidate time-varying period set as $T_i$, $i=1,2,3,4$. The time period of each distribution center is determined by random selection (when studying single-period time-varying network urban distribution, the same time-period parameters are substituted in this operation). Let the set of candidate paths be $I_i$, $i=1,2,\ldots,M$, the set of paths selected by the algorithm be $J_i$, and the set of running times corresponding to each path be $T_i$; $m$ denotes the distribution center set and $n$ the customer demand set. The distribution routes are arranged according to the initial results of the upper-layer code, based on the following equation:

$$S_{ij}=T(O_i,D_a)+T(O_j,D_a)-T(O_i,O_j),\quad i\ne j. \tag{27}$$

$S_{ij}$ represents the distribution time difference between customers after allocation. Nodes are connected by the method of minimum difference: the smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
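The sketch below illustrates equation (27) with a hypothetical travel-time lookup `T`; ranking pairs by ascending $S_{ij}$ mirrors the minimum-difference linking rule described above.

```python
# Sketch of eq. (27): S_ij compares depot-relative travel times of customers
# i and j. T is a hypothetical travel-time lookup (minutes), depot = D_a.

def s_ij(T, i, j, depot):
    return T[i][depot] + T[j][depot] - T[i][j]

def pairs_by_min_difference(T, customers, depot):
    """Rank customer pairs so the route builder links the smallest S_ij first."""
    pairs = [(s_ij(T, i, j, depot), i, j)
             for i in customers for j in customers if i < j]
    return sorted(pairs)

# Tiny example: three customers served from depot "A".
T = {
    1: {"A": 10, 1: 0, 2: 4, 3: 9},
    2: {"A": 12, 1: 4, 2: 0, 3: 7},
    3: {"A": 8,  1: 9, 2: 7, 3: 0},
}
print(pairs_by_min_difference(T, [1, 2, 3], "A"))  # (9, 1, 3) is linked first
```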
#### 5.1.2. Neighborhood Search
Neighborhood structure is the core of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance, and searching over them yields more high-quality solutions. The neighborhood structure is constructed as follows (a minimal sketch follows the list):

(1) Node insertion. A customer of one distribution center is inserted into the distribution path of another distribution center, which changes the route arrangement of both centers.

(2) Node exchange. Two customers, one selected by each of two distribution centers, are exchanged, improving the neighborhood structure.

(3) Cross interchange. Two distribution centers each select two nodes and the path contained between them; the two paths and their nodes are then exchanged to obtain a new neighborhood and prevent the search from falling into a local optimum.
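Below is a minimal sketch of the three moves, assuming each distribution center's route is a plain list of customer ids; index handling is simplified and routes are assumed non-empty.

```python
import random

def node_insertion(route_a, route_b):
    """Move one customer from route_a into a random position of route_b."""
    a, b = route_a[:], route_b[:]
    cust = a.pop(random.randrange(len(a)))
    b.insert(random.randrange(len(b) + 1), cust)
    return a, b

def node_exchange(route_a, route_b):
    """Swap one customer between the two routes."""
    a, b = route_a[:], route_b[:]
    i, j = random.randrange(len(a)), random.randrange(len(b))
    a[i], b[j] = b[j], a[i]
    return a, b

def cross_interchange(route_a, route_b):
    """Swap tail segments (cut points plus the paths behind them)."""
    a, b = route_a[:], route_b[:]
    i, j = random.randrange(len(a)), random.randrange(len(b))
    return a[:i] + b[j:], b[:j] + a[i:]

random.seed(1)
print(node_insertion([1, 2, 3], [4, 5]))
print(node_exchange([1, 2, 3], [4, 5]))
print(cross_interchange([1, 2, 3], [4, 5]))
```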
#### 5.1.3. Solution Selection
By constantly changing the neighborhood structure, a series of solutions is obtained, and rules are needed to select the optimal one. Following Hansen and Mladenovic [29] and Kirkpatrick et al. [30], we select the optimal solution as follows: if the neighborhood solution $x'$ outperforms the solution $x$, replace $x$ with $x'$; if no better solution is found after a certain number of iterations, a new neighborhood solution is accepted probabilistically to prevent the search from falling into a local optimum. The probability of becoming the alternative solution is

$$P(x')=\begin{cases}1,&f(x')<f(x),\\ e^{-(f(x')-f(x))/T_k},&f(x')\ge f(x),\end{cases} \tag{28}$$

where $f(x)$ is the fitness function, substituted from the objective functions in equations (4), (7), and (8) during the solution process, $k$ is the number of iterations, and $T_k$ is the total cost reduced at the $k$th iteration. According to equation (28), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution centers and distribution routes.
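A compact sketch of the acceptance rule in equation (28), written with the conventional negative exponent of simulated annealing (an assumption on our part, since the extracted formula lost its sign):

```python
import math
import random

def accept(f_new, f_old, T_k):
    """Eq. (28)-style acceptance: always keep improvements; otherwise accept
    with a simulated-annealing probability (Kirkpatrick et al. [30])."""
    if f_new < f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / T_k)

print(accept(f_new=105.0, f_old=100.0, T_k=10.0))  # True with prob exp(-0.5) ~ 0.61
```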
#### 5.1.4. Adjustment and Verification
Since the upper-layer solution is produced by the PSO algorithm, the optimal solution obtained by the VNS algorithm must be brought back to PSO to verify whether the initial distribution-center arrangement is optimal. If not, the initial parameters are adjusted and the problem is solved again.
### 5.2. A Case Study of Vegetable Distribution to Shenyang Hospitals
A logistics enterprise in Shenyang is taken as the example background. The enterprise uses two distribution centers to carry out comprehensive vegetable distribution to the canteens of ten hospitals in Shenyang. The study data are real data collected through investigation or a GIS system and are shown in Table 3.

Table 3
Basic parameters of each demand point.
| No. | Demand point | Staff | Beds | Quantity (tons) |
| --- | --- | --- | --- | --- |
| 1 | Shengjing Hospital of China Medical University (Nanhu district) | 3400 | 2300 | 5 |
| 2 | General Hospital of Northern Theater Command | 1700 | 1200 | 3 |
| 3 | The People's Hospital of Liaoning Province | 1242 | 888 | 2 |
| 4 | Shenyang Women's and Children's Hospital | 630 | 400 | 1 |
| 5 | The First People's Hospital of Shenyang | 1200 | 600 | 2 |
| 6 | The Fourth Affiliated Hospital of China Medical University | 1415 | 1000 | 3 |
| 7 | JiuZhou Hospital | 400 | 200 | 1 |
| 8 | The First Hospital of China Medical University | 3043 | 2249 | 5 |
| 9 | The Fifth People's Hospital of Shenyang | 1300 | 700 | 2 |
| 10 | Shenyang 202 Hospital | 1000 | 850 | 2 |

Time-varying factors are considered in urban distribution, so each path has a different speed in each period; that is, each path has four speeds (one per period). We also consider the directionality of the distribution path: the round trip on the same path has independent speeds and distances. Therefore, there are speed and path matrices between demand points and distribution centers and between pairs of demand points. The basic speed and path matrices are given in Tables 4 and 5, respectively. Shenyang Sitong vegetable distribution center (distribution center A) and Shenyang Shuangrui distribution center (distribution center B) are selected as the distribution starting points; the path lengths between the demand points and the distribution centers and the path running speeds (taking period 2 as an example) are shown in the following tables, where $v_{Am}^{2}$ is the speed from distribution center A to each demand point in period 2 (Table 4) and $v_{mA}^{2}$ is the speed from each demand point to distribution center A in period 2 (Table 5).

Table 4
Speed-distance matrix from distribution center to demand point.
| Demand point | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A (km) | 4 | 4.3 | 2 | 1.1 | 4.6 | 5.9 | 3.1 | 4.7 | 9.9 | 4.2 |
| B (km) | 5.5 | 6.8 | 5.8 | 5.2 | 4.5 | 5.8 | 1.7 | 2.8 | 9 | 3.9 |
| $v_{Am}^{2}$ (km/h) | 20 | 23 | 23 | 9 | 23 | 30 | 20 | 16 | 25 | 18 |
| $v_{Bm}^{2}$ (km/h) | 22 | 22 | 18 | 23 | 16 | 30 | 12 | 14 | 24 | 20 |

Table 5
Speed-distance matrix from demand point to distribution center.
| Demand point | A (km) | B (km) | $v_{mA}^{2}$ (km/h) | $v_{mB}^{2}$ (km/h) |
| --- | --- | --- | --- | --- |
| 1 | 3.5 | 5.3 | 19 | 21 |
| 2 | 2.9 | 6.5 | 25 | 19 |
| 3 | 2.4 | 6.9 | 22 | 22 |
| 4 | 1 | 5.9 | 15 | 25 |
| 5 | 3.8 | 4.3 | 24 | 25 |
| 6 | 6.2 | 5.6 | 24 | 16 |
| 7 | 3.2 | 1.9 | 16 | 23 |
| 8 | 5 | 2.8 | 17 | 28 |
| 9 | 10.1 | 8.4 | 28 | 24 |
| 10 | 3.5 | 4.7 | 23 | 24 |

The basic parameters of vehicle distribution cost in the distribution centers are as follows. The 24-hour day is divided into four periods according to road conditions: $T_1$ = 6:00–9:00, $T_2$ = 9:00–16:00, $T_3$ = 16:00–19:00, and $T_4$ = 19:00–6:00. Given the actual situation of the example, only daytime distribution is considered in the calculation, so distribution in $T_4$ is excluded. The selected distribution vehicle is a BAIC flag bell 5-ton load-carrying container truck, diesel powered, with a displacement of 3.168 L and a maximum speed of 95 km/h. The fixed cost of each vehicle is 142 CNY/day, and the diesel price is 6.94 CNY/L. The initial carbon tax rate is 57.69%, that is, 4.195 CNY per liter of diesel. Substituting the basic data into the VNS algorithm, we obtain the optimal distribution scheme, total cost, carbon emission, and carbon penalty rate for each distribution center. In this section, the example is analyzed from two aspects: single-period and cross-period time-varying network distribution.
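A quick back-of-the-envelope check on these parameters: dividing the per-liter carbon tax by the per-liter emission factor implies a carbon price of roughly 0.57 CNY per kg of CO2. This derivation is ours, not a value stated by the paper.

```python
# Consistency check on the case parameters (source values).
diesel_price = 6.94           # CNY per litre
carbon_tax_per_litre = 4.195  # CNY per litre of diesel
emission_per_litre = 7.369    # kg CO2 per litre (7.369 t per 1000 L)

implied_price_per_kg = carbon_tax_per_litre / emission_per_litre
print(round(implied_price_per_kg, 4))             # ~0.5693 CNY per kg CO2
print(round(carbon_tax_per_litre / diesel_price, 4))  # tax is ~60% of the fuel price
```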
#### 5.2.1. Single Time-Varying Network Distribution Schemes
The distribution schemes of $T_1$, $T_2$, and $T_3$ are shown in Table 6.

Table 6
Distribution scheme of single period time-varying network.
| Time | Model | Distribution scheme | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- |
| $T_1$ | $C_e$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 41.44 | 76.35 | 41.44 |
| $T_1$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 339.80 | — | — |
| $T_1$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 381.24 | 76.35 | 41.44 |
| $T_2$ | $C_e$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 36.10 | 66.51 | 36.10 |
| $T_2$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 339 | — | — |
| $T_2$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 375.90 | 66.51 | 36.10 |
| $T_3$ | $C_e$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 39.27 | 72.34 | 39.27 |
| $T_3$ | $C_0$ | A→1→2→3→4→5→A; B→8→6→7→10→9→B | 350.80 | — | — |
| $T_3$ | $UC$ | A→1→2→3→4→A; B→8→6→5→7→10→9→B | 379.07 | 72.34 | 39.27 |

Under single-period time-varying network distribution, delivering in period $T_3$ is the best scheme, with the lowest total cost and the lowest carbon emission, because during this period the average vehicle speed is high and road conditions are relatively smoother than in other periods. In the single-period time-varying network distribution scheme, a carbon penalty rate (cost loss rate) exists (Table 7). Its existence shows that when carbon emission is subject to policy constraints, economic factors can influence enterprises' decision-making and lead them to adopt a low-carbon distribution scheme on their own initiative.

Table 7
Comparison of carbon penalty rates in single period time-varying network.
| Period | $C$ | $C^{*}$ | $\theta$ |
| --- | --- | --- | --- |
| $T_1$ | 383.27 | 381.24 | 5.32E–03 |
| $T_2$ | 378.18 | 375.90 | 6.07E–03 |
| $T_3$ | 379.45 | 379.07 | 1.03E–03 |
#### 5.2.2. Cross Period Time-Varying Network Distribution Schemes
A cross-period time-varying network arises when there are several distribution centers and each must distribute in a different period (because of customer time-window demands or road-condition restrictions). Through this model we study how enterprises choose distribution schemes in different situations. The distribution schemes of the cross-period time-varying network over $T_1$ and $T_2$ are shown in Table 8, and those over $T_2$ and $T_3$ in Table 9.

Table 8
The distribution schemes of the cross-period time-varying network over $T_1$ and $T_2$.
| Model | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| $C_e$ | $T_1$ | $v_{Am}^{1}$ | A→1→2→3→4→A | 38.24 | 70.45 | 38.24 |
| | $T_2$ | $v_{Bm}^{2}$ | B→8→6→5→7→10→9→B | | | |
| $C_0$ | $T_2$ | $v_{Am}^{2}$ | A→1→2→3→4→5→A | 339 | — | — |
| | $T_1$ | $v_{Bm}^{1}$ | B→8→6→7→10→9→B | | | |
| $UC$ | $T_1$ | $v_{Am}^{1}$ | A→1→2→3→4→A | 378.04 | 70.45 | 38.24 |
| | $T_2$ | $v_{Bm}^{2}$ | B→8→6→5→7→10→9→B | | | |

Table 9
The distribution schemes of the cross-period time-varying network over $T_2$ and $T_3$.
| Model | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| $C_e$ | $T_3$ | $v_{Am}^{3}$ | A→1→2→3→4→A | 37.14 | 68.41 | 37.13 |
| | $T_2$ | $v_{Bm}^{2}$ | B→8→6→5→7→10→9→B | | | |
| $C_0$ | $T_2$ | $v_{Am}^{2}$ | A→1→2→3→4→5→A | 339 | — | — |
| | $T_3$ | $v_{Bm}^{3}$ | B→8→6→7→10→9→B | | | |
| $UC$ | $T_3$ | $v_{Am}^{3}$ | A→1→2→3→4→A | 376.93 | 68.41 | 37.13 |
| | $T_2$ | $v_{Bm}^{2}$ | B→8→6→5→7→10→9→B | | | |

Here the route period denotes the service period of the distribution center. Because the time periods differ, the route speeds have different time-varying characteristics, and fuel consumption and carbon emission change with the period. Numerical analysis of the example shows that the total distribution cost and carbon emission of the cross-period network over $T_2$ and $T_3$ are better than those over $T_1$ and $T_2$. Comparing Tables 8 and 9 with Table 6 shows that carbon emission and total cost decrease when the time-varying network is considered; the cross-period distribution scheme is better than some single-period schemes. Moreover, in the cross-period distribution schemes the carbon penalty rates (Table 10) are positive, which again shows that when carbon emission is subject to policy constraints, economic factors lead enterprises to adopt low-carbon distribution schemes on their own initiative.

Table 10
Comparison of carbon penalty rates in cross-period time-varying network.
| Periods | $C$ | $C^{*}$ | $\theta$ |
| --- | --- | --- | --- |
| $T_1T_2$ | 380.39 | 378.04 | 6.20E–03 |
| $T_2T_3$ | 377.46 | 376.93 | 1.40E–03 |
#### 5.2.3. Carbon Emission Factors’ Analysis
Speed and distance in urban distribution are both related to carbon emissions. The longer the distance, the greater the distribution speed and the smaller the carbon emission per unit distance; yet the longer the delivery distance, the greater the total carbon emission. Total carbon emission is positively correlated with distance, as shown in Figure 4.

Figure 4
Relationship between distance and carbon emission.

The characteristics of the urban distribution network show that the longer the distribution distance, the greater the distribution speed, as shown in Figure 5. The main reason is that shorter urban routes have a higher probability of congestion, whereas longer routes are less likely to be congested. Distribution speed (before reaching the optimal speed) is negatively correlated with carbon emissions, as shown in Figure 6.

Figure 5
Relationship between distance and speed.

Figure 6
The impact of vehicle speed on carbon emission.
### 5.3. Sensitivity Analysis
In this section, the impact of the time-varying network and carbon penalty rate on the distribution scheme will be discussed according to the data of the above example.
#### 5.3.1. Analysis of the Impact of Time-Varying Network on Urban Distribution
The impact of a time-varying network on carbon emission is mainly the impact of speed, which is based on road conditions. The change of speed embodies the time variation, and it also affects the actual amount of carbon emitted. Taking the data in period $T_2$ as the benchmark, we study the impact of the period on carbon emissions, environmental costs, and total costs by changing the vehicle running speed within the time-varying network period. The speed is increased in steps of 30%, and the results are shown in Table 11.

Table 11
Analysis of interval time-varying sensitivity.
| Speed (km/h) | Carbon emission (kg) | Environmental cost (¥) | Percentage | Total cost (¥) | Percentage |
| --- | --- | --- | --- | --- | --- |
| 15.0 | 76.35 | 41.44 | — | 381.24 | — |
| 19.5 | 68.13 | 36.98 | −10.76% | 376.78 | −1.17% |
| 24.0 | 62.67 | 34.02 | −8.02% | 373.82 | −0.79% |
| 28.5 | 59.06 | 32.06 | −5.76% | 371.86 | −0.52% |
| 33.0 | 56.56 | 30.70 | −4.23% | 370.50 | −0.36% |

According to Table 11, increasing speed reduces carbon emissions, environmental costs, and total costs. The total cost changes most gently because the carbon price has not reached a particularly high level, so the change in environmental cost accounts for a small proportion of the total cost. The trends of carbon emissions and environmental costs coincide, and their rates of decline exceed that of the total cost. The more pronounced the time-division characteristics of the time-varying network, the more pronounced the speed changes, and the more significant the impact on a distribution scheme that considers carbon emissions.
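The "Percentage" columns of Table 11 can be reproduced from the cost columns as step-over-step changes; the small discrepancies below are rounding in the source table.

```python
# Recomputing Table 11's percentage columns from its cost columns.
env_costs = [41.44, 36.98, 34.02, 32.06, 30.70]
tot_costs = [381.24, 376.78, 373.82, 371.86, 370.50]

for prev, cur in zip(env_costs, env_costs[1:]):
    print(f"{(cur - prev) / prev:+.2%}")  # -10.76%, -8.00%, -5.76%, -4.24%
for prev, cur in zip(tot_costs, tot_costs[1:]):
    print(f"{(cur - prev) / prev:+.2%}")  # -1.17%, -0.79%, -0.52%, -0.37%
```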
#### 5.3.2. Analysis of the Impact of Carbon Penalty Rate on Urban Distribution
Sensitivity of the carbon penalty rate and of the enterprise's distribution plan is analyzed by decreasing and increasing the unit carbon emission cost by 10%, 30%, and 50%, respectively. The results are shown in Figure 7.

Figure 7
Sensitivity analysis of carbon penalty rate.

Figure 7 shows that as the unit carbon emission cost changes, the carbon penalty rates of both single-period and cross-period distribution trend upward: the higher the carbon emission cost, the greater the carbon penalty rate. This is because, with a higher carbon cost, the enterprise must pay more for its carbon emissions, so it loses more when it does not choose the low-carbon scheme. At the same unit carbon emission cost, the carbon penalty rates of different periods change differently: the ranges of change in single periods $T_1$ and $T_2$ and cross period $T_{12}$ are greater than those in single period $T_3$ and cross period $T_{23}$. A large change in the carbon penalty rate can change the optimization scheme. Tables 8 and 9 show that without considering carbon emission cost, $T_1$ and $T_{12}$ are the suboptimal schemes for the enterprise in the single-period and cross-period cases, respectively. After considering the carbon emission cost and the carbon penalty rate, however, the higher carbon emission cost and the larger range of penalty-rate change make $T_{23}$ the optimal cross-period scheme and $T_3$ the suboptimal single-period scheme: the loss that a high carbon penalty rate brings to the enterprise exceeds the benefit of keeping the other schemes, so the enterprise switches to a low-carbon scheme instead of the original one. Figure 7 also shows that when the unit carbon cost is below 2.937, the carbon penalty rate is negative, meaning that a scheme considering carbon emission cost would increase the total cost, so the enterprise will not pursue active emission reduction; a low unit carbon emission cost therefore does not stimulate emission reduction. However, since carbon emission cost raises the total cost, there are limits on how far the unit carbon emission cost can rise: it cannot grow at the expense of economic development.
#### 5.3.3. Comparing the Improved VNS with VNS Algorithm
Variable neighborhood search (VNS) is a well-known metaheuristic for solving complex optimization problems, and different variants of VNS have been applied to various VRPs. In particular, de Freitas and Penna proposed a VNS-based heuristic for urban distribution named the hybrid general VNS (HGVNS). We adapt the HGVNS to the urban distribution model and compare its results with those of the improved VNS (IVNS) in Table 12. The data in Table 12 come from the case study, and the Gap column reports the improvement in total cost of the IVNS over the HGVNS, calculated as

$$\mathrm{GAP}=\frac{C_{\mathrm{IVNS}}-C_{\mathrm{HGVNS}}}{C_{\mathrm{HGVNS}}}\times 100\%. \tag{29}$$

Table 12
Comparing the IVNS with HGVNS.
| Time | HGVNS total cost | HGVNS carbon emission cost | IVNS total cost | IVNS carbon emission cost | Gap (%) |
| --- | --- | --- | --- | --- | --- |
| $T_1$ | 393.54 | 48.56 | 383.27 | 41.44 | −2.63 |
| $T_2$ | 385.58 | 40.21 | 378.18 | 36.10 | −1.92 |
| $T_3$ | 393.31 | 48.08 | 379.45 | 39.27 | −3.52 |
| $T_{12}$ | 391.11 | 46.63 | 380.39 | 38.24 | −2.74 |
| $T_{23}$ | 384.67 | 42.45 | 377.46 | 37.13 | −1.88 |

Table 12 shows that the improved VNS outperforms the HGVNS in terms of both total cost and carbon emission cost.
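Equation (29) can be checked directly against Table 12; the recomputed values agree with the table's Gap column to rounding.

```python
# Recomputing the Gap column of Table 12 with eq. (29).
results = {  # period: (C_HGVNS, C_IVNS)
    "T1": (393.54, 383.27), "T2": (385.58, 378.18), "T3": (393.31, 379.45),
    "T12": (391.11, 380.39), "T23": (384.67, 377.46),
}
for period, (hgvns, ivns) in results.items():
    print(period, f"{(ivns - hgvns) / hgvns:+.2%}")  # -2.61%, -1.92%, -3.52%, -2.74%, -1.87%
```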
## 5.1. VNS Algorithm Design
The urban low-carbon distribution optimization problem based on the time-varying network is a complex problem considering the time-varying factors of vehicle operation and carbon emission factors on the basis of the general VRP problem. It is a typical NP-hard problem. The variable neighborhood search algorithm has been applied to solve some TSP, CVRP, and VRPTW problems, and the effectiveness of the algorithm has been proved. The vehicle path planning studied can be regarded as a double-layer iterative process. The algorithm flow is shown in Figure3.Figure 3
VNS algorithm flowchart.We use the improved VNS algorithm to solve the model. The improved VNS algorithm uses the PSO algorithm to improve the search efficiency in the initial solution generation part. The algorithm flow is as follows: first, in the initial solution part, the relationship between customers and distribution centers is determined through the PSO algorithm; second, the initial VNS algorithm is recoded and substituted into the VNS neighborhood search, the time period of the distribution center is determined, and then the path arrangement between the customer and the distribution center is determined; third, the solution obtained by VNS algorithm is substituted into PSO to verify the rationality of the initial solution. If it is reasonable, stop the search, and if it is unreasonable, search again.Combined with the algorithm flowchart, this section describes the specific steps of the algorithm in this paper.
### 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper coding of this algorithm is realized by the PSO algorithm, and the research problem needs to be coded. Upper layer coding refers to the establishment of the relationship between the distribution center and customers through the PSO algorithm. We designm distribution centers and n demand points, and then each particle corresponds to an N-dimension vector, and the value range of each element is 1,m, in which the vector coding is used to represent which distribution center each demand point is served by.Through the upper layer solution, the customer setn1,n2,n3,…,ni served by the distribution center i can be determined. At this time, we conduct secondary coding and set the candidate time-varying period set as Ti=i,1,2,3,4. The time period of each distribution center is determined by variable random selection (when studying single period time-varying network urban distribution, the same time period parameters are substituted in this operation). Let the set of candidate paths be Ii,i=1,2,…,M, the set of paths selected by the algorithm is Ji, and the set of running time corresponding to each path is Ti, m represents the distribution center set, and n represents the customer demand set. The distribution routes are arranged according to the initial results of the upper layer code, which is based on the following equation:(27)Sij=TOi,Da+TOj,Da−TOi,Oj,i≠j.Sij represents the distribution time difference between customers after allocation. Each node is connected by the method of minimum difference. The smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
### 5.1.2. Neighborhood Search
Neighborhood structure is the core part of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance. Through the search operation, we can get more high-quality solutions. The neighborhood structure is constructed as follows:(1)
Node insertion. Node insertion is to insert any customer of a distribution center into the distribution path of another distribution center, which changes the route arrangement of the two distribution centers.(2)
Node exchange. Node exchange is the exchange of two customers that are selected by two distribution centers respectively. It realizes the improvement of neighborhood structure.(3)
Cross interchange. Two distribution centers respectively select two nodes and the path contained therein. Then exchanging, the two paths and nodes to obtain a new neighborhood in order to prevent the search from falling into local optimization.
### 5.1.3. Solution Selection
By constantly changing the neighborhood structure, a series of solutions can be obtained, and rules need to be used to select the optimal solution. According to the research by Hansen and Mladenovic [29], Kirkpatrick et al. [30], we select the optimal solution. If the neighborhood solution x′ outperforms the solution x, then replace x with x′. If no better solution is found in the search, after certain iterations, the new neighborhood solution is chosen to prevent the search from falling into local optimization. The probability of becoming an alternative solution is expressed as follows:(28)Px′=1,fx′<fx,efx′−fx/Tk,fx′≥fx,where in fx represents the fitness function, which is substituted into the objective function equations (4), (7), and (8) in the solution process, k represents the number of iterations, Tk represents the total cost reduced for the kth iteration. According to equation (29), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution center and distribution route.
### 5.1.4. Adjustment and Verification
Since the upper layer solution is solved by the PSO algorithm, after the VNS algorithm obtains the optimal solution, it needs to be brought back to PSO to verify whether the initial distribution center arrangement is the optimal scheme. If not, it needs to adjust the initial parameter and solves again.
## 5.1.1. Encoding and Initialization
The basic parameters are assigned first, and then the initial coding is carried out. The upper coding of this algorithm is realized by the PSO algorithm, and the research problem needs to be coded. Upper layer coding refers to the establishment of the relationship between the distribution center and customers through the PSO algorithm. We designm distribution centers and n demand points, and then each particle corresponds to an N-dimension vector, and the value range of each element is 1,m, in which the vector coding is used to represent which distribution center each demand point is served by.Through the upper layer solution, the customer setn1,n2,n3,…,ni served by the distribution center i can be determined. At this time, we conduct secondary coding and set the candidate time-varying period set as Ti=i,1,2,3,4. The time period of each distribution center is determined by variable random selection (when studying single period time-varying network urban distribution, the same time period parameters are substituted in this operation). Let the set of candidate paths be Ii,i=1,2,…,M, the set of paths selected by the algorithm is Ji, and the set of running time corresponding to each path is Ti, m represents the distribution center set, and n represents the customer demand set. The distribution routes are arranged according to the initial results of the upper layer code, which is based on the following equation:(27)Sij=TOi,Da+TOj,Da−TOi,Oj,i≠j.Sij represents the distribution time difference between customers after allocation. Each node is connected by the method of minimum difference. The smaller the distribution time, the greater the vehicle running speed and the lower the carbon emission level.
## 5.1.2. Neighborhood Search
Neighborhood structure is the core part of the improved VNS algorithm. Four kinds of variable neighborhood structures are obtained through node and path disturbance. Through the search operation, we can get more high-quality solutions. The neighborhood structure is constructed as follows:(1)
Node insertion. Node insertion is to insert any customer of a distribution center into the distribution path of another distribution center, which changes the route arrangement of the two distribution centers.(2)
Node exchange. Node exchange is the exchange of two customers that are selected by two distribution centers respectively. It realizes the improvement of neighborhood structure.(3)
Cross interchange. Two distribution centers respectively select two nodes and the path contained therein. Then exchanging, the two paths and nodes to obtain a new neighborhood in order to prevent the search from falling into local optimization.
## 5.1.3. Solution Selection
By constantly changing the neighborhood structure, a series of solutions can be obtained, and rules need to be used to select the optimal solution. According to the research by Hansen and Mladenovic [29], Kirkpatrick et al. [30], we select the optimal solution. If the neighborhood solution x′ outperforms the solution x, then replace x with x′. If no better solution is found in the search, after certain iterations, the new neighborhood solution is chosen to prevent the search from falling into local optimization. The probability of becoming an alternative solution is expressed as follows:(28)Px′=1,fx′<fx,efx′−fx/Tk,fx′≥fx,where in fx represents the fitness function, which is substituted into the objective function equations (4), (7), and (8) in the solution process, k represents the number of iterations, Tk represents the total cost reduced for the kth iteration. According to equation (29), after the variable neighborhood search and iteration, the optimal solution is selected to determine the distribution center and distribution route.
## 5.1.4. Adjustment and Verification
Since the upper layer solution is solved by the PSO algorithm, after the VNS algorithm obtains the optimal solution, it needs to be brought back to PSO to verify whether the initial distribution center arrangement is the optimal scheme. If not, it needs to adjust the initial parameter and solves again.
## 5.2. A Case Study of Vegetable Distribution in Shenyang Hospital
In this paper, a logistics enterprise in Shenyang is taken as the example background of this paper. The example data are the real data collected through investigation or a GIS system. See Table2 for the data.We select a logistics enterprise in Shenyang as an example. The enterprise takes two distribution centers as the canteens of ten hospitals in Shenyang for comprehensive vegetable distribution. Study data are the real data collected through investigation or the GIS system which are shown in Table3.Table 3
Basic parameters of each demand point.
NO.Demand pointStaffBedQuantity (ton)1Shengjing Hospital of China Medical University (Nanhu district)3400230052General Hospital of Northern Theater Command1700120033The People’s Hospital of Liaoning Province124288824Shenyang Women’s and Children’s Hospital63040015The first People’s Hospital of Shenyang120060026The Forth Affiliated Hospital of China Medical University1415100037JiuZhou Hospital40020018The first Hospital of China Medical University3043224959The Fifth People’s Hospital of Shenyang1300700210Shenyang 202 Hospital10008502Time-varying factors are considered in urban distribution, so each path has different speeds in different periods. That is, there are four speeds (depending on the number of periods) on each path. At the same time, we consider the directionality of the distribution path, and the round-trip on the same path has independent speed and distance. Therefore, there is a speed and path matrix between demand points and distribution centers and between demand points and demand points, respectively.The basic speed and path matrixes are given in Tables3 and 4 respectively. Shenyang Sitong vegetable distribution center (No. A distribution center) and Shenyang Shuangrui distribution center (No. B distribution center) are selected as the distribution starting points, and the path length between the demand point and the distribution center and the path running speed (taking period 2 as an example) are shown in the following tables. Wherein vAm2 represents the speed from distribution center A to each demand point in period 2 (Table 4), and vmA2 represents the speed from the demand point to distribution center A in period 2 (Table 5).Table 4
Speed-distance matrix from distribution center to demand point.
| Demand point | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A (km) | 4 | 4.3 | 2 | 1.1 | 4.6 | 5.9 | 3.1 | 4.7 | 9.9 | 4.2 |
| B (km) | 5.5 | 6.8 | 5.8 | 5.2 | 4.5 | 5.8 | 1.7 | 2.8 | 9 | 3.9 |
| vAm2 (km/h) | 20 | 23 | 23 | 9 | 23 | 30 | 20 | 16 | 25 | 18 |
| vBm2 (km/h) | 22 | 22 | 18 | 23 | 16 | 30 | 12 | 14 | 24 | 20 |

Table 5
Speed-distance matrix from demand point to distribution center.
| Demand point | A (km) | B (km) | vmA2 (km/h) | vmB2 (km/h) |
| --- | --- | --- | --- | --- |
| 1 | 3.5 | 5.3 | 19 | 21 |
| 2 | 2.9 | 6.5 | 25 | 19 |
| 3 | 2.4 | 6.9 | 22 | 22 |
| 4 | 1 | 5.9 | 15 | 25 |
| 5 | 3.8 | 4.3 | 24 | 25 |
| 6 | 6.2 | 5.6 | 24 | 16 |
| 7 | 3.2 | 1.9 | 16 | 23 |
| 8 | 5 | 2.8 | 17 | 28 |
| 9 | 10.1 | 8.4 | 28 | 24 |
| 10 | 3.5 | 4.7 | 23 | 24 |

The basic parameters of vehicle distribution cost for the distribution centers are given as follows. The 24-hour day is divided into four periods according to road conditions, namely, T1 = 6:00–9:00, T2 = 9:00–16:00, T3 = 16:00–19:00, and T4 = 19:00–6:00. Owing to the actual situation of the example, only daytime distribution is considered in the calculation, so distribution in T4 is not considered. The selected distribution vehicle is a BAIC flag bell 5-ton load-carrying container truck, which is diesel powered, with a displacement of 3.168 L and a maximum speed of 95 km/h. The fixed cost of each vehicle is 142 CNY/day, and the diesel price is 6.94 CNY/L. The initial carbon tax rate is 57.69% (4.195 CNY per liter of diesel). Substituting the basic data into the VNS algorithm, we obtain the optimal distribution scheme, total cost, carbon emission, and carbon penalty rate of the distribution centers. In this section, the example is analyzed from two aspects: single-period time-varying network distribution and cross-period time-varying network distribution.
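To make the cost structure concrete, here is a minimal sketch assembling a route's cost from the parameters above; the fuel-consumption argument is a placeholder, since the actual consumption follows the model in equations (4), (7), and (8) defined earlier in the paper:

```python
FIXED_COST_CNY = 142.0    # fixed cost per vehicle per day
DIESEL_PRICE_CNY = 6.94   # diesel price per liter
CARBON_TAX_CNY = 4.195    # initial carbon tax per liter of diesel

def route_cost(liters: float, vehicles: int = 1) -> dict:
    """Total cost = fixed cost + fuel cost + carbon emission cost."""
    fuel = liters * DIESEL_PRICE_CNY
    carbon = liters * CARBON_TAX_CNY
    return {"fixed": vehicles * FIXED_COST_CNY,
            "fuel": fuel,
            "carbon": carbon,
            "total": vehicles * FIXED_COST_CNY + fuel + carbon}

# Example: two vehicles consuming 25 L in total over their routes.
print(route_cost(25.0, vehicles=2))
```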
### 5.2.1. Single Time-Varying Network Distribution Schemes
The distribution schemes of T1, T2, and T3 are shown in Table 6.

Table 6
Distribution scheme of single period time-varying network.
| Time | Scheme | Distribution scheme | Total cost (¥) | Carbon emissions (kg) | Carbon emission cost (¥) |
| --- | --- | --- | --- | --- | --- |
| T1 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 41.44 | 76.35 | 41.44 |
| T1 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 339.80 | – | – |
| T1 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 381.24 | 76.35 | 41.44 |
| T2 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 36.10 | 66.51 | 36.10 |
| T2 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 339 | – | – |
| T2 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 375.90 | 66.51 | 36.10 |
| T3 | Ce | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 39.27 | 72.34 | 39.27 |
| T3 | C0 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 350.80 | – | – |
| T3 | UC | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A; B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | 379.07 | 72.34 | 39.27 |

Under single-period time-varying network distribution, choosing to deliver in period T3 is the best scheme, with the lowest total cost and the lowest carbon emission, because during this period the average speed of vehicles is high and road conditions are relatively smoother than in the other periods. In the single-period time-varying network distribution scheme, there exists a carbon penalty rate (cost loss rate) (Table 7). The existence of a carbon penalty rate proves that, when carbon emission is subject to policy constraints, economic factors can influence enterprises' decision-making and lead them to adopt a low-carbon distribution scheme on their own initiative.

Table 7
Comparison of carbon penalty rates in single period time-varying network.
| Period | C | C∗ | θ |
| --- | --- | --- | --- |
| T1 | 383.27 | 381 | 5.32E–03 |
| T2 | 378.18 | 375 | 6.07E–03 |
| T3 | 379.45 | 379.07 | 1.03E–03 |
### 5.2.2. Cross-Period Time-Varying Network Distribution Schemes
A cross-period time-varying network arises when there are several distribution centers, each of which needs to distribute in a different period (owing to customer time-window requirements or road-condition restrictions). Through this model, we study how enterprises choose distribution schemes in different situations. The distribution schemes of the cross-period time-varying network over T1 and T2 are shown in Table 8, and those over T2 and T3 are shown in Table 9.

Table 8
The distribution schemes of cross-period time-varying networkT1 and T2.
| Scheme | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emissions cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| Ce | T1 | vAm1 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 38.24 | 70.45 | 38.24 |
| Ce | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |
| C0 | T2 | vAm2 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A | 339 | – | – |
| C0 | T1 | vBm1 | B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |
| UC | T1 | vAm1 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 378.04 | 70.45 | 38.24 |
| UC | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |

Table 9
The distribution schemes of cross-period time-varying networkT2 and T3.
| Scheme | Time | Speed | Distribution program | Total cost (¥) | Carbon emissions (kg) | Carbon emissions cost (¥) |
| --- | --- | --- | --- | --- | --- | --- |
| Ce | T3 | vAm3 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 37.14 | 68.41 | 37.13 |
| Ce | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |
| C0 | T2 | vAm2 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ 5 ⟶ A | 339 | – | – |
| C0 | T3 | vBm3 | B ⟶ 8 ⟶ 6 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |
| UC | T3 | vAm3 | A ⟶ 1 ⟶ 2 ⟶ 3 ⟶ 4 ⟶ A | 376.93 | 68.41 | 37.13 |
| UC | T2 | vBm2 | B ⟶ 8 ⟶ 6 ⟶ 5 ⟶ 7 ⟶ 10 ⟶ 9 ⟶ B | | | |

Here, the route period represents the service period of each distribution center. Owing to the different time periods, the route speeds have different time-varying characteristics, and fuel consumption and carbon emission change accordingly. Through the numerical analysis of the example, it can be found that the total distribution cost and carbon emission of the cross-period time-varying network over T2 and T3 are better than those over T1 and T2. Comparing Tables 8 and 9 with Table 6, it can be noted that carbon emission and total cost decrease when the time-varying network is considered, and the distribution scheme of a cross-period time-varying network is better than some single-period time-varying network distribution schemes. Moreover, in the cross-period time-varying network distribution schemes, the carbon penalty rates (Table 10) are positive, which proves that, when carbon emission is subject to policy constraints, economic factors will lead enterprises to adopt low-carbon distribution schemes on their own initiative.

Table 10
Comparison of carbon penalty rates in cross-period time-varying network.
| Periods | C | C∗ | θ |
| --- | --- | --- | --- |
| T1T2 | 380.39 | 378.04 | 6.20E–03 |
| T2T3 | 377.46 | 376.93 | 1.40E–03 |
### 5.2.3. Carbon Emission Factors’ Analysis
Speed and distance in urban distribution are both related to carbon emissions. The longer the distance, the higher the distribution speed and the smaller the carbon emission per unit distance; at the same time, the longer the delivery distance, the greater the total carbon emission. The total carbon emission is positively correlated with distance, as shown in Figure 4.

Figure 4
Relationship between distance and carbon emission.

The characteristics of the urban distribution network show that the longer the distribution distance, the higher the distribution speed, as shown in Figure 5. The main reason for this trend is that the shorter the distance in urban distribution, the greater the probability of congestion, while the probability of congestion on longer routes is lower. Moreover, the distribution speed (before it reaches the optimal speed) is negatively correlated with carbon emission, as shown in Figure 6.

Figure 5
Relationship between distance and speed.

Figure 6
The impact of vehicle speed on carbon emission.
## 5.3. Sensitivity Analysis
In this section, the impact of the time-varying network and carbon penalty rate on the distribution scheme will be discussed according to the data of the above example.
### 5.3.1. Analysis of the Impact of Time-Varying Network on Urban Distribution
The impact of a time-varying network on carbon emission is mainly exerted through speed, which depends on road conditions. The change in speed is the embodiment of time variation, and it also affects the actual amount of carbon emission. Taking the data of period T2 as the benchmark, we study the impact of the period on carbon emissions, environmental costs, and total costs by changing the vehicle running speed in the time-varying network period. The speed is increased step by step in increments of 30% of the base speed, and the results are shown in Table 11.

Table 11
Analysis of interval time-varying sensitivity.
| Speed (km/h) | Carbon emission (kg) | Environmental cost (¥) | Percentage | Total cost (¥) | Percentage |
| --- | --- | --- | --- | --- | --- |
| 15.0 | 76.35 | 41.44 | – | 381.24 | – |
| 19.5 | 68.13 | 36.98 | −10.76% | 376.78 | −1.17% |
| 24.0 | 62.67 | 34.02 | −8.02% | 373.82 | −0.79% |
| 28.5 | 59.06 | 32.06 | −5.76% | 371.86 | −0.52% |
| 33.0 | 56.56 | 30.70 | −4.23% | 370.50 | −0.36% |

According to the data in Table 11, the increase in speed reduces carbon emissions, environmental costs, and total costs. Among them, the change in total cost is the most gentle, because the carbon price has not reached a particularly high level and the environmental cost accounts for a small proportion of the total cost. Moreover, carbon emissions and environmental costs follow the same trend, and their proportional decline is larger than that of the total cost. The more pronounced the time-division characteristics of the time-varying network, the more obvious the change in speed, and the more significant the impact on the distribution scheme when carbon emissions are considered.
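As a quick consistency check of Table 11, the sketch below recomputes the two percentage columns, interpreting each percentage as the change relative to the previous speed step (an assumption that reproduces the printed values to within rounding):

```python
# Environmental and total costs from Table 11, by increasing speed step.
env_cost = [41.44, 36.98, 34.02, 32.06, 30.70]
total_cost = [381.24, 376.78, 373.82, 371.86, 370.50]

def step_changes(values):
    """Relative change of each value against its predecessor, in %."""
    return [100 * (b / a - 1) for a, b in zip(values, values[1:])]

print([f"{c:.2f}%" for c in step_changes(env_cost)])
# ['-10.76%', '-8.00%', '-5.76%', '-4.24%']  (printed: -10.76, -8.02, -5.76, -4.23)
print([f"{c:.2f}%" for c in step_changes(total_cost)])
# ['-1.17%', '-0.79%', '-0.52%', '-0.37%']   (printed: -1.17, -0.79, -0.52, -0.36)
```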
### 5.3.2. Analysis of the Impact of Carbon Penalty Rate on Urban Distribution
The sensitivity of the carbon penalty rate and of the enterprise's distribution plan is analyzed by decreasing and increasing the unit carbon emission cost by 10%, 30%, and 50%, respectively. The results are shown in Figure 7.

Figure 7
Sensitivity analysis of carbon penalty rate.

It can be seen in Figure 7 that, when the unit carbon emission cost changes, the carbon penalty rates of both single-period and cross-period distribution show an upward trend; that is, the higher the carbon emission cost, the greater the carbon penalty rate. This is because, with a higher carbon cost, the enterprise must pay more for its carbon emissions, so it loses more when it does not choose the low-carbon emission scheme. With the same unit carbon emission cost, the carbon penalty rate shows different trends in different periods. The range of variation of the carbon penalty rate in single period T1, single period T2, and cross period T12 is greater than that in single period T3 and cross period T23. A large variation in the carbon penalty rate can change the optimization scheme. Tables 8 and 9 show that, without considering the carbon emission cost, T1 and T12 are the suboptimal schemes for the enterprise in the single-period and cross-period cases, respectively. However, after considering the carbon emission cost and the carbon penalty rate, because of the higher carbon emission cost and the larger variation range of the penalty rate, the optimal cross-period scheme is T23 and the suboptimal single-period scheme is T3. The loss that a high carbon penalty rate brings to the enterprise exceeds the benefit it obtains from the original scheme, so the enterprise switches to a scheme with lower carbon emissions. Also, in Figure 7, when the carbon cost is below 2.937, the carbon penalty rate is negative, which means that a scheme considering the carbon emission cost would increase the total cost, so the enterprise will not pursue active emission reduction. Thus, a low unit carbon emission cost will not stimulate a decrease in carbon emission. However, since the carbon emission cost increases the total cost, there are certain restrictions on raising the unit carbon emission cost, which cannot grow without limit at the expense of economic development.
### 5.3.3. Comparing the Improved VNS with the HGVNS Algorithm
Variable neighborhood search (VNS) is a well-known metaheuristic for solving complex optimization problems, and different variants of VNS have been applied to various VRPs. In particular, de Freitas and Penna proposed a VNS-based heuristic for urban distribution, named the hybrid general VNS (HGVNS). We adapt the HGVNS to the urban distribution model and compare its results with those of the improved VNS (IVNS) in Table 12. The data in Table 12 are those of the case study, and the Gap column reports the improvement in total cost of the IVNS relative to the HGVNS, calculated as

$$\mathrm{GAP} = \frac{C_{\mathrm{IVNS}} - C_{\mathrm{HGVNS}}}{C_{\mathrm{HGVNS}}} \times 100\%. \tag{29}$$

Table 12
Comparing the IVNS with HGVNS.
| Time | HGVNS total cost (¥) | HGVNS carbon emissions cost (¥) | IVNS total cost (¥) | IVNS carbon emissions cost (¥) | Gap (%) |
| --- | --- | --- | --- | --- | --- |
| T1 | 393.54 | 48.56 | 383.27 | 41.44 | −2.63 |
| T2 | 385.58 | 40.21 | 378.18 | 36.10 | −1.92 |
| T3 | 393.31 | 48.08 | 379.45 | 39.27 | −3.52 |
| T12 | 391.11 | 46.63 | 380.39 | 38.24 | −2.74 |
| T23 | 384.67 | 42.45 | 377.46 | 37.13 | −1.88 |

Table 12 shows that the improved VNS outperforms the HGVNS in terms of both total cost and carbon emission cost.
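A minimal sketch of equation (29) applied to the total costs in Table 12; the recomputed gaps agree with the printed column to within rounding:

```python
def gap(c_ivns: float, c_hgvns: float) -> float:
    """Equation (29): relative total-cost change of IVNS vs HGVNS, in %."""
    return (c_ivns - c_hgvns) / c_hgvns * 100

rows = {"T1": (383.27, 393.54), "T2": (378.18, 385.58),
        "T3": (379.45, 393.31), "T12": (380.39, 391.11),
        "T23": (377.46, 384.67)}
for period, (ivns, hgvns) in rows.items():
    print(period, f"{gap(ivns, hgvns):.2f}%")
# T1 -2.61%  T2 -1.92%  T3 -3.52%  T12 -2.74%  T23 -1.87%
# (printed: -2.63, -1.92, -3.52, -2.74, -1.88; small differences
#  presumably come from rounding in the tabulated costs)
```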
## 6. Conclusion
Taking urban distribution as the research object, we mainly study how enterprises should choose the optimal distribution scheme under the influence of carbon emissions and time-varying networks. We establish different distribution optimization schemes for enterprises by constructing urban distribution optimization models under single-period and cross-period time-varying networks. Through numerical example analysis and sensitivity analysis, it is verified that, by adjusting the carbon tax level, the carbon penalty rate determines the choice of enterprise distribution schemes under a time-varying network. When the carbon tax rate is higher than a certain level, enterprises will actively choose a distribution scheme with a low carbon emission level to reduce operating costs. There is a positive correlation between distance and speed in urban distribution: routes with long distribution distances usually have a smooth road network and fewer vehicles. The higher the vehicle speed, the lower the carbon emission per unit distance. The total carbon emission in the distribution process is affected by the speed and the vehicle running time: speed is negatively correlated with carbon emission, and total running time is positively correlated with carbon emission.
---
*Source: 1013861-2022-06-03.xml*
# Toxicity Assessment of Sediments with Natural Anomalous Concentrations in Heavy Metals by the Use of Bioassay
**Authors:** Francisco Martín; Marlon Escoto; Juan Fernández; Emilia Fernández; Elena Arco; Manuel Sierra; Carlos Dorronsoro
**Journal:** International Journal of Chemical Engineering
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101390
---
## Abstract
The potential toxicity in riverbed sediments was assessed with a bioassay using the bioluminescent bacterium Vibrio fischeri. The selected area was characterized by the presence of ultramafic rocks (peridotites), and the sediments had high values in Ni, Cr, and Co. For the toxicity bioassay with Vibrio fischeri, water-soluble forms were used. The results indicated that most of the samples had a very low degree of toxicity, with a reduction in luminescence below 10% in relation to the control, while 25% of the samples had a moderate degree of toxicity, with a reduction in luminescence between 13 and 21% in relation to the control. The toxicity index correlated significantly with the concentrations of Ni and Cr in the water extracts. This toxicity bioassay proved to be a sensitive and useful tool to detect potential toxicity in solutions, even with anomalous concentrations in heavy metals of natural origin.
---
## Body
## 1. Introduction
Today, in Ecological Risk Assessment (ERA), soil and sediment contamination studies are increasingly important. ERA processes involve several predictive and descriptive phases [1, 2], with special emphasis placed on the toxicity characterization of the contaminated media. In this field, many toxicity assays are applied in the study of contaminated soils [3–6], and for ecosystem protection, toxicity bioassays are key to supporting the regulatory framework in the declaration of contaminated soils [7]. Most bioassays applied to contaminated soils and sediments are based on the evaluation of the toxic effect of the solution extracted from the solid phase, or of the solid phase itself, on a living organism (animal, algal, plant, and bacterial bioassays) [8]. Among these, bacterial bioassays are commonly used because they are quick, cost effective, and reproducible [9]. In particular, the bioassay using Vibrio fischeri relates the presence of contaminants to the inhibition of light emission by these luminescent bacteria. This test is regarded as sensitive and correlates highly with the response of other toxicity tests [10]; in addition, it has been used in the toxicity assessment of soils contaminated by heavy metals [11, 12]. Rivers distribute heavy metals in the ecosystem by mobilizing pollutants and thus spreading the affected area, with potential toxicity risk to aquatic organisms as well as to human health through the food chain. Heavy metals can reach aquatic ecosystems through anthropic activities or natural processes, and in such circumstances, the contaminants can be distributed as water-soluble species, colloids, suspended forms, or sedimentary phases [13]. According to Jain [14], heavy metal pollution in aquatic ecosystems has received increased scientific attention in recent years because the contaminants tend to accumulate and progressively raise the toxicity risk to living organisms [15]. In this sense, many studies have demonstrated that heavy metal concentrations in riverbed sediments can be good indicators of pollution in hydrological systems [16]. The different forms of heavy metals in the sediments of an aquatic medium determine their bioavailability and toxicity. Thus, the study of the different fractions of the elements in sediments is vital, because the total concentrations are not representative of the real degree of the potential contamination. Heavy metals can be bound to or occluded in amorphous materials, adsorbed on clay surfaces or iron/manganese oxyhydroxides, coprecipitated in secondary minerals such as carbonates, sulphates, or oxides, complexed with organic matter, or included in the lattice of primary minerals such as silicates [13]. Fractionation techniques for heavy metals in river sediments have been used by different authors [14, 17–20] to assess the mobility and bioavailability of pollutants in these media. The Verde River basin is located in the Province of Malaga (southern Spain), and its catchment area receives many streams flowing over peridotitic materials, characterized by high concentrations of Mn, Cr, Co, and Ni. In this basin lies La Concepción Reservoir, which supplies more than 24% of the drinking water used in the western Costa del Sol (dominated by the city of Marbella), one of the main tourist areas in Spain and in southern Europe.
The above-mentioned scenario prompted the examination of the river-bed sediments of this area. In this study, we analyse the concentrations of heavy metals in the river-bed sediments, both total and water-soluble forms, to characterize the potential mobility of these elements in the Verde River basin. The potential toxicity of the heavy metals was studied using a bioassay with bioluminescent bacteria in order to assess the potential risk of contamination in the area.
## 2. Material and Methods
The Verde River is approximately 36 km long, originating in the Sierra de Las Nieves mountains (2000 m.a.s.l.) and sharply descending to 400 m to reach the Mediterranean Sea. This abrupt change in altitude over a short distance involves many different slopes, with the steeper ones predominating (25%–55%). The lithology is dominated by peridotite and serpentine rocks, with carbonate and metamorphic rocks in lesser proportion (Figure 1). The catchment area comprises the main channel of the Verde River and 11 tributaries, including La Concepción reservoir, holding 44,515 hm3/year.

Figure 1
Location of the study area, sampling points, and lithological scheme of the River Verde basin.

Sediments of the Verde River and its main tributaries were collected in the bottom part of each stream (Figure 1). At each sampling point, composite samples were taken by mixing 250 g of sediment from each corner and the center of a square 0.5 m per side. Samples were taken from the river bed at 0–20 cm depth. In the laboratory, samples were air dried, and the fine fraction (<50 μm) of the sediments [19, 21] was used to characterize the main properties for the toxicity bioassay. Total heavy metals were determined by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) in a PE SCIEX ELAN-500A spectrometer. The analyses were made after acid digestion (HNO3 + HF; ratio 2 : 3) at high temperature and pressure in a Teflon-lined vessel. The spectrometer was equipped with a quartz torch, nickel sampler and skimmer cones, a cross-flow type pneumatic nebulizer, and a double-pass Scott-type spray chamber. Instrumental drift was monitored by regularly running standard element solutions between samples. The water-soluble forms were obtained from a sediment-water extract in a ratio of 1 : 5 [22, 23], and the solubilized heavy metals were also determined by ICP-MS. All ICP-MS standards were prepared from ICP single-element standard solutions (Merck quality) after appropriate dilution with 10% HNO3. For calibration, two sets of multielement standards containing all the analytes of interest at five concentrations were prepared using rhodium as an internal standard. Procedural blanks for estimating the detection limits (3σ; n = 6) were <0.96 ppb for Mn, <2.73 ppb for Cr, <0.24 ppb for Co, <0.42 ppb for Ni, <0.12 ppb for Cu, <2.68 ppb for Zn, <0.21 ppb for As, and <0.23 ppb for Pb. The analytical precision was better than ±5% in all cases. The toxicity bioassay was made with the water extract of the sediment. Prior to the assay, pH was measured potentiometrically in a 1 : 5 soil : water suspension in a CRISON 501 instrument, and electric conductivity (EC) was measured at 25°C in a CRISON 522 instrument. The toxicity bioassay used the bacterium Vibrio fischeri, which diminishes its bioluminescence capacity in the presence of toxic elements. The freeze-dried luminescent bacteria (NRLLB-11177) and the reconstitution solution were supplied by AZUR Environmental. The test was performed in a Microtox 500 analyser from Microbics Corporation, according to a modification of the Microtox Basic Test for Aqueous Extracts Protocol [24], in which the water-sediment extracts and a control sample (distilled water) were used, with three replicates per sample. The luminescence was measured before mixing with the extracts (0 min). The inhibition of bioluminescence was measured at 5 (Inh5) and 15 minutes (Inh15) after mixing with the extracts of the samples. Afterwards, these measurements were used to calculate two toxicity indexes:
(i) Normalized inhibition of luminescence at 5 min ($I_5$), calculated as

$$I_5 = -\frac{\mathrm{Inh5}_{\mathrm{sample}} - \mathrm{Inh5}_{\mathrm{control}}}{100 - \mathrm{Inh5}_{\mathrm{control}}}, \tag{1}$$

where $\mathrm{Inh5}_{\mathrm{sample}}$ is the percentage of luminescence reduction in the sample at 5 min and $\mathrm{Inh5}_{\mathrm{control}}$ is the percentage of luminescence reduction of the control at 5 min.

(ii) Normalized inhibition at 15 min ($I_{15}$), calculated as

$$I_{15} = -\frac{\mathrm{Inh15}_{\mathrm{sample}} - \mathrm{Inh15}_{\mathrm{control}}}{100 - \mathrm{Inh15}_{\mathrm{control}}}, \tag{2}$$

where $\mathrm{Inh15}_{\mathrm{sample}}$ is the percentage of luminescence reduction of the sample at 15 min and $\mathrm{Inh15}_{\mathrm{control}}$ is the percentage of luminescence reduction of the control at 15 min.

The values of $I_5$ and $I_{15}$ can range from −1 (maximum toxicity) to >0, and the following classes can be established: (a) 0 to −0.25 low, (b) −0.25 to −0.5 moderate, (c) −0.5 to −0.75 high, and (d) −0.75 to −1 very high toxicity. Values >0 would indicate stimulation of the luminescence (hormesis).
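As a concrete illustration of equations (1)-(2) and the classes above, a short sketch; the inhibition percentages in the example are illustrative, not study data:

```python
def toxicity_index(inh_sample: float, inh_control: float) -> float:
    """Normalized inhibition index (equations (1)-(2)); inputs are the
    percentage reductions in luminescence of sample and control."""
    return -(inh_sample - inh_control) / (100.0 - inh_control)

def classify(index: float) -> str:
    """Toxicity classes defined in the text."""
    if index > 0:
        return "hormesis (stimulation)"
    if index >= -0.25:
        return "low"
    if index >= -0.5:
        return "moderate"
    if index >= -0.75:
        return "high"
    return "very high"

# Example: a sample reducing luminescence by 40% against a control at 30%.
i5 = toxicity_index(40.0, 30.0)
print(round(i5, 3), classify(i5))  # -0.143 low
```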
## 3. Results and Discussion
The total concentrations of heavy metals in the sediments (Table 1) indicate that the peridotite materials have very high concentrations of Cr, Ni, Mn, and Co, while in the other materials (carbonate and metamorphic rocks) the values of these elements are low and the concentrations of Zn and As are higher than in the peridotite area; the differences in Pb are not statistically significant between the two types of materials. Therefore, the total concentrations of heavy metals in the sediments of the Verde River are directly related to the different parent materials present in the area.

Table 1
Total heavy-metal concentrations (mg kg-1) in sediments from peridotite materials and from other materials in the Verde River basin.
| Element | Peridotite | Other materials |
| --- | --- | --- |
| Mn | 1244.95 ± 81.92 | 708.07 ± 165.37 |
| Cr | 1040.79 ± 131.15 | 236.00 ± 125.62 |
| Co | 114.86 ± 14.21 | 34.00 ± 13.49 |
| Ni | 1833.26 ± 232.46 | 372.78 ± 273.77 |
| Cu | 23.93 ± 1.45 | 32.50 ± 4.09 |
| Zn | 69.40 ± 6.10 | 166.64 ± 67.25 |
| As | 4.94 ± 0.80 | 26.41 ± 13.30 |
| Pb | 19.43 ± 4.09 | 21.75 ± 6.97 |

The highest heavy-metal concentrations were for Ni and Cr, with maximum values of 2552 mg kg-1 and 1514 mg kg-1, respectively. According to the geochemical background of trace elements in soils of Andalusia [25], the sediments of the study area have anomalous values only for Ni, Cr, and Co in the peridotite materials, with concentrations exceeding, respectively, 36-, 10-, and 2-fold the reference values for the region. The concentrations of the other elements were within the normal range in all cases. For the assessment of the potential toxicity of the samples, water extracts of the sediments were obtained for the toxicity bioassay using luminescent bacteria. The main variables affecting the measurement in the bioassay were pH and electric conductivity (EC); these properties should be determined to assess their influence on the test results. The water extract of the samples had a pH value of 8.03 ± 0.13, and the mean value of EC was 1.36 ± 0.12. These values are within the recommended range for this toxicity bioassay [26]. The concentrations of soluble heavy metals in the water extracts are presented in Table 2. The sediments coming from the peridotite area had significantly higher concentrations of soluble Ni, Cr, and Co than the sediments coming from other materials. The other elements analysed showed no significant differences in their soluble concentrations between the different materials considered.

Table 2
Water-soluble heavy-metal concentrations (mg kg-1) in sediments from peridotite and from other materials in the Verde River basin.
| Element | Peridotite | Other materials |
| --- | --- | --- |
| Mn | 0.497 ± 0.245 | 0.280 ± 0.151 |
| Cr | 0.013 ± 0.004 | 0.002 ± 0.001 |
| Co | 0.015 ± 0.006 | 0.003 ± 0.001 |
| Ni | 0.153 ± 0.048 | 0.008 ± 0.002 |
| Cu | 0.009 ± 0.002 | 0.008 ± 0.002 |
| Zn | 0.019 ± 0.006 | 0.011 ± 0.006 |
| As | 0.005 ± 0.001 | 0.004 ± 0.002 |
| Pb | 0.0004 ± 0.0003 | 0.0004 ± 0.0003 |

According to the toxicity bioassay with Vibrio fischeri, most samples showed a decrease in luminescence in relation to the initial value (Figure 2). Because this bacterium is from a marine environment, the control samples (distilled water) also had a luminescence reduction of between 27 and 34%. The water extracts of the samples showed a reduction at 5 min (Inh5sample) and 15 min (Inh15sample) below 50% in relation to the initial value in all cases, although these values were normalized to calculate the inhibition in relation to the control. The lowest inhibition of luminescence was found in the sediments belonging to the non-peridotite area (samples 5, 6, and 7) and in sample 12, which received a mixture of sediments both from the peridotite materials and from the metamorphic-carbonate area.

Figure 2
Luminescence inhibition (%) of the water extract in the sediment analysed (C = control sample).

The water extracts of the sediments had a very low toxicity index in most cases (Figure 3), with values below −0.1 (representing a 10% luminescence reduction in relation to the control) in 75% of the samples. The values of the toxicity index at 5 and 15 min had a good correlation in the dataset studied. In the case of the sediments coming from the non-peridotite area or from a mixture of different parent materials (samples 5, 6, 7, and 12), the toxicity index had values higher than zero, indicating the occurrence of hormesis phenomena related to the stimulation of bacterial activity. Only one sample (4) had values of the toxicity index close to −0.25 (representing a 25% reduction in luminescence with respect to the control), indicating a moderate degree of toxicity. The ANOVA of the toxicity index indicated that samples 1, 4, and 11 (located in the lower part of the peridotite area) significantly differed from the other samples analysed (Table 3), with toxicity indexes ranging from −0.13 to −0.21; therefore, these three samples had a luminescence reduction of more than 10% but less than 25%, which could be related to the heavy-metal concentrations in the water extracts used in the bioassay. To correlate the heavy-metal concentrations in the water extracts with the toxicity index based on the reduction of luminescence, we used the Spearman correlation coefficient. In the studied dataset, we found a negative and significant correlation (P<.05) between the toxicity index and the Ni and Cr concentrations in the solutions. For I5, the coefficients were −0.636 with Ni and −0.622 with Cr, and for I15 the coefficients were −0.650 with Ni and −0.580 with Cr. The comparison with toxic levels in the literature [27] indicates that the only elements exceeding these limits were Ni and Cr, whose concentrations in the water solutions surpassed the toxic levels 3- and 10-fold, respectively. No significant correlations were detected for the other heavy metals in the water extracts, indicating the influence of the peridotite materials on the toxicity of the samples analysed.

Table 3
Toxicity index of the water extract at 5 min (I5) and 15 min (I15). (M: mean; SD: standard deviation; a, b: significant differences (P<.05) in Tukey test).
| Sample | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| I5 M | −0.13a | −0.01b | −0.07b | −0.21a | 0.10b | 0.05b | 0.04b | −0.03b | 0.01b | −0.03b | −0.14a | 0.08b |
| I5 SD | 0.02 | 0.07 | 0.08 | 0.05 | 0.10 | 0.03 | 0.02 | 0.01 | 0.08 | 0.03 | 0.03 | 0.10 |
| I15 M | −0.15a | −0.02b | −0.09b | −0.19a | 0.08b | 0.08b | −0.03b | −0.06b | 0.00b | −0.04b | −0.15a | −0.01b |
| I15 SD | 0.02 | 0.09 | 0.05 | 0.03 | 0.09 | 0.05 | 0.02 | 0.04 | 0.08 | 0.05 | 0.01 | 0.05 |

Figure 3
Toxicity index of the water extract in the sediment analysed.
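For readers who wish to reproduce the correlation step, a small sketch using SciPy's `spearmanr`; the I5 values are taken from Table 3, while the soluble-Ni vector is a hypothetical stand-in, since per-sample concentrations are not tabulated here:

```python
from scipy.stats import spearmanr

# I5 means from Table 3 (samples 1-12).
i5 = [-0.13, -0.01, -0.07, -0.21, 0.10, 0.05, 0.04,
      -0.03, 0.01, -0.03, -0.14, 0.08]
# Hypothetical per-sample soluble Ni (mg kg-1), for illustration only.
ni = [0.16, 0.02, 0.09, 0.20, 0.006, 0.007, 0.009,
      0.05, 0.01, 0.04, 0.17, 0.008]

rho, p = spearmanr(ni, i5)
print(f"rho = {rho:.3f}, p = {p:.4f}")  # the study reports rho = -0.636 for Ni vs I5
```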
## 4. Conclusions
The study area is dominated by peridotite materials, and the riverbed sediments in the basin have high concentrations of Ni, Mn, Cr, and Co. The soluble forms were obtained from the water extract of the sediments of the main river and tributaries in the basin. The toxicity bioassay with Vibrio fischeri used the water extract of these sediments to assess the bioluminescence reduction in these bacteria. The toxicity degree was very low in 75% of the samples, with values of luminescence reduction below 10% in relation to the control. A moderate-to-low degree of toxicity was found in 25% of the samples (all belonging to the peridotite area), with a luminescence reduction between 13 and 21% in relation to the control. The correlation coefficient (Spearman) indicated a negative and significant relation between the toxicity index and the concentrations of Ni and Cr in the water extracts of the sediments. This toxicity bioassay proved to be a sensitive and useful tool for detecting the potential toxicity of solutions, even in samples with anomalous concentrations in heavy metals of natural origin.
---
*Source: 101390-2010-07-07.xml* | 101390-2010-07-07_101390-2010-07-07.md | 17,205 | Toxicity Assessment of Sediments with Natural Anomalous Concentrations in Heavy Metals by the Use of Bioassay | Francisco Martín; Marlon Escoto; Juan Fernández; Emilia Fernández; Elena Arco; Manuel Sierra; Carlos Dorronsoro | International Journal of Chemical Engineering
(2010) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2010/101390 | 101390-2010-07-07.xml | ---
## Abstract
The potential toxicity in riverbed sediments was assessed with a bioassay using the bioluminescent bacteriaVibrio fischeri. The selected area was characterized by the presence of ultramafic rocks (peridotites), and the sediments had high values in Ni, Cr, and Co. For the toxicity bioassay with Vibrio fischeri, water-soluble forms were used. The results indicated that most of the samples had a very low degree of toxicity, with 10% of reduction in luminescence in relation to the control; meanwhile 25% of the samples had a moderate degree of toxicity with a reduction in luminescence between 13 and 21% in relation to the control. The toxicity index correlated significantly with the concentrations of Ni and Cr in the water extracts. This toxicity bioassay was proved to be a sensitive and useful tool to detect potential toxicity in solutions, even with anomalous concentrations in heavy metals of natural origin.
---
## Body
## 1. Introduction
Today, in Ecological Risk Assessment (ERA), soil and sediment contamination studies are increasingly important. ERA processes involve several predictive and descriptive phases [1, 2] with special emphasis placed on the toxicity characterization of the contaminated media. In this field, many toxicity assays are applied in the study of contaminated soils [3–6], and for ecosystem protection, toxicity bioassays are key to support the regulation framework in the declaration of contaminated soils [7].Most bioassays applied to contaminated soils and sediments are based on the evaluation of the toxic effect of the solution extracted from the solid phase or by the solid phase itself over a living organism (animals, algae, plants, and bacterial bioassays) [8]. In this way, bacterial bioassays are commonly used because they are quick, cost effective, and reproducible [9]. Particularly, the bioassay using Vibrio fischeri relates the presence of contaminants to the inhibition in light emission from these luminescent bacteria. This test is defined as sensitive and has a high correlation with the response of other toxicity tests [10]; in addition, it has been used in the toxicity assessment of soils contaminated by heavy metals [11, 12].Rivers distribute heavy metals in the ecosystem by mobilizing pollutants and thus spreading the affected area, with potential toxicity risk to aquatic organisms as well as to human health through the food chain. Heavy metals can reach aquatic ecosystems by anthropic activities or by natural processes, and in such circumstances, the contaminants can be distributed as water-soluble species, colloids, suspended forms, or sedimentary phases [13]. According to Jain [14], heavy metal pollution in aquatic ecosystems has received increased scientific attention in the recent years because the contaminants tend to accumulate and progressively raise the toxicity risk to the living organisms [15]. In this sense, many studies have demonstrated that heavy metal concentration in river bed sediments can be good indicators of pollution in hydrological systems [16].The different forms of heavy metals in the sediments of an aquatic medium determine their bioavailability and toxicity. Thus, the study of the different fractions of the elements in sediments is vital, because the total concentrations are not representative of the real degree of the potential contamination. Heavy metals can be bound to or occluded in amorphous materials, adsorbed on clay surfaces or iron/manganese oxyhydroxides, coprecipitated in secondary minerals such as carbonates, sulphates, or oxides, complexed with organic matter, or included in the lattice of primary minerals such as silicates [13]. The fractionation techniques of heavy metals in the river sediments have been used by different authors [14, 17–20] to assess the mobility and bioavailability of pollutants in this media.The Verde River basin is located in the Province of Malaga (southern Spain), and its catchment area receives many streams flowing over peridotitic materials, characterized by high concentrations of Mn, Cr, Co, and Ni. In this basin lies La Concepción Reservoir, which contributes with more than 24% of the drinking water used in the western Costa del Sol (dominated by the city of Marbella), one of the main tourist areas in Spain and in southern Europe. 
The above-mentioned scenario prompted the examination of the river-bed sediments of this area.In this study, we analyse the concentration in the river-bed sediments of heavy metals, both total as well as water-soluble forms, to characterize the potential mobility of these elements in the Verde River basin. The potential toxicity of heavy metals was studied using bioassay of bioluminescent bacteria in order to assess the potential risk of contamination in the area.
## 2. Material and Methods
Verde River is approximately 36 km long, originating in the Sierra de Las Nieves mountains (2000 m.a.s.l.) and sharply descending to 400 m to reach the Mediterranean Sea. This abrupt change in altitude in a short distance involves many different slopes, with the steeper ones predominating (25%–55%). The lithology is dominated by peridotite and serpentine rocks and with carbonate and metamorphic rocks in lesser proportion (Figure1). The catchment area is comprised of the main channel of the Verde River and 11 tributaries, including La Concepción reservoir, holding 44,515 hm3/year.Figure 1
Location of the study area, sampling points, and lithological scheme of the River Verde basin.Sediments of the Verde River and main tributaries were collected in the bottom part of each stream (Figure1). At each sampling point, composite samples were taken by mixing 250 g of sediments from each corner and center of a square 0.5 m per side. Samples were taken from the river bed to 0–20 cm depth. In the laboratory, samples were air dried, and the fine fraction (<50 μm) of the sediments [19, 21] was used to characterize the main properties for the toxicity bioassay.The total heavy metals were determined by Inductively Coupled Plasma-Mass Spectrometry (ICP-MS) in a PE SCIEX ELAN-500A spectrophotometer. The analyses were made after acid digestion (HNO3 + HF; ratio 2 : 3) at a high temperature and pressure in a Teflon-lined vessel. The spectrometer was equipped with quartz torch, nickel sampler, and skimmer cones, a cross-flow type pneumatic nebulizer, and a double-pass Scott-type spray chamber. Instrumental drift was monitored by regularly running standard element solutions between samples. The water-soluble forms were obtained from sediment-water extract in a ratio of 1 : 5 [22, 23], and the heavy metals solubilized were also determined by ICP-MS. All ICP-MS standards were prepared from ICP single-element standard solutions (Merck quality) after appropriate dilution with 10% HNO3. For calibration, two sets of multielement standards containing all the analytes of interest at five concentrations were prepared using rhodium as an internal standard. Procedural blanks for estimating the detection limits (3*σ; n=6) were <0.96 ppb for Mn, <2.73 ppb for Cr, <0.24 ppb for Co, <0.42 ppb for Ni, <0.12 ppb for Cu, <2.68 ppb for Zn, <0.21 ppb for As, and <0.23 ppb for Pb. The analytical precision was better than ±5% in all cases.The toxicity bioassay was made with the water extract of the sediment. Prior to the assay, pH was measured potentiometrically in a 1 : 5 soil : water suspension in a CRISON 501 instrument, and electric conductivity (EC) was measured at 25°C in a CRISON 522 instrument. The toxicity bioassay was made with bacterium (Vibrio fischeri), which diminishes its bioluminescence capacity in the presence of toxic elements. The freeze-dried luminescent bacteria (NRLLB-11177) and the reconstitution solution were supplied by AZUR Environmental. The test was performed in a Microtox 500 analyser from Microbics Corporation, according to a modification of Microtox Basic Test for Aqueous Extracts Protocol [24], in which the water-sediment extracts and a control sample (distilled water) were used, with three replicates per sample. The luminescence was measured before the mixture with the extracts (0 min). The inhibition of bioluminescence was measured at 5 (Inh5) and 15 minutes (Inh15) after the mixture with the extracts of the samples. Afterwards, these measurements were used to calculate two Toxicity Indexes:(i)
normalized Inhibition of luminescence at 5 min (I5), calculated by:(1)I5=-(Inh5sample-Inh5control)100-Inh5control,
where Inh5sample is the percentage of luminescence reduction in the samples at 5 min, and Inh5control is percentage of luminescence reduction of control at 5 min.(ii)
normalized inhibition at 15 min (I15), calculated by:(2)I15=-(Inh15sample-Inh15control)100-Inh15control,
where Inh15sample is the percentage of reduction of the sample at 15 min, and Inh15control is the percentage of reduction of control at 15 min;The values of I5 and I15 can range from-1 (maximum toxicity) to >0, and the following classes can be established: (a) 0 to -0.25 low, (b) -0.25 to -0.5 moderate, (c) -0.5 to -0.75 high, and (d) -0.75 to -1 very high toxicity. Values >0 would indicate stimulation of the luminescence (hormesis).
## 3. Results and Discussion
The total concentrations of heavy metals in the sediments (Table1) indicate that the peridotite materials have very high concentrations in Cr, Ni, Mn, and Co while in the other materials (carbonate and metamorphic rocks) the values of these elements are low, and the concentrations in Zn and As are higher than in the peridotite area; the differences in Pb are not statistically significant between the two types of materials. Therefore, the total concentrations in heavy metals in the sediments of the Verde River are directly related to the different parent materials present in the area.Table 1
Total heavy-metal concentrations (mg kg-1) in sediments from peridotite materials and from other materials in the Verde River basin.
PeridotiteOther materialsMn1244.95±81.92708.07±165.37Cr1040.79±131.15236.00±125.62Co114.86±14.2134.00±13.49Ni1833.26±232.46372.78±273.77Cu23.93±1.4532.50±4.09Zn69.40±6.10166.64±67.25As4.94±0.8026.41±13.30Pb19.43±4.0921.75±6.97The highest heavy-metal concentrations were for Ni and Cr, with maximum values of 2552 mg kg-1 and 1514 mg kg-1, respectively. According to the geochemical background of the trace elements in soils of Andalusia [25], the sediments of the study area have anomalous values only for Ni, Cr, and Co in the peridotite materials, with concentrations exceeding, respectively, 36-, 10-, and 2-fold the reference values for the region. The concentrations of the other elements were within the normal range in all cases.For the assessment of the potential toxicity of the samples, water extracts of the sediments were obtained to make the toxicity bioassay using luminescent bacteria. The main variables affecting the measurement in the bioassay were pH and electric conductivity (EC); these properties should be determined to assess their influence in the test results. The water extract of the samples had a pH value of8.03±0.13, and the mean value of EC was 1.36±0.12. These values are within the recommended range for this toxicity bioassay [26]. The concentration of soluble heavy metals in the water extracts are presented in Table 2. The sediments coming from the peridotite area had significantly higher concentrations in soluble Ni, Cr, and Co than the sediments coming from other materials. The other elements analysed had no significant differences in their soluble concentration between the different materials considered.Table 2
Water-soluble heavy-metal concentrations (mg kg-1) in sediments from peridotite and from other materials in the Verde River basin.
PeridotiteOther materialsMn0.497±0.2450.280±0.151Cr0.013±0.0040.002±0.001Co0.015±0.0060.003±0.001Ni0.153±0.0480.008±0.002Cu0.009±0.0020.008±0.002Zn0.019±0.0060.011±0.006As0.005±0.0010.004±0.002Pb0.0004±0.00030.0004±0.0003According to the toxicity bioassay withVibrio fischeri, most samples showed a decrease in the luminescence in relation to the initial value (Figure 2). Because this bacterium is from a marine environment, the control samples (distilled water) had also a luminescence reduction of between 27 and 34%. The water extract of the samples showed a reduction at 5 min (Inh5sample) and 15 min (Inh15sample) below 50% in relation to the initial value in all cases although these values were normalized to calculate the inhibition in relation to the control. The lower inhibition of luminescence was found in the sediments belonging to the nonperidotite area (samples 5, 6, and 7) and in sample 12, which received a mixture of sediment both from the peridotite materials as well as from the metamorphic carbonate area.Figure 2
Luminescence inhibition (%) of the water extract in the sediment analysed (C = control sample).The water extracts of the sediments had a very low toxicity index in most cases (Figure3), with values below -0.1 (representing a 10% luminescence reduction in relation to control) in 75% of the samples. Values of the toxicity index at 5 and 15 min had a good correlation in the dataset studied. In the case of the sediments coming from the non-peridotite area or from a mixture of different parent materials (samples 5, 6, 7, and 12), the toxicity index had values higher than zero, indicating the occurrence of hormesis phenomena related to the stimulation of the bacterial activity. Only one sample (4) had values of the toxicity index close to -0.25 (representing a 25% reduction in luminescence with respect to the control), indicating a moderate degree of toxicity. The ANOVA of the toxicity index indicated that samples 1, 4, and 11 (located in the lower part of the peridotite area) significantly differed in relation to the other samples analysed (Table 3), with a toxicity index ranging from -0.13 to -0.21; therefore, these three samples had a luminescence reduction of more than 10% but less than 25%, which could be related to the heavy-metal concentrations in the water extracts used in the bioassay. To correlate the heavy metal concentration in the water extracts with the toxicity index based on the reduction of luminescence, we used the Spearman correlation coefficient. In the studied dataset, we found a negative and significant correlation (P<.05) between the toxicity index and the Ni and Cr concentration in the solutions. For I5, the coefficients were -0.636 with Ni and -0.622 with Cr, and for I15 the coefficients were -0.650 with Ni and -0.580 with Cr. The comparison with the toxic levels in the literature [27] indicates that the only elements exceeding these limits were Ni and Cr, for which the toxic levels in water solutions surpassed 3- and 10-fold, respectively. No significant correlations were detected for other heavy metals in the water extract, indicating the influence of the peridotite materials in the toxicity of the samples analysed.Table 3
Toxicity index of the water extract at 5 min (I5) and 15 min (I15). (M: mean; SD: standard deviation; a, b: significant differences (P<.05) in Tukey test).
Sample123456789101112I5M-0.13a-0.01b-0.07b-0.21a0.10b0.05b0.04b-0.03b0.01b-0.03b-0.14a0.08bSD0.020.070.080.050.100.030.020.010.080.030.030.10I15M-0.15a-0.02b-0.09b-0.19a0.08b0.08b-0.03b-0.06b0.00b-0.04b-0.15a-0.01bSD0.020.090.050.030.090.050.020.040.080.050.010.05Figure 3
Toxicity index of the water extract in the sediments analysed.
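For readers who wish to reproduce the normalization and correlation steps described above, a minimal Python sketch follows. The exact form of the toxicity-index formula and all numeric inputs are illustrative assumptions, not the authors' code or data; only the use of the Spearman rank correlation follows the text.

```python
# Minimal sketch (our reconstruction, not the authors' code) of the
# control-normalized toxicity index and the Spearman correlation step.
# All numeric inputs below are illustrative placeholders.
import numpy as np
from scipy.stats import spearmanr

def toxicity_index(lum_t, lum_0, ctrl_t, ctrl_0):
    """Assumed form of the index: luminescence loss of the sample relative
    to the control. -0.10 means a 10% reduction vs. control; values > 0
    indicate hormesis (stimulation of the bacteria)."""
    return (lum_t / lum_0) / (ctrl_t / ctrl_0) - 1.0

# Hypothetical 5 min readings for one sample and the distilled-water control
i5 = toxicity_index(lum_t=620.0, lum_0=1000.0, ctrl_t=700.0, ctrl_0=1000.0)
print(f"I5 = {i5:.2f}")  # about -0.11, i.e. ~11% inhibition vs. control

# Spearman rank correlation between the index and Ni in the water extract
i5_idx = np.array([-0.13, -0.01, -0.07, -0.21, 0.10, 0.05])  # placeholder I5
ni_conc = np.array([0.20, 0.02, 0.09, 0.31, 0.01, 0.01])     # placeholder Ni
rho, p = spearmanr(i5_idx, ni_conc)
print(f"Spearman rho = {rho:.3f}, P = {p:.3f}")
```

With placeholders like these, a higher Ni concentration corresponds to a more negative index, which is the direction of the correlation reported in the study.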
## 4. Conclusions
The study area is dominated by peridotite materials, and the riverbed sediments in the basin have high concentrations of Ni, Mn, Cr, and Co. The soluble forms were obtained from the water extract of the sediments of the main river and its tributaries in the basin. The toxicity bioassay with Vibrio fischeri used the water extract of these sediments to assess the bioluminescence reduction in these bacteria. The degree of toxicity was very low in 75% of the samples, with a luminescence reduction below 10% relative to the control. A moderate-to-low degree of toxicity was found in 25% of the samples (all located in the lower part of the peridotite area), with a luminescence reduction between 13 and 21% relative to the control. The Spearman correlation coefficient indicated a negative and significant relation between the toxicity index and the Ni and Cr concentrations in the water extracts of the sediments. This toxicity bioassay proved to be a sensitive and useful tool for detecting the potential toxicity of solutions, even in samples with anomalous concentrations of heavy metals of natural origin.
---
*Source: 101390-2010-07-07.xml* | 2010 |
# Mycoplasma pneumoniae-Induced Rash and Mucositis (MIRM) Mimicking Behçet’s Disease and Paraneoplastic Pemphigus (PNP)
**Authors:** Hanish Jain; Garima Singh; Timothy Endy
**Journal:** Case Reports in Infectious Diseases
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013922
---
## Abstract
MIRM is an uncommon entity characterized by prominent mucositis, usually with sparse cutaneous involvement. The diagnosis of MIRM can be challenging due to the lack of awareness amongst clinicians. Patients with Behçet’s disease usually present with recurrent and painful mucocutaneous ulcers, while other clinical manifestations of the disease are more variable. Here, we describe an interesting case of MIRM mimicking Behçet’s disease and PNP highlighting the overlapping manifestations and diagnostic challenges.
---
## Body
## 1. Introduction
The term MIRM was coined in 2015 to distinguish the mucocutaneous disease associated with mycoplasma from the Stevens-Johnson syndrome/toxic epidermal necrolysis (SJS/TEN) spectrum and erythema multiforme (EM) [1]. Behçet’s syndrome is a rare disease characterized by recurrent oral aphthae and any of several systemic manifestations, including genital aphthae, ocular disease, skin lesions, gastrointestinal disease, neurologic disease, vascular disease, and arthritis. Most clinical manifestations of Behçet’s syndrome are believed to be due to vasculitis [2]. PNP is an often fatal autoimmune mucocutaneous blistering disease associated with malignancy and induced by lymphoproliferative disorders [3].
## 2. Case
A 20-year-old healthy Caucasian male with no significant past medical history presented with painful mucocutaneous lesions involving the glans penis, oral sores, bilateral conjunctival redness, and a painless rash on his arms and legs (Figures 1–7) over 3–5 days. The patient reported that his symptoms were preceded by upper respiratory infection-like symptoms, with a runny nose, nonproductive cough, fatigue, sore throat, and a mild fever, that had resolved on their own 10 days prior. On exam, the patient had tachycardia, with other vital signs stable. The penile lesion at the urethral meatus caused difficulty urinating and urinary retention, warranting Foley catheter insertion in the emergency department. He also received ceftriaxone and azithromycin for concern for sexually transmitted diseases. ENT performed a nasopharyngeal scope and recommended steroids for laryngeal edema; no airway compromise was noted. He was admitted to the inpatient medicine service for further management, and the Foley catheter was removed. The patient reported that this was his third episode of oral lesions over the last 9 months; he had previously received steroids and antivirals from his PCP, which resolved the oral lesions. He denied weight loss, joint pain, joint stiffness, photosensitivity, visual disturbances, a personal history of blood clots, and a family history of autoimmune diseases. The patient was a current smoker and in a monogamous relationship with his girlfriend, who had been asymptomatic. Ophthalmology was consulted for the bilateral conjunctival redness and reported no underlying ocular inflammation. Laboratory work showed leukocytosis of 15.9 with absolute neutrophilia and elevated inflammatory markers: ESR 52, CRP 115, and ferritin 219. Rheumatology was consulted for concern for Behçet’s disease and performed a pathergy test, which was read as negative at 24–48 hours. The patient also had a few episodes of diarrhea, and hematuria and proteinuria were noted on urinalysis during hospitalization. Abdominal ultrasound did not show any acute abnormality or hepatosplenomegaly. Autoimmune workup, including ANA by IFA, CCP, RF, and HLA-B∗51, was within normal limits. Infectious workup, with CXR, mycoplasma testing, mono spot test, syphilis, HIV, COVID-19, HSV PCR, rapid strep, respiratory panel, chlamydia, gonorrhea, cold agglutinins, and mycoplasma IgM and IgG, was negative. The patient was started on colchicine 0.6 mg twice daily and showed improvement. A skin biopsy was performed; the histologic differential diagnosis included a hypersensitivity reaction such as mycoplasma-induced rash and mucositis (MIRM), a drug or other hypersensitivity reaction, or a viral exanthem, while direct immunofluorescence showed linear fibrinogen deposition in the basement membrane zone. The patient was discharged home on a steroid taper and colchicine, which were continued for 2 more weeks. On follow-up with rheumatology, primary care, ophthalmology, and urology, the patient felt significantly better, with the rashes resolving (Figures 8 and 9) and inflammatory markers back to normal.Figure 1
Lesion on the glans penis.Figure 2
Ulcer on the tongue.Figure 3
Mucocutaneous lesions involving the lower lip.Figure 4
Left eye redness.Figure 5
Right eye redness.Figure 6
Maculopapular lesions on the left arm.Figure 7
Maculopapular lesions on the right knee.Figure 8
Resolving oral mucocutaneous lesions.Figure 9
Resolving lesions on the left arm, including the biopsy site.
## 3. Discussion
Mycoplasma pneumoniae, a leading cause of community-acquired pneumonia, may cause extrapulmonary manifestations, including mucocutaneous eruptions, which have been reported in approximately 25 percent of pediatric patients and young adults [4]. MIRM should be suspected when a young patient presents with a mucosal or mucocutaneous eruption and a history of prodromal symptoms, including cough, malaise, and fever, preceding the eruption by approximately one week [1]. MIRM is characterized by prominent mucositis, usually with sparse or even absent cutaneous involvement. Compared with SJS/TEN, MIRM has a distinct pathophysiology, a milder course, and a generally good prognosis. Including MIRM in a broader category called “reactive infectious mucocutaneous eruption” (RIME) has been proposed. RIME describes mucocutaneous eruptions resulting from a variety of infectious triggers and differentiates infectious triggers, which are far more likely in children and adolescents, from drug triggers [5]. Proposed diagnostic criteria for classic cases of MIRM include mucocutaneous eruption with <10 percent body surface area involvement, involvement of two or more mucosal sites, presence of a few vesiculobullous lesions or scattered, atypical, targetoid lesions, and clinical and laboratory evidence of M. pneumoniae infection [1]. Other authors have suggested adding young age to the diagnostic criteria, as MIRM is very rare in adults [6]. Confirmatory laboratory tests for M. pneumoniae include polymerase chain reaction (PCR) of a pharyngeal swab and measurement of serum-specific immunoglobulin G (IgG), immunoglobulin M (IgM), and immunoglobulin A (IgA) titers [7]. Although PCR is highly sensitive and specific, it can remain positive for up to four months after infection, making it difficult to distinguish acute from past infection. IgM titers start to increase approximately seven to nine days after infection, peak at three to six weeks, and persist for months; IgG titers begin to rise and peak approximately two weeks after IgM titers and persist for years. Thus, as both IgM and IgG may be normal in the acute phase, documentation of a titer increase in paired sera is needed for accurate serologic diagnosis. In our patient, all of these diagnostic tests, including cold agglutinins, PCR, IgM, and IgG, came back negative. Patients with MIRM often have elevated acute phase reactants, including C-reactive protein and erythrocyte sedimentation rate [8]. Given the negative tests and systemic involvement, we were concerned about Behçet’s disease, since this was the third episode of oral ulcers over the last nine months. Behçet’s disease is best diagnosed in the context of recurrent aphthous ulcerations along with characteristic systemic manifestations, including ocular disease, especially hypopyon, panuveitis, or retinal vasculitis; neurologic disease, including characteristic central nervous system parenchymal findings; vascular disease, particularly pulmonary artery aneurysms, Budd-Chiari syndrome, and cerebral venous thrombosis; and a positive pathergy reaction. Oral ulcerations also tend to be more frequent and severe in patients with Behçet’s disease [9]. Although ophthalmology had ruled out ocular inflammation, concern for systemic disease persisted, with the patient having episodes of diarrhea, and hematuria and proteinuria noted on urinalysis, raising concern for renal and GI involvement. There are no pathognomonic laboratory tests for Behçet’s disease; as a result, the diagnosis is made based on the clinical findings.
In the absence of other systemic diseases, the diagnosis of Behçet’s disease is made in patients having recurrent oral aphthae (at least three times in one year) plus two of the following clinical features: recurrent genital aphthae (aphthous ulceration or scarring), eye lesions (including anterior or posterior uveitis, cells in the vitreous on slit-lamp examination, or retinal vasculitis observed by an ophthalmologist), skin lesions (including erythema nodosum, pseudofolliculitis, papulopustular lesions, or acneiform nodules consistent with Behçet’s disease), or a positive pathergy test [10]. Pathergy is defined by a papule 2 mm or more in size developing 24 to 48 hours after oblique insertion of a 20-gauge needle 5 mm into the skin, generally performed on the forearm; this test was negative in our patient on rheumatologic evaluation. Pathergy is less common in Northern European and North American patients. Thus, it has been suggested that other features might be substituted for pathergy in these populations, including aseptic meningoencephalitis, cerebral vasculitis, recurrent phlebitis, arthritis, synovitis, epididymitis, or focal bowel ulceration [11]. HLA-B∗51 remains the most important genetic factor in Behçet’s disease, despite the recent identification of several susceptibility genes [12]. The diagnosis of Behçet’s disease is sometimes still made in patients who do not meet these criteria, although establishing it in such patients is much more difficult. Thus, our patient, with a suspected diagnosis of Behçet’s disease, was started on colchicine. A progressive, painful, erosive mucositis is uniformly present in patients with PNP. Oral involvement is the most common and initial manifestation in most patients, who develop painful, erosive stomatitis that characteristically involves the tongue [13]. Investigations to diagnose PNP include checking for systemic complications (to identify tumors) and skin biopsies [3]. With the diagnosis still in question, a skin biopsy was performed, which was consistent with MIRM. A mucosal or cutaneous biopsy is not routinely performed for the diagnosis of MIRM; however, a skin biopsy including direct immunofluorescence should be performed if an autoimmune blistering disorder is being considered in the differential diagnosis. The patient had already received empiric treatment for MIRM in the ED with antibiotics. Most mycoplasma infections are self-limiting; however, treatment with antibiotics is useful if the extrapulmonary manifestation is due to direct invasion by the organism. The macrolide, tetracycline, or fluoroquinolone classes of antibiotics are preferred, considering the age of the patient and local antibiotic resistance patterns. The duration of antibiotics in pulmonary infection is usually 5 to 7 days but remains undetermined in extrapulmonary infection. The use of steroids is controversial, but studies have shown benefits in the setting of immune-mediated manifestations [14]. Colchicine and a steroid taper were continued for up to two weeks as anti-inflammatory therapy. The rheumatologist stopped colchicine at the clinic visit after HLA-B∗51 also returned negative. Health care providers should be educated to recognize MIRM and differentiate it from autoimmune diseases like Behçet’s disease. Making a correct diagnosis is imperative to reassure patients and to avoid costly referrals and unnecessary additional treatment.
---
*Source: 1013922-2022-08-22.xml* | 2022 |
# The Effect of Electroacupuncture versus Manual Acupuncture through the Expression of TrkB/NF-κB in the Subgranular Zone of the Dentate Gyrus of Telomerase-Deficient Mice
**Authors:** Dong Lin; Jie Zhang; Wanyu Zhuang; Xiaodan Yan; Xiaoting Yang; Shen Lin; Lili Lin
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1013978
---
## Abstract
Our previous study showed that acupuncture stimulation at the acupoint ST-36 could activate brain-derived neurotrophic factor (BDNF) signaling pathways in telomerase-deficient mice. Here, we investigated whether manual acupuncture (MA) or electroacupuncture (EA) offers a therapeutic advantage against the age-related deterioration of learning and memory. Both telomerase-deficient mice (Terc−/− group, n=24) and wild-type mice (WT group, n=24) were randomly assigned to 3 subgroups (CON, controls with no treatment; MA, mice receiving manual acupuncture; EA, mice receiving electroacupuncture). The mice were subjected to behavioral testing, and EA/MA were applied at the bilateral acupoints (ST-36) for 30 min daily on 7 successive days. Brain tissues were collected after the last Morris water maze (MWM) test and subjected to immunohistochemistry and western blot analysis. The MWM test showed that EA significantly increased the time spent in the target quadrant (P≤0.01) and the frequency of locating the platform (P≤0.05) in Terc−/− mice, with no change in WT mice. Furthermore, western blotting and immunohistochemistry indicated that EA also specifically increased the expression of TrkB and NF-κB in Terc−/− mice but not in wild-type mice (P≤0.05). Meanwhile, the expression levels and ratio of ERK/p-ERK did not change significantly in any subgroup. These results indicate that, compared with MA, EA can specifically ameliorate spatial learning and memory in telomerase-deficient mice through the activation of TrkB and NF-κB.
---
## Body
## 1. Introduction
Aging-related neurodegenerative diseases are among the most intensively studied areas in neuroscience. It is well known that aging is a multifactorial, complex process that leads to the deterioration of biological functions, and telomeres and telomerase may play a key role in biological aging [1]. Telomeres protect chromosomes and, through their persistence, play an important role in cell lifespan. Telomerase is a DNA polymerase that plays an important role in telomere synthesis [2, 3]. In the nervous system, neurons have high levels of telomerase activity during embryonic and early postnatal life, while in the adult brain this level rapidly decreases; at the same time, apoptosis of neurons occurs naturally during development. Some researchers therefore believe that telomere shortening is essential to the aging process in different organisms [4, 5]. Previous research suggested that adult neurogenesis declines with age and that age-related neurodegeneration could be due to dysfunctional telomeres, and in particular to telomerase deficiency [3, 6, 7]. It is well known that acupuncture, as a traditional Chinese medicine treatment, has been widely used in neurological disorders. Furthermore, some studies have demonstrated that acupuncture or electroacupuncture plays a useful role in the treatment of Alzheimer's disease (AD) and has even shown efficacy in improving cognition [8]. Other studies indicated that acupuncture treatment targets acupoints at the body surface and ultimately produces neuroprotective effects in the nervous system [9, 10]. Among the nonpharmacological techniques, manual acupuncture (MA) and electroacupuncture (EA) are the two basic categories of acupuncture [11]. Comparing the two patterns of stimulation, EA is more repeatable and adjustable, while MA is more flexible and suitable for many diseases. A growing body of research has revealed that both EA and MA can improve cognitive deficits in AD animal models. Meanwhile, our previous studies indicated that manual acupuncture can activate brain-derived neurotrophic factor (BDNF) and its downstream signaling pathways for neuroprotection [12, 13]. However, accumulating evidence has shown that the therapeutic effect of EA centers on attenuating cognitive deficits and increasing pyramidal neuron numbers in the hippocampus [14, 15]. In the present study, we investigated the difference between the effects induced by EA and MA in telomerase-deficient mice. In addition, we further explored the expression of TrkB (tropomyosin receptor kinase B), NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells), ERK (extracellular signal-regulated kinase), and p-ERK (phosphorylated ERK) proteins in the subgranular zone (SGZ) of the dentate gyrus (DG) of telomerase-deficient mice.
## 2. Materials and Methods
### 2.1. Animals
The mice deficient for the TERC gene were provided by the Jackson Laboratory, United States (Stock #004132). The experiments were approved by the Institutional Animal Care and Use Committee of Fujian University of Traditional Chinese Medicine, China, and performed according to the NIH Guidelines for the Care and Use of Laboratory Animals. The mice were housed in an environmentally controlled vivarium under a 12 h light–dark cycle at a temperature of 23±2°C and humidity of 50–60%. Food and water were available ad libitum. All the animals were generated by intercrossing heterozygous knockout mice that had been backcrossed to naïve C57BL/6J mice for more than 3 generations. The mice were genotyped using polymerase chain reaction to confirm the genetic modifications. Two strains of 7-month-old adult mice were used in the current study (i.e., wild-type mice [WT, n=24] and telomerase-deficient mice [Terc−/−, n=24], with 8 mice in each of 3 subgroups).
### 2.2. Experimental Protocol
The mice in both the WT group (n=24) and the Terc−/− group (n=24) were randomly assigned to 3 subgroups, with 8 mice per subgroup in each group (Figure 1): (1) the control subgroup (CON), without any treatment; (2) the manual acupuncture subgroup (MA), which received manual acupuncture at the acupoint ST-36; and (3) the electroacupuncture subgroup (EA), which received electrical acupuncture stimulation at the acupoint ST-36. All animals were observed while the acupuncture was performed, and if any animal appeared uncomfortable, it was stroked gently on the back until it became calm again [16].Figure 1
Experimental design and grouping; schematic representation of the methodology used. (a) All of the WT group mice and Terc−/− group mice were randomly distributed to 3 subgroups (n=8 per subgroup): (1) controls with no treatment, (2) mice receiving manual acupuncture (MA), and (3) mice receiving electroacupuncture (EA). (b) The mice underwent the MWM test; on the 5th day, the first probe test was taken, and MA/EA treatment was then performed for 7 days. After the last MA/EA treatment, the final probe test was carried out, and all of the mice were sacrificed 24 h after the last behavioral observations.
(a) (b)
### 2.3. Morris Water Maze (MWM) Behavioral Test
To evaluate learning and memory, the Morris water maze procedure was performed as described [17, 18]. The water maze consisted of a circular tank (120 cm in diameter, 50 cm in height) filled with water to a depth of 28.5 cm, maintained at 22±2°C. The area of the pool was conceptually divided into four quadrants (NE, NW, SW, and SE) of equal size. In the center of the 3rd quadrant, we placed a hidden escape platform, 12.5 cm in diameter, defining the target quadrant. The mice were given 60 s to locate the hidden platform. Once a mouse found the submerged platform, it could remain on it for 10 s, and the latency to escape was recorded. Any mouse that failed to locate the platform within 60 s was placed on the platform by hand. Each mouse was subjected to 4 training trials per day for 4 consecutive days. Twenty-four hours after the final trial, spatial memory was assessed in a probe test, in which the mice swam freely for 60 s in the tank without the platform. The time spent in the target quadrant and the frequency of locating the platform were taken to indicate the degree of memory consolidation after learning. All data were collected by a video camera (TOTA-450III, Japan) and analyzed by an automated analyzing system (Dig-Behav, Jiliang Co. Ltd., Shanghai, China). Given the subsequent 7-day acupuncture treatment, we designed two probe tests after the 4-day training period [19]: probe test 1 was performed before the acupuncture intervention, and probe test 2 was carried out after the last day of treatment.
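As an illustration of how the two probe-test metrics can be derived from tracked positions, a minimal Python sketch follows. The tank and platform dimensions come from the protocol above, but the frame rate, coordinate origin, and platform position are hypothetical, and this is not the Dig-Behav vendor software.

```python
# Minimal sketch (assumed, not the Dig-Behav vendor software) deriving the two
# probe-test metrics from tracked (x, y) positions. Tank (120 cm diameter) and
# platform (12.5 cm diameter) sizes are from the protocol above; the frame
# rate, coordinate origin, and platform position are hypothetical.
import numpy as np

FRAME_RATE = 25.0             # frames/s, assumed
PLATFORM_XY = (-21.0, -21.0)  # hypothetical former platform centre (cm)
PLATFORM_R = 12.5 / 2.0       # platform radius (cm)

def target_quadrant_seconds(xy):
    """Time in the SW quadrant (x < 0, y < 0), taken here as the target."""
    in_target = (xy[:, 0] < 0) & (xy[:, 1] < 0)
    return in_target.sum() / FRAME_RATE

def platform_crossings(xy):
    """Count entries into the circle the platform used to occupy."""
    d = np.hypot(xy[:, 0] - PLATFORM_XY[0], xy[:, 1] - PLATFORM_XY[1])
    inside = d <= PLATFORM_R
    return int(inside[0]) + int(np.sum(~inside[:-1] & inside[1:]))

rng = np.random.default_rng(1)
track = rng.uniform(-60, 60, size=(int(60 * FRAME_RATE), 2))  # fake 60 s swim
print(target_quadrant_seconds(track), platform_crossings(track))
```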
### 2.4. Acupuncture Intervention
The control subgroup did not receive any treatment but was immobilized by hand with gentle plastic restraints, just as the treatment groups were. In the treatment groups, acupuncture stimulation was performed with a small acupuncture needle (13 mm in length, 0.3 mm in diameter; Suzhou Hua Tuo Medical Instrument Co., Suzhou, China). The bilateral acupoint ST-36 was chosen because of its effectiveness in improving brain function. The location of ST-36 and the acupuncture manipulation followed our previously described protocol [13, 14]. In the MA subgroup, manual acupuncture at ST-36 was applied for 30 min. The needles were inserted into the acupoint to a depth of 1.5–2 mm, and twirling manipulation was applied every 5 min, lasting 20 s each time; each needle was rotated bidirectionally within 90° at a speed of 180°/s. For the EA subgroup, a pair of needles was tied tightly together and inserted into bilateral ST-36 as reported previously [20]. The needles were inserted into the ST-36 acupoint to the same depth as in the MA group and connected to a Han's acupoint nerve stimulator (HANS, Model LH 202H, Beijing Huawei Ltd., Beijing, China). The parameters were as follows: sparse-dense wave at a frequency of 2/50 Hz, current of 2 mA, 30 min per stimulation, and one stimulation per day for 7 consecutive days.
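For clarity, the stimulation settings above can be collected in a small configuration object; a sketch follows, in which the field names are ours and only the values are taken from the protocol.

```python
# Minimal sketch summarizing the EA settings as a configuration object.
# The field names are ours; the values are those stated in the protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class EAProtocol:
    waveform: str = "sparse-dense"
    frequencies_hz: tuple = (2, 50)  # alternating sparse/dense frequencies
    current_ma: float = 2.0
    minutes_per_session: int = 30
    sessions_per_day: int = 1
    duration_days: int = 7

print(EAProtocol())
```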
### 2.5. Tissue Preparation and Immunohistochemistry
After the behavioral tests and acupuncture intervention, all the animals were sacrificed under 10% chloral hydrate (0.35 ml/100 g, intraperitoneal [i.p.]), and the brain tissues were collected after intracardial perfusion with saline. The brain samples were halved for each subject; the left half was set aside for protein preparation, and the right half was fixed with 4% (w/v) paraformaldehyde for subsequent immunohistochemical analysis. The tissue blocks containing the hippocampus were dehydrated and embedded in paraffin, and the fixed brains were cut into 5 μm sagittal sections. The sections were mounted on slides coated with 0.1% polylysine reagent (Sigma). Subsequently, the sections were dewaxed, hydrated, incubated in 0.01 mol/L citrate buffer for thermal antigen retrieval for 5 min in a microwave (700 W), treated for 10 min with 3% H2O2 at room temperature, and washed in phosphate-buffered saline (PBS) 3 × 5 min. Next, the sections were blocked in 2% BSA for 10 min and incubated with primary antibody diluent (rabbit anti-TrkB 1 : 500, Cell Signaling Technology; rabbit anti-NF-κB 1 : 200, Cell Signaling Technology) for 12 h at 4°C. The sections were then rinsed with PBS and incubated with secondary antibody diluent (biotinylated goat anti-rabbit IgG, diluted 1 : 1000, Vector Laboratories) for 30 min at room temperature. After washing with PBS 3 × 5 min, a diaminobenzidine (DAB) kit (Vector Laboratories, Burlingame, USA) was used for color development for 5 min. After counterstaining with hematoxylin, the brain slices were dehydrated, observed under a light microscope (BX-51, Olympus, Tokyo, Japan), and analyzed using ImageJ software.
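The ImageJ quantification is not detailed in the text; one common way to obtain a mean optical density from an 8-bit brightfield DAB image is the log transform sketched below. This is an assumed reconstruction, not the authors' macro, and the region-of-interest values are toy numbers.

```python
# Assumed reconstruction of a mean optical density (OD) measurement over a
# region of interest in an 8-bit brightfield image; toy pixel values only.
import numpy as np

def mean_optical_density(gray_roi, i_max=255.0):
    """OD = -log10(I / I0); darker (more stained) pixels give higher OD."""
    intensity = np.clip(gray_roi.astype(float), 1.0, i_max)  # avoid log10(0)
    return float(np.mean(-np.log10(intensity / i_max)))

roi = np.array([[120, 90], [60, 200]], dtype=np.uint8)  # toy 2x2 ROI
print(f"mean OD = {mean_optical_density(roi):.3f}")
```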
### 2.6. Western Blot Analysis
The frozen hippocampal tissues obtained after the behavioral tests were homogenized on ice in 1.5 ml RIPA protein lysis buffer supplemented with PMSF. After centrifugation for 15 minutes at 12,000 ×g at 4°C, the protein in the cleared supernatant was quantified and adjusted to 5 mg/ml. Equivalent amounts of protein (30 μg/lane) were separated by SDS-PAGE and transferred to PVDF membranes. Membranes were blocked with 5% (w/v) bovine serum albumin in Tris-buffered saline with Tween 20 for 1 hour and then incubated with primary antibodies (rabbit anti-mouse TrkB [1 : 1000], ERK [1 : 2000], p-ERK [1 : 1000], and NF-κB [1 : 500]; Santa Cruz Biotechnology) overnight at 4°C. The immunoblots were then incubated with goat anti-rabbit horseradish peroxidase-conjugated IgG (1 : 1000) for 2 hours at room temperature; a chemiluminescent reagent was applied to develop the films, and the protein bands were quantified by Quantity One. All data were expressed as ratios relative to the β-actin loading control.
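The normalization described here reduces to a per-lane ratio; a minimal sketch with invented band volumes (not Quantity One output) follows.

```python
# Minimal sketch of the normalization step (not Quantity One output): each
# target band is divided by the β-actin band from the same lane and then
# expressed relative to the control lane. Band volumes are invented.
import numpy as np

trkb = np.array([1850.0, 2020.0, 1710.0])   # hypothetical TrkB band volumes
actin = np.array([2400.0, 2550.0, 2300.0])  # matching β-actin band volumes

ratio = trkb / actin     # per-lane TrkB/β-actin ratio
fold = ratio / ratio[0]  # fold change vs. the first (control) lane
print(np.round(ratio, 3), np.round(fold, 3))
```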
### 2.7. Statistical Analysis
All data are presented as mean ± SEM for each group. For the Morris water maze test, the escape latency in the hidden platform trial was analyzed by two-way repeated-measures ANOVA, and the probe trial data, including escape latencies and original angle, were analyzed by multifactorial analysis of variance (ANOVA). The immunohistochemistry and western blot data were analyzed by one-way ANOVA followed by LSD (equal variances assumed) or Dunnett's T3 (equal variances not assumed) post hoc tests. All analyses were performed with Prism 6.0 (GraphPad Software Inc., San Diego, USA), and P values less than 0.05 were considered statistically significant.
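As a hedged illustration of this comparison pipeline, the sketch below runs a one-way ANOVA followed by Tukey's HSD post hoc test (the test named in the legend of Figure 3; the LSD and Dunnett's T3 procedures mentioned above are not built into statsmodels) on simulated data with n = 8 per subgroup and invented means.

```python
# Illustration only: one-way ANOVA across three subgroups, then Tukey's HSD
# post hoc comparisons, using SciPy and statsmodels in place of Prism.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
con = rng.normal(1.00, 0.15, 8)  # hypothetical normalized TrkB, CON subgroup
ma = rng.normal(1.10, 0.15, 8)   # MA subgroup
ea = rng.normal(1.35, 0.15, 8)   # EA subgroup

F, p = f_oneway(con, ma, ea)     # omnibus one-way ANOVA
print(f"F = {F:.2f}, P = {p:.4f}")

values = np.concatenate([con, ma, ea])
groups = ["CON"] * 8 + ["MA"] * 8 + ["EA"] * 8
print(pairwise_tukeyhsd(values, groups))  # pairwise post hoc comparisons
```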
## 3. Results
### 3.1. Effect of Electroacupuncture on Spatial Learning and Memory
The results of the Morris water maze test are presented in Figure 2. In the hidden platform trial, the escape latency in each group trended downward over the 4-day training period (Figures 2(a) and 2(b)). To analyze the effect of acupuncture treatment, two probe trials were designed. The percentage of time spent in the target quadrant and the frequency of locating the platform were used for statistical analysis, and the differences in these values between pretreatment and posttreatment (D-values) were further calculated to evaluate the significance of the change after acupuncture. The probe trial results showed that, compared with the CON subgroup, Terc−/− mice in the EA subgroup showed a significantly greater increase in time spent in the quadrant where the platform used to be (P≤0.01, Figure 2(c)), while there was almost no difference among the three subgroups for WT mice. Likewise, the D-value of the frequency of locating the platform was significantly increased in the EA subgroup compared to the CON subgroup (P≤0.05, Figure 2(d)), whereas the variation among WT mice showed no difference among the three subgroups. These results demonstrate that EA could ameliorate the cognitive deficits of the Terc−/− mice while not affecting WT mice, consistent with our previous report [12].Figure 2
Acupuncture intervention prevented spatial learning and memory impairment in telomerase-deficient (Terc−/−) mice in the Morris water maze task. (a) Performance in the training trial of Terc−/− mice (n=8 for each subgroup) and (b) wild-type mice (n=8 for each subgroup) during the 4-day hidden platform trial. The data show the escape latency to reach the hidden platform before the acupuncture intervention in the three subgroups. (c) In the probe test, the difference in time spent in the target quadrant between before and after treatment was calculated; EA stimulation significantly increased the time in the target quadrant for Terc−/− mice (P∗∗≤0.01), while nothing changed in WT mice. (d) There was no significant difference among the subgroups for WT mice, while, compared with the CON subgroup, EA stimulation significantly increased the D-value of the frequency of locating the platform between before and after treatment (P∗≤0.05).
(a) (b) (c) (d)
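The pre/post "D-value" used in the panels above is, as we read it, a simple paired difference between probe test 1 and probe test 2 for each mouse; a minimal sketch with illustrative numbers:

```python
# D-value read as the paired difference between the two probe tests;
# the numbers below are illustrative, not data from the study.
import numpy as np

pre = np.array([24.0, 26.5, 22.1])   # % time in target quadrant, probe test 1
post = np.array([31.5, 35.0, 29.8])  # % time in target quadrant, probe test 2
d_value = post - pre                 # positive values = improvement
print(d_value)
```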
### 3.2. EA Treatment Improved the Levels of TrkB Protein in the Hippocampus of the Terc−/− Group
Brain tissue samples were analyzed by immunohistochemistry and western blotting to investigate the effect of acupuncture stimulation in the two strains of mice (Figures 3(a) and 3(c)). TrkB protein is mainly distributed in the membrane in the subgranular zone (SGZ) around the dentate gyrus (DG) of the hippocampus. Although some staining was weak, positively stained cells were still present. The immunohistochemical evaluation indicated that only in Terc−/− mice did electroacupuncture stimulation (EA subgroup) significantly increase the expression of TrkB protein compared with the CON subgroup (P∗≤0.05); there was no obvious difference among the subgroups in WT mice (Figure 3(b)). The western blotting results for TrkB in the hippocampus also showed that EA promoted the expression of TrkB compared with the other subgroups (Figure 3(c)), which is likewise consistent with our former report [12].Figure 3
Immunohistochemistry and western blot analysis of TrkB/NF-κB in the hippocampus of WT and Terc−/− mice (n=8 per subgroup). The brain samples were sliced sagittally into 5 μm sections; representative photographs and the mean optical density of positive cells are shown in (a) and (d), and (b) and (e), respectively. Data are expressed as mean ± SEM and were analyzed using one-way ANOVA followed by Tukey's test for multiple comparisons. (a and d) The brain slices were hematoxylin-stained in the same region of the hippocampus in the three subgroups. TrkB/NF-κB positively stained cells appear brown (red arrow) in the subgranular zone (SGZ) around the dentate gyrus (DG); scale bar = 100 μm. (b) Compared with the CON subgroup, the mean optical density of TrkB-positive staining in the EA subgroup was significantly increased in Terc−/− mice (P∗≤0.05), and there was no difference among the three subgroups for WT mice. (e) Although more NF-κB positively stained cells were apparent in the photographs, no significant differences in NF-κB immunoreactivity were observed among the subgroups in either strain. (c and f) Expression of TrkB/NF-κB in WT and Terc−/− mice (n=8 per subgroup) detected by western blotting. Data are represented as the ratio of TrkB (NF-κB) to β-actin. The bar graphs represent the levels of TrkB/NF-κB in the hippocampus in both strains. In the Terc−/− mice, both electroacupuncture and manual acupuncture significantly increased the expression of TrkB/NF-κB (P∗≤0.05; P∗∗∗≤0.001), while nothing changed in WT mice.
(a) (b) (c) (d) (e) (f)
### 3.3. EA Treatment Increased NF-κB Expression in the Hippocampus of the Terc−/− Group
To investigate whether EA or MA can alter the expression of the TrkB downstream signaling pathway, the expression of NF-κB, ERK, and p-ERK was measured in the tissue samples. In the photographs, positively stained NF-κB appears brown, mainly in the subgranular zone (SGZ) around the dentate gyrus (DG) of the hippocampus (Figures 3(d) and 3(e)), with no significant differences among the groups in the two strains of mice (Terc−/− and WT). Meanwhile, the western blotting results showed that, compared with the CON subgroup, the relative expression of NF-κB was significantly increased in the EA (P∗∗∗≤0.001) and MA (P∗≤0.05) subgroups of Terc−/− mice (Figure 3(f)).
### 3.4. The Effects of Acupuncture Treatment on the Phosphorylation Levels of ERK
To further explore the mechanisms of acupuncture, the protein levels of p-ERK and ERK were measured by western blot to evaluate ERK activation. As p-ERK is a marker of ERK activation, their ratio was also calculated (Figure 4). The results demonstrated that neither electroacupuncture nor manual acupuncture produced significant differences in ERK or p-ERK expression among the subgroups of Terc−/− mice (P≥0.05), and likewise among the subgroups of WT mice. Furthermore, the p-ERK/ERK ratio showed no significant differences in any subgroup for either type of mouse (P≥0.05).Figure 4
Expression level of ERK in the hippocampus of WT and Terc−/− mice (n=8 per subgroup) detected by western blot. Blots were reprobed for β-actin to control for loading and transfer. Data are expressed as mean ± SEM. The degree of ERK activation is represented as the ratio of p-ERK to ERK. Neither ERK nor p-ERK expression changed significantly in any subgroup of the two types of mice, nor did the p-ERK/ERK ratio.
## 4. Discussion
As one of the most common tasks used to assess spatial learning and memory, the Morris water maze (MWM) was used in this study. The hidden platform trial and the probe trial were used to assess spatial learning and spatial memory, respectively. The two strains of mice were observed for 4 consecutive days [19]. The results of the training period showed no significant differences among the groups, suggesting that all mice had comparable learning and memory capacity before treatment (Figure 2).

Although acupuncture has been widely applied to various disorders of the nervous system, few studies have described whether acupuncture intervention has different effects in different strains. In the present study, the Terc−/− mice showed a better response to electroacupuncture, implying that acupuncture stimulation produced therapeutic effects only in animals in a pathological state. Some studies have reported that, in healthy animals [21], both electroacupuncture and manual acupuncture can lead to a significant increase in cell proliferation specifically in the SGZ of the DG. In our study, however, only electroacupuncture ameliorated the learning and memory deficits of Terc−/− mice [15, 22].

Recently, it has been reported that aging-related neurodegenerative diseases are characterized by an imbalance between neurogenesis and neurodegeneration, and, interestingly, some studies have demonstrated that stimulating neurogenesis may benefit patients with these diseases [23, 24]. In the current study, we found that, only in Terc−/− mice after acupuncture treatment, TrkB/NF-κB proteins were expressed in the SGZ around the dentate gyrus of the hippocampus. We also found that EA administration produced greater amelioration of reference memory impairment in Terc−/− mice [2], suggesting that EA alleviates aging risk by inhibiting the decline of reference memory. On the other hand, the hippocampus, which plays an important role in learning and memory, shows a high degree of neurogenesis, and only the DG of the hippocampus continues to develop through adulthood [25]. A growing body of research indicates that there are only two neurogenic areas in the brain: the subventricular zone (SVZ) of the lateral ventricles and the subgranular zone (SGZ) of the DG in the hippocampus [26]. Undifferentiated, rapidly proliferating progenitor cells in the SGZ of the DG thus retain the ability to differentiate into granule cells throughout life. In our study, NF-κB- and TrkB-positively stained cells were found in the SGZ of the hippocampus at significantly higher levels than in the CON or MA subgroups, indicating that electroacupuncture treatment can activate protein signaling pathways in cells around the DG of Terc−/− mice, whereas WT mice were not affected by the acupuncture stimulation [15, 27, 28].

Regarding the difference between the effects of electroacupuncture and manual acupuncture, it is commonly accepted that EA stimulation has a beneficial effect on neurodegenerative diseases. For manual acupuncture, the “De-Qi” sensation is considered essential to induce an effect; in clinical practice, acupuncture needles are repeatedly moved up and down in different directions precisely to elicit this “De-Qi” sensation [29, 30].
Consequently, some authors believe that the effect of MA depends on the intensity (mild or strong) and duration of manipulation, even when the needle is tightly wound around by muscle fibers [31]. Other studies have demonstrated that EA may cause electrical twitching of surrounding tissues and induce MA-like stimulation through mechanoreceptors [32]. Previous studies showed that manual acupuncture at ST36 significantly increased the number of BrdU-positive cells after ischemic injury [33, 34]; subsequently, electroacupuncture stimulation at ST36 was reported to enhance cell proliferation in the DG in a rat model of diabetes [9]. In our research, after electroacupuncture treatment, the hippocampal expression of TrkB was significantly increased in the Terc−/− group compared with the WT group. These results may indicate that acupuncture stimulation is closely related to neurogenesis in the hippocampus. The acupoint used here, ST36, is located on the anterior tibial muscle and is one of the most important acupoints in clinical acupuncture for antiaging. Stimulation of ST36 is carried out for a wide range of conditions affecting the digestive, cardiovascular, immune, and nervous systems; furthermore, ST36 is one of the seven acupoints used for stroke treatment. As the high-affinity BDNF receptor, tropomyosin receptor kinase B (TrkB) is, like BDNF itself, expressed in many kinds of neurons in the brain [35–37]. Our previous studies demonstrated that manual acupuncture stimulation can activate BDNF and its downstream signaling pathways [2].

Our study showed that EA increases the number of cells positive for TrkB and NF-κB, but there was no evidence that acupuncture activates downstream effectors of TrkB through the ERK signaling pathway. Several studies support that activation of TrkB can prevent cell death by activating the ERK pathway in cortical and cerebellar neurons [38], and some researchers have suggested that the ability of BDNF-TrkB to stimulate telomerase activity can be partially suppressed by complete inhibition of the extracellular signal-regulated kinase (ERK) [6]. However, our study indicated that acupuncture specifically increased the expression of TrkB and NF-κB in Terc−/− mice without activating ERK/p-ERK (Figures 3 and 4). Thus, although the ERK signaling pathway is reported to play an important role in the overall effects of electroacupuncture [39], nothing changed in the hippocampal neurons here. Based on our results, it can be inferred that EA stimulation improved spatial learning and memory in Terc−/− mice and that this might stem from the activation of NF-κB.

Some studies have demonstrated that NF-κB is a proinflammatory transcription factor that is increased in the aging brain and that its activation can protect neurons against death induced by neurodegeneration [40]; upregulation of NF-κB could therefore exert a neuroprotective effect in the brain [41]. At the same time, reactive oxygen species (ROS) have been implicated in many aspects of aging and in neurodegenerative diseases [42]. NF-κB is oxygen sensitive, acts upstream of VEGF (vascular endothelial growth factor) gene expression leading to angiogenesis, and can regulate the proinflammatory response in endothelial cells [43].
Several studies support that the role of NF-κB depends on the neuronal cell type: in ischemic dementia, activation of NF-κB in cortical microglia promoted neuronal degeneration, whereas a neuroprotective effect was observed in hippocampal neurons [43–45]. Our results indicated that NF-κB was specifically increased by electroacupuncture in Terc−/− mice rather than in WT mice, with positive cells located in the SGZ around the dentate gyrus of the hippocampus. This suggests that electroacupuncture may be involved in nerve regeneration in the SGZ; furthermore, the increased expression of TrkB and NF-κB in the subgranular zone (SGZ) around the dentate gyrus (DG) may be a possible mechanism of EA in the treatment of aging in telomerase-deficient mice.
## 5. Conclusions
In summary, our key findings suggest that, compared with MA, EA ameliorated the spatial learning and memory ability of telomerase-deficient mice and also increased the expression of TrkB and NF-κB in the subgranular zone (SGZ) around the dentate gyrus (DG). Based on these results, neuroprotection and neuronal regeneration may play a critical role in the electroacupuncture-induced antiaging effect. The mechanisms of the EA and MA effects on telomerase-deficient mice further provide a theoretical basis for antiaging clinical applications.
---
*Source: 1013978-2018-04-22.xml* | 1013978-2018-04-22_1013978-2018-04-22.md | 46,028 | The Effect of Electroacupuncture versus Manual Acupuncture through the Expression of TrkB/NF-κB in the Subgranular Zone of the Dentate Gyrus of Telomerase-Deficient Mice | Dong Lin; Jie Zhang; Wanyu Zhuang; Xiaodan Yan; Xiaoting Yang; Shen Lin; Lili Lin | Evidence-Based Complementary and Alternative Medicine
(2018) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1013978 | 1013978-2018-04-22.xml | ---
## Abstract
Our previous study showed that the acupuncture stimulation on the acupoint (ST-36) could activate the brain-derived neurotropic factor (BDNF) signaling pathways in telomerase-deficient mice. Recently, we set out to investigate whether the manual acupuncture (MA) or electroacupuncture (EA) displays a therapeutic advantage on age-related deterioration of learning and memory. Both telomerase-deficient mice (Terc−/− group, n=24) and wild-type mice (WT group, n=24) were randomly assigned to 3 subgroups (CON, controls with no treatment; MA, mice receiving manual acupuncture; EA, mice receiving electric acupuncture). The mice were subjected to behavior test, and EA/MA were applied at bilateral acupoints (ST36) 30 min daily for 7 successive days. The brain tissues were collected after the last Morris water maze (MWM) test and were subjected to the immunohistochemistry and western blot analysis. The MWM test showed that EA can significantly increase the time in target quadrant (P≤0.01) and frequency of locating platform for Terc−/− mice (P≤0.05), while nothing changed in WT mice. Furthermore, western blotting and immunohistochemistry suggested that EA could also specifically increase the expression of TrkB and NF-κB in Terc−/− mice but not in wild-type mice (P≤0.05). Meanwhile, the expression level and ratio of ERK/p-ERK did not exhibit significant changes in each subgroup. These results indicated that, compared with MA, the application of EA could specifically ameliorate the spatial learning and memory capability for telomerase-deficient mice through the activation of TrkB and NF-κB.
---
## Body
## 1. Introduction
Aging related neurodegeneration diseases are currently the mostly studied area in neuroscience. It is well known that aging is a multifactorial complex process that leads to the deterioration of biological functions, and the telomeres and telomerase may play a key role in this biological aging [1]. It is previously known that telomere protects chromosomes and plays important role in the cell life for its prolonged persistence. The telomerase is a DNA polymerase that plays an important role in telomere synthesis [2, 3]. In the nervous system, the neurons during embryonic and early postnatal life have high levels of telomerase activity, while in adult brain the level rapidly decreases, and at the same time the apoptosis of neurons occurs naturally during development. Therefore, some researchers believed that the reducing telomeres appear to be essential for the aging process in different organism [4, 5]. Previous research suggested that adult neurogenesis declines with age, and the age-related neurodegeneration could be due to dysfunctional telomeres, especially the telomerase with deficiency [3, 6, 7].It is well known that acupuncture treatment taking as an traditional Chinese medicine has been widely used in some neurological disorders. Furthermore, some studies have demonstrated that acupuncture or electroacupuncture exerted a vital function in the treatment of Alzheimer disease (AD), even proven a great efficiency in improving intelligence [8]. Some researches indicated that acupuncture treatment targeted the acupoints in surface and finally resulted in particularly neuroprotective effects in nervous system [9, 10]. Among the nonpharmacological techniques, manual acupuncture (MA) and electroacupuncture (EA) were the basic two categories to acupuncture [11]. Compared with the different patterns of stimulation, EA is more repeatable and adjustable, while MA is more flexible and suitable in many diseases. Furthermore, there were more and more research that revealed that both EA and MA could improve cognitive deficits in AD animal models. Meanwhile, our previous studies have indicated that manual acupuncture could activate the brain-derived neurotropic factor (BDNF) and its downstream signaling pathways for neuroprotection [12, 13]. However, accumulating evidence has demonstrated that the therapeutic effect of EA was focused on attenuating cognitive deficits and increasing pyramidal neuron number in hippocampal [14, 15]. In the present study, we investigated the difference of effect induced by EA and MA on telomerase-deficient mice. In addition, we further explored the expression of TrkB (tropomyosin receptor kinase B)/NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells)/ERK (extracellular regulated protein kinases)/p-ERK (phosphorylated-extracellular regulated protein kinases) protein in the subgranular zone (SGZ) of the dentate gyrus (DG) of telomerase-deficient mice.
## 2. Materials and Methods
### 2.1. Animals
The mice deficient for TERC genes were provided by the Jackson Laboratory in United States (Stock #004132). The experiments were approved by the Institutional Animal Care and Use Committee of Fujian University of Traditional Chinese Medicine, China, and performed according to the NIH Guideline for the Care and Use of Laboratory Animals. The mice were housed in an environmentally controlled vivarium under a 12 h light–dark cycle and temperature23±2°C, humidity 50–60%. Food and water were available for freedom usage. All the animals were generated by inbreed crossing of heterozygous knockout mice that were backcrossed to naïve C57BL/6J mice for more than 3 generations. The mice were genotyped using Polymerase Chain Reaction to confirm the genetic modifications. Two strains of 7-month-old adult mice were used for the current study (i.e., wild-type mice [WT, n=24] and telomerase-deficient mice [Terc−/−, n=24], with 8 mice each in 3 subgroups).
### 2.2. Experimental Protocol
The mice in both the WT group (n=24) and the Terc−/− group (n=24) were randomly assigned to 3 subgroups, 8 mice to each subgroup per group (Figure 1): (1) the control subgroup (CON) without any treatment; (2) the manual acupuncture subgroup (MA) that received manual acupuncture at the acupoint ST-36; (3) the electroacupuncture (EA) subgroup that received an electrical acupuncture stimulation on acupoint ST-36. All animals were observed while the acupuncture was performed, and if anyone looked uncomfortable, it was stroked gently on the back until it became calm again [16].Figure 1
Experiment design on grouping, schematic representation of the methodology used. (a) All of the WT group mice and Terc−/− group mice were randomly distributed to 3 subgroups (n=8 per subgroup): (1) controls with no treatment, (2) mice receiving manual acupuncture (MA), (3) mice receiving electric acupuncture (EA). (b) The mice received MWM test. In the 5th day, the MWM test was taken, and then MA/EA treatment was performed for 7 days. After the last MA/EA treatment administration, the last probe test was carried out. And all of the mice were sacrificed after the last behavioral observations (24 h later).
(a) (b)
### 2.3. Morris Water Maze (MWM) Behavioral Test
For the purpose of evaluating the ability of learning and memory, Morris water maze procedure was performed as described [17, 18]. The water maze consisted of a circular tank (120 cm in diameter, 50 cm in height) filled with water to a depth of 28.5 cm, maintained at 22±2°C. The area of the pool was conceptually divided into four quadrants (NE, NW, SW, and SE) of equal size. In the center of the 3rd quadrant, we placed a hidden escape platform as the target quadrant with 12.5 cm in diameter. The mice were given 60 s to locate the hidden platform. Once the mice found the submerged platform, it could remain on it for 10 s, and the latency to escape was recorded. Any mouse that failed to locate the platform within 60 s was placed on the platform by hand. Each mouse was subjected to 4 training trials per day for 4 consecutive days. Twenty-four hours after the final trial, the assessing spatial memory was taken in probe test. In this test, the mice need to swim freely for 60 s without the platform in the tank. Time spent in the target quadrant and the frequencies of locating platform were taken to indicate the degree of memory consolidation after learning. All data were collected by a video camera (TOTA-450III, Japan) and analyzed by an automated analyzing system (Dig-Behav, Jiliang Co. Ltd., Shanghai, China). Considering the following 7-day acupuncture treatment, we designed two probe tests after 4-day training time [19]. The probe test 1 was arranged before acupuncture intervention, and the probe test 2 was carried out after the last day of treatment.
### 2.4. Acupuncture Intervention
The control subgroup did not receive any treatment but were immobilized by hand with gentle plastic restraints just as the treatment groups. In the treatment group, acupuncture stimulation was performed by a small acupuncture needle (13 mm in length, 0.3 mm in diameter, from Suzhou Hua Tuo Medical Instrument Co., Suzhou, China). Because of the effectiveness in improving the brain function, the point of bilateral ST-36 was chosen to be used. The locations of ST-36 and acupuncture manipulation were chosen following our previously described protocol [13, 14]. In the MA subgroup, the manual acupuncture on the point of ST-36 was applied for 30 mins. The needles were inserted into acupoint for a depth of 1.5–2 mm, and twirling manipulation was applied every 5 min and lasted 20 s each time. Each needle was rotated bidirectionally within 90° at a speed of 180°/s. For EA subgroup, a pair of needles were tightly tied together and inserted to bilateral ST-36 just as reported previously [20]. The needles were was also inserted into ST-36 acupoint for the same depth just as the MA group and connected to a Han’s acupoint nerve stimulator (HANS, Han’s Acupoint Nerve Stimulator, Model LH 202H, Beijing Huawei Ltd., Beijing, China). The parameters were as follows: sparse-dense wave with a frequency of 2/50 Hz, current of 2 mA, 30 min/stimulation, and one stimulation per day, for 7 consecutive days.
### 2.5. Tissue Preparation and Immunohistochemistry
After behavioral test and acupuncture intervention all the animals were sacrificed under 10% chloral hydrate (0.35 ml/100 g, intraperitoneal [i.p.]), and the brain tissues were collected after intracardial perfusion with saline. The brain samples were halved for each of the subjects; the left-half was separated out for protein preparation, and the right was fixed with 4% (w/v) paraformaldehyde for next immunohistochemistry analysis. The tissue blocks containing hippocampus were dehydrated and embedded in paraffin. Fixed brains were cut in 5μm sagittal sections. The sections was mounted on 0.1% polylysine reagent (Sigma) coated slides. Subsequently, the sections were dewaxed and hydrated and incubated in 0.01 mol/L of citrate buffer for antigen thermal remediation for 5 min by being treated with microwave (700 W), and then for 10 min with 3% H2O2 at room temperature, and washed in phosphate-buffered saline (PBS) for 3 × 5 min. Next, the sections were blocked in 2% BSA for 10 min and incubated with primary antibody diluent (rabbit anti-TrkB 1 : 500, Cell Signalling Technology; rabbit anti-NF-κB 1 : 200, Cell Signalling Technology) for 12 h at 4°C. Then, the sections were rinsed by PBS and next incubated with secondary antibody diluent (biotinylated goat anti-rabbit IgG, diluted 1 : 1000, Vector Laboratories) for 30 min at room temperature. After wash by PBS for 3 × 5 min, the diaminobenzidine (DAB) kit (Vector Laboratories, Burlingame, USA) was used for color development for 5 min. After being redyed with hematoxylin, the brain slices were dehydrated and observed under a light microscope, BX53 (BX-51 Olympus, Tokyo, Japan), and analyzed using Image J software.
### 2.6. Western Blot Analysis
The frozen hippocampus tissues were obtained after behavior test and were homogenized on ice in 1.5 ml RIPA protein lysis buffer supplemented with 500 g PMSF. After centrifugation for 15 minutes at 12,000 ×g at 4°C, the protein in cleared supernatant was quantified and adjusted to 5 mg/ml. Equivalent amounts of protein (30μg/lane) were separated by SDS-PAGE and transferred to PVDF membranes. Membranes were blocked with 5% (w/v) bovine serum albumin in Tris-buffered saline with Tween 20 for 1 hour and then incubated with primary antibody (rabbit anti-mouse TrkB [1 : 1000], ERK [1 : 2000], P-ERK [1 : 1000], NF-κB [1 : 500], Santa Cruz Biotechnology,) overnight at 4°C. The immunoblots were then incubated with goat anti-rabbit horseradish peroxidase-conjugated IgG for 2 hours at room temperature (1 : 1,000), we applied the chemiluminescent to develop the films, and the protein bands were quantified by Quantity One. The protein expression level was controlled by the protein of β-actin. All the data were expressed as the ratio relative after normalization to the β-actin levels.
### 2.7. Statistical Analysis
All data is presented as mean ± SEM for each group. For the Morris water maze test the escape latency time of the hidden platform trial was analyzed by two-way ANOVA of repeated measures, and the probe trial including escape latencies and original angle was conducted in the form of a multifactorial analysis of variance (ANOVA). The immunohistochemistry and western blot assay data were also analyzed by one-way ANOVA analysis of variance followed by LSD (equal variances assumed) or Dunnett’s T3 (equal variances not assumed) post hoc test. All the analysis was performed with Prism 6.0 (GraphPad Software Inc., San Diego, USA), and theP values less than 0.05 were considered statistically significant.
## 2.1. Animals
The mice deficient for TERC genes were provided by the Jackson Laboratory in United States (Stock #004132). The experiments were approved by the Institutional Animal Care and Use Committee of Fujian University of Traditional Chinese Medicine, China, and performed according to the NIH Guideline for the Care and Use of Laboratory Animals. The mice were housed in an environmentally controlled vivarium under a 12 h light–dark cycle and temperature23±2°C, humidity 50–60%. Food and water were available for freedom usage. All the animals were generated by inbreed crossing of heterozygous knockout mice that were backcrossed to naïve C57BL/6J mice for more than 3 generations. The mice were genotyped using Polymerase Chain Reaction to confirm the genetic modifications. Two strains of 7-month-old adult mice were used for the current study (i.e., wild-type mice [WT, n=24] and telomerase-deficient mice [Terc−/−, n=24], with 8 mice each in 3 subgroups).
## 2.2. Experimental Protocol
The mice in both the WT group (n=24) and the Terc−/− group (n=24) were randomly assigned to 3 subgroups, 8 mice to each subgroup per group (Figure 1): (1) the control subgroup (CON) without any treatment; (2) the manual acupuncture subgroup (MA) that received manual acupuncture at the acupoint ST-36; (3) the electroacupuncture (EA) subgroup that received an electrical acupuncture stimulation on acupoint ST-36. All animals were observed while the acupuncture was performed, and if anyone looked uncomfortable, it was stroked gently on the back until it became calm again [16].Figure 1
Experiment design on grouping, schematic representation of the methodology used. (a) All of the WT group mice and Terc−/− group mice were randomly distributed to 3 subgroups (n=8 per subgroup): (1) controls with no treatment, (2) mice receiving manual acupuncture (MA), (3) mice receiving electric acupuncture (EA). (b) The mice received MWM test. In the 5th day, the MWM test was taken, and then MA/EA treatment was performed for 7 days. After the last MA/EA treatment administration, the last probe test was carried out. And all of the mice were sacrificed after the last behavioral observations (24 h later).
(a) (b)
## 2.3. Morris Water Maze (MWM) Behavioral Test
For the purpose of evaluating the ability of learning and memory, Morris water maze procedure was performed as described [17, 18]. The water maze consisted of a circular tank (120 cm in diameter, 50 cm in height) filled with water to a depth of 28.5 cm, maintained at 22±2°C. The area of the pool was conceptually divided into four quadrants (NE, NW, SW, and SE) of equal size. In the center of the 3rd quadrant, we placed a hidden escape platform as the target quadrant with 12.5 cm in diameter. The mice were given 60 s to locate the hidden platform. Once the mice found the submerged platform, it could remain on it for 10 s, and the latency to escape was recorded. Any mouse that failed to locate the platform within 60 s was placed on the platform by hand. Each mouse was subjected to 4 training trials per day for 4 consecutive days. Twenty-four hours after the final trial, the assessing spatial memory was taken in probe test. In this test, the mice need to swim freely for 60 s without the platform in the tank. Time spent in the target quadrant and the frequencies of locating platform were taken to indicate the degree of memory consolidation after learning. All data were collected by a video camera (TOTA-450III, Japan) and analyzed by an automated analyzing system (Dig-Behav, Jiliang Co. Ltd., Shanghai, China). Considering the following 7-day acupuncture treatment, we designed two probe tests after 4-day training time [19]. The probe test 1 was arranged before acupuncture intervention, and the probe test 2 was carried out after the last day of treatment.
## 2.4. Acupuncture Intervention
The control subgroup did not receive any treatment but were immobilized by hand with gentle plastic restraints just as the treatment groups. In the treatment group, acupuncture stimulation was performed by a small acupuncture needle (13 mm in length, 0.3 mm in diameter, from Suzhou Hua Tuo Medical Instrument Co., Suzhou, China). Because of the effectiveness in improving the brain function, the point of bilateral ST-36 was chosen to be used. The locations of ST-36 and acupuncture manipulation were chosen following our previously described protocol [13, 14]. In the MA subgroup, the manual acupuncture on the point of ST-36 was applied for 30 mins. The needles were inserted into acupoint for a depth of 1.5–2 mm, and twirling manipulation was applied every 5 min and lasted 20 s each time. Each needle was rotated bidirectionally within 90° at a speed of 180°/s. For EA subgroup, a pair of needles were tightly tied together and inserted to bilateral ST-36 just as reported previously [20]. The needles were was also inserted into ST-36 acupoint for the same depth just as the MA group and connected to a Han’s acupoint nerve stimulator (HANS, Han’s Acupoint Nerve Stimulator, Model LH 202H, Beijing Huawei Ltd., Beijing, China). The parameters were as follows: sparse-dense wave with a frequency of 2/50 Hz, current of 2 mA, 30 min/stimulation, and one stimulation per day, for 7 consecutive days.
## 2.5. Tissue Preparation and Immunohistochemistry
After behavioral test and acupuncture intervention all the animals were sacrificed under 10% chloral hydrate (0.35 ml/100 g, intraperitoneal [i.p.]), and the brain tissues were collected after intracardial perfusion with saline. The brain samples were halved for each of the subjects; the left-half was separated out for protein preparation, and the right was fixed with 4% (w/v) paraformaldehyde for next immunohistochemistry analysis. The tissue blocks containing hippocampus were dehydrated and embedded in paraffin. Fixed brains were cut in 5μm sagittal sections. The sections was mounted on 0.1% polylysine reagent (Sigma) coated slides. Subsequently, the sections were dewaxed and hydrated and incubated in 0.01 mol/L of citrate buffer for antigen thermal remediation for 5 min by being treated with microwave (700 W), and then for 10 min with 3% H2O2 at room temperature, and washed in phosphate-buffered saline (PBS) for 3 × 5 min. Next, the sections were blocked in 2% BSA for 10 min and incubated with primary antibody diluent (rabbit anti-TrkB 1 : 500, Cell Signalling Technology; rabbit anti-NF-κB 1 : 200, Cell Signalling Technology) for 12 h at 4°C. Then, the sections were rinsed by PBS and next incubated with secondary antibody diluent (biotinylated goat anti-rabbit IgG, diluted 1 : 1000, Vector Laboratories) for 30 min at room temperature. After wash by PBS for 3 × 5 min, the diaminobenzidine (DAB) kit (Vector Laboratories, Burlingame, USA) was used for color development for 5 min. After being redyed with hematoxylin, the brain slices were dehydrated and observed under a light microscope, BX53 (BX-51 Olympus, Tokyo, Japan), and analyzed using Image J software.
## 2.6. Western Blot Analysis
The frozen hippocampus tissues were obtained after behavior test and were homogenized on ice in 1.5 ml RIPA protein lysis buffer supplemented with 500 g PMSF. After centrifugation for 15 minutes at 12,000 ×g at 4°C, the protein in cleared supernatant was quantified and adjusted to 5 mg/ml. Equivalent amounts of protein (30μg/lane) were separated by SDS-PAGE and transferred to PVDF membranes. Membranes were blocked with 5% (w/v) bovine serum albumin in Tris-buffered saline with Tween 20 for 1 hour and then incubated with primary antibody (rabbit anti-mouse TrkB [1 : 1000], ERK [1 : 2000], P-ERK [1 : 1000], NF-κB [1 : 500], Santa Cruz Biotechnology,) overnight at 4°C. The immunoblots were then incubated with goat anti-rabbit horseradish peroxidase-conjugated IgG for 2 hours at room temperature (1 : 1,000), we applied the chemiluminescent to develop the films, and the protein bands were quantified by Quantity One. The protein expression level was controlled by the protein of β-actin. All the data were expressed as the ratio relative after normalization to the β-actin levels.
## 2.7. Statistical Analysis
All data is presented as mean ± SEM for each group. For the Morris water maze test the escape latency time of the hidden platform trial was analyzed by two-way ANOVA of repeated measures, and the probe trial including escape latencies and original angle was conducted in the form of a multifactorial analysis of variance (ANOVA). The immunohistochemistry and western blot assay data were also analyzed by one-way ANOVA analysis of variance followed by LSD (equal variances assumed) or Dunnett’s T3 (equal variances not assumed) post hoc test. All the analysis was performed with Prism 6.0 (GraphPad Software Inc., San Diego, USA), and theP values less than 0.05 were considered statistically significant.
## 3. Results
### 3.1. Effect of Electroacupuncture on Spatial Learning and Memory
The results of the Morris water maze test are presented in Figure2. In the hidden platform trial, the escape latency time in each group showed a downward trend in 4-day training time extension (Figures 2(a) and 2(b)). To analyze the effect of acupuncture treatment, the two probe trials were designed. The percentage of time spent in the target quadrant and the frequency of locating platform were used for statistical analysis. At the same time, the different values of the percentage of time and the frequency between pretreatment and posttreatment were further calculated to evaluate the significance of change after acupuncture. The results of probe trial showed that, compared with CON group, Terc−/− mice in EA subgroup had significantly more variation to the time in quadrant where the platform used to be (P≤0.01, Figure 2(c)), while there was almost no difference among three subgroups for WT mice. Meanwhile the D-value of frequency of locating platform between before and after treatment in EA group appeared significantly increased compared to CON group (P≤0.05, Figure 2(d)), whereas the variation among WT mice shows no difference among three subgroups. These results demonstrated that EA acupuncture could ameliorate the cognitive deficits in the Terc−/− mice, while it did not affect WT mice. Fortunately, these findings were consistent with our previous report [12].Figure 2
Acupuncture intervention prevented spatial and memory impairment in telomerase-deficient mice (Terc−/−) in Morris water maze task. (a) Performance in training trial of Terc−/− mice (n=8 for each subgroup) and (b) wild-type mice (n=8 for each subgroup), during 4-day hidden platform trial. The data shows that escape latency to reach the hidden platform before acupuncture intervention in three subgroups. (c) In the probe test, the difference value of time spent in the target quadrant between before and after treatment was calculated. It was interesting that the EA stimulation can significantly increase the time in target quadrant for Terc−/− mice (P∗∗≤0.01), while nothing changed in WT mice. (d) There was also no significant difference among each subgroup for WT mice, and, compared with CON group, EA stimulation could significantly increase the D-value of frequency of locating platform between before and after treatment (P∗≤0.05).
(a) (b) (c) (d)
### 3.2. Effects of EA Treatment Improved the Levels of TrkB Protein in the Hippocampus for Terc−/− Group
Brain tissue samples from the subjects were analyzed using immunohistochemistry and western blot analysis to investigate the effect of acupuncture stimulation in the two strains of mice (Figures3(a) and 3(c)). The TrkB protein is mainly distributed in the membrane in the subgranular zone (SGZ) around the dentate gyrus (DG) of the hippocampus. Even there were fewer weakly stains, they were still some positively stained cells. The result from the immunohistochemical evaluation indicated that only in Terc−/− mice the stimulation of electroacupuncture (EA group) could significantly increase the expression of TrkB protein (P∗≤0.05) compared with CON subgroup, and there was no obvious difference among any subgroup in WT mice (Figure 3(b)). The western blotting results of TrkB in the hippocampus were also shown that EA could promote the expression of TrkB compared with the other subgroup (Figure 3(c)), which was also consistent with our former reports [12].Figure 3
Immunohistochemistry and western blot analysis of TrkB/NF-κB in hippocampus of the WT and Terc−/− mice (n=8 per subgroup). The brain samples were sliced sagittally into 5 μm sections, and the representative photographs and the mean optical density of positive cell values are, respectively, shown in a and d; b and e. Data were expressed as mean ± SEM, and the data were analyzed using one-way ANOVA followed by Turkey’s test of multiple comparisons. (a and d) The mice brain slices were hematoxylin-stained in the same region of the hippocampus among the three subgroups. The TrkB/NF-κB positively stained cells appear brown (red arrow) in the subgranular zone (SGZ) around the dentate gyrus area (DG), and the scale bar = 100 μm. (b) Compared with CON subgroup, the mean optical density of positive TrkB protein in EA subgroup was significantly increased in Terc−/− mice (P∗≤0.05), and there was no difference among the three subgroups for WT mice. Although there was obviously more NF-κB positively strained cell observed in the picture, (e) no significantly differences were observed in NF-κB immunoreactivity in any subgroups in both strains. (c and f) Expression of TrkB/NF-κB in WT and Terc−/− mice (n = 8 per subgroup) was detected by western blotting assay. Data are represented as the ratio of TrkB (NF-κB)/β-actin. The bar graphs represent the levels of TrkB/NF-κB in hippocampus in both strains. In the Terc−/− mice, both electroacupuncture and manual acupuncture significantly increased the expression of TrkB/NF-κB (P∗≤0.05; P∗∗∗≤0.001), and nothing changed in WT mice.
(a) (b) (c) (d) (e) (f)
### 3.3. Effects of EA Treatment Increased the NF-κB Expression in the Hippocampus for Terc−/− Group
To investigate whether EA or MA can alter the expression of TrkB downstream signal pathway, the expression of NF-κB/ERK/p-ERK was measured in tissue sample. From the photograph, the positively stained NF-κB appears brown mainly in the subgranular zone (SGZ) around the dentate gyrus area (DG) of hippocampus (Figures 3(d) and 3(e)), and there were no significant differences among any groups in the two strains of mice (Terc−/− and WT mice). Meanwhile, the western blotting results of NF-κB showed that, compared with CON group, the relative expressions of NF-κB significantly increased in EA (P∗∗∗≤0.001) and MA (P∗≤0.05) subgroups for Terc−/− mice (Figure 3(f)).
### 3.4. The Effects of Acupuncture Treatment on the Phosphorylation Levels of ERK
In order to further explore the mechanisms of acupuncture, the protein levels of p-ERK and ERK were measured by western blot to evaluate the activation of ERK. As p-ERK is a marker of ERK activation, the ratio of them was also calculated (Figure4). The result demonstrated that neither electroacupuncture nor manual acupuncture showed significant differences in ERK/p-ERK expression among any subgroups in Terc−/− mice (P≥0.05) and likewise in the subgroups of WT mice. Furthermore, the ratio of p-ERK/ERK shows no significance in any subgroups for the 2 types of mice (P≥0.05).Figure 4
Expression level of ERK in the hippocampus of WT and Terc−/− mice (n = 8 per subgroup) was detected by western blot assay. Blots were reprobed for expression of β-actin to control for loading and transfer. Data were expressed as mean ± SEM. The degree of ERK activation was represented as the ratio of p-ERK/ERK. The result demonstrated that the expression of ERK/p-ERK shows nothing significantly changed in the subgroups for the 2 types of mice, even in the ratio of p-ERK/ERK.
## 3.1. Effect of Electroacupuncture on Spatial Learning and Memory
The results of the Morris water maze test are presented in Figure2. In the hidden platform trial, the escape latency time in each group showed a downward trend in 4-day training time extension (Figures 2(a) and 2(b)). To analyze the effect of acupuncture treatment, the two probe trials were designed. The percentage of time spent in the target quadrant and the frequency of locating platform were used for statistical analysis. At the same time, the different values of the percentage of time and the frequency between pretreatment and posttreatment were further calculated to evaluate the significance of change after acupuncture. The results of probe trial showed that, compared with CON group, Terc−/− mice in EA subgroup had significantly more variation to the time in quadrant where the platform used to be (P≤0.01, Figure 2(c)), while there was almost no difference among three subgroups for WT mice. Meanwhile the D-value of frequency of locating platform between before and after treatment in EA group appeared significantly increased compared to CON group (P≤0.05, Figure 2(d)), whereas the variation among WT mice shows no difference among three subgroups. These results demonstrated that EA acupuncture could ameliorate the cognitive deficits in the Terc−/− mice, while it did not affect WT mice. Fortunately, these findings were consistent with our previous report [12].Figure 2
Acupuncture intervention prevented spatial and memory impairment in telomerase-deficient mice (Terc−/−) in Morris water maze task. (a) Performance in training trial of Terc−/− mice (n=8 for each subgroup) and (b) wild-type mice (n=8 for each subgroup), during 4-day hidden platform trial. The data shows that escape latency to reach the hidden platform before acupuncture intervention in three subgroups. (c) In the probe test, the difference value of time spent in the target quadrant between before and after treatment was calculated. It was interesting that the EA stimulation can significantly increase the time in target quadrant for Terc−/− mice (P∗∗≤0.01), while nothing changed in WT mice. (d) There was also no significant difference among each subgroup for WT mice, and, compared with CON group, EA stimulation could significantly increase the D-value of frequency of locating platform between before and after treatment (P∗≤0.05).
(a) (b) (c) (d)
## 3.2. Effects of EA Treatment Improved the Levels of TrkB Protein in the Hippocampus for Terc−/− Group
Brain tissue samples from the subjects were analyzed using immunohistochemistry and western blot analysis to investigate the effect of acupuncture stimulation in the two strains of mice (Figures3(a) and 3(c)). The TrkB protein is mainly distributed in the membrane in the subgranular zone (SGZ) around the dentate gyrus (DG) of the hippocampus. Even there were fewer weakly stains, they were still some positively stained cells. The result from the immunohistochemical evaluation indicated that only in Terc−/− mice the stimulation of electroacupuncture (EA group) could significantly increase the expression of TrkB protein (P∗≤0.05) compared with CON subgroup, and there was no obvious difference among any subgroup in WT mice (Figure 3(b)). The western blotting results of TrkB in the hippocampus were also shown that EA could promote the expression of TrkB compared with the other subgroup (Figure 3(c)), which was also consistent with our former reports [12].Figure 3
Immunohistochemistry and western blot analysis of TrkB/NF-κB in hippocampus of the WT and Terc−/− mice (n=8 per subgroup). The brain samples were sliced sagittally into 5 μm sections, and the representative photographs and the mean optical density of positive cell values are, respectively, shown in a and d; b and e. Data were expressed as mean ± SEM, and the data were analyzed using one-way ANOVA followed by Turkey’s test of multiple comparisons. (a and d) The mice brain slices were hematoxylin-stained in the same region of the hippocampus among the three subgroups. The TrkB/NF-κB positively stained cells appear brown (red arrow) in the subgranular zone (SGZ) around the dentate gyrus area (DG), and the scale bar = 100 μm. (b) Compared with CON subgroup, the mean optical density of positive TrkB protein in EA subgroup was significantly increased in Terc−/− mice (P∗≤0.05), and there was no difference among the three subgroups for WT mice. Although there was obviously more NF-κB positively strained cell observed in the picture, (e) no significantly differences were observed in NF-κB immunoreactivity in any subgroups in both strains. (c and f) Expression of TrkB/NF-κB in WT and Terc−/− mice (n = 8 per subgroup) was detected by western blotting assay. Data are represented as the ratio of TrkB (NF-κB)/β-actin. The bar graphs represent the levels of TrkB/NF-κB in hippocampus in both strains. In the Terc−/− mice, both electroacupuncture and manual acupuncture significantly increased the expression of TrkB/NF-κB (P∗≤0.05; P∗∗∗≤0.001), and nothing changed in WT mice.
(a) (b) (c) (d) (e) (f)
## 3.3. Effects of EA Treatment Increased the NF-κB Expression in the Hippocampus for Terc−/− Group
To investigate whether EA or MA can alter the expression of TrkB downstream signal pathway, the expression of NF-κB/ERK/p-ERK was measured in tissue sample. From the photograph, the positively stained NF-κB appears brown mainly in the subgranular zone (SGZ) around the dentate gyrus area (DG) of hippocampus (Figures 3(d) and 3(e)), and there were no significant differences among any groups in the two strains of mice (Terc−/− and WT mice). Meanwhile, the western blotting results of NF-κB showed that, compared with CON group, the relative expressions of NF-κB significantly increased in EA (P∗∗∗≤0.001) and MA (P∗≤0.05) subgroups for Terc−/− mice (Figure 3(f)).
## 3.4. The Effects of Acupuncture Treatment on the Phosphorylation Levels of ERK
In order to further explore the mechanisms of acupuncture, the protein levels of p-ERK and ERK were measured by western blot to evaluate the activation of ERK. As p-ERK is a marker of ERK activation, the ratio of them was also calculated (Figure4). The result demonstrated that neither electroacupuncture nor manual acupuncture showed significant differences in ERK/p-ERK expression among any subgroups in Terc−/− mice (P≥0.05) and likewise in the subgroups of WT mice. Furthermore, the ratio of p-ERK/ERK shows no significance in any subgroups for the 2 types of mice (P≥0.05).Figure 4
Expression level of ERK in the hippocampus of WT and Terc−/− mice (n = 8 per subgroup) was detected by western blot assay. Blots were reprobed for expression of β-actin to control for loading and transfer. Data were expressed as mean ± SEM. The degree of ERK activation was represented as the ratio of p-ERK/ERK. The result demonstrated that the expression of ERK/p-ERK shows nothing significantly changed in the subgroups for the 2 types of mice, even in the ratio of p-ERK/ERK.
## 4. Discussion
As one of the most common tasks used to assess spatial learning and memory ability, the Morris water maze (MWM) was used in this study. The hidden platform trial and probe trial were used to assess the capabilities in spatial learning and memory, respectively. The abilities of spatial learning and memory were observed in the two strains of mice for 4 consecutive days [19]. The results of training period showed no significant difference among the various groups of mice, suggesting that all mice had the same learning and memory capacity before treatment (Figure 2).Even the acupuncture has been widely applied for different kinds of nervous system, but there were still few studies that described whether the acupuncture intervention had different effects in different strains. In our present study, the Terc−/− mice showed a better response to electroacupuncture. It implies the stimulation of acupuncture only produced therapeutic effects on animals at the pathological state. Some studies have reported that, in healthy animals [21], both electroacupuncture and manual acupuncture can lead to a significant increase in cell proliferation just in the SGZ of the DG. However, in our studies, only electroacupuncture can play a therapeutic role in the amelioration of learning and memory abilities for Terc−/− mice [15, 22].Recently, it has been reported that aging related neurodegenerative diseases are characterized by imbalance between neurogenesis and neurodegenerative diseases. And interestingly, some researches demonstrated that the stimulation of neurogenesis may be beneficial to patients with those diseases [23, 24]. In the current study, our research team found that, only in Terc−/− mice, after acupuncture treatment, the TrkB/NF-κB proteins were exhibited in the SGZ around the dentate gyrus area of hippocampus. We also found that EA administration showed more amelioration of reference memory impairment in Terc−/− mice [2]. This suggests that EA administration alleviates aging risk by inhibiting reference memory decline. On the other hand, the hippocampus, which plays an important role in learning and memory, demonstrates a high degree of neurogenesis, and only the DG of hippocampus continues to develop through adulthood [25]. Presently more and more research demonstrated that there are only two neurogenic areas in the brain including subventricular zone (SVZ) of the lateral ventricles and the subgranular zone (SGZ) of the DG in hippocampus [26]. So it is obviously that the ability of undifferentiated and rapidly proliferating for the progenitor cells that could differentiate into granule in the SGZ of DG throughout life. In our study, the NF-κB and TrkB positively strained cell could be found in SGZ area in hippocampus, and they showed significantly higher level compared with CON subgroup or MA subgroup. This indicates the electroacupuncture treatment can activate some protein signal pathways in the cell around DG for Terc−/− mice. And the WT mice were not affected after the acupuncture stimulation [15, 27, 28].For the difference between the effect of electroacupuncture and manual acupuncture, it is commonly accepted that EA stimulation shows a beneficial effect on neurodegeneration diseases. For manual acupuncture, the “De-Qi” feeling is essential to induce action. In clinical, acupuncture needles were repetitively penetrated up and down in different directions just for the purpose of “De-Qi” feeling [29, 30]. 
Consequently, some people believed that the effect of MA depends upon stimulating intensity (mild or strong) and duration of manipulation, even when the needle was tightly wound around by muscle fibers [31]. Some researches demonstrated that EA may cause electrical twitching of surrounding tissues and induce MA-like stimulation through mechanoreceptors [32]. The previous studies showed that manual acupuncture at ST36 significantly increased the number of BrdU-positive cells after ischemic injury [33, 34]. Subsequently, electroacupuncture stimulation at ST36 was reported to enhance cell proliferation in the DG a rat model of diabetes [9]. In our research, after electroacupuncture treatment, the hippocampal expression of TrkB was significantly increased in Terc−/− group compared with the WT group. These results may indicate that the stimulation of acupuncture may have a close relationship with the neurogenesis in the hippocampus. For acupuncture, the acupoint was ST36, which is located on the anterior tibia muscle, and is one of the most important acupoints in clinical acupuncture for antiaging. Simulation of ST36 is carried out for a wide range of conditions affecting digestive system, cardiovascular system, immune system, and nervous system. Furthermore, ST36 is one of the seven acupoints used for stroke treatment. As the high-affinity BDNF receptor, the tyrosine protein kinase receptor B (TrkB) just the same as the BDNF was expressed in different kinds of neurons in the brain. [35–37]. Our previous studies demonstrated that manual acupuncture stimulation can activate the BDNF and its downstream signaling pathways [2].Our study has shown that EA causes an increase in the positive cell of TrkB and NF-κB signal pathway, but there was no evidence supporting that acupuncture can activate downstream protein of TrkB through ERK signal pathway. Several studies support that the activation of TrkB can prevent cell death by activating the ERK pathway in cortical neurons and cerebellar neurons [38]. And some researchers suggested that the ability of BDNF-TrkB to stimulate telomerase activity can be partially decreased through the total inhibition of the extracellular signal-regulated protein kinase (ERK) [6]. However, our study indicated that acupuncture can only specifically increase the expression of TrkB and NF-κB in Terc−/− mice instead of via the activation of ERK/p-ERK (Figures 3 and 4) protein. From the result, we found that even the ERK signal pathway plays an important role in the overall effects of electroacupuncture, but there was nothing changed in neurons of hippocampus [39]. As such, based on our result, it can be inferred that EA stimulation increased the ability of spatial learning and memory in Terc−/− mice, and it might stem from the activation of NF-κB.Some studies demonstrated that NF-κB is proinflammatory transcription factor which is increased in aging brain, and the activation of NF-κB can protect neurons against death induced by neurodegeneration [40]. Therefore, the upregulated of NF-κB could show an neuroprotective effect in our brain [41]. At the same time, reactive oxygen species (ROS) have been implicated in many aspects of aging and in neurodegenerative diseases [42]. And NF-κB is oxygen sensitive and also is a precursor to VEGF (vascular endothelial growth factor) gene expression that leads to angiogenesis, it can regulate the proinflammatory response in endothelial cells [43]. 
Several studies have suggested that the role of NF-κB depends on the neuronal cell type: activation of NF-κB in ischemic dementia caused neuronal degeneration via microglia in the cortex, whereas a neuroprotective effect was shown in hippocampal neurons [43–45]. Our results indicated that NF-κB was specifically increased by electroacupuncture in Terc−/− mice rather than in WT mice, and the positive cells were located in the SGZ around the dentate gyrus of the hippocampus. This suggests that electroacupuncture may be involved in nerve regeneration in the SGZ; furthermore, the increased expression of TrkB and NF-κB in the subgranular zone (SGZ) around the dentate gyrus (DG) may be a possible mechanism of EA in the treatment of aging in telomerase-deficient mice.
## 5. Conclusions
In summary, our key findings suggest that, compared with MA, EA can ameliorate spatial learning and memory in telomerase-deficient mice and can also increase the expression of TrkB and NF-κB in the subgranular zone (SGZ) around the dentate gyrus (DG). Based on these results, neuroprotection and neuronal regeneration may play a critical role in the electroacupuncture-induced antiaging effect. The differing mechanisms of EA and MA effects in telomerase-deficient mice further provide a theoretical basis for antiaging clinical applications.
---
*Source: 1013978-2018-04-22.xml* | 2018 |
# Digital Music Recommendation Technology for Music Teaching Based on Deep Learning
**Authors:** Meng Lu; Du Pengcheng; Song Yanfeng
**Journal:** Wireless Communications and Mobile Computing
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1013997
---
## Abstract
With the rapid development of the music streaming industry, users can easily hear any song on mobile devices, and the Internet has become a huge music storage platform. With the growth of networks and the large-scale digital music industry, acquiring and listening to music is presented to users in an increasingly convenient way. How to find the music users love among massive Internet digital music data has become the key problem and main goal in the field of music information retrieval. A personalized music recommendation system can accurately find and push songs that users may be interested in from a library of tens of millions of tracks based on user information, even when users have only a vague demand for listening. Relying on traditional search to find interesting music can no longer meet users' needs, so current music recommendation systems need to surface long-tail music for which users have no explicit demand and help people find their favorite songs.
---
## Body
## 1. Introduction
The 21st century is a big-data era marked by the rapid development of the Internet and the rapid dissemination of information [1]. The Internet has become the main way for people to obtain multimedia resources, including books, movies, and music. In contemporary society, the Internet and electronic information technology have made great progress [2]. At the hardware level, electronic chips keep shrinking while their capabilities keep growing, CPU computing power is rising, and the processing capacity of personal electronic devices is becoming ever stronger, which means it is easier for everyone to create and share new information such as pictures, videos, and text [3]. Semiconductor storage capacity is also growing, and server clusters store large amounts of user and item information. Listening to music, one of people's main recreational activities, has always played an important role in daily life [4]. Ancient sages attached great importance to the role of music in the development of social civilization and the maintenance of social order [5]. They believed that music could help maintain social harmony, that is, the ideal realm of social development in which “no resentment arises when music comes, no struggle arises when etiquette comes.” Owing to the rapid growth of the music streaming industry and the rapid progress of portable devices, vast amounts of music have become accessible, but it is increasingly difficult to find the music one wants [6]. The network has brought people a great deal of information and met their demand for it in the information age, but it has also brought the problem of information overload, posing great challenges to both consumers and producers of information [7]. Because of the sheer variety and volume of information in the data, people cannot directly find what they want. For this reason, earlier work proposed search engines, which suit scenarios where users have clear requirements that can be described by keywords. How to enable users to quickly find the items they are interested in from a large amount of information is becoming more and more important and valuable [8].

The function of a recommendation system is to help people discover what they are potentially interested in within a vast amount of information. The song libraries of large music portals often contain tens of millions of songs, divided by language, genre, year, theme, mood, and scene [9]; they contain abundant information, and information overload is severe. Music retrieval and music recommendation, products of the big-data era, have gradually entered people's daily life and been widely adopted. After many years of development and refinement, recommendation technology has been widely used in many fields, such as short video, news, stocks, and e-commerce. Because of the necessity of music recommendation in today's society, it is a research hotspot in both industry and academia. In the past, people could only search for music by keywords such as title, singer, and category; such search results did not take differences among users into account and also led to the long-tail phenomenon in music [10].
However, a music recommendation system can help users find the music they want to hear based on their past behavior, provide users with a series of playlists, and at the same time increase digital music sales. The task of the recommendation system is to act as the link between users and items [11]. First, it makes it easier for users to accurately and quickly find items they are potentially interested in among a large number of items. Second, it gives more items in the catalog the opportunity to be exposed to users, so that unpopular items can be explored more. With the development of music recommendation algorithms, people have put forward higher requirements [12]. On the one hand, problems of traditional recommendation methods, such as the “cold start” problem in collaborative filtering, urgently need to be solved, and the original recommendation algorithms need to be upgraded. On the other hand, with the development of machine learning and deep learning, new computing techniques help to fully tap users' potential preferences and improve the performance of recommendation systems [13].

The emergence of recommendation systems has had a great impact on traditional information retrieval and Internet services. A system that accurately recommends the songs users like can enhance user retention, strengthen users' attachment to the platform, and lay the foundation for paid products, thus making the music platform profitable [14]. The penetrating power of recommendation algorithms lies in actively mining the content and information on the Internet. With the establishment of this connection, music information retrieval, as a new technology, has brought surprises to society [15]. Although its history spans only twenty or thirty years, as a research field it has always stood at the forefront of technological development. Unlike other recommendation fields, music recommendation has its own characteristics: with books, for example, users spend a long time reading and rarely reread, whereas people often listen to a song repeatedly because it takes little time. A music recommendation system should integrate various factors for real-time adjustment and realize personalized recommendation for different user needs, which is more complicated than a general recommendation system. At present, research on music recommendation mainly focuses on improving recommendation strategies and algorithms, and the results still suffer from problems such as low accuracy, low coverage, and lack of personalization. This paper studies digital music recommendation technology based on deep learning, optimizes the recommendation algorithm, and provides more possibilities for recommendation systems.
## 2. Literature Review
Literature [16] proposes that the music retrieval model reflects people's perception of musical works, especially perceived music similarity, which is mainly affected by the lyrics, the rhythm, the performance of the players, or the mental state of the user at the time. Literature [17] regards the music recommendation system as the intersection of music information retrieval and recommender systems: the main goal of music information retrieval technology is to extract features of music at different levels so that music can be retrieved from many aspects; these features can be the audio signal, song name, album name, singer name, and so on. The content-based recommendation of literature [18] counts the items users like, learns the features of these items to extract user preferences, and obtains a user profile; by computing the match between the user profile features and item content features, items likely to be of interest are recommended accordingly. Literature [19] shows that, through various music labels, a music retrieval system can match users' needs with songs in the database, and a music recommendation system can connect the songs in the database to recommend according to the user's listening history. Literature [20] notes that, owing to its leading technology, the first successful commercial recommendation system was deployed by e-commerce giant Amazon, setting off a trend of applying recommendation algorithms in e-commerce; Netflix, a film rental website, then applied recommendation with great success in film rental. Literature [21] proposes a personalized navigation system that lets users browse based on collaboration, marking the start of global personalized services. As an efficient data-mining technology, the recommendation system has been studied and used in both academic and commercial circles, and research results keep emerging. In literature [22], the characteristics of music content are contained in the music signals themselves, such as musical form, melody, and rhythm, whereas music context contains information that cannot be extracted directly from the signals; such information comes from music clips, artists, or players, for example the artists' cultural and political background, semantic tags, and album titles. Literature [23] observes that, with the maturity of machine learning and the popularity of deep learning, many large companies have begun to build recommendation systems with these technologies, and traditional collaborative filtering and rule-based systems are gradually being replaced. The literature also notes that music metadata takes many forms, such as manual annotation, social tags, and annotations automatically mined from the web using text retrieval; there are now many online metadata communities edited by music experts or enthusiasts, with annotations covering genre, rhythm, emotion, and era.
Literature [24] studies a personalized music recommendation system using content-based recommendation: it derives the user's preference from the rhythm and melody of the user's favorite songs, classifies candidate songs with a melody preference classifier, and then recommends songs with similar rhythm and melody to the user, thereby realizing personalized music recommendation.

To sum up, most music research is based on data mining, database construction, and personalized recommendation built on distinctive performance features, and recommendation systems are now being applied across many domains. The recommendation system has made great progress in theory and application but has not yet reached full maturity; as a frontier field, it still holds many problems worthy of in-depth analysis and further discussion.
## 3. Related Technology of Music Recommendation System
Music content refers to the information contained in a musical work that can represent the music itself; the goal of music content description technology is to automatically extract meaningful features from music. Music recommendation is a special research field within recommender systems, and its typical application is online music radio: it learns users' preferences from historical behaviors such as playing, collecting, and downloading, generates a playlist, and pushes songs that satisfy the user. A recommendation system is software or technology that provides suggestions to users when purchasing or using items; generally speaking, it connects items with users through technical means. Music semantics is the experience of a musical work after people listen to it, and retrieval behavior is the process by which people actively or passively obtain relevant music, guided by subjective consciousness. Content-based music retrieval starts from the objective characteristics of music, but between those characteristics and the music people really “want” lies an almost insurmountable “semantic gap”: it is difficult to obtain semantically relevant results directly from music content analysis. Music content can be described on three levels: the abstract level, which derives high-level semantic features from the low-level signal feature description; the temporal level, where the description relates to a certain time range, short-term or frame-by-frame; and the presentation level of the music itself, covering melody, rhythm, chords, voice/instrument, and structure. These levels are also the core basis for distinguishing different types of recommendation systems.
### 3.1. Convolutional Neural Network
A convolutional neural network (CNN) is a feedforward neural network with convolution operations and a deep structure. In a content-based pipeline, we first obtain, from the user's previous behavior, the items the user has interacted with, such as items selected or rated; we then derive the user's preference by extracting the features of these items, compute the similarity with each candidate item, and finally recommend items according to that similarity. Music recommendation differs from other recommendation tasks, and studying its characteristics allows more accurate pushing. A musical work is relatively short, so music is likely to be consumed casually, and data generated by users' idle (hang-up) behavior can be filtered out during recommendation. Unlike movies, which have high production costs and explicit ratings, music services rarely collect explicit feedback, so user preferences can only be acquired implicitly. The listening context also matters, since suitable songs should be recommended according to the situation. Finally, there is the emotional connotation of songs: music can arouse strong personal feelings, and by grasping users' recent listening styles the system can offer emotional catharsis.

Because CNNs have strong nonlinear fitting ability and have achieved good results in many fields, more and more scholars apply deep learning to feature extraction for music recommendation. Automatic description of music content is based on computable time-frequency domain signal features. To realize the core function of a recommendation system, useful items must be found, and the system must be able to predict their recommendation value. Low-level features are extracted directly or indirectly from the frequency representation of music signals; they are easy for computer systems to mine but of little direct significance to users. Low-level features are nevertheless the basis of high-level feature analysis, so they should provide an appropriate representation of the sound object under study. The CNN is used to predict the latent features of music and obtain a low-dimensional vector representation of music features, which is then combined with the latent representation of user preferences to generate reasonable top-N recommendations for each user. The connected feature planes of a CNN allow the image to be used directly as input, avoiding the model-rebuilding step of traditional recognition algorithms. The structure of the CNN model is shown in Figure 1.
Figure 1: Structure of the CNN model.

Slightly different from convolution in signal processing, the operation in a CNN is essentially a linear weighting. For an input image $x$, convolution is performed with a convolution kernel $K$ of size $m \times m$, and the output feature map $y$ can be expressed as:
$$Y_j = f\Big(\sum_{i \in m \times m} X_i \ast K_i + b\Big), \quad j \in p \times q. \tag{1}$$

In the formula, $b$ is the bias and $p \times q$ is the size of the output feature map. CNNs have remarkable advantages in image recognition and audio analysis, fundamentally because the weight-sharing network structure reduces the number of parameters and simplifies an otherwise complex network.

The pooling layer performs the other basic operation in a CNN: redundant information extracted by the convolution layer is removed so that only the most essential information is kept, achieving dimensionality reduction. Pooling first divides the input feature map into a number of small local receptive fields and then assigns a value to each field according to the pooling function. For the $j$-th output feature map $a_j^{l-1}$ in layer $l-1$, the pooled feature map can be expressed as:
$$a_j^l = f\big(\mathrm{down}(a_j^{l-1}) + b_j^l\big), \tag{2}$$

where $b_j^l$ is the offset and $\mathrm{down}(\cdot)$ is the pooling function. With maximum pooling, the value of each local receptive field is the maximum within that local area of the feature map, while average pooling and sum pooling take the average or the sum of the feature points in the pooling domain; the process is otherwise the same.
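To make these two operations concrete, the following is a minimal NumPy sketch (an illustrative reconstruction, not the paper's code) of a single-channel valid convolution followed by 2×2 max pooling, mirroring equations (1) and (2); the ReLU nonlinearity stands in for the unspecified activation $f$:

```python
import numpy as np

def conv2d_valid(x, k, b=0.0, f=lambda z: np.maximum(z, 0.0)):
    """Eq. (1): Y = f(sum of x * k over each m-by-m window + b)."""
    m = k.shape[0]                                   # kernel size m x m
    p, q = x.shape[0] - m + 1, x.shape[1] - m + 1    # output map is p x q
    y = np.empty((p, q))
    for r in range(p):
        for c in range(q):
            y[r, c] = np.sum(x[r:r + m, c:c + m] * k) + b
    return f(y)

def max_pool_2x2(a):
    """Eq. (2) with down(.) = max pooling over 2x2 regions (offset omitted)."""
    p, q = a.shape[0] // 2, a.shape[1] // 2
    return a[:2 * p, :2 * q].reshape(p, 2, q, 2).max(axis=(1, 3))

x = np.random.rand(8, 8)                 # toy input, e.g. a spectrogram patch
k = np.random.rand(3, 3)                 # 3x3 convolution kernel
feature_map = max_pool_2x2(conv2d_valid(x, k))
print(feature_map.shape)                 # (3, 3)
```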
The amplitude function in the traditional note-segmentation algorithm is defined as:

$$A(x) = \sum_{w=0}^{N} a_w, \tag{3}$$

where $A(x)$ is the waveform amplitude function, $a_w$ is the amplitude of the $w$-th sampling point, $N$ is the window length, $x$ is a frame index of the input signal with $x \in [0, M]$, and $M$ is the number of input signal frames. The amplitude difference function is then:
$$DA(x) = A(x+1) - A(x). \tag{4}$$

The boundary between single notes is more obvious with $DA(x)$ than with $A(x)$ alone, which facilitates subsequent processing.
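A short sketch of the frame-amplitude segmentation in equations (3) and (4) follows; it assumes a mono signal cut into fixed-length frames, and the frame length and test tone are illustrative choices rather than values from the paper:

```python
import numpy as np

def frame_amplitude(signal, N=512):
    """Eq. (3): A(x) = sum of sample amplitudes within frame x of length N."""
    M = len(signal) // N                       # number of frames
    frames = signal[:M * N].reshape(M, N)
    return np.abs(frames).sum(axis=1)

def amplitude_difference(A):
    """Eq. (4): DA(x) = A(x + 1) - A(x)."""
    return A[1:] - A[:-1]

# Toy usage: a 440 Hz tone that starts at 0.5 s; the peak of DA(x)
# marks the note onset much more sharply than A(x) itself.
sr = 22050
t = np.linspace(0.0, 1.0, sr, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t) * (t > 0.5)
DA = amplitude_difference(frame_amplitude(sig))
print(np.argmax(DA) * 512 / sr)                # ~0.5 (seconds)
```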
### 3.2. Recommendation Algorithm Based on Collaborative Filtering
User-based collaborative filtering recommendation is somewhat similar to the principle that like-minded people group together. The principle of the user-based collaborative filtering recommendation algorithm is shown in Figure 2.
Figure 2: Principle of the user-based collaborative filtering recommendation algorithm.

According to whether the user rating information is direct or indirect, collaborative filtering recommendation can be divided into two types of problems.

The first type uses direct user rating information and is called rating prediction. Direct ratings include numeric, ordinal, and binary scores. Neighborhood-based collaborative filtering is a heuristic recommendation method; it is the most basic and core method in recommender systems and the focus of research and application in academia and industry. In this case, we first learn an objective function that predicts the user's rating of unseen items and then recommend the items with the highest predicted score:

$$i^{\ast} = \arg\max_{j \in I,\; j \notin I_u} f(u, j). \tag{5}$$

The second category uses indirect user rating information, mainly single-valued (implicit) feedback. The collaborative recommendation strategy was born and developed for this setting: its main purpose is to mine the relevance among users, or among the items themselves, from users' preferences for items, and then recommend based on this relevance. This kind of problem is called top-N recommendation. Because explicit user ratings are absent, such recommendations cannot be evaluated by rating error; the commonly used evaluation metrics are two standards from information retrieval, precision and recall, given in formulas (6) and (7).
$$\mathrm{Precision}(L) = \frac{1}{|U|} \sum_{u \in U} \frac{|L_u \cap T_u|}{|L_u|}, \tag{6}$$

$$\mathrm{Recall}(L) = \frac{1}{|U|} \sum_{u \in U} \frac{|L_u \cap T_u|}{|T_u|}. \tag{7}$$

Here the item set is divided into a training set and a test set; the training set is used to compute $L$, and the intersection of the test set with the items rated by the user constitutes $T$, namely $T_u \subset I_u \cap I_{\mathrm{test}}$. $L_u$ is the recommendation list of user $u$.
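For reference, a minimal sketch of how Precision(L) and Recall(L) in equations (6) and (7) could be computed for top-N lists; the user and song identifiers are hypothetical toy data, not the paper's evaluation code:

```python
def precision_recall(L, T):
    """Eqs. (6)-(7): average precision and recall of top-N lists over users.

    L: dict mapping user -> list of recommended items (L_u)
    T: dict mapping user -> set of relevant test-set items (T_u)
    """
    n = len(L)
    hits = {u: len(set(L[u]) & T[u]) for u in L}   # |L_u intersect T_u|
    precision = sum(hits[u] / len(L[u]) for u in L) / n
    recall = sum(hits[u] / len(T[u]) for u in L) / n
    return precision, recall

L = {"u1": ["s1", "s2", "s3"], "u2": ["s4", "s5", "s6"]}
T = {"u1": {"s1", "s9"}, "u2": {"s4", "s5"}}
print(precision_recall(L, T))    # (0.5, 0.75)
```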
Item-based collaborative filtering recommendation calculates the similarity between any two items in the system from the ratings of all users and then recommends to the target user the items most similar to those in the user's historical preference list. The principle of the item-based collaborative filtering recommendation algorithm, shown in Figure 3, consists of two steps (a sketch follows the figure caption):

(1) Calculate the similarity between any two items from all users' preference behavior in the system.

(2) Combine the user's historical behavior and make recommendations according to the similarity of the items.

Figure 3: Principle of the item-based collaborative filtering recommendation algorithm.
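A compact sketch of the two steps above; cosine similarity over columns of the user-item rating matrix is an assumption here, since the paper does not name its similarity measure:

```python
import numpy as np

def item_similarity(R):
    """Step (1): cosine similarity between all item columns of R."""
    norms = np.linalg.norm(R, axis=0, keepdims=True)
    norms[norms == 0.0] = 1.0            # avoid division by zero
    Rn = R / norms
    return Rn.T @ Rn                     # item-by-item similarity matrix

def recommend(R, user, n=2):
    """Step (2): score unseen items by similarity to the user's rated items."""
    scores = item_similarity(R) @ R[user]
    scores[R[user] > 0] = -np.inf        # exclude items the user already rated
    return np.argsort(scores)[::-1][:n]

# Rows are users, columns are songs; entries are ratings (0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
print(recommend(R, user=1))              # top-2 candidate songs for user 1
```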
## 4. The Effectiveness of Music Recommendation Method
In the research scheme of this paper, the first step is feature extraction. Each label in the label set is retrieved, the individual labels are ranked by their decision values, and the evaluation indexes of all labels are averaged. For the music in the test set, a semantic vector is obtained through the convolution model and matched against the annotated corpus. If the labeling accuracy of the algorithm is high, the manually annotated original songs in the corpus are returned by the retrieval. The accuracy on the CNN dataset is shown in Figure 4.
Figure 4: CNN dataset accuracy.

The figure shows that the proportion of queries returning the original song within the first 10 results is greater than 92%, indicating that the algorithm's labeling of whole songs is good and approaches the semantic performance of manual labeling. However, there are more than 400,000 songs in the music database, so a user-based collaborative algorithm is used to collect user information and select songs the user has not yet heard as the overall candidate song set. The distribution of the number of songs played per user in the dataset is shown in Figure 5.
Figure 5: Histogram of the distribution of songs played per user in the dataset.

The figure shows that most users' play counts are concentrated between 0 and 1000. We randomly select users with fairly typical behavior: users with too few plays provide limited data, while the very few users with extremely many plays may simply have failed to close the music player in time; to reduce noise, these user data are excluded. We process the experimental data objectively. With the optimized algorithm, the number of features can be computed, which better supports feature-based recommendation for users. We use the ROC curve and AUC value as evaluation indexes to assess the classification results on the validation sets of the three experimental groups, as shown in Figure 6.
Figure 6: ROC curves of the three groups of experimental results.

The figure shows that the effect improves as the number of features increases, indicating that adding users' statistical features and music audio features helps the classifier judge users' preferences and, at the same time, helps the model uncover the latent reasons why users like particular music.
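As a pointer to how the metric is computed, here is a minimal scikit-learn sketch producing an ROC curve and AUC on a validation set; the labels and classifier scores are toy values, not the experiment's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# 1 = the user liked the song, 0 = did not; scores come from the classifier.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.3, 0.7, 0.6, 0.4, 0.2, 0.8, 0.5])

fpr, tpr, _ = roc_curve(y_true, y_score)
print("AUC =", roc_auc_score(y_true, y_score))
# Richer feature sets should push the (fpr, tpr) curve toward the top-left.
```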
The recommendation algorithm proposed above, combining item-based collaborative filtering with interest tags, mainly uses user behavior to compute song similarity offline and then quickly and accurately finds, from a library of tens of millions of songs, the candidate set of similar songs the user is most likely to be interested in. The experimental dataset is the candidate set of similar songs computed by the hybrid recommendation, assembled into the input format required by the deep neural network. In general, user-based collaborative filtering is more social, since the recommended items are popular in the target user's neighborhood, while item-based collaborative filtering is more personalized, since the recommended items generally match the user's own interests and preferences. Using the above formulas, we tested the two systems, focusing on L values between 0.5 and 0.7, as shown in Figures 7 and 8.

Figure 7: Experimental comparison trend under different k values (1).
Figure 8: Experimental comparison trend under different k values (2).

The figures show that when L = 0.5 the candidate data obtained are small and the recommendation is more accurate; when L = 0.6 or L = 0.7 the data can also be fairly accurate, but compared with 0.5 too many candidate songs are filtered out, resulting in a short candidate list.

Different recommendation models excel in different respects. The CNN and collaborative filtering algorithms discussed in this paper are among the most commonly used at present; other models include Funk-SVD, User-CF, and CB. Funk-SVD is the most basic matrix factorization method of the latent factor model: stochastic gradient descent is used to find the optimal solution, and by completing the matrix the user's ratings of items can be predicted, achieving recommendation. CB recommends items similar to those the user liked in the past. To make the evaluation more rigorous, the accuracy of the three approaches is tested, as shown in Figure 9.
Figure 9: Accuracy test trends of the three methods.

The data show that the two algorithms presented in this paper achieve higher accuracy than Funk-SVD and CB. CNN combined with collaborative filtering can alleviate the cold-start problem of traditional recommendation algorithms, supplement the information sources available to the music recommendation system, and improve the performance of the recommendation system as a whole.
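The Funk-SVD baseline mentioned above can be sketched as a stochastic-gradient matrix factorization over the observed ratings; the hyperparameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def funk_svd(R, k=8, lr=0.01, reg=0.02, epochs=200):
    """Factor R ~= P @ Q.T by SGD on the observed (nonzero) entries."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(R)              # observed user-item pairs
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]       # prediction error on one rating
            pu = P[u].copy()                  # update with pre-step values
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P @ Q.T                            # completed rating matrix

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)
print(np.round(funk_svd(R), 1))               # predicted scores fill the zeros
```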
## 5. Conclusions
Music is a popular form of artistic expression, and the development of the digital music industry places higher demands on music information retrieval. People cannot live without music, yet no user can listen to every song in a library of millions; in many cases users' demand is as vague as “one or a few nice songs.” In the era of information overload, a recommendation system can act as the link between users and items and help users find items of potential interest even without explicit requirements. The music recommendation system is a research topic with broad application scenarios and practical value: improving recommendation quality not only improves the user experience but also greatly increases the profits of music streaming service providers. Traditional music information retrieval based on text metadata can no longer meet people's growing retrieval requirements. Considering the particularity of the music recommendation system, this paper studies a content-based music recommendation system, tries different methods for its two key stages, and improves the deep-learning-based extraction of music audio features, with demonstrable gains. In the recommendation stage, recommendation no longer relies solely on the traditional computation of song-song or song-user similarity, and the effectiveness of the recommendations is verified from both objective and subjective perspectives. Owing to time constraints, the coverage is not fully comprehensive, and follow-up research will continue.
---
*Source: 1013997-2022-05-23.xml* | 2022 |
## Abstract
With the rapid development of music streaming media service industry, users can easily hear any song on mobile devices. Internet has become a huge music storage platform. With the development of network and large-scale digital music industry, the acquisition and listening of music are presented to users in a more convenient way. How to find the music loved by users from the massive Internet digital music data has become the key problem and main goal to be solved in the field of music information retrieval. Personalized music recommendation system can accurately find and push songs that users may be interested in from tens of millions of huge music libraries according to users’ information under the condition that users only have vague demand for listening to songs. Relying on the traditional search method to find the music that you are interested in can no longer meet the needs of users, so the current music recommendation system needs to dig out the music that has no clear needs in the long tail to help people find their favorite songs.
---
## Body
## 1. Introduction
The 21st century is a big data era with the rapid development of the Internet and the rapid dissemination of information [1]. The Internet has become the main way for people to obtain multimedia resources, including books, movies, and music. In contemporary society, the Internet and electronic information technology have made great progress [2]. At the hardware level, electronic chips are becoming smaller and smaller, but their functions are becoming more and more powerful, CPU computing power is rising, and the processing capacity of personal electronic devices is becoming stronger and stronger, which means that it is easier for everyone to create and share new information, such as pictures, videos, and text information [3]. Semiconductor storage capacity is becoming stronger and stronger. Server clusters store a large amount of user information and item information. Listening to music, as one of people’s main recreational activities, has always played an important role in people’s life [4]. Ancient sages attached great importance to the important role of music in the development of social civilization and the maintenance of social order [5]. They believed that music could help maintain social harmony, that is, the ideal realm of social development of “no resentment when music comes, no struggle when etiquette comes.” Due to the rapid growth of music streaming media service industry and the rapid progress of portable device technology, thousands of music has become accessible, but it is becoming more and more difficult to find the music you want [6]. The network has brought a lot of information to people, met people’s demand for information in the information age, and benefited from it, but it has also brought the problem of information overload. Both consumers and producers of information have encountered great challenges [7]. Because of the great variety and amount of information contained in the data, people cannot directly find the information they want. According to this, the predecessors also proposed the use of search engine, which is suitable for such scenarios when users have clear requirements and the requirements can be described by keywords. How to enable users to quickly find the items they are interested in from a large amount of information is becoming more and more important and valuable [8].The function of recommendation system is to help people find out what they are potentially interested in from the vast amount of information. At present, the scale of the song library of large-scale music portal websites often contains tens of millions of songs, which are divided into different languages, genres, years, themes, moods, and scenes [9] and contain abundant information, and there is a serious information overload. Music retrieval and music recommendation, as the development products of the era of big data, have gradually entered people’s daily life and been widely used. After many years of development and improvement, recommendation technology has been widely used in many fields, such as short video, news, stock, and e-commerce. Because of the necessity of music recommendation system in the current society, it is a research hotspot in both industry and academia. In the past, people could only search music by keywords such as music name, singer, and classification, and the search results did not only take into account the differences of users but also led to the phenomenon of long tail of music [10]. 
However, the music recommendation system can help users find the music they want to listen to according to their past behaviors, provide users with a series of song lists, and at the same time increase the sales of digital music. The task of the recommendation system is to act as the link between users and items [11]. First, it can make it easier for users to accurately and quickly find items that users are potentially interested in from a large number of items. Second, it can make more items in the item library have the opportunity to be exposed in front of users, so that unpopular items can be explored more. Music retrieval and music recommendation, as the development products of the era of big data, have gradually entered people’s daily life and been widely used. With the development of music recommendation algorithm, people put forward higher requirements [12]. On the one hand, the problems of traditional recommendation methods such as “cold start” in collaborative filtering need to be solved urgently, and the original recommendation algorithm needs to be upgraded. On the other hand, with the development of machine learning and deep learning, the emergence of new computing technologies is helpful to fully tap the potential preferences of users and improve the performance of recommendation systems [13].The emergence of recommendation system has a great impact on traditional information retrieval services and traditional Internet services. It can accurately customize and recommend the songs that users like, enhance the retention of users, enhance users’ stickiness to the platform, and lay the foundation for the next paid products, thus making the music platform profitable [14]. Because of the penetrating power of recommendation algorithm, it is an active mining of the content and information existing in the Internet. With the establishment of this connection, music information retrieval, as a new technology, has brought surprises to the society [15]. Although its history is only twenty or thirty years old, as a research field, it has always stood at the forefront of technological development. Different from other recommendation fields, music recommendation has its own characteristics. For example, book recommendation, users want to spend a long time reading a book, and many people will not read it repeatedly, but people choose to listen to a song repeatedly because it takes a short time. Music recommendation system should be able to integrate various factors for real-time adjustment and realize personalized recommendation for different needs of users, which is more complicated than general recommendation system. At present, the related research of music recommendation system mainly focuses on the improvement of recommendation strategies and algorithms, and there are some problems in the recommendation results, such as low accuracy and coverage and lack of personalization. This paper studies music recommendation technology based on deep learning digitalization, optimizes the algorithm of recommendation technology, and provides more possibilities for recommendation system.
## 2. Literature Review
Literature [16] proposes that the music retrieval model is people’s perception of music works, especially the feeling of music similarity, which is mainly affected by the lyrics, rhythm, the performance of players, or the mental state of users at that time. Literature [17] music recommendation system is regarded as the cross field of music information retrieval and recommendation system. The main goal of music information retrieval technology is to extract the characteristics of different levels of music, so as to retrieve music from all aspects. These features can be audio signal, song name, album name or singer name, etc. The content recommendation of document [18] needs to count the items that users like and then learn the features of these items, extract the user’s preferences, and then obtain the user portrait. By calculating the matching degree between the user portrait features and the content features of the items, the users are recommended the items that may be of interest according to the matching degree. Literature [19] through various music labels, the music retrieval system can match the user’s needs with the songs in the database, and the music recommendation system can connect the songs in the database, to recommend according to the user’s listening history. Literature [20] due to the leading technology, the commercial recommendation system was first successfully deployed by e-commerce giant Amazon, setting off a trend of applying recommendation algorithms in the field of e-commerce. Then, Netflix, a film rental website, applied the recommendation system in the field of film rental with great success. In literature [21] a personalized navigation system which lets us browse based on cooperation is proposed, which marks the start of global personalized service. As an efficient data mining technology, recommendation system has been studied and used in both academic and commercial circles, and various research results are emerging. In literature [22], the characteristics of music content are contained in music signals, such as musical form structure, melody, and rhythm. Music context contains information that cannot be extracted directly from music signals. These information comes from music clips, artists, or players, such as artists’ cultural and political background, semantic tags, and album titles. Literature [23] says that with the maturity of machine learning technology and the popularity of deep learning technology, many large companies begin to use machine learning or deep learning related technologies to build recommendation systems, and the traditional collaborative filtering and various rule-based recommendation systems are gradually eliminated. Literature music metadata has many forms, such as manual annotation, social tag, and annotation automatically mined from the web using text retrieval technology. At present, there are many online editing metadata database communities established by music experts or lovers, and the annotation contents are genre, rhythm, emotion, age, and emotion. 
Literature [24] studies the personalized music recommendation system using content-based recommendation, obtains the user’s preference through the rhythm and melody of the user’s favorite songs, classifies the candidate songs through the melody preference classifier, and then, recommends the classified songs with similar rhythm and melody to the user, to realize the personalized recommendation of music.To sum up, it can be seen that most music researches are based on data mining, database establishment, and personalized recommendation system research according to the height of unique performance values. Moreover, recommendation systems are now being applied in various regions. At present, the recommendation system has made great progress in theory and application research, but it has not yet reached a very mature stage. As a frontier exploration field, there are still many problems worthy of in-depth analysis and further discussion.
## 3. Related Technology of Music Recommendation System
Music content refers to the information contained in music works, which can represent the music itself. The goal of music content description technology is to automatically extract meaningful features from music. Music recommendation is a special research field in the recommendation system. Its typical application is network music radio. It learns users’ preferences from users’ historical behaviors such as playing, collecting, and downloading, so as to generate a song playlist for users and push songs that users are satisfied with. Recommendation system is a kind of software or technology that can provide suggestions for users when purchasing and using certain items. Generally speaking, recommendation system is to connect items with users through technical means. Music semantics is the experience of music works after they are listened to by people. Retrieval behavior is a process in which people get relevant music actively or passively, with people’s subjective consciousness. Content-based music retrieval is an objective existence starting from the characteristics of music, but when people get the music they really “want,” there is an insurmountable “semantic gap,” that is to say, it is difficult to directly obtain semantic-related results from music content analysis. The description level of music content is mainly the abstract level, which extracts high-level semantic features from the low-level signal feature description; the second is his timing layer. The content description is related to a certain time range, which may be short-term or calculated by frame, finally, the presentation layer of music, melody, rhythm, chord, music/instrument, rhythm, structure, etc. At the same time, it is also the core basis to distinguish different recommendation system types.
### 3.1. Convolutional Neural Network
Convolutional neural network (CNN) is a feedforward neural network with convolution calculation and depth structure. Specifically, first, according to the user’s previous behavior, obtain the items interacted with it, such as the items that the user has selected or rated, then calculate its preference by extracting the characteristics of these items, then calculate the similarity with each item to be recommended, and finally, recommend it to the user according to the similarity, to recommend the items that may be of interest to it. Music recommendation is different from other recommendations. By studying the characteristics of his recommendation, we can push it more accurately. The trial production of music works is relatively short, so music is likely to be consumed at will, and other data generated by users’ hang-up behavior can be eliminated in the recommendation process. Also, users have different ways of tracking users’ preferences, and the production cost of movies is high. However, because of its own characteristics, music recommendation does not design to collect explicit feedback, and music recommendation can only acquire users’ preferences implicitly. Also, the listening environment of music recommends suitable songs according to the situation; the last one is the emotional connotation of songs. Music can arouse strong personal feelings. By grasping users’ recent listening styles, customers can get emotional catharsis.Because convolutional neural network (CNN) has strong nonlinear fitting ability and has achieved good results in many fields, more and more scholars apply deep learning to the extraction of music recommendation technology. The automatic description of music content is based on computable time-frequency domain signal feature extraction. In order to realize the core function of recommendation system, it is necessary to find useful items, and the recommendation system must be able to predict its recommendation value. Low-level features are extracted directly or indirectly from the frequency representation of music signals. They are easy to be mined by computer system, but they are of little significance to users. Low-level features are the basis of advanced feature analysis, so they should provide an appropriate expression for the studied sound object. CNN is used to predict the implicit features of music, obtain the low dimensional vector representation of music features, and then combine with the implicit representation of user preferences to finally generate reasonable topn recommendations for relevant users. The characteristics of neural network and the connected feature plane used in CNN can allow the image to be used as input directly; it avoids the process of rebuilding the model in the traditional recognition algorithm. The structural law of CNN model is shown in Figure 1.Figure 1
CNN model structure law.Slightly different from convolution in the field of signal processing, the operation in CNN is more like linear weighting operation. For the input imagex, convolution is performed using the convolution kernel K of size, and finally, the characteristic image y is output, which can be expressed as:
(1)Yjj∈p∗q=f∑i∈m∗mXi∗Ki+b.In the formula,b is the bias, p∗q is the size of the output feature map, CNN has remarkable advantages in image recognition and voice analysis. The fundamental reason is that the weight sharing network structure reduces the number of parameters and simplifies the complex network structure.Pool layer, pool operation is another basic operation in CNN. The redundant information extracted in convolution layer is removed to save the most basic and important information, so as to achieve the purpose of dimension reduction. Pooling operation first needs to divide the input convolution feature map to get a number of small local receptive fields and then assign values to the local receptive fields according to the pooled function. For theJth output feature map ajl−1 in the l-1 layer, the feature map obtained by pooling operation can be expressed as:
(2)ajl=fdownajl−1+bjl,where bjlbji is the offset and down. is the pooling function. Assuming maximum pooling, the characteristic value of the local receptive field in the graph is the maximum value in the local area of the characteristic graph, while average pooling and summation pooling are the average or summation of the characteristic points in the pooling domain, and the pooling process is similar to this. The amplitude function in traditional segmentation algorithm is defined by the following formula:
In the traditional note-segmentation algorithm, the amplitude function is defined as:

$$A(x) = \sum_{w=0}^{N} a_w, \qquad (3)$$

where $A(x)$ is the waveform amplitude function, $a_w$ is the amplitude of the $w$-th sampling point, $N$ is the window length, $x$ is a frame index of the input signal with $x\in[0,M]$, and $M$ is the number of input signal frames. The amplitude difference function is then:

$$DA(x) = A(x+1) - A(x). \qquad (4)$$

The boundary between single notes is more evident with $DA(x)$ than with $A(x)$ alone, which simplifies subsequent processing.
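A short sketch of formulas (3) and (4) applied to a framed signal; the frame length and the peak threshold are illustrative assumptions.

```python
import numpy as np

def amplitude_function(signal, frame_len):
    """A(x): per-frame sum of absolute sample amplitudes (formula (3))."""
    m = len(signal) // frame_len
    frames = np.abs(signal[:m * frame_len]).reshape(m, frame_len)
    return frames.sum(axis=1)

def note_boundaries(signal, frame_len=512, threshold=None):
    """Candidate note boundaries from the amplitude difference DA(x) (formula (4))."""
    a = amplitude_function(signal, frame_len)
    da = np.diff(a)                          # DA(x) = A(x+1) - A(x)
    if threshold is None:
        threshold = 2.0 * da.std()           # illustrative threshold choice
    return np.where(da > threshold)[0] + 1   # frames where a new note likely starts
```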
### 3.2. Recommendation Algorithm Based on Collaborative Filtering
User-based collaborative filtering is somewhat similar to the principle of "grouping similar people." The principle of the user-based collaborative filtering recommendation algorithm is shown in Figure 2.
Figure 2. Principle of the user-based collaborative filtering recommendation algorithm.

According to whether the user rating information is direct or indirect, collaborative filtering recommendation can be divided into two types of problems.

The first type uses direct user rating information and is called rating prediction. Direct ratings include numerical, ordinal, and binary ratings. Neighborhood-based collaborative filtering is a heuristic recommendation method; it is the most basic and central method in recommender systems and remains a focus of research and application in both academia and industry. In this setting, we first learn an objective function $f$ that predicts the user's rating of unseen items and then recommend the items with the highest predicted score:

$$i^{*} = \arg\max_{j\in I,\; j\notin I_u} f(u, j). \qquad (5)$$

The second type uses indirect rating information, mainly single-valued (unary) feedback, for which the collaborative recommendation strategy was developed. Its main purpose is to mine the relevance between users, or between the items themselves, from users' preferences for items, and then to recommend on the basis of that relevance. This kind of problem is called top-N recommendation. Because explicit user ratings are unavailable, such recommendations cannot be evaluated by rating error; the usual evaluation standards are two measures from information retrieval, precision and recall, given by formulas (6) and (7):

$$\mathrm{Precision}(L) = \frac{1}{|U|}\sum_{u\in U}\frac{|L_u\cap T_u|}{|L_u|}, \qquad (6)$$

$$\mathrm{Recall}(L) = \frac{1}{|U|}\sum_{u\in U}\frac{|L_u\cap T_u|}{|T_u|}. \qquad (7)$$

Here the item set is divided into a training set and a test set; the training set is used to compute the recommendation list $L_u$ of user $u$, and the intersection of the test set with the items rated by the user constitutes $T_u$, namely $T_u \subset I_u \cap I_{\mathrm{test}}$.
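As a quick illustration of formulas (6) and (7), the following sketch evaluates top-N recommendation lists against held-out test items; the variable names and the toy data are illustrative.

```python
def precision_recall(rec_lists, test_items):
    """rec_lists: {user: list of recommended items (L_u)};
    test_items: {user: set of held-out relevant items (T_u)}."""
    users = [u for u in rec_lists if test_items.get(u)]
    hits = {u: len(set(rec_lists[u]) & test_items[u]) for u in users}
    precision = sum(hits[u] / len(rec_lists[u]) for u in users) / len(users)
    recall = sum(hits[u] / len(test_items[u]) for u in users) / len(users)
    return precision, recall

# Toy example: one user, 2 of 3 recommendations appear in the test set.
p, r = precision_recall({"u1": ["a", "b", "c"]}, {"u1": {"a", "c", "d", "e"}})
print(round(p, 2), round(r, 2))  # 0.67 0.5
```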
Item-based collaborative filtering calculates the similarity between any two items in the system from the ratings of all users and then, according to the user's historical preference list, recommends to the target user the items most similar to those on that list. The principle of the item-based collaborative filtering recommendation algorithm is shown in Figure 3, and it proceeds in two steps (see the sketch after the figure caption):

(1) Calculate the similarity between any two items from the preference behavior of all users in the system.

(2) Combine the user's historical behavior with these item similarities to make recommendations to the user.

Figure 3. Principle of the item-based collaborative filtering recommendation algorithm.
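A minimal sketch of the two steps above, using cosine similarity on an implicit user-item matrix; the matrix and its dimensions are illustrative assumptions.

```python
import numpy as np

# Rows are users, columns are items; 1 = the user played/liked the song.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

# Step (1): cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0) + 1e-12
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)  # an item should not recommend itself

# Step (2): score items for a user by summing similarities to items already liked.
def recommend(user_row, top_n=2):
    scores = S @ user_row
    scores[user_row > 0] = -np.inf  # do not re-recommend known items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(R[0]))  # candidate item indices for user 0
```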
## 4. The Effectiveness of the Music Recommendation Method
The first step in the research scheme of this paper is feature extraction. Each label in the label set is retrieved, individual labels are ranked by their decision values, and the evaluation indexes of all labels are averaged. For the music in the test set, a semantic vector is obtained through the convolutional model, and results are retrieved from the annotated corpus. If the labeling accuracy of the algorithm is high, the manually labeled original songs in the corpus are returned by the retrieval. The accuracy on the CNN dataset is shown in Figure 4.
Figure 4. Accuracy on the CNN dataset.

The figure shows that the proportion of queries whose top 10 results return the original song exceeds 92%, indicating that the algorithm's labeling works well at the whole-song level and approaches the semantic quality of manual labeling. However, the music database contains more than 400,000 songs, so the user-based collaborative algorithm is used to collect customer information and to select songs the user has not yet heard as the candidate song set. The distribution of the number of songs played per user in the dataset is shown in Figure 5.
Figure 5. Histogram of the distribution of songs played per user in the dataset.

The figure shows that the number of songs played by most users is concentrated between 0 and 1,000. We randomly select users whose behavior is more typical: users with too few plays provide limited data, while the very few users with extremely many plays may simply have failed to close the playback software in time. To reduce noise, these users' data are excluded and the experimental data are processed objectively. Through optimization, the algorithm can vary the number of features it uses, which better combines user features into the recommendation. We use the ROC curve and the AUC value as evaluation indexes for the classification results on the validation sets of the three experimental groups, as shown in Figure 6.
Figure 6. ROC curves of the three groups of experimental results.

The figure shows that the effect improves as the number of features increases: adding users' statistical features and the audio features of the music helps the classifier judge users' preferences and, at the same time, helps the model uncover the latent reasons why users like a piece of music.

The recommendation algorithm proposed above, which combines item-based collaborative filtering with interest tags, mainly uses user behavior to compute song similarity offline and then, according to that similarity, quickly and accurately finds the candidate set of similar songs the user is most likely to be interested in from a song library of tens of millions of tracks. The experimental dataset is the similar-candidate song set computed by the hybrid recommendation, assembled in the input format required by the deep neural network. In general, user-based collaborative filtering is more social, since the recommended items are those popular within the target user's neighborhood, while item-based collaborative filtering is more personalized, since the recommended items generally match the user's own interests and preferences. Using the formulas above, we tested both systems, focusing on $L$ values between 0.5 and 0.7, as shown in Figures 7 and 8.
Figure 7. Experimental comparison trends under different $L$ values (1).

Figure 8. Experimental comparison trends under different $L$ values (2).

The figures show that when $L=0.5$ the returned candidate set is small and the recommendations are more accurate; when $L=0.6$ or $L=0.7$ the results can also be fairly accurate, but compared with $L=0.5$ too many candidate songs are filtered out, leaving a short candidate list.

Different recommendation models excel in different settings. The CNN and collaborative filtering algorithms discussed in this paper are among the most commonly used at present; other recommendation models include Funk-SVD, User-CF, and CB. Funk-SVD is the most basic matrix factorization method of the latent factor model: stochastic gradient descent is used to find the optimal solution, and by completing the rating matrix the user's ratings of items can be predicted, thereby producing recommendations. CB (content-based recommendation) recommends items similar to those the user liked in the past. To make the comparison more rigorous, the accuracy of the three approaches is tested as shown in Figure 9.
Figure 9. Accuracy test trends for the three approaches.

The data show that the two algorithms presented in this paper achieve higher accuracy than Funk-SVD and CB. The combination of CNN and collaborative filtering can ease the cold-start problem of traditional recommendation algorithms, supplement the information sources available to a music recommendation system, and improve the performance of the recommendation system as a whole.
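For context on the Funk-SVD baseline mentioned above, here is a minimal matrix factorization sketch trained with stochastic gradient descent; the latent dimension, learning rate, and regularization values are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def funk_svd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50, seed=0):
    """ratings: list of (user, item, rating) triples. Returns factor matrices
    P, Q such that P[u] @ Q[i] approximates the rating of user u for item i."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])  # SGD step on user factors
            Q[i] += lr * (err * pu - reg * Q[i])    # SGD step on item factors
    return P, Q

P, Q = funk_svd([(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0)], n_users=2, n_items=2)
print(P[1] @ Q[1])  # predicted (previously unobserved) rating
```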
## 5. Conclusions
Music is a popular form of artistic expression, and the growth of the digital music industry places higher demands on music information retrieval. People's lives are inseparable from music, yet streaming catalogs are so large that no user can listen to every song, and in many cases a user's demand is only a vague wish for "one or a few nice songs." In this era of information overload, a recommendation system can act as the link between users and items, helping users discover items of potential interest even without an explicit query. The music recommendation system is thus a research topic with broad application scenarios and practical value: improving recommendation quality not only improves the user experience but can also substantially increase the profits of music streaming providers. Traditional music information retrieval based on textual metadata can no longer satisfy growing retrieval requirements. Considering the particularities of music recommendation, this paper studies a content-based music recommendation system, explores different methods for its two key components, and improves the deep-learning-based extraction of music audio features, with demonstrable gains. In the recommendation stage, the traditional approach of recommending purely by song-song or song-user similarity is not used, and the effectiveness of the recommendations is demonstrated from both objective and subjective perspectives. Owing to time constraints, the coverage is not fully comprehensive, and follow-up research will continue.
---
*Source: 1013997-2022-05-23.xml* | 2022 |
# Design and Reconstruction of Visual Art Based on Virtual Reality
**Authors:** Bai Yun
**Journal:** Security and Communication Networks
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1014017
---
## Abstract
Because traditional methods generally lack an image preprocessing stage, they handle visual image detail poorly. To enhance the visual effect of images, a visual art design method based on virtual reality is proposed. The wavelet transform is used to denoise the visual image, removing the noise signal; a binary model of fuzzy spatial vision fusion is established to plan the image space and obtain the spatial distribution information of the visual image. Based on the principles of light and shadow in visual image rendering, the Extend Shadow Map algorithm is used to render the image. Virtual reality technology is then used to reconstruct the preprocessed visual image, and the ant colony algorithm is used to optimize it, realizing the visual image design. The results show that visual images processed by the proposed method have a high peak signal-to-noise ratio and better detail rendering.
---
## Body
## 1. Introduction
As a communicator of culture in the information age, visual design art [1] has become ever more deeply involved in people's daily lives and has attracted broad public attention. As the material manifestation and an important carrier of visual design, images exert an increasingly strong influence on public life [2]. Although the technical potential of visual art design is large and its application prospects broad, many theoretical problems remain unresolved and technical obstacles unsurpassed, which requires active exploration.

As a strongly practical discipline, visual art design still offers many directions worth deep exploration by artists. A review of the literature shows that visual art design in general has been studied extensively; however, research on visual image design specifically remains fragmented, staying at the relatively shallow level of case analyses of works and popularization of visual art concepts. Most studies are theoretical; few address the characteristics and techniques of visual images, how to use virtual reality systematically as a medium for visual art design research, or the actual use of virtual reality technology in visual art creation [3]. Reference [4] proposed an image optimization design method based on visual cognition theory: first, morphological analysis is used to extract the form design elements of a product; second, eye-movement cognition data for the visual form are obtained from eye-tracking experiments; then single-factor analysis screens the eye-movement indicators highly correlated with form; finally, quantification theory type I parametrically encodes the design elements, and a relationship model between eye-tracking indicators and design elements is established through data mining. From this model, the form design elements most closely related to users' visual cognition and their weight rankings are extracted to determine the optimal form design scheme. Taking an electric kettle as an example verifies the feasibility of the method, and the analysis shows it can help designers optimize product form from the perspective of users' visual cognition, enhancing the scientific soundness and rationality of the design. Reference [5] aims to build a bridge between human and computer information and proposes a visual optimization design method based on computer interaction technology, realizing the visual optimization of web interface text through a graphical user interface, user demand analysis, data processing, interface image enhancement, and interface text optimization. Within this method, data processing adopts big-data integration based on computer interaction technology to handle the information generated during interface interaction, improving big-data fusion and information clustering during human-computer interaction; a visual-feature-based color enhancement algorithm for web interface images, combined with optimization of text shape features and color, jointly realizes the visual optimization of web interface text.

Network system security mainly refers to the security problems of computers and the network itself, which guarantee the availability and security of e-commerce platforms; its contents include computer physical, system, database, network equipment, network service, and other security problems. Virtual reality inevitably involves information security, namely the theft, tampering, counterfeiting, and malicious destruction of information during the transmission of e-commerce information over the network.

Although the above methods can improve the visual effect to a certain extent, they process visual images poorly because they lack visual image preprocessing. Therefore, a visual art design method based on virtual reality is proposed. The visual image is an important form of visual art; this article takes it as the main research object and visual art design as the main starting point. Virtual reality technology is used to adjust the light and shade of objects and to coordinate the color and gloss of the image, improving the visual tension of the visual image. The experimental results show that this method improves the efficiency of visual image creation: it lets designers use more precise color matching and filling to improve the visual expression of the whole design work and create a more comfortable visual image effect.

Our contribution is threefold:
(1) Because traditional methods generally lack an image preprocessing stage and thus handle visual image detail poorly, a visual art design method based on virtual reality is proposed to enhance the visual effect of images.

(2) The wavelet transform is used to denoise the visual image and remove its noise signal; a binary model of fuzzy spatial vision fusion is established to plan the image space and obtain the spatial distribution information of the visual image.

(3) The results show that visual images processed by the proposed method have a high peak signal-to-noise ratio and better detail rendering.
## 2. Visual Image Preprocessing
Before visual image design, the visual image is first preprocessed. This stage comprises three parts: wavelet denoising, extraction of the visual image's spatial distribution information, and visual image rendering.
### 2.1. Visual Image Wavelet Denoising
The wavelet transform is used to denoise the visual image [6]. Assume the visual image contains the following noisy signal:

$$F(n) = \omega(n) + \alpha(n), \qquad (1)$$

where $\omega(n)$ is the original signal of the visual image and $\alpha(n)$ is the noise contained in the image.

Discretizing the noisy signal of the visual image gives the discrete signal $k(n)$, $n = 0, 1, \ldots, N$. The wavelet transform is used to sample part of this discrete signal:

$$D_j = \frac{1}{n}\bigl[(X_{1j} - \mathrm{AVG}_j) + (X_{2j} - \mathrm{AVG}_j) + \cdots + (X_{nj} - \mathrm{AVG}_j)\bigr], \qquad (2)$$

where $X_{nj}$ is the scaling function, $D_j$ is the wavelet decomposition result of the visual image, and $\mathrm{AVG}_j$ is the wavelet coefficient, obtained by the further transform:

$$\mathrm{AVG}_j = \frac{1}{n}\bigl(X_{1j} + X_{2j} + \cdots + X_{nj}\bigr). \qquad (3)$$

Let the wavelet function be $Y_{ij}$. The filter coefficient matrices corresponding to the scaling function $X_{nj}$ and the wavelet function $Y_{ij}$ are $Z_{nj}$ and $L_{ij}$, given by formulas (4) and (5), respectively:

$$Z_{nj} = \begin{pmatrix} 1 & z_{11} & \cdots & z_{1j} \\ 1 & z_{21} & \cdots & z_{2j} \\ \vdots & \vdots & & \vdots \\ 1 & z_{n1} & \cdots & z_{nj} \end{pmatrix}, \qquad (4)$$

$$L_{ij} = \begin{pmatrix} l_{1i} & 0 & \cdots & 0 \\ 0 & l_{2i} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & l_{ij} \end{pmatrix}. \qquad (5)$$

According to formulas (4) and (5), the visual image is denoted $S(Z_{nj}, L_{ij})$. From the linearity of the wavelet transform, the image obtained after discretizing $S(Z_{nj}, L_{ij})$ is still composed of two parts: the coefficients $W_k$ corresponding to the original signal $\omega(n)$ and the coefficients $X_k$ corresponding to the noise signal $\alpha(n)$.

After wavelet decomposition of the visual image, the normal signal is highly concentrated in the wavelet coefficients of larger amplitude, while the noise is randomly distributed in the transform domain. The noise in the wavelet coefficients can then be removed. Following reference [7], the denoising procedure is as follows (a minimal sketch appears after the steps):

Step 1: select appropriate wavelet coefficients and decompose the visual image to be denoised into an $o$-level image hierarchy.

Step 2: choose an appropriate threshold and quantize the different image levels [8].

Step 3: according to the quantization results, denoise the images of each level to achieve the overall denoising result.
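A minimal sketch of the three-step threshold denoising above, using the PyWavelets library; the wavelet family, decomposition level, and the universal-threshold choice are illustrative assumptions, not settings from the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2):
    # Step 1: decompose the image into a multi-level coefficient hierarchy.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Step 2: choose a threshold (universal threshold, with noise scale
    # estimated from the finest diagonal detail band -- an illustrative choice).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(image.size))
    # Step 3: soft-threshold every detail band, keep the approximation band.
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
clean = wavelet_denoise(noisy)
```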
### 2.2. Extraction of Visual Image Spatial Distribution Information
Based on the wavelet denoising of the visual image, the image space is planned to obtain the adaptive spatial distribution value of the visual image:

$$H(\mu) = \frac{\lambda}{N}\sum_{i=1}^{M} y_i^{2}, \qquad (6)$$

where $y_i$ is the gradient pattern of the pixel feature points of the visual image in direction $i$. According to the visual scene, a binary model of fuzzy spatial vision fusion is established, and this model yields the correlation characteristic of the adaptive spatial visual feature matching of the visual image:

$$H(\mu)_{i\times n} = \sum_{i=1}^{n} c_i r_{ij}, \qquad (7)$$

where $c_i$ is the texture feature of the visual image and $r_{ij}$ is the pixel array of the image.

From this, the local spatial structure component of the visual image is obtained. On this basis, linear programming [9] is used to control the global spatial threshold $\varpi_{d_i}$ of the visual image:

$$\varpi_{d_i} = \sum_{i=1}^{n}\sum_{j=1}^{n}\bigl(t_{p_i} - o_{p_j}\bigr), \qquad (8)$$

where $t_{p_i}$ is the distance of the visual point and $o_{p_j}$ is the image reference feature point.

The histogram of the visual image is visually tracked and matched against the reference feature points, and a multilayer segmentation model of the visual image space is established in the local region of the image:

$$Z_i = \sum_{i,j=1}^{N}\exp\!\left(-\frac{d_{i,j}}{\varpi_{d_i}}\right). \qquad (9)$$

Within the visual image region, the Harris corner detection method is used to mark the corner points of the image, giving the spatial visual information fusion result of the visual image:

$$E(\kappa) = \sum_{i,j=1}^{n} \varphi(\mu)_{i,j}\times Z_i, \qquad (10)$$

where $\varphi(\mu)$ is the rate of change of the image texture coordinate space and $\kappa$ is the image quantization coding.

Edge contour feature extraction is then used to extract the spatial distribution information of the visual image from the fusion result:

$$W(\varphi) = \frac{1}{I_1 I_2}\sum_{x=0}^{n}\sum_{y=0}^{n} f(x,y)\,\varphi_{i_1 i_2}(x,y), \qquad (11)$$

where $I_1$ denotes the nonoverlapping square blocks in the visual image, $I_2$ the overlapping square blocks, $\varphi_{i_1 i_2}$ the local information between adjacent pixels, and $f(x,y)$ the spatial neighborhood information of the visual image.

According to the spatial neighborhood information, the feature quantity of the visual image is extracted, the spatial distribution information is obtained, and the performance of the adaptive planning of the visual image space is improved.
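The Harris corner-marking step above can be sketched with OpenCV as follows; the block size, Sobel aperture, Harris k, and the response threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def mark_harris_corners(image_bgr, thresh_ratio=0.01):
    """Return corner coordinates and a copy of the image with corners marked."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    # Harris response: blockSize=2, Sobel aperture=3, k=0.04 (common defaults).
    response = cv2.cornerHarris(gray, 2, 3, 0.04)
    ys, xs = np.where(response > thresh_ratio * response.max())
    marked = image_bgr.copy()
    for x, y in zip(xs, ys):
        cv2.circle(marked, (int(x), int(y)), 2, (0, 0, 255), -1)  # red dots
    return list(zip(xs, ys)), marked
```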
### 2.3. Three-Dimensional Rendering of Visual Images
Three-dimensional rendering technology is one of the cores of virtual reality. Rendering refers to simulating, within a three-dimensional scene, the lighting of the physical environment and the textures of physical objects so as to obtain a more realistic image. Rendering is not an independent concept: it is the process of bringing together all the work on 3D models, textures, lighting, cameras, and effects to form the final image sequence; put simply, it creates pixels and assigns them colors to form a complete image. The rendering process requires a great deal of complex computation, which keeps the computer busy. Popular renderers currently support global illumination and HDRI, and simulations of caustics, depth of field, and subsurface scattering ("3S") materials can also bring striking effects to rendering.
#### 2.3.1. Analysis of Light and Shadow Phenomena in Visual Image Rendering
In every aspect of rendering, light is the most important element. To understand the principles of rendering, one must first understand how light propagates in the real world: reflection, refraction, and transmission.

(1) Reflection. Reflection is a very important factor in conveying the texture of an object. Reflection of light is the phenomenon of light striking the surface of an object and bouncing back; it includes diffuse reflection and specular reflection, and every visible object is affected by both. The first consequence is color. When an object bounces all light back, it appears white; when it absorbs all light and bounces none, it appears black; when it absorbs only part of the light and bounces the rest, it shows a color. For example, when an object bounces only red light and absorbs the rest, its surface appears red. The second consequence is gloss. Smooth objects, such as glass, porcelain, and metal, always show obvious highlights, whereas objects without obvious highlights are usually relatively rough, such as bricks, tiles, soil, and lawns. Highlights are also an effect of reflection, specifically of specular reflection. A smooth object acts like a mirror and is very sensitive to the position and color of the light source; the highlight region of its surface is the reflection of the light source. The smoother the object, the smaller the highlight area and the higher its intensity.

(2) Refraction. Refraction of light occurs in transparent objects. Because materials differ in density, light is deflected when passing from one medium into another. Different transparent materials have different refractive indices, which is an important means of depicting transparent materials.

(3) Transmission. In the real world, when light strikes a transparent object, part of it bounces off while another part passes through. If the light is strong, it produces a caustic effect after penetrating the object. If the object is a translucent material, the light scatters inside it, which is called "subsurface scattering"; milk, cola, jade, and skin, for example, all show this effect.

The texture of any object can be said to be represented by these three modes of light transport. In visual image design, applying the light and shadow phenomena of nature to rendering expresses the rendering effect more faithfully and improves the visual effect of the image.
#### 2.3.2. Visual Image Rendering Processing Based on Extend Shadow Map Algorithm
The key to the Extend Shadow Map algorithm lies in comparing two depth values, and the comparison is implemented with alpha testing. The alpha test checks the alpha channel value of each pixel in the visual image against a set condition. When a pixel is about to be drawn, if the alpha test is enabled, only pixels whose alpha values satisfy the condition are ultimately drawn (strictly speaking, a pixel that satisfies the condition passes this test and proceeds to the next one; only after passing all tests is it drawn), while pixels that fail the condition are discarded. The condition can be: always pass (the default), never pass, pass if greater than the reference value, pass if less, pass if equal, pass if greater than or equal, pass if less than or equal, or pass if not equal. In visual image rendering, the two depth values are stored in the alpha channel of the texture image. The image rendering process is shown in Figure 1.
Figure 1. Image rendering flowchart.

According to Figure 1, in visual image rendering the original image is first obtained and the image texture and a one-dimensional gradient texture are created; the whole scene is then rendered with the light source as the viewpoint to obtain depth values, which are stored in the alpha channel of the visual image texture; finally, the scene is rendered from the current viewpoint using the image texture.
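A minimal NumPy sketch of the two-depth comparison at the heart of shadow mapping, written as an alpha-test-style predicate; the array shapes, bias value, and comparison function are illustrative assumptions.

```python
import numpy as np

# Depth map rendered with the light source as the viewpoint (pass 1).
light_depth = np.random.rand(64, 64)

# For each fragment seen from the camera (pass 2): its depth in light space,
# looked up against the shadow-map texel it projects onto.
frag_depth = np.random.rand(64, 64)

ALPHA_FUNCS = {  # alpha-test-style comparison predicates
    "LESS_EQUAL": np.less_equal,
    "GREATER": np.greater,
}

def lit_mask(stored, fragment, func="LESS_EQUAL", bias=1e-3):
    """A fragment is lit when its light-space depth passes the test against
    the stored depth (the bias suppresses self-shadowing acne)."""
    return ALPHA_FUNCS[func](fragment - bias, stored)

mask = lit_mask(light_depth, frag_depth)
print(mask.mean())  # fraction of fragments considered lit
```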
## 3. Visual Image Reconstruction Based on Virtual Reality
Based on the spatial distribution information of the visual image and the three-dimensional rendering results, virtual reality technology is used to reconstruct the visual image. Virtual reality can simulate all kinds of materials in space very realistically; through human-computer interface technology, users can freely observe the surrounding scenery and interact with virtual objects using special equipment, giving the image a highly realistic visual effect that fully meets the standard of visual image reconstruction.

To solve the poor handling of visual image detail in traditional visual art design, a visual image reconstruction method based on virtual reality is proposed. Wavelet technology is introduced to decompose and quantize the visual image and to compress it into image resources of different quality; the resources are then classified according to virtual reality measurements to complete the multimedia image reconstruction.

An irregular triangulation method is used to realize the reconstruction. First, the boundary information of the mesh model is obtained; seed points are then visually tracked and measured according to the angle of wavelet diffusion, the tracked trajectories are feature-matched, and different discrete sampling points are obtained. A cropping method tracks each grid line to obtain the corresponding points of the reconstructed image-tracking trajectory. In the distributed scene of the target, visual image rendering is used to realize three-dimensional imaging of the image trajectory. Finally, modeling is performed: in the virtual scene, the visual image data are input into the 3D model to obtain the initial position and posture information, and the image is reconstructed virtually. The block diagram of the reconstruction is shown in Figure 2.
Figure 2. Framework of visual image reconstruction.

According to Figure 2, the graphical interface is first configured and a static three-dimensional virtual model library is built in combination with the large-scale scene terrain; the image environment, initial position, special-effects design, and application settings are then analyzed, and the corresponding files are transmitted. The associated programs include the driving algorithm, data processing, image collision detection and response, scene scheduling and management, scene rendering, and device output, which together achieve the visual image reconstruction.

Following these steps, the visual image reconstruction based on virtual reality is completed. On this basis, the visual image is further optimized to obtain the final visual image design result.
## 4. Optimization of Visual Image Imaging Results
To better present the imaging results of the visual image design, the reconstruction must be optimized. This paper uses an image segmentation method to optimize the reconstruction results so that image regions are divided more clearly and feature points are more prominent.

The more common image segmentation methods at present are the variance method and the directional image method. The former is based mainly on the gray-level characteristics of the image and suits segmentation of the image background, but it tends to misjudge regions with small gray-level changes [10]. The latter is based mainly on the directional information of the image; it segments regions of small change well, but its segmentation of the background region is not ideal [11]. To address the problems of these traditional methods, this paper uses the ant colony algorithm to optimize the visual image. Image segmentation based on the ant colony algorithm jointly considers the gray level, gradient, and neighborhood characteristics of each pixel [12]; using the fuzzy clustering ability of the ant colony algorithm, segmentation is treated as a process of clustering pixels with different characteristics.

Given the original image $U$, each pixel $u_i$ ($i = 1, 2, \ldots, n$) is regarded as an ant, and each ant is a three-dimensional vector characterized by gray level, gradient, and neighborhood. Image segmentation is then the process by which these ants with different characteristics search for food sources. The distance between any two pixels $u_1$ and $u_2$ is $d_{u_1 u_2}$, computed as the weighted Euclidean distance:

$$d_{u_1 u_2} = \sqrt{\sum_{r=1}^{n} D_{ij}\bigl(r_{u_1} - r_{u_2}\bigr)^{2}}, \qquad (12)$$

where $r$ indexes the feature dimensions of the ant colony and $D_{ij}$ is a weighting factor whose value is determined by how strongly each component of the pixel influences the clustering.

Let $R$ be the cluster radius and $\psi_k$ the amount of information contained in the image; then

$$\psi_k = \begin{cases} 1, & D_{ij} \le R, \\ 0, & D_{ij} > R. \end{cases} \qquad (13)$$

The probability that $u_1$ chooses the path to $u_2$ is:

$$D_k = \frac{1}{2}\sum_{i'j'}^{n}\bigl(w^{T}u_{i'} - w^{T}u_{j'}\bigr)\times d_{i'j'}, \qquad (14)$$

where $d_{i'j'}$ denotes the set of feasible paths. After each cycle, the amount of information on each path is adjusted by:

$$\psi_k = \exp\!\left(-\frac{(u_1 - m)^2 + (u_2 - n)^2}{\sigma^{2}}\right), \qquad (15)$$

where $\sigma^{2}$ expresses the similarity between pixels.

From this analysis, the optimization of the visual image imaging results based on the ant colony algorithm proceeds as follows (a simplified sketch follows the summary below):

(1) Convert the image data into a matrix [13]; each element of the matrix corresponds to an ant.

(2) Initialize the parameters: set the time $T = 0$ and the cycle counter $\tau = 0$, and set the maximum number of cycles $\tau_{\max}$.

(3) Start a clustering cycle: the cycle counter increases by 1, and the number of ants in the colony continues to grow.

(4) Calculate the distance from pixel $u_1$ to $u_2$ by formula (12) [14]. If the distance is zero, the pixel's degree of membership in that class is 1.

(5) Calculate the amount of information on each path from $u_1$ to $u_2$ by formula (15).

(6) Adjust the amount of information on the paths and update the cluster centers.

(7) If the end condition is met, that is, the number of cycles satisfies $\tau \ge \tau_{\max}$, end the loop [15] and output the result; otherwise, return to step (3).

In summary, visual image preprocessing is achieved through wavelet denoising, extraction of the image's spatial distribution information, and three-dimensional rendering, which provides the preconditions for visual image design. Virtual reality technology then reconstructs the preprocessed image, and the ant colony algorithm segments the reconstruction so that image regions are divided more clearly and feature points are more prominent, realizing the goal of visual image optimization [16].
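The following is a simplified Python sketch of steps (1)-(7): an ant-colony-flavored clustering of pixel feature vectors with a pheromone matrix reinforced by a Gaussian similarity, in the spirit of formulas (12), (13), and (15). The feature construction, number of clusters, evaporation rate, and cluster radius are illustrative assumptions.

```python
import numpy as np

def ant_colony_segment(features, radius=0.5, sigma=0.3, rho=0.1, max_cycles=20, seed=0):
    """features: (n, 3) array of (gray, gradient, neighborhood) per pixel/ant.
    Returns a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    n = len(features)
    centers = features[rng.choice(n, size=4, replace=False)]  # 4 seed clusters
    pheromone = np.ones((n, len(centers)))
    for _ in range(max_cycles):                               # steps (3)-(7)
        # Step (4): Euclidean distance of every ant to each center (formula (12)).
        d = np.sqrt(((features[:, None, :] - centers[None]) ** 2).sum(-1))
        # Step (5): Gaussian information amount in the spirit of formula (15),
        # gated by the cluster radius as in formula (13).
        info = np.exp(-d ** 2 / sigma ** 2) * (d <= radius)
        # Step (6): evaporate and reinforce pheromone, then update the centers.
        pheromone = (1 - rho) * pheromone + info
        labels = pheromone.argmax(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels

pixels = np.random.rand(200, 3)          # toy feature vectors
print(np.bincount(ant_colony_segment(pixels)))
```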
## 5. Experimental Research
In order to verify the effectiveness of the visual art design method based on virtual reality and to assess its visual image processing effect, a simulation experiment is set up. The image optimization design method based on visual cognition theory and the visual optimization design method based on computer interaction technology are used as comparison methods, and the advantages of the visual art design method based on virtual reality are verified through comparative analysis.
### 5.1. Experimental Data Settings
The images used in the experiment are all taken from the ImageNet database, currently the largest known image database, which covers a wide range of image types. ImageNet is an image dataset organized according to the WordNet hierarchical structure. 600 images of different types are selected from the database [17] and divided into 6 datasets; the parameters of each dataset are shown in Table 1.

Table 1
Experimental dataset (columns: dataset number, number of images, data dimension).

1154629710355541038587661044

According to the above experimental conditions, the visual image processing and design results of the different methods are compared, and the experimental conclusions are drawn.
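As an illustration of this data setup, the sketch below draws 600 images from a local ImageNet-style directory and splits them into 6 datasets. The directory path, the `.JPEG` extension, and the even split are assumptions made for the example; Table 1 gives each dataset its own size.

```python
import random
from pathlib import Path

def build_datasets(image_root, n_images=600, n_datasets=6, seed=0):
    """Draw n_images image files from an ImageNet-style folder and split
    them into n_datasets groups, mirroring the setup of Table 1.
    The even split is an illustrative simplification."""
    files = sorted(Path(image_root).rglob('*.JPEG'))
    random.Random(seed).shuffle(files)
    picked = files[:n_images]
    size = n_images // n_datasets
    return [picked[i * size:(i + 1) * size] for i in range(n_datasets)]

# usage: datasets = build_datasets('/data/imagenet/val')  # hypothetical path
```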
### 5.2. Analysis of Experimental Results
#### 5.2.1. Visual Image Processing Effects under Different Lighting Conditions
With normal light, strong light, and weak light as the test conditions, 5 pictures are selected arbitrarily from the experimental dataset, and the peak signal-to-noise ratio (PSNR) is used as the evaluation index. It is calculated as

$$\mathrm{PSNR} = 10 \times \log_{10}\left(\frac{I_{\max}^{2}}{\mathrm{MSE}}\right), \tag{16}$$

where $I_{\max}$ is the maximum possible pixel value and MSE is the mean squared error between the reference and processed images. The visual image processing effects of the different methods are compared, and the results are shown in Table 2.

Table 2
Comparison of peak signal-to-noise ratio of different methods.

| Method | Lighting conditions | Image 1 | Image 2 | Image 3 | Image 4 | Image 5 |
| --- | --- | --- | --- | --- | --- | --- |
| Visual art design method based on virtual reality | Normal light | 201.47 | 161.52 | 190.30 | 162.01 | 138.76 |
| | Strong light | 161.88 | 116.10 | 139.56 | 116.91 | 128.55 |
| | Weak light | 89.06 | 81.39 | 81.90 | 88.56 | 75.42 |
| Image optimization design method based on sparse representation algorithm | Normal light | 135.64 | 123.56 | 176.35 | 111.69 | 134.26 |
| | Strong light | 115.34 | 96.32 | 107.29 | 91.24 | 84.23 |
| | Weak light | 86.32 | 69.32 | 69.82 | 72.69 | 62.39 |
| Image optimization design method based on dynamic visual communication | Normal light | 128.12 | 121.58 | 137.26 | 128.46 | 104.54 |
| | Strong light | 101.36 | 109.21 | 91.32 | 74.13 | 78.12 |
| | Weak light | 78.28 | 69.92 | 77.23 | 65.24 | 67.11 |
| Image optimization design method based on visual cognition theory | Normal light | 123.63 | 119.25 | 151.06 | 136.17 | 90.54 |
| | Strong light | 104.25 | 89.25 | 101.36 | 72.15 | 70.19 |
| | Weak light | 75.26 | 63.97 | 79.63 | 60.25 | 57.18 |
| Visual optimization design method based on computer interactive technology | Normal light | 147.58 | 114.36 | 105.58 | 96.36 | 111.23 |
| | Strong light | 143.22 | 96.33 | 96.37 | 124.47 | 105.99 |
| | Weak light | 54.19 | 60.23 | 76.39 | 64.28 | 63.59 |

The higher the PSNR, the better the image processing effect. According to the data in Table 2, under all three lighting conditions the peak signal-to-noise ratio of the visual art design method based on virtual reality is generally higher than that of the image optimization design method based on visual cognition theory and the visual optimization design method based on computer interaction technology. Its peak signal-to-noise ratio reaches 201.47, much higher than the existing methods, indicating that this method can effectively improve the quality of the visual image.
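The PSNR of formula (16) can be computed directly from an image pair. Below is a minimal helper, assuming 8-bit images so that $I_{\max} = 255$; the function name is illustrative.

```python
import numpy as np

def psnr(reference, processed, i_max=255.0):
    """Peak signal-to-noise ratio per formula (16); i_max is the
    maximum possible pixel value (255 for 8-bit images)."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images: no distortion
    return 10.0 * np.log10(i_max ** 2 / mse)

# usage: score = psnr(original, optimized)  # higher means less deviation
```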
#### 5.2.2. Visual Image Optimization Effect
A street view image is selected arbitrarily from the experimental dataset as the research object, and the visual art design method based on virtual reality is used to optimize it. The result is shown in Figure 3.

Figure 3
Comparison of image optimization effects. (a) Original image. (b) Optimized image.

According to Figure 3, Figure 3(a) is the original image; after optimization, the colors of Figure 3(b) change and become brighter. This shows that the visual art design method based on virtual reality can accurately estimate color during visual image optimization, and the optimization effect is better. In order to further verify the application effect of the visual art design method based on virtual reality, the visual image optimization effects of different methods are compared. Similarly, an image is selected from the experimental dataset as the experimental object, and the different methods are used to optimize it. The results are shown in Figure 4.

Figure 4
Comparison of visual image optimization effects of different methods. (a) Original image. (b) Visual art design method based on virtual reality. (c) Image optimization design method based on sparse representation algorithm. (d) Image optimization design method based on dynamic visual communication. (e) Image optimization design method based on visual cognition theory. (f) Visual optimization design method based on computer interactive technology.

According to Figure 4, the visual optimization design method based on computer interaction technology requires little computation and has good stability, but its image segmentation is inaccurate, some spatial information is ignored, and the edge detection result is inaccurate. The segmentation effect of the image optimization design method based on visual cognition theory is better, but the image details are excessive and overexposure occurs. The image optimization effect of the visual art design method based on virtual reality is ideal: the image details are kept intact, and the image background is visually improved.
## 6. Conclusion
Aiming at the poor detail-processing performance of traditional visual image methods, a visual art design method based on virtual reality is proposed. The visual image is preprocessed through denoising, information extraction, and rendering; virtual reality technology is then used to reconstruct the preprocessed image, and the ant colony algorithm is used to segment the reconstructed image, so that the image regions are divided more clearly, the feature points are more prominent, and the purpose of visual image optimization is achieved. The results show that the visual art design method based on virtual reality yields a better image processing effect and a high peak signal-to-noise ratio, which verifies the visual image processing effect of the method and shows its significance for the field of visual art design.
---
*Source: 1014017-2021-08-20.xml* | 1014017-2021-08-20_1014017-2021-08-20.md | 55,243 | Design and Reconstruction of Visual Art Based on Virtual Reality | Bai Yun | Security and Communication Networks
(2021) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1014017 | 1014017-2021-08-20.xml | ---
## Body
## 1. Introduction
As a communicator of culture in the information age-visual design art [1], it has become more and more deeply involved in people’s daily life and has attracted the attention of the public. As the material manifestation and important carrier of visual design, images are increasingly exerting a strong influence on public life [2]. Although the technical potential in the field of the visual art design is huge and the application prospects are also very broad, there are still many unresolved theoretical problems and unsurpassed technical obstacles that require us to actively explore and pursue.As a category with strong practicality, visual art design still has a large number of directions worthy of deep exploration by art creators. By browsing the literature, we can find that there have been much research studies on visual art design. However, in terms of visual image design, research in this field of art is still relatively fragmented and only stays at a relatively shallow level of case analysis of works and popularization of visual art concepts. Most of them are based on theoretical research, and there are few visual arts. The characteristics and techniques of images, how to systematically use virtual reality as a medium for visual art design research, and the actual use of virtual reality technology for visual art design creation research are even rarer [3]. Reference [4] proposed an image optimization design method based on the theory of visual cognition. First, the morphological analysis method is used to extract the morphological design elements of the product. Secondly, based on the eye-tracking experiment to obtain the eye movement cognition data of the visual form, then, the single factor analysis method is used to screen the eye movement indicators that are highly correlated with the form. Finally, the quantitative type I theory is used to parametrically encode design elements, and the relationship model between eye-tracking indicators and design elements is established through data mining technology. As a result, the relationship model between eye-tracking indicators and design elements is finally obtained, and then the form design elements that are closely related to the user’s visual cognition and their weight rankings are extracted so as to determine the optimal scheme of form design. Take the electric kettle as an example to verify the feasibility of this method. The analysis results show that this method can help designers optimize the product form from the perspective of users’ visual cognition, thereby enhancing the scientificity and rationality of the design. Reference [5] aims to build a bridge between human and computer information and proposes a visual optimization design method based on computer interaction technology. Through a graphical user interface, user demand analysis, data processing, interface image enhancement, interface text optimization, and other steps, the visual optimization design of web interface text is realized. 
Among them, the data processing adopts big data integration based on computer interaction technology to process the information generated in the interface interaction process, which improves the big data fusion and information clustering capabilities during human-computer interface interaction; Using a visual feature-based web interface image color enhancement algorithm, combined with the optimization of text shape features and color, the web interface text visual optimization design is jointly realized.Network system security mainly refers to the security problems existing in the computer and the network itself, which is to guarantee the availability and security of the e-commerce platform. Its contents include computer physics, system, database, network equipment, network services, and other security problems. Virtual reality inevitably involves information security. The problem of information security is information theft, information tampering, information counterfeiting, and information malicious destruction in the process of the transmission of e-commerce information in the network.Although the above method can improve the visual effect to a certain extent, the visual image is not processed well because of the lack of preprocessing of the visual image. Therefore, a visual art design method based on virtual reality is proposed. A visual image is an important form of visual art. This article takes it as the main research object and takes visual art design as the main starting point. Using virtual reality technology to adjust the light and shade of the object, the color and gloss of the image are coordinated, and the visual tension of the visual image is improved. The experimental results show that this method improves the efficiency of visual image creation. It can make designers use more accurate color matching and filling methods to improve the visual expression of the whole visual design work and create a more comfortable visual image effect.Our contribution is threefold:(1)
Because traditional methods generally lack the image preprocessing link, the effect of visual image detail processing is not good. In order to enhance the image visual effect, a visual art design method based on virtual reality is proposed.(2)
The wavelet transform method is used to denoise the visual image, the noise signal in the image is removed, a binary model of fuzzy space vision fusion is established, the space of the visual image is planned, and the spatial distribution information of the visual image is obtained.(3)
The results show that the peak signal-to-noise ratio of the visual image processed by the proposed method is high, and the image detail processing effect is better.
## 2. Visual Image Preprocessing
Before visual image design, first, preprocess the visual image. This link mainly includes three parts: wavelet denoising, visual image spatial distribution information extraction, and visual image rendering.
### 2.1. Visual Image Wavelet Denoising
The wavelet transform method was used to denoise the visual image [6], assuming that the visual image contains the following noise signals:(1)Fn=ωn+αn.Among them,ωn represents the original signal of the visual image and αn represents the noise contained in the image.Discretize the noise signal in the visual image to obtain the discrete signalkn, where n=0,1,…,N, and use the wavelet transform method to sample some of the signals in the discrete signal:(2)Dj=1nX1j−AVGj+X2j−AVGj+,...,+Xnj−AVGj.Among them,Xnj represents the scale function, Dj represents the wavelet decomposition result of the visual image, and AVGj represents the wavelet coefficient. Further, transform the wavelet coefficients to obtain the following:(3)AVGj=1nX1j+X2j+,...,+Xnj.Assuming that the wavelet function isYij, the filter coefficient matrices corresponding to the scaling function Xnj and the wavelet function Yij are Znj and Lij, which are expressed by formulas (4) and (5), respectively:(4)Znj=1z11⋯z1j1z21⋯z2j⋮⋮⋮1zn1⋯znj,(5)Lij=l1i0⋯00l2i⋯⋮⋮⋮00⋯lij.According to formulas (4) and (5), the visual image is denoted as SZnj,Lij. According to the linear nature of wavelet transform, it can be seen that after discretizing SZnj,Lij, the image obtained is still composed of two parts; these are the coefficient Wk corresponding to the original signal ωn and the coefficient Xk corresponding to the noise signal αn.After the wavelet decomposition of the visual image, the normal signal in the image is highly concentrated on the wavelet coefficients with a larger amplitude, while the noise signal is randomly distributed in the transform domain. At this time, the noise in the wavelet coefficients can be further denoised. In reference [7], the specific denoising process is as follows:Step 1: select appropriate wavelet coefficients, decompose the visual image to be denoised, and obtain theo-level visual image hierarchyStep 2: choose an appropriate threshold and quantify different visual image levels [8]Step 3: according to the quantization processing result, the visual images of each layer are subjected to denoising processing to achieve the overall denoising result
### 2.2. Extraction of Visual Image Spatial Distribution Information
Based on the wavelet denoising of the visual image, the spatial planning of the visual image is carried out to obtain the adaptive spatial distribution value of the visual image:(6)Hμ=λN∑i=1Myi2.Among them,yi represents the gradient pattern of the pixel feature points of the visual image in the i direction. According to the visual scene, a binary model of fuzzy spatial vision fusion is established, and the correlation characteristic of the adaptive spatial visual feature matching of the visual image is obtained by using this model, which is expressed by the following formula:(7)Hμi×n=∑i=1ncirij.Among them,ci represents the texture feature of the visual image and rij represents the pixel array of the image.From this, the local spatial structure component of the visual image can be obtained. On this basis, the linear programming method [9] is further used to control the global spatial threshold ϖdi of the visual image:(8)ϖdi=∑i=1n∑j=1ntpi−opj.Among them,tpi represents the distance of the visual point and opj represents the image reference feature point.Perform visual tracking and matching of the histogram of the visual image with the reference feature points, and establish a multilayer segmentation model of the visual image space in the local area of the image:(9)Zi=∑i,j=1Nexp−di,jϖdi.In the visual image area, the Harris corner point detection method is used to mark the corner points of the image, and the spatial visual information fusion result of the visual image is obtained as follows:(10)Eκ=∑i,j=1nφμi,j×Zi.Among them,φμ represents the image texture coordinate space change rate and κ represents the image quantization coding.Use the edge contour feature extraction method to extract the spatial distribution information of the visual image in the fusion result:(11)Wφ=1I1I2∑x=0n∑y=0nfx,yφi1i2x,y.Among them,I1 represents the nonoverlapping square blocks in the visual image, I2 represents the overlapping square blocks in the visual image, φi1i2 represents the local information between adjacent pixels in the image, and fx,y represents the spatial neighborhood information of the visual image.According to the spatial neighborhood information of the visual image, the feature quantity of the visual image is extracted, the spatial distribution information of the visual image is obtained, and the performance of the adaptive planning of the visual image space is improved.
### 2.3. Three-Dimensional Rendering of Visual Images
Three-dimensional rendering technology is one of the cores of virtual reality technology. Rendering technology refers to the process of simulating the lighting of the physical environment and the texture of objects in the physical world in a three-dimensional scene to obtain a more realistic image. Rendering is not an independent concept. It is the process of bringing together all the work in the process of 3D model, texture, lighting, camera, and effect to form the final graphics sequence. Simply put, it is to create pixels that are given different colors to form a complete image. The rendering process requires a lot of complex calculations, which makes the computer busier. Currently, popular renderers support global illumination and HDRI technology, and simulations of caustics, depth of field, and 3S materials will also bring unexpected effects to rendering.
#### 2.3.1. Analysis of Light and Shadow Phenomena in Visual Image Rendering
In all aspects of rendering, light is the most important element. In order to better understand the principle of rendering, one must first understand the propagation mode of light in the real world: reflection, refraction, and transmission.(1). Reflection. Reflection is a very important factor that reflects the texture of an object. The reflection of light refers to the phenomenon that light hits the surface of an object and rebounds during movement. It includes diffuse reflection and specular reflection. All objects that can be seen are affected by these two methods. The first is color. When the object bounces all the light back, people will see that the object appears white. When the object absorbs all the light but does not bounce, the object will appear black. When the object only absorbs part of the light, then the rest of the light is absorbed. When bounced out, the object will show a variety of colors. For example, when the object only bounces red light and absorbs other light, the surface of the object will appear red. The second is gloss. Smooth objects will always have obvious highlights. For example, glass, porcelain, metal, but objects without obvious highlights are usually relatively rough, such as bricks, tiles, soil, and lawn lights. The generation of highlights is also the effect of light reflection, which is the effect of mirror reflection. Smooth objects have a mirror-like effect, which is very sensitive to the position and color of the light source. Therefore, the smooth surface of the object reflects the light source, which is the highlighted area of the surface of the object. The smoother the object, the smaller the highlight range and the higher the intensity.(2). Refraction. The refraction of light is a phenomenon that occurs in transparent objects. Due to the different densities of the material, the light will be deflected when it passes from one medium to another. Different transparent materials have different refractive indices, which is an important means of expressing transparent materials.(3). Transmission. In the real world, when light encounters a transparent object, part of the light will be bounced, while another part of the light will continue through the object. If the light is strong, the light will have a caustic effect after penetrating the object. If the object is a translucent material, the light will scatter inside the object, which is called “subsurface scattering”. For example, milk, cola, jade, skin, etc., all have this effect.It can be said that the texture of any object is represented by the above three light transmission methods. In the process of visual image design, according to the light and shadow phenomenon in nature, it can be applied to rendering, which can more truly express the rendering effect and improve the visual effect of the image.
#### 2.3.2. Visual Image Rendering Processing Based on Extend Shadow Map Algorithm
The key to the execution of the Extend Shadow map algorithm lies in the comparison of two depth values. The comparison process is implemented using alpha testing technology. Alpha test is to test the alpha channel value of each pixel in the visual image by setting conditions. When each pixel is about to be drawn if the alpha test is started, only the pixels whose alpha values meet the conditions will be finally drawn (strictly speaking, the pixels that meet the conditions will pass this test and proceed to the next test. Only when all tests are passed can the painting be carried out), and those that do not meet the conditions will not be drawn. This condition can be always pass (default), never pass, pass if it is greater than the set value, pass if it is less than the set value, pass if it is equal to the set value, pass if it is greater than or equal to the setting value, pass if it is less than or equal to the set value, and pass if it is not equal to the setting value. In visual image rendering, two depth values are stored in the alpha channel of the texture image. The specific image rendering process is shown in Figure1.Figure 1
Image rendering flowchart.According to Figure1, in the process of visual image rendering, the original image is first obtained, and the image texture and one-dimensional gradient texture are created; then the entire scene is rendered as the viewpoint to obtain the depth value, and the depth value is stored in the visual image alpha channel of the texture; then use the current viewpoint to render the scene to render the image texture.
## 2.1. Visual Image Wavelet Denoising
The wavelet transform method was used to denoise the visual image [6], assuming that the visual image contains the following noise signals:(1)Fn=ωn+αn.Among them,ωn represents the original signal of the visual image and αn represents the noise contained in the image.Discretize the noise signal in the visual image to obtain the discrete signalkn, where n=0,1,…,N, and use the wavelet transform method to sample some of the signals in the discrete signal:(2)Dj=1nX1j−AVGj+X2j−AVGj+,...,+Xnj−AVGj.Among them,Xnj represents the scale function, Dj represents the wavelet decomposition result of the visual image, and AVGj represents the wavelet coefficient. Further, transform the wavelet coefficients to obtain the following:(3)AVGj=1nX1j+X2j+,...,+Xnj.Assuming that the wavelet function isYij, the filter coefficient matrices corresponding to the scaling function Xnj and the wavelet function Yij are Znj and Lij, which are expressed by formulas (4) and (5), respectively:(4)Znj=1z11⋯z1j1z21⋯z2j⋮⋮⋮1zn1⋯znj,(5)Lij=l1i0⋯00l2i⋯⋮⋮⋮00⋯lij.According to formulas (4) and (5), the visual image is denoted as SZnj,Lij. According to the linear nature of wavelet transform, it can be seen that after discretizing SZnj,Lij, the image obtained is still composed of two parts; these are the coefficient Wk corresponding to the original signal ωn and the coefficient Xk corresponding to the noise signal αn.After the wavelet decomposition of the visual image, the normal signal in the image is highly concentrated on the wavelet coefficients with a larger amplitude, while the noise signal is randomly distributed in the transform domain. At this time, the noise in the wavelet coefficients can be further denoised. In reference [7], the specific denoising process is as follows:Step 1: select appropriate wavelet coefficients, decompose the visual image to be denoised, and obtain theo-level visual image hierarchyStep 2: choose an appropriate threshold and quantify different visual image levels [8]Step 3: according to the quantization processing result, the visual images of each layer are subjected to denoising processing to achieve the overall denoising result
## 2.2. Extraction of Visual Image Spatial Distribution Information
Based on the wavelet denoising of the visual image, the spatial planning of the visual image is carried out to obtain the adaptive spatial distribution value of the visual image:(6)Hμ=λN∑i=1Myi2.Among them,yi represents the gradient pattern of the pixel feature points of the visual image in the i direction. According to the visual scene, a binary model of fuzzy spatial vision fusion is established, and the correlation characteristic of the adaptive spatial visual feature matching of the visual image is obtained by using this model, which is expressed by the following formula:(7)Hμi×n=∑i=1ncirij.Among them,ci represents the texture feature of the visual image and rij represents the pixel array of the image.From this, the local spatial structure component of the visual image can be obtained. On this basis, the linear programming method [9] is further used to control the global spatial threshold ϖdi of the visual image:(8)ϖdi=∑i=1n∑j=1ntpi−opj.Among them,tpi represents the distance of the visual point and opj represents the image reference feature point.Perform visual tracking and matching of the histogram of the visual image with the reference feature points, and establish a multilayer segmentation model of the visual image space in the local area of the image:(9)Zi=∑i,j=1Nexp−di,jϖdi.In the visual image area, the Harris corner point detection method is used to mark the corner points of the image, and the spatial visual information fusion result of the visual image is obtained as follows:(10)Eκ=∑i,j=1nφμi,j×Zi.Among them,φμ represents the image texture coordinate space change rate and κ represents the image quantization coding.Use the edge contour feature extraction method to extract the spatial distribution information of the visual image in the fusion result:(11)Wφ=1I1I2∑x=0n∑y=0nfx,yφi1i2x,y.Among them,I1 represents the nonoverlapping square blocks in the visual image, I2 represents the overlapping square blocks in the visual image, φi1i2 represents the local information between adjacent pixels in the image, and fx,y represents the spatial neighborhood information of the visual image.According to the spatial neighborhood information of the visual image, the feature quantity of the visual image is extracted, the spatial distribution information of the visual image is obtained, and the performance of the adaptive planning of the visual image space is improved.
## 2.3. Three-Dimensional Rendering of Visual Images
Three-dimensional rendering technology is one of the cores of virtual reality technology. Rendering technology refers to the process of simulating the lighting of the physical environment and the texture of objects in the physical world in a three-dimensional scene to obtain a more realistic image. Rendering is not an independent concept. It is the process of bringing together all the work in the process of 3D model, texture, lighting, camera, and effect to form the final graphics sequence. Simply put, it is to create pixels that are given different colors to form a complete image. The rendering process requires a lot of complex calculations, which makes the computer busier. Currently, popular renderers support global illumination and HDRI technology, and simulations of caustics, depth of field, and 3S materials will also bring unexpected effects to rendering.
### 2.3.1. Analysis of Light and Shadow Phenomena in Visual Image Rendering
In all aspects of rendering, light is the most important element. In order to better understand the principle of rendering, one must first understand the propagation mode of light in the real world: reflection, refraction, and transmission.(1). Reflection. Reflection is a very important factor that reflects the texture of an object. The reflection of light refers to the phenomenon that light hits the surface of an object and rebounds during movement. It includes diffuse reflection and specular reflection. All objects that can be seen are affected by these two methods. The first is color. When the object bounces all the light back, people will see that the object appears white. When the object absorbs all the light but does not bounce, the object will appear black. When the object only absorbs part of the light, then the rest of the light is absorbed. When bounced out, the object will show a variety of colors. For example, when the object only bounces red light and absorbs other light, the surface of the object will appear red. The second is gloss. Smooth objects will always have obvious highlights. For example, glass, porcelain, metal, but objects without obvious highlights are usually relatively rough, such as bricks, tiles, soil, and lawn lights. The generation of highlights is also the effect of light reflection, which is the effect of mirror reflection. Smooth objects have a mirror-like effect, which is very sensitive to the position and color of the light source. Therefore, the smooth surface of the object reflects the light source, which is the highlighted area of the surface of the object. The smoother the object, the smaller the highlight range and the higher the intensity.(2). Refraction. The refraction of light is a phenomenon that occurs in transparent objects. Due to the different densities of the material, the light will be deflected when it passes from one medium to another. Different transparent materials have different refractive indices, which is an important means of expressing transparent materials.(3). Transmission. In the real world, when light encounters a transparent object, part of the light will be bounced, while another part of the light will continue through the object. If the light is strong, the light will have a caustic effect after penetrating the object. If the object is a translucent material, the light will scatter inside the object, which is called “subsurface scattering”. For example, milk, cola, jade, skin, etc., all have this effect.It can be said that the texture of any object is represented by the above three light transmission methods. In the process of visual image design, according to the light and shadow phenomenon in nature, it can be applied to rendering, which can more truly express the rendering effect and improve the visual effect of the image.
### 2.3.2. Visual Image Rendering Processing Based on Extend Shadow Map Algorithm
The key to the execution of the Extend Shadow map algorithm lies in the comparison of two depth values. The comparison process is implemented using alpha testing technology. Alpha test is to test the alpha channel value of each pixel in the visual image by setting conditions. When each pixel is about to be drawn if the alpha test is started, only the pixels whose alpha values meet the conditions will be finally drawn (strictly speaking, the pixels that meet the conditions will pass this test and proceed to the next test. Only when all tests are passed can the painting be carried out), and those that do not meet the conditions will not be drawn. This condition can be always pass (default), never pass, pass if it is greater than the set value, pass if it is less than the set value, pass if it is equal to the set value, pass if it is greater than or equal to the setting value, pass if it is less than or equal to the set value, and pass if it is not equal to the setting value. In visual image rendering, two depth values are stored in the alpha channel of the texture image. The specific image rendering process is shown in Figure1.Figure 1
Image rendering flowchart.According to Figure1, in the process of visual image rendering, the original image is first obtained, and the image texture and one-dimensional gradient texture are created; then the entire scene is rendered as the viewpoint to obtain the depth value, and the depth value is stored in the visual image alpha channel of the texture; then use the current viewpoint to render the scene to render the image texture.
## 2.3.1. Analysis of Light and Shadow Phenomena in Visual Image Rendering
In all aspects of rendering, light is the most important element. In order to better understand the principle of rendering, one must first understand the propagation mode of light in the real world: reflection, refraction, and transmission.(1). Reflection. Reflection is a very important factor that reflects the texture of an object. The reflection of light refers to the phenomenon that light hits the surface of an object and rebounds during movement. It includes diffuse reflection and specular reflection. All objects that can be seen are affected by these two methods. The first is color. When the object bounces all the light back, people will see that the object appears white. When the object absorbs all the light but does not bounce, the object will appear black. When the object only absorbs part of the light, then the rest of the light is absorbed. When bounced out, the object will show a variety of colors. For example, when the object only bounces red light and absorbs other light, the surface of the object will appear red. The second is gloss. Smooth objects will always have obvious highlights. For example, glass, porcelain, metal, but objects without obvious highlights are usually relatively rough, such as bricks, tiles, soil, and lawn lights. The generation of highlights is also the effect of light reflection, which is the effect of mirror reflection. Smooth objects have a mirror-like effect, which is very sensitive to the position and color of the light source. Therefore, the smooth surface of the object reflects the light source, which is the highlighted area of the surface of the object. The smoother the object, the smaller the highlight range and the higher the intensity.(2). Refraction. The refraction of light is a phenomenon that occurs in transparent objects. Due to the different densities of the material, the light will be deflected when it passes from one medium to another. Different transparent materials have different refractive indices, which is an important means of expressing transparent materials.(3). Transmission. In the real world, when light encounters a transparent object, part of the light will be bounced, while another part of the light will continue through the object. If the light is strong, the light will have a caustic effect after penetrating the object. If the object is a translucent material, the light will scatter inside the object, which is called “subsurface scattering”. For example, milk, cola, jade, skin, etc., all have this effect.It can be said that the texture of any object is represented by the above three light transmission methods. In the process of visual image design, according to the light and shadow phenomenon in nature, it can be applied to rendering, which can more truly express the rendering effect and improve the visual effect of the image.
## 2.3.2. Visual Image Rendering Processing Based on Extend Shadow Map Algorithm
The key to the execution of the Extend Shadow map algorithm lies in the comparison of two depth values. The comparison process is implemented using alpha testing technology. Alpha test is to test the alpha channel value of each pixel in the visual image by setting conditions. When each pixel is about to be drawn if the alpha test is started, only the pixels whose alpha values meet the conditions will be finally drawn (strictly speaking, the pixels that meet the conditions will pass this test and proceed to the next test. Only when all tests are passed can the painting be carried out), and those that do not meet the conditions will not be drawn. This condition can be always pass (default), never pass, pass if it is greater than the set value, pass if it is less than the set value, pass if it is equal to the set value, pass if it is greater than or equal to the setting value, pass if it is less than or equal to the set value, and pass if it is not equal to the setting value. In visual image rendering, two depth values are stored in the alpha channel of the texture image. The specific image rendering process is shown in Figure1.Figure 1
Image rendering flowchart.According to Figure1, in the process of visual image rendering, the original image is first obtained, and the image texture and one-dimensional gradient texture are created; then the entire scene is rendered as the viewpoint to obtain the depth value, and the depth value is stored in the visual image alpha channel of the texture; then use the current viewpoint to render the scene to render the image texture.
## 3. Visual Image Reconstruction Based on Virtual Reality
Based on the spatial distribution information of the visual image and the three-dimensional rendering result of the visual image, virtual reality technology is used to reconstruct the visual image. Virtual reality is a technology that can simulate all kinds of materials in space very realistically. Through the human-computer interface technology, we can freely observe the surrounding scenery and use special equipment to interact with virtual objects, which can make the image have a very realistic visual effect and fully meet the standard of visual image reconstruction.In order to solve the problem of poor visual image detail processing in traditional visual art design, a visual image reconstruction method based on virtual reality is proposed. Wavelet technology is introduced to decompose the visual image, quantify the visual image, and compress the image to obtain different quality image resources. According to the virtual reality measurement, different image resources are classified to complete the multimedia image reconstruction.Use the irregular triangulation method to realize the reconstruction of the visual image. First, obtain the boundary information of the mesh model, and then visually track and measure the seed point according to the angle of wavelet diffusion, and match the tracked trajectory with features, and obtain different discrete sampling points. Use the cropping method to track each grid line and then obtain the corresponding points of the image tracking trajectory reconstruction. In the distributed scene of the target, the visual image rendering technology is used to realize the three-dimensional imaging of the image trajectory. Finally, modeling is performed. In the virtual scene, the data of the visual image is input into the 3D model to obtain the initial position and posture information, and the image is reconstructed virtually in reality. The actual block diagram of the reconstruction is shown in Figure2.Figure 2
Frame diagram of visual image reconstruction.According to Figure2, we first complete the graphical interface setting and establish a static three-dimensional virtual model library combined with the large scene terrain, and then analyze the image environment, initial position, special effects design, application Settings, and transmit the files in the image. The corresponding programs include driving algorithm, data processing, image collision detection and response, scene scheduling and management, scene rendering, device output, etc., to achieve visual image reconstruction.According to the above steps, the visual image reconstruction based on virtual reality is completed. Based on this, the visual image is further optimized and the final visual image design result is obtained.
## 4. Optimization of Visual Image Imaging Results
In order to better present the imaging results of the visual image design, the image reconstruction needs to be optimized. In this paper, the image segmentation method is used to optimize the reconstruction results so that the image area division is clearer and the feature points are more prominent.At this stage, the more common image segmentation methods are the variance method and directional image method. The former is mainly based on the gray characteristics of the image, which is more suitable for the segmentation of the image background area. However, this method has one disadvantage, that is, it is easy to lead to misjudgment for areas with small gray change [10]. The latter is mainly based on the direction information of the image. This method has a good segmentation effect for the small change area, but the segmentation effect for the background area is not very ideal [11]. Aiming at the problems of the abovementioned traditional methods, this paper uses the ant colony algorithm to optimize the visual image. The image segmentation method based on the ant colony algorithm comprehensively considers the grayscale, gradient, and neighborhood characteristics of each pixel in the image [12]. Using the fuzzy clustering ability of the ant colony algorithm, image segmentation is regarded as a process of clustering pixels with different characteristics.Given the original imageU, consider each pixel uii=1,2,…,n as an ant, and each ant is a three-dimensional vector characterized by grayscale, gradient, and neighborhood. Image segmentation is the process by which these ants with different characteristics search for food sources. The distance between any pixel u1 and u2 is du1u2, and the Euclidean distance between the two is calculated:(12)du1u2=∑r=1nDijru1−ru2.Among them,r represents the feature dimension of the ant colony and Dij represents the weighting factor, and the value of this parameter is determined by the degree of influence of each component of the pixel on the cluster.SetR as the cluster radius and ψk as the amount of information contained in the image, then(13)ψk=1,Dij≤R,0,Dij>R.The probability thatu1 chooses the path to u2 is as follows:(14)Dk=12∑i′j′nwTui′′−wTuj′×di′j′.Among them,di′j′ represents the set of feasible paths. After the above cycle, the amount of information on each path can be adjusted by the following formula:(15)ψk=exp−u1−m2+u2−n2σ2.Among them,σ2 represents the similarity between pixels.According to the above analysis, the optimization process of visual image imaging results based on ant colony algorithm is as follows:(1)
Convert the image data into a matrix [13]; each data in the matrix corresponds to an ant.(2)
Initialize the parameters, set the timeT=0 and the number of cycles τ=0, and set the maximum number of cycles to τmax.(3)
Start the clustering cycle, and the number of cycles gradually increases by 1, and at the same time, the number of ant colonies continues to increase.(4)
Calculate the distance from pixelu1 to u2 according to formula (12) [14]. If the distance is zero, the degree of membership of the pixel to this class is 1.(5)
According to formula (15), calculate the amount of information of each path from u1 to u2.(6)
Adjust the amount of information on the path and update the cluster center.(7)
If the end condition is met, that is, the number of cyclesτ≥τmax, the cycle [15] is ended, and the calculation result is output. Otherwise, go to step (3).In summary, the preprocessing of visual images is achieved through visual image wavelet denoising, visual image spatial distribution information extraction, and visual image three-dimensional rendering, which provides preconditions for visual image design. Then use virtual reality technology to reconstruct the preprocessed image and further use the ant colony algorithm to segment the reconstructed image so that the image area is more clearly divided, the feature points are more prominent, and the goal of visual image optimization is realized [16].
## 5. Experimental Research
In order to verify the effectiveness of the visual art design method based on virtual reality and verify the effect of visual image processing under the method, a simulation experiment is set up. The image optimization design method based on the visual cognition theory and the visual optimization design method based on computer interaction technology are used as the contrast method, and the advantages of the visual art design method based on virtual reality are verified through comparative analysis.
### 5.1. Experimental Data Settings
The images used in the experiment are all from the ImageNet database, which is the largest known image database at present and contains a wide range of images. ImageNet is an image dataset organized according to the WordNet hierarchical structure. 600 images of different types are selected from the database [17]. There are 6 datasets, and the parameters of each dataset are shown in Table 1.Table 1
Experimental dataset.
Dataset numberNumber of imagesData dimension1154629710355541038587661044According to the above experimental conditions, the visual image processing and design results of different methods are compared, and the experimental conclusions are drawn.
### 5.2. Analysis of Experimental Results
#### 5.2.1. Visual Image Processing Effects under Different Lighting Conditions
With normal light, strong light, and weak light as the preconditions, 5 pictures are selected arbitrarily in the experimental dataset, and the peak signal-to-noise ratio is used as the evaluation index. The calculation formula is as follows:(16)PSNR=10×log10ImaxMSE.The visual image processing effects of different methods are compared, and the results are shown in Table2.Table 2
Comparison of peak signal-to-noise ratio of different methods.
MethodLighting conditionsImage 1Image 2Image 3Image 4Image 5Visual art design method based on virtual realityNormal light201.47161.52190.30162.01138.76Strong light161.88116.10139.56116.91128.55Weak light89.0681.3981.9088.5675.42Image optimization design method based on sparse representation algorithmNormal light135.64123.56176.35111.69134.26Strong light115.3496.32107.2991.2484.23Weak light86.3269.3269.8272.6962.39Image optimization design method based on dynamic visual communicationNormal light128.12121.58137.26128.46104.54Strong light101.36109.2191.3274.1378.12Weak light78.2869.9277.2365.2467.11Image optimization design method based on visual cognition theoryNormal light123.63119.25151.06136.1790.54Strong light104.2589.25101.3672.1570.19Weak light75.2663.9779.6360.2557.18Visual optimization design method based on computer interactive technologyNormal light147.58114.36105.5896.36111.23Strong light143.2296.3396.37124.47105.99Weak light54.1960.2376.3964.2863.59The higher the PSNR, the better the image processing effect. According to the data in Table2, under different lighting conditions, the peak signal-to-noise ratio of visual art design method based on virtual reality is higher than that of image optimization design method based on visual cognition theory and visual optimization design method based on computer interaction technology. The peak signal-to-noise ratio of the visual art design method based on virtual reality reaches 201.47, which is much higher than the existing methods, indicating that this method can effectively improve the quality of the visual image.
#### 5.2.2. Visual Image Optimization Effect
A street view image is selected arbitrarily from the experimental dataset as the research object, and the visual art design method based on virtual reality is used to optimize it. The result is shown in Figure 3.

Figure 3
Comparison of image optimization effects. (a) Original image. (b) Optimized image.
According to Figure 3, Figure 3(a) is the original image. After optimization, the colors of Figure 3(b) change and become brighter, which shows that the visual art design method based on virtual reality can accurately estimate color during visual image optimization and that its optimization effect is better. To further verify the application effect of the visual art design method based on virtual reality, the visual image optimization effects of different methods are compared. Similarly, an image is selected from the experimental dataset as the experimental object, and different methods are used to optimize it. The results are shown in Figure 4.

Figure 4
Comparison of visual image optimization effects of different methods. (a) Original image. (b) Visual art design method based on virtual reality. (c) Image optimization design method based on sparse representation algorithm. (d) Image optimization design method based on dynamic visual communication. (e) Image optimization design method based on visual cognition theory. (f) Visual optimization design method based on computer interactive technology.
According to Figure 4, the visual optimization design method based on computer interaction technology has a small amount of calculation and good stability, but its image segmentation is inaccurate, some spatial information is ignored, and the edge detection result is inaccurate. The segmentation effect of the image optimization design method based on visual cognition theory is better, but excessive image detail is retained and overexposure occurs. The image optimization effect of the visual art design method based on virtual reality is ideal: the image details are kept intact, and the image background is visually improved.
## 6. Conclusion
Aiming at the problem that traditional visual image methods handle details poorly, a visual art design method based on virtual reality is proposed. The visual image is preprocessed through denoising, information extraction, and rendering; virtual reality technology is then used to reconstruct the preprocessed image, and the ant colony algorithm is used to segment the reconstructed image, so that image regions are divided more clearly and feature points are more prominent, realizing the purpose of visual image optimization. The results show that the visual art design method based on virtual reality achieves a better image processing effect and a higher peak signal-to-noise ratio, which verifies the visual image processing performance of this method and shows its significance for the field of visual art design.
---
*Source: 1014017-2021-08-20.xml* | 2021 |
# Experienced CEO and Factor Productivity Improvement: Re-Examination of Experience Trap
**Authors:** Chengpeng Zhu; Mengyun Wu; Jie Xiong
**Journal:** Complexity
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1014224
---
## Abstract
We examine the role of experienced CEOs in CEO succession and their contributions to the performance of focal firms. We utilize the propensity score matching with difference-in-differences (PSM-DID) model to evaluate to what extent experienced CEO succession affects total factor productivity (TFP) growth. Based on the analysis of 1,675 listed manufacturing companies in China, results show that experienced CEO succession significantly improves firms’ TFP. Our analysis demonstrates that, on average, hiring an experienced successor CEO yields a 3.1% increase in TFP compared with nonsuccession firms. This positive effect persists for three years. Furthermore, the effect of experienced CEO succession on TFP is heterogeneous across different categories of focal firms (i.e., high-tech versus low-tech enterprises).
---
## Body
## 1. Introduction
In corporate governance and business management, one of the long-term challenges to the management team is chief executive officer (CEO) succession [1]. Both experienced candidates and rookies bring possible benefits and downsides for the management team of private enterprises or the board of directors of publicly traded firms. Such choices are even more complicated and challenging for business firms in emerging countries [2].

In comparison to the liability of alienness (unfamiliarity with the focal firm), the management team or board of directors believes that the liability of newness (lack of experience or capabilities) may pose more obstacles to business operations following the CEO succession. The past performance and records of experienced CEO candidates can be tracked in disclosed information, such as the annual reports of the listed companies they worked for, or articles in the business media [3, 4]. Skills and capabilities have grown increasingly vital as the corporate environment has become more turbulent in recent years, perhaps because businesses are becoming increasingly hesitant to take the risk of hiring someone who has no prior experience in the field [3, 4]. Compared with the affordable learning costs of adapting to the focal business’s setting, CEO candidates with experience may provide greater potential benefits to the focal firm [5].

Furthermore, from a contextual standpoint, the possible problem of cultural fit between external experienced CEO candidates and the focal firm may be overcome at a low cost [6], because CEOs may reshape and integrate with the organizational culture in order to adapt to the changing environment [7]. For public firms, CEO candidates with prior experience can better support the management team’s decision-making [8, 9]. An experienced CEO can apply the advanced management experience learned in the previous company to the current enterprise, so as to avoid blindness in innovation activities when investing in R&D [10]. As suggested in Reference [11], the aggregate total factor productivity (TFP) of China’s manufacturing firms could increase by 30%–50% if resource misallocation were reduced. Secondly, a new broom sweeps clean: an experienced CEO tends to increase R&D investment and innovation output to improve profitability and competitiveness in the industry for their own career development. These measures enhance the innovation capacity of the enterprise, and improving innovation capacity is often accompanied by improvements in technological level and productivity. As a result, experienced CEO candidates may help the focal firms’ TFP.

However, the established studies on experienced CEOs and firm performance remain mixed. Some existing literature mainly explores the positive role of relevant CEO experience in firm performance [12–14]. Other studies have found that prior experience might have a detrimental influence on business performance [15], raising the question of whether an experienced CEO can bring positive effects to the focal firm in the CEO succession. Some research on CEO succession suggests that previous CEO experience hurts the successor firm’s performance [15, 16].
It might be partially because the past experience of CEOs may not be easily utilized in the new context of the successor firm [15], or partially due to the rigidity or inertia of experienced CEOs in learning and acquiring new knowledge of the successor firm [17–19].

Furthermore, studies on the influence of experienced CEOs on organizational success have traditionally relied heavily on financial measurements (ROA, ROE, and Tobin’s Q). As performance metrics, ROE and Tobin’s Q have obvious deficiencies. To begin with, the majority of the aforementioned indicators are derived from financial statements and have a time lag, so they can only represent the enterprise’s past production and operational conditions and cannot provide much information about the future. Second, although China has been steadily strengthening its oversight of accounting information in recent years, the practice of manipulating earnings has remained far from uncommon: many businesses continue to whitewash their financial statements by adjusting expenses through real earnings management. Although Tobin’s Q is forward-looking, the maturity and emotional shifts of market participants may cause it to be overestimated or underestimated [20]. TFP, as a comprehensive indicator of a company’s input-output efficiency, can represent the firm’s operational circumstances more completely and evaluate the company’s performance more objectively [21]. TFP can give additional explanations for the variation in the market value of different firms beyond standard performance metrics (ROA, ROE, and Tobin’s Q). A company’s output efficiency is the foundation of its income and profit, as well as its fundamental capacity to turn production materials into output; the level of total factor productivity reflects this fundamental ability of factor transformation [22]. Companies with better corporate governance and operation management have higher efficiency in utilizing factors of production such as labor and capital [23]. To achieve high-quality development, China’s industrial businesses must undergo transformation and upgrading. As one of the most essential human resources in a company, the CEO has a significant impact on how the firm makes choices and how it operates and manages, and thus also an important impact on the company’s total factor productivity. Therefore, analyzing the CEO’s corporate governance ability and comprehensive resource utilization efficiency is easier when total factor productivity is used in place of financial indicators to gauge corporate success.

Besides, previous research on experienced CEOs and company performance mostly focuses on developed-country contexts, leaving experienced CEOs and companies in developing markets relatively unstudied. We still do not know much about how experienced CEO candidates contribute to firm performance in a developing market. As a result, we use listed companies in China as our empirical sample to examine how experienced CEOs influence the performance of the CEO succession firm.

The contribution of this study is threefold. First, we examine the role of an experienced CEO in firm performance when facing CEO succession. As the existing studies on such a correlation are mixed, we analyze the outsider-experienced CEO from both function and context perspectives. Moreover, different from previous studies, we treat the outsider-experienced CEO in CEO succession as a natural experiment and adopt the PSM-DID method to control econometric problems such as the sample selection problem.
Our finding shows that the outsider-experienced CEO in CEO succession can significantly improve firms’ TFP by 3.1%, an effect that can last for three years. Second, previous research focuses mainly on advanced economies such as America [19] and South Korea [4], whose capital markets and institutional backgrounds are well established. However, extant studies offer few guidelines for emerging markets such as China, India, and Brazil. Hence, in this study, we shift our focus to examine to what extent the outsider-experienced CEO impacts firm performance in the Chinese market. Given the growth of the Chinese capital market, CEO succession and outsider-experienced CEO succession are becoming increasingly common. However, empirical studies based on developing countries are largely lacking. Thus, our study sheds new light on the studies of CEO succession and outsider-experienced CEOs. Third, the heterogeneous effects of the outsider-experienced CEO on TFP across different technological sectors and institutional backgrounds are discussed to test the robustness of the results.
## 2. Literature Review
CEOs are widely considered one of the most important human resources of a business firm [24, 25]. They play an important role in managing business firms and are responsible for the business activities of firms due to their rich knowledge resources and cognitive abilities. Established studies show that professional experience contributes significantly to building the capabilities of CEOs [28]. As a result, management teams pick CEOs with caution in order to effectively manage business activities and achieve better results than their competitors. In this context, CEO succession is one of the most challenging management issues in both academia and practice [1]. Due to the unique skills of CEOs, companies facing CEO succession prefer to hire existing CEOs from other companies who can demonstrate their qualifications and capabilities, largely through their professional experience and partially through their track records on the job [29]. In addition to their competencies, other significant functions and values of CEOs in commercial enterprises are also addressed in the literature [6]. According to some research, CEOs may be capable of establishing corporate culture and procedures as a result of their strong personal styles and characteristics [6, 30, 31]. Other studies show that the characteristics of CEOs affect their strategic choices [32], such as exploration or exploitation, which eventually influence the performance of business firms and may lead to differences in short-term and long-term performance. For business owners (not necessarily professional managers such as CEOs), the performance of the business firm, such as its total factor productivity (TFP), is one of the most important concerns when selecting a CEO. It is widely accepted that total factor productivity (TFP) is one of the most important core driving forces for firms’ development and economic growth. Since Solow [33] proposed this concept, total factor productivity has always been an important topic in academia and industry. In this study, it is calculated at the firm level by the Levinsohn–Petrin method [34].

Strong leadership with logical decision-making is essential in the top management team, where the CEO plays the most crucial role in increasing the TFP of business enterprises. However, existing research also shows that managers’ cognitive capacities are restricted [35] and that businesses may not have unlimited resources [36]. Thus, to compete in turbulent business environments and achieve above-average performance, CEOs have to utilize the resources of the focal firm with the required capabilities [37, 38], build and change the organizational routines to fit the environment, or renew the business model [39]. This implies that experienced CEOs possess more external knowledge and information than those hired from the firm’s internal ranks [29] and thus are better equipped to expand the resource base of the firm and promote innovation, learning, and high performance [40, 41]. Both the internal requirements of focal firms and the demands of the external business environment indicate that experienced CEOs are better off than inexperienced ones.

Experienced CEOs are valuable for business firms, not only because their past records are more visible but also because of the capabilities accumulated during their past experiences in managing a business firm [29].
Even with the mistakes and lessons from their previous career, experienced CEOs may know how to avoid such mistakes in the new position if hired as the CEO of a new firm. Moreover, when focal firms go through CEO succession, the top management team and board of directors are more sensitive in selecting outsider-experienced CEOs. This may bring both opportunities and challenges to the experienced CEO candidates. On the one hand, the new position of CEO may give the experienced candidates a chance to take more innovative actions due to their entrepreneurial spirit. On the other hand, the new position may also bring the experienced candidates the challenges of the liability of alienness and strategic fit [6]. In the long term, these possible obstacles may be overcome, since CEOs have the capability to change corporate culture [25].

Existing studies already show that experienced CEOs bring positive outcomes to the focal firm [13, 14]. However, some other studies also find that experienced CEOs may not meet the expectations of the business owners of the focal firm. For instance, some experienced CEOs can ultimately hinder performance in the successor firm. This might be partially because the experienced CEO after succession failed to manage the liability of alienness, or partially because the experienced CEO did not successfully address the inertia resulting from the past career in the previous business firm. So far, the empirical studies of experienced CEOs and firm performance are mixed.

When it comes to selecting a successor CEO, however, the management team and board of directors continue to favor experienced CEO candidates. Existing research also suggests that in CEO succession, the experienced CEO is a desirable profile for the focal business (Hamori and Koyuncu, 2015). But due to inertia problems, some experienced CEOs may find it more difficult to acquire new knowledge [17–19]. In this sense, whether an experienced CEO can overcome the inertia and fit the new position of the succession firm will influence the performance of the focal firm. Moreover, the majority of established studies reporting mixed empirical results on the correlation between experienced CEOs and firm performance are based on the contexts of developed countries. To date, we still know little about how an experienced CEO may contribute to the firm’s performance when facing CEO succession.

In recent years, research on the influencing factors of total factor productivity has been a focus of academic circles. Most of this literature concentrates on the external environment of enterprise operation and on internal R&D and technology, discussing the influencing mechanisms of enterprise resource allocation efficiency through trade systems, infrastructure, human capital, and enterprise R&D. Coe and Helpman [42], Fernandes and Paunov [43], Huang et al. [44], and Ahsan [45] looked into the impact on TFP from the perspective of trade systems, such as technology spillover, tariff reduction, and market segmentation of import and export commodities, as well as FDI, whereas Hulten et al. [46], Montolio and Solé-Ollé [47], Song and Liu [48], and others focused on infrastructure, such as transportation, energy, communication, and financial services.
For enterprise samples from the same country or region, external objective factors such as institutions, location, and infrastructure exert a systemic, approximately homogeneous influence on firms’ total factor productivity; the heterogeneity in firm-level total factor productivity is therefore closely related to internal factors such as the enterprise’s own human capital, technological innovation, and management ability. However, the link between the CEO and total factor productivity has not received enough attention in the context of these internal factors.
## 3. Model Specification and Data Description
### 3.1. Model Specification with PSM-DID Procedure
Endogeneity issues may arise from sample selection bias or omitted variables if traditional econometric methods are applied directly to estimate the impact of experienced CEO succession on organizational performance. Due to sample selection bias, existing studies failed to separate manager effects from firm effects, which eventually led to biased conclusions. In addition, since the majority of earlier research on experienced CEOs did not take the company’s initial condition into consideration, one cannot identify whether the final outcome is due to the company’s starting operational status or to the CEO. To avoid these biases, we treat experienced CEO succession as a quasinatural experiment and use PSM-DID to properly discern the TFP changes induced by the personal characteristics of CEOs.

Heckman et al. [49] were the first to suggest merging the PSM and DID models, pointing out that the PSM model can select the control group for the DID model, which gives the PSM-DID model its theoretical foundation. The PSM-DID model combines propensity score matching (PSM) with difference-in-differences (DID): the front-end PSM step screens control units for the treated firms, and the back-end DID step estimates the impact of the shock on this matched basis. Following the DID procedure, samples are divided into two groups: the treated group is composed of firms where experienced CEO succession occurred, whereas the control group consists of organizations where experienced CEO succession did not occur. We construct a binary dummy variable $D^{CEO}_i \in \{0, 1\}$: when enterprise $i$ is an experienced CEO succession enterprise, $D^{CEO}_i = 1$; otherwise, $D^{CEO}_i = 0$. Specifically, within our sample period, if the CEO of an enterprise changed during period $t$ and the succeeding CEO had experience in other listed companies during period $t-j$, then we define the firm as an experienced CEO succession firm; otherwise, it is defined as a nonexperienced succession enterprise. In addition, we construct a binary dummy variable $DT_t \in \{0, 1\}$, where $DT_t = 0$ and $DT_t = 1$ represent the periods before and after the experienced CEO succession, respectively. The change in TFP of enterprise $i$ between the two periods is denoted $\Delta TFP_{it}$.

Nonparametric techniques (the TFP index and data envelopment analysis (DEA)) and parametric approaches (production function estimation and stochastic frontier analysis (SFA)) are the two main paths in the literature for measuring TFP. OLS estimation, the Olley–Pakes method [50], and the Levinsohn–Petrin approach [34] are all common methods for estimating the production function. The classic ordinary least squares (OLS) method was widely used to estimate the production function (Timmer, 1991), but it might lead to a variety of estimation issues, such as the simultaneity problem and sample selectivity bias (Van, 2012). According to Levinsohn and Petrin’s estimation methodology [34], unobserved firm productivity shocks can be approximated by a nonparametric function of an observable firm characteristic (specifically, an intermediate input), and as a result, unbiased estimates of the production function coefficients can be obtained. The change in TFP of an enterprise undergoing experienced CEO succession between the two periods is denoted $\Delta TFP^{1}_{it}$, whereas that of a nonexperienced CEO succession enterprise is denoted $\Delta TFP^{0}_{it}$.
Accordingly, the actual impact of an experienced CEO on TFP is

(1) $\lambda = E[\lambda_i \mid D^{CEO} = 1] = E[\Delta TFP^{1}_{it} \mid D^{CEO}_i = 1] - E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 1]$.

In equation (1), $E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 1]$ is "counterfactual"; that is, the change in TFP that an experienced succession firm would have shown without hiring an experienced CEO cannot be observed. If the average TFP change of companies with nonexperienced succession during the observation period, $E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 0]$, is directly used as an approximate substitute, bias will be generated by the characteristic differences between companies. To solve this problem, we use nearest neighbor matching to find the optimal control group (nonexperienced CEO succession firms) for the treated group (experienced CEO succession firms). The selection of matching variables is an important step in nearest neighbor matching. Following existing theories and the empirical literature, the following variables affecting firm TFP are selected as matching variables. The asset-liability ratio (LA) is measured by the ratio of total liabilities to total assets; a high debt ratio often leads to the CEO’s replacement. Capital intensity (CIR) is measured as the ratio of fixed assets to the number of employees (in logarithmic form). Enterprise size (Size) is measured by the logarithm of enterprise sales. Enterprise age (Age), the survival time in the market, affects an enterprise’s production experience, research and development ability, and personnel decisions. Corporate profit margin (Profit) is measured as operating profit divided by business sales. Ownership structure (SOE) indicates whether the enterprise is state-owned. In addition, lagged TFP is added to ensure that no systematic difference in productivity exists between the treatment and control groups. Next, the logit method is used to estimate the following model:

(2) $p(D^{CEO}_{it} = 1) = \phi(LA_{it-1}, CIR_{it-1}, Size_{it-1}, Age_{it-1}, Profit_{it-1}, SOE_{it-1}, TFP_{it-1})$.

The predicted probability $\hat{p}$ can be obtained after estimating equation (2). We use $\hat{p}_i$ and $\hat{p}_j$ to denote the propensity scores of the treatment group and the control group, respectively. The nearest neighbor matching model is

(3) $\Theta_i = \min_j \lvert \hat{p}_i - \hat{p}_j \rvert, \quad j \in \{D^{CEO} = 0\}$,

where $\Theta_i$ represents the matching set of control enterprises corresponding to treated firm $i$; for each treated firm $i$, only a unique control firm $j$ falls into the set. After the above nearest neighbor matching, we obtain the matched control group $\Theta_i$, whose TFP variation $E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 0, i \in \Theta_i]$ can better substitute for $E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 1]$. Therefore, equation (1) is transformed into

(4) $\lambda = E[\lambda_i \mid D^{CEO} = 1] = E[\Delta TFP^{1}_{it} \mid D^{CEO}_i = 1] - E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 0, i \in \Theta_i]$.

Equation (4) is equivalent to the following empirical model:

(5) $TFP_{it} = \alpha_0 + \alpha_1 D^{CEO} + \alpha_2 DT_t + \delta \left( D^{CEO}_t \times DT_t \right) + \varepsilon_{it}$.

In equation (5), $i$ and $t$ index the enterprise and year, respectively; the binary dummy $D^{CEO} = 1$ marks the experienced CEO succession enterprises (treated group), $D^{CEO} = 0$ marks the propensity-matched nonexperienced CEO succession enterprises (control group), and $\varepsilon_{it}$ is the random error. The estimated coefficient on $D^{CEO}_t \times DT_t$ describes the impact of experienced CEO succession on firm TFP.
Specifically, in equation (5), for the enterprises in the treatment group, the expected TFP at $DT = 0$ is $E[TFP^{1}_{it} \mid D^{CEO}_i = 1, DT = 0] = \alpha_0 + \alpha_1$ and at $DT = 1$ it is $E[TFP^{1}_{it} \mid D^{CEO}_i = 1, DT = 1] = \alpha_0 + \alpha_1 + \alpha_2 + \delta$; that is, the TFP change of the treated enterprises between the two periods is

(6) $E[\Delta TFP^{1}_{it} \mid D^{CEO}_i = 1] = E[TFP^{1}_{it} \mid D^{CEO}_i = 1, DT = 1] - E[TFP^{1}_{it} \mid D^{CEO}_i = 1, DT = 0] = \alpha_2 + \delta$.

In addition, for the control group enterprises, the expected TFP at $DT = 0$ is $E[TFP^{0}_{it} \mid D^{CEO}_i = 0, DT = 0, i \in \Theta_i] = \alpha_0$ and at $DT = 1$ it is $E[TFP^{0}_{it} \mid D^{CEO}_i = 0, DT = 1, i \in \Theta_i] = \alpha_0 + \alpha_2$; that is, the TFP change of the control group between the two periods is

(7) $E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 0, i \in \Theta_i] = E[TFP^{0}_{it} \mid D^{CEO}_i = 0, DT = 1, i \in \Theta_i] - E[TFP^{0}_{it} \mid D^{CEO}_i = 0, DT = 0, i \in \Theta_i] = \alpha_2$.

Subtracting equation (7) from equation (6) gives

(8) $E[\Delta TFP^{1}_{it} \mid D^{CEO}_i = 1] - E[\Delta TFP^{0}_{it} \mid D^{CEO}_i = 0, i \in \Theta_i] = \delta$.

Combining this with equation (4), we obtain $\delta = E[\lambda_i \mid D^{CEO} = 1] = \lambda$. If $\delta > 0$, meaning that the TFP growth of treated enterprises exceeds that of control enterprises after the experienced CEO succession, then the experienced CEO improves the TFP of enterprises. For robustness, we add a set of control variables $X_{it}$ to equation (5), including LA, CIR, Size, Age, Profit, SOE, and enterprise nature (HiTech). In addition, we control for industry characteristics $v_s$ and regional characteristics $v_r$. For readability, we write $D^{CEO}_t \times DT_t = DID$; its coefficient $\delta$ represents the impact of hiring an experienced CEO on the enterprise. Finally, the DID model used for estimation is

(9) $TFP_{it} = \alpha_0 + \alpha_1 D^{CEO} + \alpha_2 DT_t + \delta\, DID_t + \beta X_{it} + v_s + v_r + \varepsilon_{it}$.
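To make the matching-then-differencing pipeline concrete, a compact Python sketch using pandas and statsmodels follows. It is a simplified illustration under assumed column names (e.g., `TFP_lag` for lagged TFP), and the paper's full specification, equation (9), additionally includes the controls $X_{it}$ and industry/region effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

def psm_did(df: pd.DataFrame):
    """Sketch: logit propensity scores (eq. (2)), one-nearest-neighbor
    matching on the score (eq. (3)), then the DID regression (eq. (5))."""
    # Equation (2): propensity of experienced CEO succession, estimated
    # from the lagged matching covariates (column names are assumptions).
    logit = smf.logit(
        "DCEO ~ LA + CIR + Size + Age + Profit + SOE + TFP_lag", data=df
    ).fit(disp=False)
    df = df.assign(pscore=logit.predict(df))

    # Equation (3): each treated firm keeps the control firm minimizing
    # |p_i - p_j| (matching with replacement, for simplicity).
    treated = df[df["DCEO"] == 1]
    control = df[df["DCEO"] == 0]
    nearest = [(control["pscore"] - p).abs().idxmin() for p in treated["pscore"]]
    matched = pd.concat([treated, control.loc[nearest]])

    # Equation (5): the coefficient on DCEO:DT is the DID estimate delta.
    return smf.ols("TFP ~ DCEO + DT + DCEO:DT", data=matched).fit()
```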
### 3.2. Data Description
The WIND and CSMAR databases provided the sample data for this investigation. Panel data from 2010 to 2019 were chosen as the sample, taking into account the impact of the global financial crisis of 2007–2009 and the COVID-19 epidemic in 2020. The “Database on Governance Structure of Chinese Listed Companies” in the CSMAR databases contains basic information about the management personnel of Chinese listed companies, such as annual salaries, shareholdings, changes in shareholding structure, changes in chairman and general manager, and shareholder meetings. To make the subsequent analysis accurate and credible, “ST” samples (stocks of listed companies given special treatment by the Shanghai and Shenzhen Stock Exchanges because of abnormal financial conditions or other conditions) in the current year were deleted. The industry categorization standard of CSRC (2012) was used to filter listed manufacturing companies, and a total of 1,675 listed manufacturing companies were finally studied in this research. The descriptive statistics of the main variables are presented in Table 1. Experienced CEO succession occurred in a total of 118 companies, accounting for 8.45 percent of all observations. State-owned firms account for 35.52 percent of the sample, whereas high-tech enterprises account for 40.1 percent of the sample.

Table 1
Descriptive statistics of variables.
| Variable | Mean | SD | Min | Max | Skewness | Kurtosis |
| --- | --- | --- | --- | --- | --- | --- |
| TFP | 15.970 | 0.962 | 11.260 | 20.56 | 0.467 | 3.678 |
| DCEO | 0.0845 | 0.278 | 0 | 1 | 2.987 | 9.922 |
| LA | 0.413 | 0.199 | 0.007 | 1.758 | 0.314 | 2.745 |
| CIR | 12.580 | 0.903 | 4.835 | 17.69 | −0.0327 | 4.209 |
| Size | 21.600 | 1.355 | 16.34 | 27.51 | 0.520 | 3.655 |
| Age | 17.560 | 5.563 | 2 | 64 | 1.190 | 8.843 |
| Profit | 0.065 | 0.501 | −35.480 | 8.062 | −57.300 | 4,021 |
| SOE | 0.355 | 0.479 | 0 | 1 | 0.605 | 1.366 |
| Hitech | 0.401 | 0.490 | 0 | 1 | 0.406 | 1.165 |
## 4. Result Analysis
### 4.1. Baseline Regression
In this paper, we use the DID method to examine the effect of experienced CEO succession on TFP. Ideally, one would compare the TFP levels of the same or similar firms with and without experienced CEO succession; however, both states cannot be observed simultaneously in reality. To overcome systematic differences among the listed manufacturing firms we selected and to reduce the bias of the DID estimation, PSM is used to improve the efficiency of the traditional DID method [51]. To fulfil this goal, we constructed a “counterfactual” control sample (the nonexperienced CEO succession group) for the treatment group (the experienced CEO succession group). To match experienced succession companies with nonexperienced succession firms, we employed the PSM approach. To begin with, we estimated the propensity scores of the listed manufacturing enterprises in China using both logit and probit regressions. Then, using kernel matching, we chose nonexperienced succession companies with individual characteristics similar to those of the experienced succession companies. The variables affecting TFP (i.e., LA, CIR, Size, Age, Profit, and SOE) are selected as matching variables, and lagged TFP is added to ensure that no systematic difference in productivity exists between the treatment and control groups. We compared the treatment groups year by year to acquire reliable matching findings and then summarized the matching data for each year. As an example, Table 2 presents comparisons of the primary indicators between the treatment and control groups before and after matching in 2010. It shows that, in the pre-matching sample, significant differences exist between CEO succession enterprises and non-CEO succession enterprises in terms of asset-liability ratio, capital intensity, enterprise size, profit margin, and other variables. Figure 1 illustrates the difference in the trend and score values of the kernel density before and after PSM matching. Such differences sharply diminish after PSM matching, which reflects a good matching effect.

Table 2
Balance test of variables before and after PSM.
U denotes the unmatched sample and M the matched sample.

| Variable | Sample | Mean treated | Mean control | %Bias | %Bias reduction | t |
| --- | --- | --- | --- | --- | --- | --- |
| LA | U | 0.48251 | 0.41356 | 34.4 | 94.7 | 3.18 |
| | M | 0.48251 | 0.47885 | 1.8 | | 0.13 |
| CIR | U | 12.47 | 12.387 | 8.6 | 27.2 | 0.86 |
| | M | 12.47 | 12.41 | 6.3 | | 0.45 |
| Size | U | 22.089 | 21.427 | 47.6 | 96.9 | 4.79 |
| | M | 22.089 | 22.069 | 1.5 | | 0.10 |
| Age | U | 14.97 | 14.114 | 16.7 | 76.6 | 1.68 |
| | M | 14.97 | 15.17 | −3.9 | | −0.28 |
| Profit | U | 0.05402 | 0.08915 | −8.9 | 32.2 | −0.74 |
| | M | 0.05402 | 0.08525 | −6.1 | | −0.56 |
| SOE | U | 0.77 | 0.39344 | 82.4 | 94.7 | 7.14 |
| | M | 0.77 | 0.75 | 4.4 | | 0.33 |
| TFP | U | 16.284 | 15.883 | 40.2 | 93.0 | 4.05 |
| | M | 16.284 | 16.256 | 2.8 | | 0.19 |
Kernel density plot with a comparison of the two groups (treated vs. control) before (left) and after (right) propensity score matching.

Table 3 reports the estimated results of the impact of experienced CEO succession on firms’ TFP using PSM-DID regression. To make the results comparable and robust, we use two PSM matching methods, namely, nearest neighbor matching and radius matching. Columns (1) and (4) include only the control variables. Columns (2) and (5) add the DID variable with time-fixed and industry-fixed effects controlled. In columns (3) and (6), we further control for the regional-based variable. All the regression results across the different models show consistent coefficients, which implies the robustness of our results. The estimated coefficient of DID is positive and significant at the 0.01 level, indicating that experienced CEO succession significantly improves a firm’s TFP compared with nonexperienced succession companies. According to Table 3, experienced CEO succession, on average, increases TFP by 3.1% compared with firms where experienced CEO succession does not occur. All the coefficients for the control variables are consistent with the usual expectations.

Table 3
Regression results: the impact of experienced CEO succession on TFP.
Columns (1)–(3) use nearest neighbor matching; columns (4)–(6) use radius matching.

| Variable | (1) | (2) | (3) | (4) | (5) | (6) |
| --- | --- | --- | --- | --- | --- | --- |
| DID | | 0.031∗∗∗ (0.008) | 0.031∗∗∗ (0.008) | | 0.033∗∗∗ (0.008) | 0.033∗∗∗ (0.008) |
| LA | −0.137∗∗∗ (0.011) | −0.147∗∗∗ (0.011) | −0.150∗∗∗ (0.011) | −0.133∗∗∗ (0.011) | −0.143∗∗∗ (0.011) | −0.156∗∗∗ (0.011) |
| CIR | −0.079∗∗∗ (0.003) | −0.073∗∗∗ (0.003) | −0.075∗∗∗ (0.003) | −0.079∗∗∗ (0.002) | −0.076∗∗∗ (0.003) | −0.076∗∗∗ (0.003) |
| Size | 0.754∗∗∗ (0.002) | 0.771∗∗∗ (0.003) | 0.773∗∗∗ (0.003) | 0.750∗∗∗ (0.002) | 0.767∗∗∗ (0.003) | 0.771∗∗∗ (0.003) |
| Age | −0.002∗∗∗ (0.000) | −0.004∗∗∗ (0.000) | −0.004∗∗∗ (0.000) | −0.002∗∗∗ (0.000) | −0.004∗∗∗ (0.001) | −0.005∗∗∗ (0.001) |
| Profit | 0.106∗∗∗ (0.008) | 0.087∗∗∗ (0.008) | 0.086∗∗∗ (0.008) | 0.102∗∗∗ (0.007) | 0.088∗∗∗ (0.007) | 0.086∗∗∗ (0.007) |
| SOE | −0.037∗∗∗ (0.008) | −0.018 (0.010) | −0.024∗∗ (0.010) | −0.045∗∗∗ (0.008) | −0.018∗ (0.010) | −0.029∗∗∗ (0.010) |
| Constant | 0.776∗∗∗ (0.051) | 0.367∗∗∗ (0.061) | 0.443∗∗∗ (0.067) | 0.874∗∗∗ (0.051) | 0.506∗∗∗ (0.060) | 0.528∗∗∗ (0.066) |
| Time | No | Yes | Yes | No | Yes | Yes |
| Industry | No | Yes | Yes | No | Yes | Yes |
| Regional | No | No | Yes | No | No | Yes |
| Observations | 11,711 | 11,711 | 11,711 | 12,371 | 12,371 | 12,371 |
| χ2 | 142493∗∗∗ | 8189∗∗∗ | 4294∗∗∗ | 144494∗∗∗ | 8225∗∗∗ | 3945∗∗∗ |
| R-squared | 0.921 | 0.924 | 0.925 | 0.917 | 0.920 | 0.922 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01.
### 4.2. Parallel Trend Test
In the DID method, the parallel trend assumption requires that the treatment and control groups follow a consistent evolutionary trend prior to the succession of an experienced CEO. An event study approach was employed to further assess the endogeneity issue produced by discrepancies between the treatment and control groups before the experienced succession. The event study regression model is given in equation (10):

(10) $TFP_{it} = \alpha_0 + \sum_{k=-2}^{5} \delta_k \left( D^{CEO}_t \times DT^{k}_t \right) + \varepsilon_{it}$,

where $k$ is the time difference from the year of experienced CEO succession: a negative $k$ indicates the number of years before experienced succession, and a positive $k$ the number of years after it. Year 0 is the baseline time period, and the dynamic effects over the $[-2, 5]$ window were examined. Only the treated group was used as the sample, and equation (10) was applied for regression, reporting the coefficients of experienced CEO succession relative to the baseline period. Figure 2 shows the result of the parallel trend test. In the 1 to 3 years following experienced CEO succession, the coefficients of experienced CEO succession on boosting TFP increased significantly. All pretreatment coefficients were close to zero and statistically insignificant in the years before year 0. This suggests that the TFP of these firms evolved similarly before the succession of an experienced CEO, with no significant difference in trend between the treated group and the control group. The dynamic trend therefore indicates that the DID method in this paper satisfies the parallel trend condition. Furthermore, the coefficients in the postsuccession period were positive and mostly statistically significant, indicating that TFP increased after the hiring of an experienced CEO.

Figure 2
Event study analysis of experienced CEO succession.
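A minimal sketch of the equation (10) event study follows, assuming a firm-year panel with `year` and a `succession_year` column (missing for never-treated firms); the dummy names and helper are illustrative, not the paper's code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def event_study(df: pd.DataFrame, window: tuple = (-2, 5)):
    """Sketch of equation (10): regress TFP on lead/lag dummies around the
    succession year, with the succession year (k = 0) as omitted baseline."""
    df = df.dropna(subset=["succession_year"]).copy()   # treated firms only
    df["k"] = df["year"] - df["succession_year"]        # event time
    terms = []
    for k in range(window[0], window[1] + 1):
        if k == 0:                                      # baseline, omitted
            continue
        name = f"lead{-k}" if k < 0 else f"lag{k}"
        df[name] = (df["k"] == k).astype(int)
        terms.append(name)
    fit = smf.ols("TFP ~ " + " + ".join(terms), data=df).fit()
    # Pre-succession coefficients near zero support the parallel trend.
    return fit.params[terms], fit.conf_int().loc[terms]
```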
### 4.3. Placebo Test
Following Topalova [52], we performed a placebo test using a fictitious succession time: we re-ran the regression as if the experienced CEO succession had happened one year earlier and two years earlier. If the coefficients under the fictitious timing were similar to those obtained with the actual pre- and postsuccession data, the baseline estimates would likely be skewed, because the results would be reproduced even in years when the experienced succession did not occur. As shown in Table 4, the estimated coefficients after changing the succession year are not significant, which is inconsistent with the basic regression and indicates that the findings in Table 3 are reliable.

Table 4
Coefficient of the placebo test.
| | Nearest neighbor matching (TFP) | Radius matching (TFP) |
|---|---|---|
| d−2 | 0.008 (0.012) | 0.009 (0.012) |
| d−1 | 0.018 (0.012) | 0.020 (0.012) |
| d0 | 0.020∗ (0.011) | 0.020∗ (0.012) |
| d1 | 0.023∗ (0.012) | 0.024∗∗ (0.012) |
| d2 | 0.033∗∗∗ (0.013) | 0.035∗∗∗ (0.013) |
| d3 | 0.037∗∗∗ (0.014) | 0.038∗∗∗ (0.014) |
| d4 | 0.008 (0.016) | 0.010 (0.017) |
| d5 | 0.009 (0.018) | 0.010 (0.018) |
| Observations | 11,711 | 12,371 |
| χ² | 5578∗∗∗ | 5603∗∗∗ |
| R-squared | 0.924 | 0.920 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01.
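As a rough sketch of the placebo logic described above (again with assumed column names rather than the authors' code), one can shift the succession year back and re-estimate the DID model; an insignificant DID coefficient at the fictitious dates supports the baseline results.

```python
import statsmodels.formula.api as smf

for shift in (1, 2):
    fake = df.copy()
    # Pretend succession happened 'shift' years earlier and rebuild dummies.
    fake["DT"] = (fake["year"] >= fake["succession_year"] - shift).astype(int)
    fake["DID"] = fake["DCEO"] * fake["DT"]
    placebo = smf.ols(
        "TFP ~ DCEO + DT + DID + LA + CIR + Size + Age + Profit + SOE",
        data=fake,
    ).fit()
    # With a fictitious succession date, DID should be insignificant.
    print(shift, placebo.params["DID"], placebo.pvalues["DID"])
```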
### 4.4. Heterogeneity Analysis
Table 5 reports the heterogeneity tests for different firm types. Based on firms' own announcements of high-tech enterprise recognition, we identified the enterprises that had obtained high and new technology enterprise qualification and recorded the recognition time. On this basis, the companies that experienced CEO succession are divided into two types: high-tech enterprises and nonhigh-tech enterprises. Table 5 examines the heterogeneous impact of experienced CEO succession on these two firm types. Column (1) reports the effect for high-tech enterprises, where the coefficient of experienced CEO succession is 0.045; the sample in column (2) consists of nonhigh-tech enterprises, where the coefficient is 0.032. The regression results show that experienced CEO succession significantly promotes TFP in both high-tech and nonhigh-tech enterprises, but the promotion effect is greater in high-tech enterprises. This may be because high-tech enterprises pay more attention to innovation activities, and the arrival of a new CEO is more conducive to carrying out such activities; in a high-tech industry characterized by high dynamism and instability, the experience of a prior CEO may be more valuable.

Table 5
Heterogeneity results for high-tech and nonhigh-tech enterprises.
| Variable | NN: High-tech (1) | NN: Nonhigh-tech (2) | Radius: High-tech (3) | Radius: Nonhigh-tech (4) |
|---|---|---|---|---|
| dceodt | 0.045∗∗∗ (0.013) | 0.032∗∗∗ (0.010) | 0.046∗∗∗ (0.013) | 0.031∗∗∗ (0.010) |
| LOAR | −0.130∗∗∗ (0.018) | −0.176∗∗∗ (0.015) | −0.133∗∗∗ (0.018) | −0.176∗∗∗ (0.015) |
| CI | −0.068∗∗∗ (0.004) | −0.077∗∗∗ (0.004) | −0.061∗∗∗ (0.004) | −0.084∗∗∗ (0.003) |
| Size | 0.796∗∗∗ (0.005) | 0.766∗∗∗ (0.004) | 0.788∗∗∗ (0.005) | 0.768∗∗∗ (0.004) |
| Age | −0.007∗∗∗ (0.002) | −0.002∗∗∗ (0.001) | −0.007∗∗∗ (0.002) | −0.002∗∗ (0.001) |
| Profit | 0.074∗∗∗ (0.011) | 0.084∗∗∗ (0.010) | 0.073∗∗∗ (0.010) | 0.086∗∗∗ (0.009) |
| SOE | −0.002 (0.019) | −0.042∗∗∗ (0.012) | −0.015 (0.020) | −0.042∗∗∗ (0.012) |
| Constant | −0.292∗∗ (0.121) | 0.629∗∗∗ (0.088) | −0.209∗ (0.121) | 0.633∗∗∗ (0.087) |
| Observations | 4,684 | 7,027 | 4,972 | 7,995 |
| χ² | 2339∗∗∗ | 2274∗∗∗ | 1869∗∗∗ | 2987∗∗∗ |
| R-squared | 0.930 | 0.918 | 0.925 | 0.925 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01. "NN" denotes nearest neighbor matching.
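The heterogeneity test is a split-sample re-estimation of the baseline model. A minimal sketch, assuming the same illustrative panel `df` and the `Hitech` flag from Table 1:

```python
import statsmodels.formula.api as smf

for flag, label in ((1, "high-tech"), (0, "nonhigh-tech")):
    sub = df[df["Hitech"] == flag]
    res = smf.ols(
        "TFP ~ DCEO + DT + DID + LA + CIR + Size + Age + Profit + SOE",
        data=sub,
    ).fit()
    print(label, round(res.params["DID"], 3))  # cf. columns (1)-(4) of Table 5
```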
## 5. Conclusion and Discussion
In this study, we examine the relationship between experienced CEO candidates in CEO succession and the TFP of the focal firms, as the established studies fail to reach a consensus on the influence of experienced CEO candidates on the focal firm's performance when facing succession. To obtain a robust conclusion, we adopt the PSM-DID model and study the question in the context of listed firms in an emerging market. We provide empirical evidence at the micro level to objectively evaluate the contributions of experienced CEO candidates in the CEO succession of listed Chinese manufacturing companies. Our findings show that hiring an experienced CEO is an important channel for improving firm performance. Specifically, our analysis demonstrates that, on average, an experienced CEO in CEO succession yields a 3.1% increase in TFP compared with nonexperienced succession firms, and this positive effect persists for three years. The subsequent decline in performance after the experienced CEO takes the new position is partly due to the changing environment with new demands and partly because the CEO becomes more ambitious after the initial positive performance, which shifts the experienced CEO's strategic goals. Our results provide new insights into the experienced CEO succession literature and firm performance in general.

To investigate the robustness of the effect of experienced CEOs in CEO succession on the focal firm's TFP, we classified the samples and tested the heterogeneous effect of experienced CEO succession on firm TFP using the PSM-DID procedure. Compared with state-owned listed companies, CEO succession in private companies more effectively promotes the improvement of TFP. In terms of the nature of the firm, the effect of an experienced CEO in CEO succession on the TFP of high-tech enterprises is higher than that of nonhigh-tech enterprises. Private and high-tech enterprises operate in a more dynamic environment, which makes experienced CEOs less likely to hinder the focal firm's performance. Working in more challenging and complex environments, in which they cannot rely on simplified prescriptions or past personal experience, CEOs need to upgrade their skills and acquire new knowledge to adapt to the changing environment. Highly dynamic environments effectively prevent a CEO from following decision-making shortcuts, such as routines carried over from outside the boundary of the focal firm. As a consequence, an experienced CEO has to explore new business practices and mindsets to solve emerging challenges and issues.

The experience trap is not found in the Chinese context. Previous research proposes contradictory perspectives on experienced CEOs in CEO succession. Some studies hold that experienced managers can become unknowingly "trapped" in their past ways of success, such as inertia.
Therefore, these managers can fail to adapt to environmental changes, ultimately leading to unsatisfactory performance. In our sample, however, succession by experienced CEOs of listed companies significantly promoted the TFP of the focal firms and maintained a sustainable growth trend over the next three years. CEOs are among the most valuable forms of human capital, embodying an individual's knowledge, skills, experiences, and capabilities accumulated over a professional career. Managers' skills can be classified into generic skills, firm-specific skills, and industry-specific skills. Firm-specific human capital is useful only to the firm that develops it and is not transferable, whereas industry-specific human capital can be transferred within an industry but transfers poorly across industries. Generic, or general, skills are those that can be transferred across organizations and industries. In the Chinese market, owing to the immature capital market environment, CEOs spend much of their time dealing with institutional investors, government departments, and shareholders; hence, generic skills become more important. The portability of generic skills gives experienced CEOs the potential to survive and perform better in the CEO succession of focal firms. After succession, experienced CEOs are outsiders to their new organizations, and their early strategic changes are likely to be adaptive because they bring new ideas and are less influenced by the status quo, which is likely to have a positive effect on organizational performance.

We acknowledge possible geographical and cultural biases in this research stream. However, these biases go beyond the research question addressed in this study, and we encourage future research to encompass a broader range of subjects from various geographies, ethnicities, and cultures [53].
---
*Source: 1014224-2022-10-22.xml*

# Experienced CEO and Factor Productivity Improvement: Re-Examination of Experience Trap

**Authors:** Chengpeng Zhu; Mengyun Wu; Jie Xiong
**Journal:** Complexity
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1014224

---
## Abstract
We examine the role of experienced CEOs in CEO succession and their contributions to the performance of focal firms. We utilize the propensity score matching with difference-in-differences (PSM-DID) model to evaluate to what extent experienced CEO succession affects total factor productivity (TFP) growth. Based on the analysis of 1,675 listed manufacturing companies in China, the results show that an experienced CEO in succession significantly improves firms' TFP. Our analysis demonstrates that, on average, hiring an experienced CEO yields a 3.1% increase in TFP compared with nonexperienced succession firms. This positive effect can continue for three years. Furthermore, a heterogeneous effect of experienced CEO succession on TFP is shown between different categories of focal firms (i.e., high-tech versus nonhigh-tech enterprises).
---
## Body
## 1. Introduction
In corporate governance and business management, one of the long-term challenges to the management team is chief executive officer (CEO) succession [1]. Both experienced candidates and rookies carry possible benefits and downsides for the management team of private enterprises or the board of directors of publicly traded firms. Such choices are even more complicated and challenging for business firms in emerging countries [2].

Compared with the liability of alienness (unfamiliarity with the focal firm), management teams and boards of directors believe that the liability of newness (lack of experience or capabilities) may pose more obstacles to business operations following CEO succession. The past performance and records of experienced CEO candidates can be tracked through disclosed information, such as the annual reports of the listed companies they worked for or articles in the business media [3, 4]. Skills and capabilities have grown increasingly vital as the corporate environment has become more turbulent in recent years, which may be why businesses are increasingly hesitant to take the risk of hiring someone with no prior experience in the field [3, 4]. Weighed against the affordable learning costs of adapting to the focal firm's setting, experienced CEO candidates may provide greater potential benefits to the focal firm [5].

Furthermore, from a contextual standpoint, the possible problem of cultural fit between external experienced CEO candidates and the focal firm may be overcome at low cost [6], because CEOs can reshape and integrate with the organizational culture in order to adapt to the changing environment [7]. For public firms, CEO candidates with prior experience can improve the management team's decision-making [8, 9]. An experienced CEO can apply the advanced management experience learned in the previous company to the current enterprise, so as to avoid blind innovation activities when investing in R&D [10]. As suggested in Reference [11], the aggregate total factor productivity (TFP) of China's manufacturing firms could increase by 30%-50% if resource misallocation were reduced. Second, a new broom sweeps clean: an experienced CEO tends to increase R&D investment and innovation output to improve profitability and competitiveness in the industry for the sake of their own career development. Such measures enhance the innovation capacity of the enterprise, and improving innovation capacity is typically accompanied by improvements in technological level and productivity. As a result, experienced CEO candidates may help the focal firms' TFP.

However, the established studies on experienced CEOs and firm performance remain mixed. Some of the existing literature mainly explores the positive role of CEOs' relevant experience in firm performance [12–14]. Other studies have found that prior experience might have a detrimental influence on business performance [15], raising the question of whether an experienced CEO can bring positive effects to the focal firm in the CEO succession. Some research on CEO succession suggests that previous CEO experience hurts the successor firm's performance [15, 16].
It might be partially because the past experience of CEOs may not be easily utilized in the new context of the successor firm [15], or partially due to the rigidity or inertia of experienced CEOs in learning and acquiring new knowledge of the successor firm [17–19].

Furthermore, studies on the influence of experienced CEOs on organizational success have traditionally relied heavily on financial measurements (ROA, ROE, and Tobin's Q). As performance metrics, these indicators have obvious deficiencies. To begin with, most of them are derived from financial statements and have a time lag, so they can only represent the enterprise's past production and operational conditions and cannot provide forward-looking information. Second, although China has been steadily strengthening its oversight of accounting information in recent years, the manipulation of earnings remains far from uncommon: many businesses continue to whitewash their financial statements by adjusting expenses through real earnings management. Although Tobin's Q is forward-looking, the immaturity and emotional shifts of market participants may cause it to be over- or understated [20]. TFP, as a comprehensive indicator quantifying the input-output efficiency of a company, can represent the firm's operational circumstances more completely and evaluate the company's performance more objectively [21]. TFP can therefore explain more of the variation in the market value of different firms than standard performance metrics (ROA, ROE, and Tobin's Q). A company's output efficiency is the foundation of its income and profit, as well as its fundamental capacity to turn production materials into output, and the level of total factor productivity reflects this fundamental ability of factor transformation [22]. Companies with better corporate governance and operations management use factors of production such as labor and capital more efficiently [23]. To achieve high-quality development, China's industrial businesses must undergo transformation and upgrading. As one of the most essential human resources in a company, the CEO has a significant impact on how the firm makes choices and how it operates, and hence on the company's total factor productivity. Therefore, using total factor productivity instead of financial indicators to gauge corporate success makes it easier to analyze the CEO's corporate governance ability and the firm's overall resource utilization efficiency.

Besides, previous research on experienced CEOs and company performance mostly focuses on developed-country contexts, leaving experienced CEOs and companies in developing markets relatively unstudied. We still do not know much about how experienced CEO candidates contribute to firm performance in a developing market. As a result, we use listed companies in China as our empirical sample to examine how experienced CEOs influence the performance of the CEO succession firm.

The contribution of this study is threefold. First, we examine the role of an experienced CEO in firm performance when facing CEO succession. As the existing studies on this correlation are mixed, we analyze the outsider-experienced CEO from both function and context perspectives. Moreover, unlike previous studies, we treat outsider-experienced CEO succession as a natural experiment and adopt the PSM-DID method to control for econometric problems such as sample selection.
Our finding shows that an outsider-experienced CEO in CEO succession can significantly improve firms' TFP by 3.1%, an effect that lasts for three years. Second, previous research focuses mainly on advanced economies such as America [19] and South Korea [4], whose capital markets and institutional backgrounds are well established. Extant studies therefore offer few guidelines to emerging markets such as China, India, and Brazil. Hence, in this study, we shift the focus to examine to what extent an outsider-experienced CEO affects firm performance in the Chinese market. Given the growth of the Chinese capital market, CEO succession and outsider-experienced CEO succession are becoming increasingly common, yet empirical studies based on developing countries are largely missing. Thus, our study sheds new light on the studies of CEO succession and outsider-experienced CEOs. Third, the heterogeneous effects of an outsider-experienced CEO on TFP across technological sectors and institutional backgrounds are discussed to test the robustness of the results.
## 2. Literature Review
CEOs are widely considered one of the most important human resources of a business firm [24, 25]. They play an important role in managing business firms and are responsible for firms' business activities owing to their rich knowledge resources and cognitive abilities. Established studies show that professional experience contributes significantly to building the capabilities of CEOs [28]. As a result, management teams pick CEOs with caution in order to effectively manage business activities and achieve better results than their competitors. Accordingly, CEO succession is one of the most challenging management issues in both academia and practice [1]. Because of the unique skills of CEOs, companies facing CEO succession prefer to hire existing CEOs from other companies who can demonstrate their qualifications and capabilities, largely through their professional experience and partially through their track records on the job [29]. In addition to their competencies, other significant functions and values of CEOs in commercial enterprises are also addressed in the literature [6]. According to some research, CEOs may be capable of establishing corporate culture and procedures as a result of their strong personal styles and characteristics [6, 30, 31]. Other studies show that the characteristics of CEOs affect their strategic choices [32], such as exploration or exploitation, which eventually influence the performance of business firms and may lead to differences between short-term and long-term performance. For business owners (not necessarily professional managers such as CEOs), the performance of the business firm, such as its total factor productivity (TFP), is one of the most important concerns when selecting a CEO. It is widely accepted that TFP is one of the core driving forces of firms' development and economic growth. Since Solow [33] proposed the concept, total factor productivity has remained an important topic in academia and industry. In this study, it is calculated by the Levinsohn–Petrin method [34] at the firm level.

Strong leadership with sound decision-making is essential in the top management team, where the CEO plays the most crucial role in increasing the TFP of business enterprises. However, existing research also shows that managers' cognitive capacities are restricted [35] and that businesses do not have unlimited resources [36]. Thus, to compete in turbulent business environments and achieve above-average performance, CEOs have to utilize the resources of the focal firm with the required capabilities [37, 38], build and change organizational routines to fit the environment, or renew the business model [39]. This implies that experienced CEOs possess more external knowledge and information than those hired from the firm's internal ranks [29] and are thus better equipped to expand the resource base of the firm and promote innovation, learning, and high performance [40, 41]. Both the internal requirements of focal firms and the demands of the external business environment suggest that experienced CEOs are better positioned than inexperienced ones.

Experienced CEOs are valuable for business firms, not only because their past records are more visible but also because of the capabilities accumulated during their past experience in managing a business firm [29].
Even with the mistakes and lessons from their previous career, experienced CEOs may know how to avoid such mistakes in the new position if hired as the CEO of a new firm. Moreover, when focal firms go through CEO succession, the top management team and board of directors are more sensitive in selecting outsider-experienced CEOs. This may bring both opportunities and challenges to the experienced CEO candidates. On the one hand, the new CEO position may give experienced candidates a chance to take more innovative actions because of their entrepreneurial spirit. On the other hand, the new position may also bring experienced candidates the challenges of the liability of alienness and of strategic fit [6]. In the long term, the possible obstacles may be overcome, since CEOs have the capability to change corporate culture [25].

Existing studies already show that experienced CEOs bring positive outcomes to the focal firm [13, 14]. However, other studies find that experienced CEOs may not meet the expectations of the business owners of the focal firm. For instance, some experienced CEOs ultimately hinder performance in the successor firm. This might be partially because the experienced CEO after succession failed to manage the liability of alienness, or partially because the experienced CEO did not successfully address the inertia that resulted from the past career in the previous business firm. So far, the empirical evidence on experienced CEOs and firm performance is mixed.

When it comes to selecting a successor CEO, however, management teams and boards of directors continue to favor experienced CEO candidates. Existing research also suggests that in CEO succession, the experienced CEO is a desirable profile for the focal business (Hamori and Koyuncu, 2015). But owing to inertia problems, some experienced CEOs may find it more difficult to acquire new knowledge [17–19]. In this sense, whether an experienced CEO can overcome this inertia and fit the new position in the succession firm will influence the performance of the focal firm. Moreover, most established studies on the mixed empirical results concerning experienced CEOs and firm performance are based on developed-country contexts. To date, we still know little about how an experienced CEO may contribute to the firm's performance when facing CEO succession.

In recent years, research on the factors influencing total factor productivity has been a focus of academic circles. Most of this literature concentrates on the external environment of enterprise operation and on internal R&D and technology, discussing how trade systems, infrastructure, human capital, and enterprise R&D shape the efficiency of enterprise resource allocation. Coe and Helpman [42], Fernandes and Paunov [43], Huang et al. [44], and Ahsan [45] looked into the impact on TFP from the perspective of trade systems, such as technology spillovers, tariff reduction, market segmentation of import and export commodities, and FDI, whereas Hulten et al. [46], Montolio and Solé-Ollé [47], Song and Liu [48], and others focused on infrastructure, such as transportation, energy, communication, and financial services.
For samples of enterprises within the same country or region, external objective factors such as institutions, location, and infrastructure exert a systemic, roughly homogeneous influence on firms' total factor productivity; the heterogeneity in TFP across firms is therefore closely related to internal factors such as human capital, technological innovation, and management ability. However, the link between the CEO and total factor productivity as an internal factor has not received enough attention.
## 3. Model Specification and Data Description
### 3.1. Model Specification with PSM-DID Procedure
Endogeneity issues may arise from sample selection bias or omitted variables if traditional econometric methods are employed directly to estimate the impact of experienced CEO succession on organizational performance. Because of sample selection bias, existing studies failed to separate manager effects from firm effects, which eventually led to biased conclusions. In addition, since the majority of earlier research on experienced CEOs did not take the company's initial condition into consideration, it cannot identify whether the final result is due to the company's starting operational status or to the CEO. To avoid these biases, we treat experienced CEO succession as a quasinatural experiment and use PSM-DID to properly discern the TFP changes induced by the personal characteristics of CEOs.

Heckman et al. [49] were the first to suggest merging the PSM and DID models, pointing out that the PSM model may pick the control group for the DID model, giving the PSM-DID model a theoretical foundation. The PSM-DID model thus combines propensity score matching (PSM) with difference-in-differences (DID): the front-end PSM model screens control units for the treated units, and the back-end DID model estimates the impact of the shock on this basis. Following the DID procedure, the samples are divided into two groups: the treated group is composed of firms where experienced CEO succession occurred, whereas the control group consists of firms where experienced CEO succession did not occur. We construct a binary dummy variable $DCEO_i \in \{0, 1\}$: when enterprise $i$ is an experienced CEO succession enterprise, $DCEO_i = 1$; otherwise, $DCEO_i = 0$. Specifically, within our sample period, if an enterprise's CEO changed during period $t$ and the succeeding CEO had experience in other listed companies during period $t-j$, then we define it as an experienced CEO succession firm; otherwise, it is defined as a nonexperienced succession enterprise. In addition, we construct a binary dummy variable $DT_t \in \{0, 1\}$, where $DT_t = 0$ and $DT_t = 1$ represent the periods before and after experienced CEO succession, respectively. The change in TFP of enterprise $i$ between the two periods $DT_t = 0$ and $DT_t = 1$ can then be expressed as $\Delta TFP_{it}$.

Nonparametric techniques (the TFP index and data envelopment analysis (DEA)) and parametric approaches (estimation of the production function and stochastic frontier analysis (SFA)) are the two main paths in the literature for measuring TFP. OLS estimation, the Olley and Pakes method [50], and the Levinsohn and Petrin approach [34] are all common methods for estimating the production function. The classic ordinary least squares (OLS) method was widely used to estimate the production function (Timmer, 1991); however, it can lead to a variety of estimation issues, such as the simultaneity problem and sample selectivity bias (Van, 2012). According to Levinsohn and Petrin's estimation methodology [34], unobserved firm productivity shocks can be approximated by a nonparametric function of an observable firm characteristic (specifically, an intermediate input), so that unbiased estimates of the production function coefficients can be obtained. The change in TFP of an enterprise undergoing experienced CEO succession between the two periods can be expressed as $\Delta TFP_{it}^{1}$, whereas that of a nonexperienced CEO succession enterprise can be expressed as $\Delta TFP_{it}^{0}$.
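The construction of the treatment and post dummies can be sketched as follows. This is a minimal illustration rather than the authors' code; the panel layout and column names (`firm`, `year`, `succession_year`, `experienced`) are assumptions.

```python
import pandas as pd

def add_did_dummies(panel: pd.DataFrame) -> pd.DataFrame:
    out = panel.copy()
    # DCEO_i = 1 for firms whose succeeding CEO had prior listed-firm experience
    out["DCEO"] = (out["succession_year"].notna()
                   & (out["experienced"] == 1)).astype(int)
    # DT_t = 1 for firm-years at or after the succession year (NaN compares False)
    out["DT"] = (out["year"] >= out["succession_year"]).astype(int)
    out["DID"] = out["DCEO"] * out["DT"]
    return out
```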
Accordingly, the actual impact of an experienced CEO on TFP is as follows:

$$\lambda = E\left[\lambda_i \mid DCEO_i = 1\right] = E\left[\Delta TFP_{it}^{1} \mid DCEO_i = 1\right] - E\left[\Delta TFP_{it}^{0} \mid DCEO_i = 1\right]. \quad (1)$$

In equation (1), $E[\Delta TFP_{it}^{0} \mid DCEO_i = 1]$ is "counterfactual"; that is, observing the change in TFP of an experienced succession firm in the absence of hiring an experienced CEO is impossible. If the average TFP change of companies with nonexperienced succession during the observation period, $E[\Delta TFP_{it}^{0} \mid DCEO_i = 0]$, is directly used as an approximate substitute, then bias will be generated due to the characteristic differences between companies. To solve this problem, we use nearest neighbor matching to find the optimal control group (nonexperienced CEO succession firms) for the treated group (experienced CEO succession firms). The selection of matching variables is an important step in nearest neighbor matching. According to existing theory and the empirical literature, the following variables affecting firm TFP are selected as matching variables. The asset-liability ratio (LA) is measured by the ratio of total liabilities to total assets; a high debt ratio often leads to the CEO's replacement. Capital intensity (CIR) is measured as the ratio of fixed assets to the number of employees (in logarithmic form). Enterprise size (Size) is measured by the logarithm of enterprise sales. Enterprise age (Age), the survival time in the market, affects an enterprise's production experience, research and development ability, and personnel decisions. Corporate profit margin (Profit) is measured as operating profit divided by business sales. Ownership structure (SOE) indicates whether the firm is a state-owned enterprise. In addition, the TFP variable is added to ensure that no systematic difference in productivity exists between the treatment and control groups. Next, the logit method is used to estimate the following model:

$$p\left(DCEO_{it} = 1\right) = \phi\left(LA_{it-1}, CIR_{it-1}, Size_{it-1}, Age_{it-1}, Profit_{it-1}, SOE_{it-1}, TFP_{it-1}\right). \quad (2)$$

The predicted probability $\hat{p}$ is obtained after estimating equation (2). We use $\hat{p}_i$ and $\hat{p}_j$ to denote the propensity scores of the treatment group and the control group, respectively. The nearest neighbor matching rule is as follows:

$$\Theta_i = \min_{j} \left\| \hat{p}_i - \hat{p}_j \right\|, \quad j \in \left\{ DCEO = 0 \right\}, \quad (3)$$

where $\Theta_i$ represents the matching set of control enterprises corresponding to treated enterprise $i$; for each treated enterprise $i$, only a unique control enterprise $j$ falls into the set. After nearest neighbor matching, we obtain the set of matched control group enterprises $\Theta_i$, whose TFP variation $E[\Delta TFP_{it}^{0} \mid DCEO_i = 0, i \in \Theta_i]$ serves as a better substitute for $E[\Delta TFP_{it}^{0} \mid DCEO_i = 1]$. Therefore, equation (1) is transformed into the following equation:

$$\lambda = E\left[\lambda_i \mid DCEO_i = 1\right] = E\left[\Delta TFP_{it}^{1} \mid DCEO_i = 1\right] - E\left[\Delta TFP_{it}^{0} \mid DCEO_i = 0,\, i \in \Theta_i\right]. \quad (4)$$

Equation (4) is equivalent to the following empirical model:

$$TFP_{it} = \alpha_0 + \alpha_1 DCEO_i + \alpha_2 DT_t + \delta \left( DCEO_i \times DT_t \right) + \varepsilon_{it}. \quad (5)$$

In equation (5), $i$ and $t$ index the enterprise and year, respectively; the binary dummy $DCEO_i = 1$ indicates the experienced CEO succession enterprises (treated group), $DCEO_i = 0$ indicates the propensity-matched nonexperienced CEO succession enterprises (control group), and $\varepsilon_{it}$ is the random error. The estimated coefficient of $DCEO_i \times DT_t$ describes the impact of experienced CEO succession on firm TFP.
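Equations (2) and (3) translate into a short matching routine. The sketch below assumes the dummies and (lagged) covariates are already in the frame; it uses a plain logit and one-to-one nearest neighbor matching on the propensity score, which mirrors but does not reproduce the authors' implementation.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.neighbors import NearestNeighbors

covars = ["LA", "CIR", "Size", "Age", "Profit", "SOE", "TFP"]

def nearest_neighbor_match(df: pd.DataFrame) -> pd.DataFrame:
    X = sm.add_constant(df[covars])
    logit = sm.Logit(df["DCEO"], X).fit(disp=0)   # propensity model, eq. (2)
    scored = df.assign(pscore=logit.predict(X))
    treated = scored[scored["DCEO"] == 1]
    control = scored[scored["DCEO"] == 0]
    # For each treated firm, pick the control firm with the closest score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])   # eq. (3)
    return pd.concat([treated, control.iloc[idx.ravel()]])
```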
Specifically, in equation (5), for the enterprises in the treatment group, the expected TFP at $DT_t = 0$ is $E[TFP_{it} \mid DCEO_i = 1, DT_t = 0] = \alpha_0 + \alpha_1$, and at $DT_t = 1$ it is $E[TFP_{it} \mid DCEO_i = 1, DT_t = 1] = \alpha_0 + \alpha_1 + \alpha_2 + \delta$; that is, the TFP change of the treatment group across the two periods is

$$E\left[\Delta TFP_{it}^{1} \mid DCEO_i = 1\right] = E\left[TFP_{it} \mid DCEO_i = 1, DT_t = 1\right] - E\left[TFP_{it} \mid DCEO_i = 1, DT_t = 0\right] = \alpha_2 + \delta. \quad (6)$$

For the control group enterprises, the expected TFP when $DT_t = 0$ is $E[TFP_{it} \mid DCEO_i = 0, DT_t = 0, i \in \Theta_i] = \alpha_0$, and when $DT_t = 1$ it is $E[TFP_{it} \mid DCEO_i = 0, DT_t = 1, i \in \Theta_i] = \alpha_0 + \alpha_2$; that is, the TFP change of the control group across the two periods is

$$E\left[\Delta TFP_{it}^{0} \mid DCEO_i = 0,\, i \in \Theta_i\right] = E\left[TFP_{it} \mid DCEO_i = 0, DT_t = 1,\, i \in \Theta_i\right] - E\left[TFP_{it} \mid DCEO_i = 0, DT_t = 0,\, i \in \Theta_i\right] = \alpha_2. \quad (7)$$

Subtracting equation (7) from equation (6) gives

$$E\left[\Delta TFP_{it}^{1} \mid DCEO_i = 1\right] - E\left[\Delta TFP_{it}^{0} \mid DCEO_i = 0,\, i \in \Theta_i\right] = \delta. \quad (8)$$

Combining this with equation (4), we obtain $\delta = E[\lambda_i \mid DCEO = 1] = \lambda$. If $\delta > 0$, meaning that the TFP growth of enterprises in the treatment group exceeds that of the control group after the experienced CEO succession, then the experienced CEO improves the TFP of enterprises. For robustness, we add a set of control variables $X_{it}$ to equation (5), including LA, CIR, Size, Age, Profit, SOE, and enterprise nature (HiTech). In addition, we control for industry characteristics $v_s$ and regional characteristics $v_r$. For readability, we set $DID_{it} = DCEO_i \times DT_t$; its coefficient $\delta$ represents the impact of hiring an experienced CEO on the enterprise. Finally, the DID model used for estimation is

$$TFP_{it} = \alpha_0 + \alpha_1 DCEO_i + \alpha_2 DT_t + \delta\, DID_{it} + \beta X_{it} + v_s + v_r + \varepsilon_{it}. \quad (9)$$
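On the matched panel, equation (9) is an OLS regression of TFP on the dummies, controls, and fixed effects. A hedged sketch follows (variable names assumed; the paper's exact estimator and standard errors may differ):

```python
import statsmodels.formula.api as smf

did_model = smf.ols(
    "TFP ~ DCEO + DT + DID + LA + CIR + Size + Age + Profit + SOE"
    " + C(industry) + C(region)",
    data=matched,  # output of the matching step above
).fit()
print(did_model.params["DID"])  # delta, the PSM-DID effect of succession
```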
### 3.2. Data Description
The WIND and CSMAR databases provided the sample data for this investigation. Panel data from 2010 to 2019 were chosen as the sample, so as to avoid the impact of the global financial crisis of 2007–2009 and the COVID-19 epidemic in 2020. The "Database on Governance Structure of Chinese Listed Companies" in CSMAR contains basic information about the management personnel of Chinese listed companies, such as annual salaries, shareholdings, changes in shareholding structure, changes of chairman and general manager, and shareholder meetings. To keep the subsequent analysis accurate and credible, "ST" samples of the current year were deleted (the Shanghai and Shenzhen Stock Exchanges give special treatment to the stocks of listed companies with abnormal financial conditions or other irregularities). The industry classification standard of the CSRC (2012) was used to filter listed manufacturing companies, and a total of 1,675 listed manufacturing companies were finally studied in this research. The descriptive statistics of the main variables are presented in Table 1. Experienced succession occurred in a total of 118 companies, accounting for 8.45 percent of all observations. State-owned firms account for 35.52 percent of the sample, whereas high-tech enterprises account for 40.1 percent.

Table 1
Descriptive statistics of variables.
| Variable | Mean | SD | Min | Max | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| TFP | 15.970 | 0.962 | 11.260 | 20.56 | 0.467 | 3.678 |
| DCEO | 0.0845 | 0.278 | 0 | 1 | 2.987 | 9.922 |
| LA | 0.413 | 0.199 | 0.007 | 1.758 | 0.314 | 2.745 |
| CIR | 12.580 | 0.903 | 4.835 | 17.69 | −0.0327 | 4.209 |
| Size | 21.600 | 1.355 | 16.34 | 27.51 | 0.520 | 3.655 |
| Age | 17.560 | 5.563 | 2 | 64 | 1.190 | 8.843 |
| Profit | 0.065 | 0.501 | −35.480 | 8.062 | −57.300 | 4,021 |
| SOE | 0.355 | 0.479 | 0 | 1 | 0.605 | 1.366 |
| Hitech | 0.401 | 0.490 | 0 | 1 | 0.406 | 1.165 |
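A Table 1-style summary is straightforward to reproduce with pandas. In this sketch the frame `df` and its column names are assumptions; note that pandas reports excess kurtosis, so 3 is added to match the conventional definition that Table 1 appears to use.

```python
import pandas as pd

cols = ["TFP", "DCEO", "LA", "CIR", "Size", "Age", "Profit", "SOE", "Hitech"]
summary = pd.DataFrame({
    "Mean": df[cols].mean(),
    "SD": df[cols].std(),
    "Min": df[cols].min(),
    "Max": df[cols].max(),
    "Skewness": df[cols].skew(),
    "Kurtosis": df[cols].kurt() + 3,  # pandas kurt() is excess kurtosis
})
print(summary.round(3))
```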
## 4. Result Analysis
### 4.1. Baseline Regression
In this paper, we use the DID method to examine the effect of experienced CEO succession on TFP. Ideally, one would compare the TFP of the same or similar firms with and without experienced CEO succession; however, both states cannot be observed simultaneously in reality. To overcome systematic differences among the listed manufacturing firms we selected and to reduce the bias of the DID estimation, PSM is used to improve the efficiency of the traditional DID method [51]. To this end, we constructed a "counterfactual" control sample (the nonexperienced CEO succession group) for the treatment group (the experienced CEO succession group). To match experienced succession companies with nonexperienced succession firms, we employed the PSM approach. First, we estimated the propensity scores of the listed manufacturing enterprises in China using both logit and probit regressions. Then, using kernel matching, we chose nonexperienced succession companies whose individual characteristics were similar to those of the experienced succession companies. The variables affecting TFP (i.e., LA, CIR, Size, Age, Profit, and SOE) are selected as matching variables, and the TFP variable is added to ensure that no systematic difference in productivity exists between the treatment and control groups. Next, the logit method is used to estimate the model in equation (2). We compared the treatment groups year by year to obtain reliable matching results and then pooled the matched data for each year. As an example, Table 2 presents comparisons of the primary indicators between the treatment and control groups in 2010, before and after matching. For the unmatched sample, significant differences exist between experienced succession enterprises and nonsuccession enterprises in terms of the asset-liability ratio, capital intensity, enterprise size, profit margin, and other variables. Such differences diminish sharply after PSM matching, which reflects a good matching effect; Figure 1 illustrates the kernel density trends and score values before and after PSM matching.

Table 2
Table 2. Balance test of variables before and after PSM (U = unmatched, M = matched).
| Variable | | Treated | Control | %bias | % reduct. \|bias\| | t-test (t) |
|---|---|---|---|---|---|---|
| LA | U | 0.48251 | 0.41356 | 34.4 | 94.7 | 3.18 |
| LA | M | 0.48251 | 0.47885 | 1.8 | | 0.13 |
| CIR | U | 12.47 | 12.387 | 8.6 | 27.2 | 0.86 |
| CIR | M | 12.47 | 12.41 | 6.3 | | 0.45 |
| Size | U | 22.089 | 21.427 | 47.6 | 96.9 | 4.79 |
| Size | M | 22.089 | 22.069 | 1.5 | | 0.10 |
| Age | U | 14.97 | 14.114 | 16.7 | 76.6 | 1.68 |
| Age | M | 14.97 | 15.17 | −3.9 | | −0.28 |
| Profit | U | 0.05402 | 0.08915 | −8.9 | 32.2 | −0.74 |
| Profit | M | 0.05402 | 0.08525 | −6.1 | | −0.56 |
| SOE | U | 0.77 | 0.39344 | 82.4 | 94.7 | 7.14 |
| SOE | M | 0.77 | 0.75 | 4.4 | | 0.33 |
| TFP | U | 16.284 | 15.883 | 40.2 | 93.0 | 4.05 |
| TFP | M | 16.284 | 16.256 | 2.8 | | 0.19 |
Figure 1. Kernel density plots of the propensity scores for the treated and control groups before (left) and after (right) propensity score matching.

Table 3 reports the estimated impact of experienced CEO succession on firms’ TFP from the PSM-DID regressions. To make the results comparable and robust, we use two matching methods, nearest neighbor matching and radius matching. Columns (1) and (4) include only the control variables; columns (2) and (5) add the DID variable with time and industry fixed effects; columns (3) and (6) further control for region. The coefficients are consistent across models, which supports the robustness of our results. The estimated coefficient of DID is positive and significant at the 1% level, indicating that experienced CEO succession significantly improves firms’ TFP relative to nonexperienced-succession companies: on average, experienced CEO succession increases TFP by 3.1% compared with firms where it does not occur. All coefficients on the control variables are consistent with the usual expectations.
Table 3. Regression results: the impact of experienced CEO succession on TFP.
Columns (1)–(3) use nearest neighbor matching; columns (4)–(6) use radius matching.

| Variable | (1) | (2) | (3) | (4) | (5) | (6) |
|---|---|---|---|---|---|---|
| DID | | 0.031∗∗∗ (0.008) | 0.031∗∗∗ (0.008) | | 0.033∗∗∗ (0.008) | 0.033∗∗∗ (0.008) |
| LA | −0.137∗∗∗ (0.011) | −0.147∗∗∗ (0.011) | −0.150∗∗∗ (0.011) | −0.133∗∗∗ (0.011) | −0.143∗∗∗ (0.011) | −0.156∗∗∗ (0.011) |
| CIR | −0.079∗∗∗ (0.003) | −0.073∗∗∗ (0.003) | −0.075∗∗∗ (0.003) | −0.079∗∗∗ (0.002) | −0.076∗∗∗ (0.003) | −0.076∗∗∗ (0.003) |
| Size | 0.754∗∗∗ (0.002) | 0.771∗∗∗ (0.003) | 0.773∗∗∗ (0.003) | 0.750∗∗∗ (0.002) | 0.767∗∗∗ (0.003) | 0.771∗∗∗ (0.003) |
| Age | −0.002∗∗∗ (0.000) | −0.004∗∗∗ (0.000) | −0.004∗∗∗ (0.000) | −0.002∗∗∗ (0.000) | −0.004∗∗∗ (0.001) | −0.005∗∗∗ (0.001) |
| Profit | 0.106∗∗∗ (0.008) | 0.087∗∗∗ (0.008) | 0.086∗∗∗ (0.008) | 0.102∗∗∗ (0.007) | 0.088∗∗∗ (0.007) | 0.086∗∗∗ (0.007) |
| SOE | −0.037∗∗∗ (0.008) | −0.018 (0.010) | −0.024∗∗ (0.010) | −0.045∗∗∗ (0.008) | −0.018∗ (0.010) | −0.029∗∗∗ (0.010) |
| Constant | 0.776∗∗∗ (0.051) | 0.367∗∗∗ (0.061) | 0.443∗∗∗ (0.067) | 0.874∗∗∗ (0.051) | 0.506∗∗∗ (0.060) | 0.528∗∗∗ (0.066) |
| Time | No | Yes | Yes | No | Yes | Yes |
| Industry | No | Yes | Yes | No | Yes | Yes |
| Regional | No | No | Yes | No | No | Yes |
| Observations | 11,711 | 11,711 | 11,711 | 12,371 | 12,371 | 12,371 |
| χ2 | 142493∗∗∗ | 8189∗∗∗ | 4294∗∗∗ | 144494∗∗∗ | 8225∗∗∗ | 3945∗∗∗ |
| R-squared | 0.921 | 0.924 | 0.925 | 0.917 | 0.920 | 0.922 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01.
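To make the specification concrete, here is a minimal sketch of the column (3)/(6) regression, assuming the matched panel `matched` from the sketch above together with categorical `year`, `industry`, and `region` identifiers (illustrative names, not the authors’ exact variables).

```python
# Sketch of the PSM-DID baseline regression with time, industry, and
# regional fixed effects entered as categorical dummies.
import statsmodels.formula.api as smf

spec = (
    "TFP ~ DID + LA + CIR + Size + Age + Profit + SOE"
    " + C(year) + C(industry) + C(region)"
)
res = smf.ols(spec, data=matched).fit()
print(res.params["DID"], res.bse["DID"])  # DID coefficient and its SE
```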
### 4.2. Parallel Trend Test
In the DID method, the parallel trend assumption requires that the treatment and control groups follow a consistent trend prior to the succession of an experienced CEO. To further assess endogeneity arising from pre-existing discrepancies between the treatment and control groups, an event study approach was employed. The event study regression is specified in equation (10):

$$\mathrm{TFP}_{it} = \alpha_0 + \sum_{k=-2}^{5} \delta_k \cdot \mathrm{DCEO}_t \times \mathrm{DT}_t^k + \varepsilon_{it}, \quad (10)$$

where $k$ is the time difference from the year of experienced CEO succession: negative values of $k$ index years before succession, and positive values index years after it. Year 0 is the baseline period, and dynamic effects over the window $[-2, 5]$ were examined. Only the treated group was used as the sample, and equation (10) was applied for the regression, with the coefficients of experienced CEO succession measured relative to the baseline period. Figure 2 shows the result of the parallel trend test. All pretreatment coefficients are close to zero and statistically insignificant, suggesting that TFP evolved in parallel in the treated and control groups before succession; the dynamic trend therefore indicates that the DID design in this paper satisfies the parallel trend condition. In the one to three years following experienced CEO succession, the coefficients increase markedly, and the postsuccession coefficients are positive and mostly statistically significant, indicating that TFP rose after the hiring of an experienced CEO.
Figure 2. Event study analysis of experienced CEO succession.
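As an illustration, a sketch of the event-study construction under equation (10) follows, assuming `df` carries a hypothetical `event_year` column for treated firms (NaN for never-treated controls) and binning event time at the window endpoints; this is a generic implementation, not the authors’ code.

```python
# Event-study sketch for equation (10): event-time dummies over [-2, 5],
# with year 0 as the omitted baseline and endpoints binned.
import statsmodels.formula.api as smf

df["k"] = (df["year"] - df["event_year"]).clip(-2, 5)  # NaN for controls

names = []
for k in range(-2, 6):
    if k == 0:
        continue  # year 0 is the baseline period
    name = f"d_{k}".replace("-", "m")  # e.g. d_m2 for k = -2
    df[name] = ((df["k"] == k) & (df["DCEO"] == 1)).astype(int)
    names.append(name)

res = smf.ols(f"TFP ~ {' + '.join(names)} + C(year)", data=df).fit()
print(res.params[names])  # delta_k coefficients to plot as in Figure 2
```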
### 4.3. Placebo Test
Following Topalova [52], we performed a placebo test using fictitious succession times, rerunning the regression as if the experienced CEO succession had occurred one year and two years earlier than it actually did. If the coefficients from these fictitious timings were similar to those obtained with the actual pre- and postsuccession data, the baseline estimates would likely be skewed, because the results would then hold even in years when no experienced succession occurred. As Table 4 shows, the estimated coefficients under the shifted succession years are not significant, in contrast to the baseline regression, indicating that the findings in Table 3 are reliable.
Table 4. Coefficients of the placebo test.
Dependent variable: TFP.

| Variable | Nearest neighbor matching | Radius matching |
|---|---|---|
| d−2 | 0.008 (0.012) | 0.009 (0.012) |
| d−1 | 0.018 (0.012) | 0.020 (0.012) |
| d0 | 0.020∗ (0.011) | 0.020∗ (0.012) |
| d1 | 0.023∗ (0.012) | 0.024∗∗ (0.012) |
| d2 | 0.033∗∗∗ (0.013) | 0.035∗∗∗ (0.013) |
| d3 | 0.037∗∗∗ (0.014) | 0.038∗∗∗ (0.014) |
| d4 | 0.008 (0.016) | 0.010 (0.017) |
| d5 | 0.009 (0.018) | 0.010 (0.018) |
| Observations | 11,711 | 12,371 |
| χ2 | 5578∗∗∗ | 5603∗∗∗ |
| R-squared | 0.924 | 0.920 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01.
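A sketch of the placebo exercise is given below: the succession year is artificially moved one and two years earlier and the baseline model is re-estimated. `df` and the hypothetical `event_year` column are carried over from the sketches above; insignificant placebo DID coefficients support the baseline result.

```python
# Placebo sketch: shift the succession year earlier and re-run the model.
import statsmodels.formula.api as smf

def run_did(data):
    """Baseline Table 3-style specification (illustrative)."""
    return smf.ols(
        "TFP ~ DID + LA + CIR + Size + Age + Profit + SOE"
        " + C(year) + C(industry)",
        data=data,
    ).fit()

for shift in (1, 2):
    placebo = df.copy()
    # Fictitious treatment: firm-years become "post" `shift` years early.
    placebo["DID"] = (
        (placebo["DCEO"] == 1)
        & (placebo["year"] >= placebo["event_year"] - shift)
    ).astype(int)
    res = run_did(placebo)
    print(f"shift={shift}: DID={res.params['DID']:.3f} (SE {res.bse['DID']:.3f})")
```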
### 4.4. Heterogeneity Analysis
Table 5 reports heterogeneity tests across firm types. Based on firms’ self-reported announcements of high-tech enterprise recognition, we identified the enterprises that obtained the high- and new-technology enterprise qualification and recorded the recognition dates. On this basis, the companies that experienced CEO succession are divided into high-tech and nonhigh-tech enterprises, and Table 5 examines the heterogeneous impact of experienced CEO succession across the two types. Column (1) reports the effect for high-tech enterprises, where the coefficient of experienced CEO succession is 0.045; column (2) covers nonhigh-tech enterprises, where the coefficient is 0.032. The regressions show that experienced CEO succession significantly promotes TFP in both groups, but the effect is larger in high-tech enterprises. This may be because high-tech enterprises pay more attention to innovation, and the arrival of a new CEO is more conducive to carrying out innovative activities; in a high-tech industry marked by high dynamism and instability, the experience of a prior CEO may be more valuable.
Table 5. Heterogeneity results for high-tech and nonhigh-tech enterprises.
Columns (1)–(2) use nearest neighbor matching; columns (3)–(4) use radius matching.

| Variable | (1) High-tech | (2) Nonhigh-tech | (3) High-tech | (4) Nonhigh-tech |
|---|---|---|---|---|
| dceodt | 0.045∗∗∗ (0.013) | 0.032∗∗∗ (0.010) | 0.046∗∗∗ (0.013) | 0.031∗∗∗ (0.010) |
| LOAR | −0.130∗∗∗ (0.018) | −0.176∗∗∗ (0.015) | −0.133∗∗∗ (0.018) | −0.176∗∗∗ (0.015) |
| CI | −0.068∗∗∗ (0.004) | −0.077∗∗∗ (0.004) | −0.061∗∗∗ (0.004) | −0.084∗∗∗ (0.003) |
| Size | 0.796∗∗∗ (0.005) | 0.766∗∗∗ (0.004) | 0.788∗∗∗ (0.005) | 0.768∗∗∗ (0.004) |
| Age | −0.007∗∗∗ (0.002) | −0.002∗∗∗ (0.001) | −0.007∗∗∗ (0.002) | −0.002∗∗ (0.001) |
| Profit | 0.074∗∗∗ (0.011) | 0.084∗∗∗ (0.010) | 0.073∗∗∗ (0.010) | 0.086∗∗∗ (0.009) |
| SOE | −0.002 (0.019) | −0.042∗∗∗ (0.012) | −0.015 (0.020) | −0.042∗∗∗ (0.012) |
| Constant | −0.292∗∗ (0.121) | 0.629∗∗∗ (0.088) | −0.209∗ (0.121) | 0.633∗∗∗ (0.087) |
| Observations | 4,684 | 7,027 | 4,972 | 7,995 |
| χ2 | 2339∗∗∗ | 2274∗∗∗ | 1869∗∗∗ | 2987∗∗∗ |
| R-squared | 0.930 | 0.918 | 0.925 | 0.925 |

Note: standard errors in parentheses; ∗p < 0.1, ∗∗p < 0.05, ∗∗∗p < 0.01.
## 5. Conclusion and Discussion
In this study, we examine the relationship between experienced CEO candidates in CEO succession and the TFP of the focal firms, because the established studies fail to reach a consensus on how experienced candidates affect the focal firm’s performance at succession. To obtain a robust conclusion, we adopt the PSM-DID model in the context of listed firms in an emerging market, providing micro-level empirical evidence on the contribution of experienced CEO candidates in the CEO succession of listed Chinese manufacturing companies. Our findings show that hiring an experienced CEO is an important channel for improving firm performance: on average, experienced CEO succession yields a 3.1% increase in TFP compared with nonexperienced-succession firms, and this positive effect persists for roughly three years after succession, consistent with the dynamic effects in Figure 2 and Table 4. The subsequent decline in performance after the experienced CEO takes up the new position is partly due to the changing environment with new demands, and partly because the CEO becomes more ambitious after the initial positive performance, which shifts the experienced CEO’s strategic goals. Our results provide new insights into the literature on experienced CEO succession and firm performance in general.

To probe the robustness of the effect of experienced CEOs in CEO succession on the focal firm’s TFP, we classified the sample and tested the heterogeneity of the effect using the PSM-DID procedure. Compared with state-owned listed companies, experienced CEO succession in private companies more effectively promotes TFP, and the effect on the TFP of high-tech enterprises is higher than that on nonhigh-tech enterprises. Private and high-tech enterprises face a more dynamic environment, which makes experienced CEOs less likely to harm the focal firm’s performance. Working in more challenging and complex environments, where simplified prescriptions and past personal experience are of limited use, a CEO must upgrade their skills and acquire new knowledge to adapt. Highly dynamic environments effectively prevent a CEO from following decision-making shortcuts, such as routines formed outside the boundary of the focal firm; as a consequence, an experienced CEO has to explore new business practices and mindsets to solve emerging challenges and issues.

No experience trap is found in the Chinese context. Previous research proposes contradictory perspectives on experienced CEOs in CEO succession: some studies hold that experienced managers can become unknowingly “trapped” in their past ways of success, such as inertia, and therefore fail to adapt to environmental changes, ultimately leading to unsatisfactory performance. In our data, however, succession by experienced CEOs in listed companies significantly promotes the TFP of the focal firms and sustains this growth in the following years; that is, the experience trap is not observed in this paper.
CEOs represent highly valuable human capital, embodying an individual’s knowledge, skills, experience, and capabilities accumulated over professions and careers. Managers’ skills can be classified into generic skills, firm-specific skills, and industry-specific skills. Firm-specific human capital is useful only to the firm that produced it and is not transferable, whereas industry-specific human capital can be transferred within an industry but has little transferability across industries. Generic, or general, skills are those that can be transferred across organizations and industries. In the Chinese market, because of the immature capital market environment, CEOs are busy dealing with institutional investors, government departments, and shareholders; hence, generic skills become more important. The portability of generic skills allows experienced CEOs to survive and perform better in the CEO succession of focal firms. After succession, experienced CEOs are outsiders to their new organizations, and their early strategic changes are likely to be adaptive because they bring new ideas and are less influenced by the status quo, which in turn tends to benefit organizational performance.

We acknowledge possible geographical and cultural biases in this research stream; however, these biases go beyond the research question addressed in this study, and we encourage future research to encompass a broader range of subjects from various geographies, ethnicities/races, and cultures [53].
---
*Source: 1014224-2022-10-22.xml* | 2022 |
# Contextualizing Child Malaria Diagnosis and Treatment Practices at an Outpatient Clinic in Southwest Nigeria: A Qualitative Study
**Authors:** Juliet Iwelunmor; Collins O. Airhihenbuwa; Gary King; Ayoade Adedokun
**Journal:** ISRN Infectious Diseases
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.5402/2013/101423
---
## Abstract
Background. This study sought to explore contextual features of an outpatient clinic located in southwest Nigeria that enable and/or discourage effective diagnosis and treatment of child malaria. Methods. We conducted in-depth interviews with mothers of 135 febrile children attending a pediatric outpatient clinic in southwest Nigeria. Also, participant observations and informal discussions with physicians were conducted to examine the potential impact of context on effective child malaria diagnosis and treatment. Results. The findings indicate that availability of drugs and laboratory testing for malaria, affordability of antimalarial drugs, access to the clinic (particularly access to pediatricians), adequacy of the outpatient clinic, and acceptability of services provided at the clinic are key contextual factors that influence effective case management of malaria in children. Conclusion. If the Millennium Development Goal 6 of reversing malaria incidence by 2015 particularly among children is to be achieved, it is necessary to identify the contextual factors that may act as potential barriers to effective diagnosis and treatment practices at clinical settings. Understanding the context in which case management of child malaria occurs can provide insights into the factors that influence mis- and over-diagnosis of malaria in clinical settings.
---
## Body
## 1. Background
One of the central goals of malaria control programs is to provide effective diagnosis and treatment of malaria, particularly in children less than five years of age. To this end, efforts have been made to encourage caretakers of febrile children to seek prompt diagnosis and treatment at health care settings within 24 hours of illness onset. Despite these efforts, however, malaria mis- and over-diagnosis have increased dramatically over the past several years [1–4]. Although the World Health Organization [5] currently recommends “prompt parasitological confirmation by microscopy or alternatively by rapid diagnostic tests (RDTs) for all patients with suspected malaria before treatment is started,” in settings where these tools are available, few qualitative lines of evidence exist about the contextual features of health care clinics that influence effective diagnosis and treatment of malaria. Contextual factors are those features of the health care system which enable and/or discourage effective case management of child malaria [6]. These factors have important implications for reducing the morbidity and mortality from malaria in children. Ignoring the context in which child malaria diagnosis and treatment practices occur may impede renewed optimism towards improved malaria control, and possibly elimination, in many endemic countries.

The importance attributed to contextual factors is also underscored by empirical evidence indicating a need to go beyond presumptive (or clinical) diagnosis of malaria in children [7–9]. Presumptive diagnosis based on clinical signs and symptoms has been the primary means of diagnosing and treating malaria in many malaria-endemic countries [10–12]. It refers to how the disease is understood, and the course of treatment chosen, in the absence of laboratory confirmation by blood analysis. Indeed, in many malaria-endemic countries, malaria microscopy, which remains the gold standard for laboratory diagnosis, is inaccessible to patients because of poor laboratory infrastructure and a shortage of the technical expertise it requires [11]. Even in settings where microscopy is available, referrals for laboratory tests rarely happen, in part because aspects of such a test (i.e., drawing blood from patients) may also have to be performed by a physician (as was the case at this study site). Further, when referrals do happen, they are time consuming, and physicians often mistrust the laboratory results and continue to treat those who test negative with antimalarials [13]. For these reasons, knowledge of the contextual factors that influence effective diagnosis and treatment of malaria in children is important for efforts aimed at halting and reversing the incidence of malaria in endemic countries.

In Nigeria, malaria follows a hyperendemic pattern, with peak transmission occurring during the rainy season (June-July). Nigeria offers a unique opportunity to study the contextual factors that influence effective case management of child malaria for several reasons. First, a quarter of all malaria cases in the World Health Organization African Region occur in Nigeria [5]. Also, while evidence of a systematic decline in malaria cases has been reported in other parts of Africa, malaria remains a persistent problem in Nigeria [5].
According to the National Malaria Control Program in Nigeria [14], malaria is by far one of the most important public health problems, representing about 60% of outpatient visits to health facilities, 30% of childhood deaths, and 25% of deaths in children under one year. Given the burden of malaria in Nigeria, it is possible that contextual features of health care clinics act in various ways to enable and/or discourage effective case management of malaria in children. At a time of changes in the burden of malaria, with compelling evidence of a dramatic decline in malaria transmission in other parts of sub-Saharan Africa [15], it is important to examine the role context plays in influencing effective diagnosis and treatment of malaria. In this paper, we apply the health access livelihood framework to contextualize child malaria diagnosis and treatment practices at an outpatient clinic in southwest Nigeria.

Theoretical Framework: The Health Access Livelihood Framework. The Health Access Livelihood Framework was developed in response to the need to address access to prompt and effective malaria treatment in rural Tanzania [16, 17]. It is designed to better align health care resources with people’s needs, perceptions, and expectations [17]. It combines issues related to health seeking (why, how, and when individuals seek help for illness) with factors that influence access to health care services (availability, accessibility, etc.) to situate the broader context in which effective case management of illness occurs [17]. It consists of five dimensions: availability, affordability, accessibility, adequacy, and acceptability [17]. Availability addresses the types of services that exist within a health care setting and whether these services correspond with people’s needs and expectations; affordability refers to the costs of the services provided (both direct and indirect), such as consultation fees as well as transportation costs and lost time from work [17]. Accessibility is concerned with the geographic distance between services and the homes of the intended users [17]. Adequacy examines whether the organization of the health care setting meets patients’ expectations, and acceptability highlights whether or not the information, explanations, and treatment protocols provided take patients’ expectations or perceptions into account [17]. Although this framework has been used to examine the factors that influence access to malaria treatment in a rural setting with limited resources, few qualitative attempts have been made to explore how such services align with caretakers’ perceptions and expectations in urban settings with access to diagnostic tools such as microscopy or malaria rapid diagnostic tests. This framework was used in this study to contextualize how services provided at an outpatient clinic in southwest Nigeria align with caretakers’ perceptions and expectations of effective diagnosis and treatment of malaria in children.
## 2. Methods
### 2.1. Setting
This study was conducted in Lagos, one of the largest urban metropolises in the southwest region of Nigeria. With an estimated population of 12 million people [18], Lagos is also a state and one of the most populous states in Nigeria, with a sociocultural rainbow of people from diverse indigenous backgrounds. It is located within the rainforest region of southwest Nigeria and has two climatic seasons: the dry season, which lasts from November to March, and the wet (or rainy) season, which lasts from April to October, with the highest rainfall from May through July. Malaria transmission in Lagos is intense, particularly during the rainy season. This study took place in the pediatric section of an outpatient clinic located in Ikeja, the capital of Lagos. The researchers visited the clinic three times a week on average during the rainy season of June and July 2010 to explore the mechanisms that guide child malaria diagnosis and treatment decisions at the clinic.
### 2.2. Study Design and Participants
In-depth interviews, participant observations, informal discussions, and fieldnotes were used to collect data from a purposive sample of mothers with febrile children attending the outpatient clinic and the physicians providing care. Mothers were sensitized to the study in the outpatient waiting room prior to its commencement, and those who provided verbal and written consent were recruited to participate. A total of 135 mothers with febrile children participated in this study. The mothers ranged in age from 20 to 65, and the children from 3 months to 12 years. The majority of the mothers belonged to the Yoruba (59.1%) and Igbo (28.1%) ethnic groups in Nigeria. Ethics approval for this study was granted by Penn State and the Lagos State University Teaching Hospital.

Verbal consent was also obtained from each physician observed prior to the commencement of the study. Data collection took place in the consultation rooms at the outpatient clinic after routine consultations of mothers with febrile children by physicians. In-depth interviews with mothers focused on perceptions and treatment-seeking practices for the child’s febrile illness prior to clinic attendance. Specifically, mothers were asked to describe how the illness began, what caused it, and whether it was severe. They also described their reasons for bringing the child to the clinic, as well as their expectations of the services provided. Participant observation focused on interactions between the physicians and mothers: using a checklist, the symptoms described by mothers and the diagnoses made by physicians were recorded, along with the clinical logic for malaria or nonmalaria diagnoses and physicians’ treatment decisions. Informal discussions with physicians explored their criteria and decision logic for diagnosing malaria in children, their treatment choices, and the potential for malaria mis- and overdiagnosis at this clinical setting.
### 2.3. Data Analysis
Transcripts of the in-depth interviews, as well as the checklists of participant observations, informal discussions, and fieldnotes, were analyzed using the content analysis techniques described by Morse and Field [19]. Using the Health Access Livelihood Framework as a guide, responses from the in-depth interviews and informal discussions, together with the participant observation checklists and field notes, were organized and categorized into expectations about effective diagnosis of child malaria and the resources that enhance or create barriers to effectively managing child malaria at this outpatient clinic. An audit trail of the researchers’ decisions and insights was also kept. Credibility of the data was maintained through triangulation of the multiple sources of data. The data were read in their entirety several times and repeatedly examined to obtain a general sense of the information gathered and to categorize the material until saturation was reached, that is, until no new themes emerged.
## 3. Results
As stated earlier, contextual factors are those features of the health care system which either promote or lessen the ability to effectively manage child malaria. In this study, these factors include the availability of drugs and laboratory testing for malaria, affordability of antimalarial drugs, access to the clinic (particularly access to pediatricians), adequacy of the clinic, and acceptability of services provided at the clinic.
### 3.1. Availability of Drugs and Laboratory Testing
This outpatient clinic is known to provide free antimalarial medication for all children less than 5 years of age diagnosed with malaria. Unfortunately, during the course of this study, access to free antimalarial drugs was problematic, as the drugs were not always available at the dispensary. Most mothers remarked that the lack of free antimalarial drugs at the dispensary was a hindrance to the effective treatment of malaria diagnosed in their children. One mother stated the following.

“I came to this clinic because I thought that they give free antimalarial drugs that were of good quality, but they do not have any and I am not sure if I would trust the ones that they have at the market.”

In addition, although laboratory testing with microscopy is provided at no cost to children attending this clinic, referrals are rare. While some mothers were of the opinion that “it is better to run tests to know the exact problem causing the child’s illness,” physicians did not recommend testing because of time constraints, the absence of personnel to perform laboratory tasks, and delays in receiving laboratory results. In this setting, giving antimalarial treatment to all children with febrile illness was deemed necessary by physicians, particularly as malaria transmission is hyperendemic in this region of Nigeria. One physician stated the following.

“If I referred a patient for microscopy, it will take at least 2 days before results are available; by then malaria may have worsened, so it is better to treat immediately due to the volume of patients we see in any given day. Moreover, the microscopy laboratories are small, understaffed, and overworked and they lack the equipment to handle the sheer volume of tests needed by patients.”
### 3.2. Affordability of Antimalarial Drugs
One of the key elements of malaria control in Sub-Saharan Africa is prompt treatment with effective antimalarial drugs. Although major efforts are underway to strengthen and promote appropriate utilization of effective antimalarial drugs, barriers imposed by the cost of the new and expensive artemisinin combination therapies may constrain malaria control efforts in multiple ways. For example, findings from the in-depth interviews indicate that the affordability of antimalarial drugs can delay prompt treatment of child malaria, as evidenced in the following comment.

“I cannot afford to buy meds that the doctor just prescribed because of the cost. I do not have any job or money to buy it now for my child.”

It was also not uncommon for some mothers to buy chloroquine (despite known parasite resistance to the drug in this setting) because it was cheap and affordable compared with the new and expensive artemisinin combination therapies currently on the market. On improving the affordability of antimalarial drugs, one mother stated that “these drugs need to be provided at subsidized price at this clinic so that even poor people can afford to buy them.”
### 3.3. Access to Clinic and Pediatricians
When mothers were asked to describe the length of time it took to travel from their homes to the clinic, 41.5% stated that it took less than 30 minutes, 43.3% stated that it took between 1 and 2 hours depending on the traffic, and 14.8% stated that it took over 3 hours. Some of the mothers said they brought their child to this clinic because it is known to provide “free services to everyone.” Access to free clinical services was considered important, particularly as it addressed the health needs of the poorest, who are often deterred from seeking care at most clinics. Some mothers stated that access to the clinic also guaranteed they would receive the “best decision and treatment” for their child’s illness. Another feature of the clinic that mattered to the mothers interviewed was “easy access to pediatricians.” Indeed, owing to this ease of access, it was not uncommon for some mothers to bring their children to the clinic within 24 hours of illness onset. Easy access to pediatricians also played a significant role in many mothers’ decisions to travel long distances and, in some cases, to wait 2-3 hours before being seen by the physician, with little or no complaint. Our findings revealed that what is often viewed as a healthcare barrier or constraint in some settings (i.e., long travel distances or long waiting times), although important, was insignificant when considered alongside other defining characteristics such as easy access to pediatricians at the clinic.
### 3.4. Adequacy of Outpatient Clinic
The outpatient clinic caters to the needs of people residing in the surrounding areas as well as people from throughout the country. Although the hours of operation are from 9 a.m. to 3 p.m., most caretakers and their children arrive as early as 6 a.m. to ensure that they are seen as soon as the clinic opens. No prerequisites (such as formal referral letters) are needed to access the clinic’s services; as a result, the clinic is readily accessible to patients from all social classes with varied health problems. The caretakers and children who arrive as early as 6 a.m. begin the task of waiting in an area outside the hospital designated as the outside waiting room. The physicians often arrive a little after 8 a.m., and clinic sessions begin by 9 a.m. The first points of contact for the caretakers and their children are the nurses, matrons, and orderlies at the clinic, all of whom are women. These women are in charge of ushering the patients from the outside waiting room to the waiting room inside the clinic. The inside waiting area is located outside the physician’s office, and patients sit in the order of their arrival time or the nurse’s arrangement. When it is their turn, they go into the room and begin to narrate their child’s illness. Although there are five rooms inside the clinic, only two are designated for general outpatient consultations, so on many occasions, due to lack of space, two physicians share a consultation room, seeing two separate patients at a time. In the context of providing treatment for malaria, as mentioned earlier, the clinic provides free services for children, including free antimalarial drugs for those diagnosed with malaria and free additional testing with microscopy when necessary. Physicians rarely referred patients for microscopy, as they had large queues of patients waiting for consultations; their priority was to ensure that all waiting patients were seen by a physician during the scheduled hours of operation. Also, unlike in other countries where nurses perform blood work, this task is performed by physicians in this clinic. Physicians therefore often have to weigh the time spent drawing blood from one patient (time in which they could see 3 to 4 patients) against basing the diagnosis on observation and the mother’s explanation and history of the illness. One physician aptly stated the following.

“Malaria tests with microscopy are cumbersome and in the long-run malaria rapid diagnostic tests are not available indefinitely in this clinic. Patients are many and yes when they come with a temperature, before we think of anything, we have to think of malaria.”
### 3.5. Acceptability of Health Care Services
With regard to the acceptability of the health care services provided at the clinic, mothers were asked whether they were satisfied with their consultations (in relation to the quality of services provided by physicians and other healthcare workers), what they liked best (comments from satisfied mothers), and what the clinic could do to provide better services (comments from dissatisfied mothers). More than half of the mothers interviewed (52.6%) stated that the quality of care provided at the clinic was excellent, 36.1% stated that the services were good, and 10.5% noted that the care was average. Among mothers who were satisfied with the care they received, some were of the opinion that the physicians at the clinic were “helpful” and “attentive to their needs.” Dissatisfied mothers cited the “lack of additional tests” prior to prescribing medications and the “absence of free antimalarial drugs” as key potential barriers to adequate case management of child malaria at this clinic.
## 4. Conclusion
The aim of this study was to illustrate the ways in which contextual factors of an outpatient clinic in southwest Nigeria influence effective diagnosis and treatment of malaria. As child malaria diagnosis remains a major challenge in many endemic countries, the findings indicate that malaria control strategies should take contextual factors into account, as they are critical to the effective case management of malaria in children attending health clinics. This is crucial because the success of malaria control strategies depends not only on the development of effective drugs or vaccines or improved vector control, but also on knowledge of the aspects of the context that promote and/or act as barriers to effective diagnosis and treatment. The findings of this study suggest that the availability of antimalarial drugs and laboratory testing services, the affordability of services, access to the clinic and physicians, the adequacy of clinics, and the acceptability of services are important in addressing access to effective case management of malaria in children attending an outpatient clinic.

Contextual features of health care clinics are also important, particularly with the recent advent of malaria rapid diagnostic tests (RDTs) in malaria endemic settings [9–11]. Although malaria RDTs could also be useful for effective diagnosis of child malaria, contextual factors such as the availability, affordability, accessibility, adequacy, and acceptability of RDTs may also constrain physicians’ practice and impoverish their professional judgment. For example, in rural Burkina Faso, Bisoffi and colleagues [20] found that as many as 85% of RDT-negative patients were prescribed antimalarials despite the knowledge that a negative RDT result excludes presumptive treatment of malaria. Also, in Zambia, Hamer and colleagues [21] noted that when rapid diagnostic tests were performed and reported as negative, 35% of patients were still prescribed an antimalarial. Simply put, promising advances in malaria rapid diagnostic tests might be futile if the same vigor is not applied to understanding the contexts in which human behaviors occur [22]. Moreover, as noted by Chandler and colleagues [23], “changing ingrained clinical behaviors (i.e., presumptive diagnosis) may be difficult” if attention is not equally given to the role contextual factors play.

Some potential limitations of this study must be duly acknowledged. There is always the possibility that the physicians may have altered their diagnostic and prescribing behavior to err on the side of diagnosing malaria due to the presence of the research study (i.e., the Hawthorne effect). To minimize this effect, efforts were made not to interfere with consultations, allowing physicians to diagnose and prescribe child malaria treatment according to their routine. The findings of this study may also be limited by selection bias, since we did not compare participants recruited at this outpatient clinic with those who sought care at other clinics. One caution about our population is that it is plausible, for example, for mothers with febrile children in search of answers to their child’s illness to amplify the severity and persistence of the signs and symptoms observed in their children in the hope of receiving additional testing so as to accurately pinpoint the cause of illness. Future studies with mothers of febrile children outside clinical settings are necessary to determine whether this process occurs.
Also, the generalizability of our findings is limited, since this outpatient clinic may not be representative of other outpatient clinics in malaria-endemic countries. Further, the constraint of space in which this study was conducted may have contributed to bias in reporting some of the findings. However, the space constraint also enabled observations of mundane actions or events to be recorded, particularly with regard to differences between physicians’ diagnostic decisions and mothers’ interpretations of their child’s illness.

Study findings have implications for improving effective diagnosis and treatment of child malaria in malaria-endemic countries. If the Millennium Development Goal 6 of reversing malaria incidence by 2015, particularly among children, is to be achieved, it is evidently time to examine the contextual factors that are essential for effective diagnosis and treatment of malaria among children in clinical settings. The results presented in this paper are timely given the increased interest in the factors that influence mis- and over-diagnosis of malaria in clinical settings. However, more research is necessary to assess whether these findings remain valid in different clinical settings (i.e., rural clinics, and private versus government-owned clinics) and with different participants (i.e., mothers in community settings).
---
*Source: 101423-2013-09-24.xml* | 101423-2013-09-24_101423-2013-09-24.md | 38,897 | Contextualizing Child Malaria Diagnosis and Treatment Practices at an Outpatient Clinic in Southwest Nigeria: A Qualitative Study | Juliet Iwelunmor; Collins O. Airhihenbuwa; Gary King; Ayoade Adedokun | ISRN Infectious Diseases
(2013) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.5402/2013/101423 | 101423-2013-09-24.xml | ---
## Abstract
Background. This study sought to explore contextual features of an outpatient clinic located in southwest Nigeria that enable and/or discourage effective diagnosis and treatment of child malaria. Methods. We conducted in-depth interviews with mothers of 135 febrile children attending a pediatric outpatient clinic in southwest Nigeria. Also, participant observations and informal discussions with physicians were conducted to examine the potential impact of context on effective child malaria diagnosis and treatment. Results. The findings indicate that availability of drugs and laboratory testing for malaria, affordability of antimalarial drugs, access to the clinic (particularly access to pediatricians), adequacy of the outpatient clinic, and acceptability of services provided at the clinic are key contextual factors that influence effective case management of malaria in children. Conclusion. If the Millennium Development Goal 6 of reversing malaria incidence by 2015, particularly among children, is to be achieved, it is necessary to identify the contextual factors that may act as potential barriers to effective diagnosis and treatment practices in clinical settings. Understanding the context in which case management of child malaria occurs can provide insights into the factors that influence mis- and over-diagnosis of malaria in clinical settings.
---
## Body
## 1. Background
One of the central goals of malaria control programs is to provide effective diagnosis and treatment of malaria, particularly in children less than five years of age. To this end, efforts have been made to encourage caretakers of febrile children to seek prompt diagnosis and treatment at health care settings within 24 hours of illness onset. However, despite these efforts, malaria mis- and over-diagnosis have increased dramatically over the past several years [1–4]. Although the World Health Organization [5] currently recommends “prompt parasitological confirmation by microscopy or alternatively by rapid diagnostic tests (RDTs) for all patients with suspected malaria before treatment is started,” in settings where these tools are available, little qualitative evidence exists about the contextual features of health care clinics that influence effective diagnosis and treatment of malaria. Contextual factors are those features of the health care system which enable and/or discourage effective case management of child malaria [6]. These factors have important implications for reducing the morbidity and mortality from malaria in children. Ignoring the context in which child malaria diagnosis and treatment practices occur may impede renewed optimism towards improved malaria control and possibly elimination in many endemic countries.

The importance attributed to contextual factors is also underscored by empirical evidence indicating a need to go beyond presumptive (or clinical) diagnosis of malaria in children [7–9]. Presumptive diagnosis based on clinical signs and symptoms has been the primary means of diagnosing and treating malaria in many malaria endemic countries [10–12]. It refers to how the disease is understood, and the course of treatment decided, in the absence of laboratory confirmation by blood analysis. Indeed, in many malaria endemic countries, microscopy, which remains the gold standard for laboratory diagnosis, is inaccessible to patients because of poor laboratory infrastructure and a shortage of the technical expertise it requires [11]. Even in settings where microscopy is available, referrals for laboratory testing rarely happen because, in many instances, aspects of such a test (i.e., drawing blood from patients) may also have to be performed by a physician (as was the case at this study site). Further, when referrals do happen, they are time consuming, and physicians often mistrust the laboratory results and continue to treat those who test negative with antimalarials [13]. For these reasons, knowledge of the contextual factors that influence effective diagnosis and treatment of malaria in children is important for efforts aimed at halting and reversing the incidence of malaria in endemic countries.

In Nigeria, malaria follows a hyperendemic pattern, with peak transmission occurring during the rainy season (June-July). Nigeria offers a unique opportunity to study the contextual factors that influence effective case management of child malaria for several reasons. First, a quarter of all malaria cases in the World Health Organization African region occur in Nigeria [5]. Also, while evidence of a systematic decline in malaria cases has been reported in other parts of Africa, malaria remains a persistent problem in Nigeria [5].
According to the National Malaria Control Program in Nigeria [14], malaria is by far one of the most important public health problems, representing about 60% of outpatient visits to health facilities, 30% of childhood deaths, and 25% of deaths in children under one year. Given the burden of malaria in Nigeria, it is possible that contextual features of health care clinics act in various ways to enable and/or discourage effective case management of malaria in children. At a time of changes in the burden of malaria, with compelling evidence of a dramatic decline in malaria transmission in other parts of sub-Saharan Africa [15], it is important to examine the role context plays in influencing effective diagnosis and treatment of malaria. In this paper, we apply the Health Access Livelihood Framework to contextualize child malaria diagnosis and treatment practices at an outpatient clinic in southwest Nigeria.

Theoretical Framework: The Health Access Livelihood Framework. The Health Access Livelihood Framework was developed in response to the need to address access to prompt and effective malaria treatment in rural Tanzania [16, 17]. It is designed to better align health care resources with people’s needs, perceptions, and expectations [17]. It combines issues related to health seeking (why, how, and when individuals seek help for illness) with factors that influence access to health care services (availability, accessibility, etc.) to situate the broader context in which effective case management of illness occurs [17]. It consists of five dimensions: availability, affordability, accessibility, adequacy, and acceptability [17]. While availability addresses issues related to the types of services that exist within a health care setting and whether these services correspond with people’s needs and expectations, affordability refers to the costs of the services provided (both direct and indirect), such as the costs of consultations as well as transportation costs and lost time from work [17]. Accessibility is concerned with the geographic distance between services and the homes of the intended users [17]. Adequacy examines whether the organization of the health care setting meets patients’ expectations, and acceptability highlights whether or not the information, explanations, and treatment protocols provided take patients’ expectations or perceptions into account [17]. Although this framework has been used to examine the factors that influence access to malaria treatment in a rural setting with limited resources, few qualitative attempts have been made to explore how services in urban settings with access to diagnostic tools such as microscopy or malaria rapid diagnostic tests align with caretakers’ perceptions and expectations. The framework was used in this study to contextualize how services provided at an outpatient clinic in southwest Nigeria align with caretakers’ perceptions and expectations of effective diagnosis and treatment of malaria in children.
## 2. Methods
### 2.1. Setting
This study was conducted in Lagos, one of the largest urban metropolises in the southwest region of Nigeria. With an estimated population of 12 million people [18], Lagos is also a state and is one of the most populous states in Nigeria, with a sociocultural rainbow of people from diverse indigenous backgrounds. It is located within the rainforest region of southwest Nigeria, and there are two climatic seasons in Lagos: the dry season and the wet season. The dry season lasts from November to March, while the wet (or rainy) season lasts from April to October, with the highest rainfall occurring from May through July. Malaria transmission in Lagos is intense, particularly during the rainy season. This study took place in the pediatric section of an outpatient clinic located in Ikeja, the capital of Lagos. The researchers conducted this study at the clinic three times a week on average during the rainy season of June and July 2010, to explore the mechanisms that guide child malaria diagnosis and treatment decisions at the clinic.
### 2.2. Study Design and Participants
In-depth interviews, participant observations, informal discussions, and field notes were used to collect data with a purposive sample of mothers with febrile children attending the outpatient clinic and the physicians providing care. Mothers were sensitized to the study in the outpatient waiting room prior to its commencement, and those who provided verbal and written consent were recruited to participate. A total of 135 mothers with febrile children participated in this study. The age range of the mothers was 20–65, while the children ranged in age from 3 months to 12 years. The majority of the mothers belonged to the Yoruba (59.1%) and Igbo (28.1%) ethnic groups in Nigeria. Ethics approval for this study was granted by Penn State and the Lagos State University Teaching Hospital.

Verbal consent was also obtained from each physician observed prior to the commencement of the study. Data collection took place in the consultation rooms at the outpatient clinic after routine consultations of mothers with febrile children by physicians. In-depth interviews with mothers focused on perceptions of, and treatment seeking practices for, the child’s febrile illness prior to clinic attendance. Specifically, mothers were asked to describe how the illness began, what caused the illness, and whether it was severe. They also described their reasons for bringing the child to the clinic, as well as their expectations of the services provided at the clinic. Participant observation focused on interactions between the physicians and mothers. Specifically, using a checklist, the symptoms described by mothers as well as the diagnoses made by physicians were recorded. The clinical logic for malaria or nonmalaria diagnoses and the treatment decisions made by physicians were also recorded. Informal discussions with physicians explored their criteria and decision logic for diagnosing malaria in children, their treatment choices, and the potential for malaria mis- and over-diagnosis in this clinical setting.
### 2.3. Data Analysis
Transcripts of the in-depth interviews, as well as checklists of participant observations, informal discussions, and field notes, were analyzed using the content analysis techniques described by Morse and Field [19]. Using the Health Access Livelihood Framework as a guide, responses from the in-depth interviews and informal discussions, as well as checklists of participant observations and field notes, were organized and categorized into expectations about effective diagnosis of child malaria and the resources that enhance or create barriers to effectively managing child malaria at this outpatient clinic. An audit trail of the researchers’ decisions and insights was also summarized. Credibility of the data was maintained through triangulation of the multiple sources of data. Also, the data were read in their entirety several times and repeatedly examined so as to obtain a general sense of the information gathered and to categorize the material until saturation was reached, that is, until no new themes emerged.
## 3. Results
As stated earlier, contextual factors are those features of the health care system which either promote or lessen the ability to effectively manage child malaria. In this study, these factors include the availability of drugs and laboratory testing for malaria, the affordability of antimalarial drugs, access to the clinic (particularly access to pediatricians), the adequacy of the clinic, and the acceptability of services provided at the clinic.
### 3.1. Availability of Drugs and Laboratory Testing
This outpatient clinic is known to provide free antimalarial medication for all children less than 5 years of age diagnosed with malaria. Unfortunately, during the course of this study, access to free antimalarial drugs was problematic as the drugs were not always available at the dispensary. Most mothers remarked that the lack of free antimalarial drugs at the dispensary was a hindrance to the effective treatment of malaria diagnosed in their children. One mother stated the following:

“I came to this clinic because I thought that they give free antimalarial drugs that were of good quality, but they do not have any and I am not sure if I would trust the ones that they have at the market.”

In addition to the lack of free antimalarial drugs, although laboratory testing with microscopy is provided at no cost to children attending this clinic, referrals are rare. While some mothers were of the opinion that “it is better to run tests to know the exact problem causing the child’s illness,” physicians did not recommend it because of time constraints, the absence of personnel to perform laboratory tasks, and delays in receiving laboratory results. In this setting, giving antimalarial treatment to all children with febrile illness was deemed necessary by physicians, particularly as malaria transmission is hyperendemic in this region of Nigeria. One physician stated the following:

“If I referred a patient for microscopy, it will take at least 2 days before results are available; by then malaria may have worsened, so it is better to treat immediately due to the volume of patients we see in any given day. Moreover, the microscopy laboratories are small, understaffed, and overworked and they lack the equipment to handle the sheer volume of tests needed by patients.”
### 3.2. Affordability of Antimalarial Drugs
One of the key elements of malaria control in sub-Saharan Africa is prompt treatment with effective antimalarial drugs. Although major efforts are underway to strengthen and promote appropriate utilization of effective antimalarial drugs, barriers imposed by the cost of the new and expensive artemisinin combination therapies may constrain malaria control efforts in multiple ways. For example, findings from the in-depth interviews indicate that the affordability of antimalarial drugs can delay prompt treatment of child malaria, as evidenced in the following comment:

“I cannot afford to buy meds that the doctor just prescribed because of the cost. I do not have any job or money to buy it now for my child.”

Also, it was not uncommon for some mothers to buy chloroquine (despite known parasite resistance to chloroquine in this setting) because it was cheap and affordable when compared to the new and expensive artemisinin combination therapies currently on the market. On improving the affordability of antimalarial drugs, one mother stated that “these drugs need to be provided at subsidized price at this clinic so that even poor people can afford to buy them.”
### 3.3. Access to Clinic and Pediatricians
When mothers were asked to describe the length of time it took to travel from their homes to the clinic, 41.5% stated that it took less than 30 minutes, 43.3% stated that it took between 1 and 2 hours depending on the traffic, and 14.8% stated that it took over 3 hours to arrive at the clinic. Some of the mothers said they brought their child to this clinic because it is known to provide “free services to everyone.” Access to free clinical services was considered important, particularly as it addressed the health needs of the poorest, who are often deterred from seeking care at most clinics. Some mothers stated that access to the clinic also guaranteed that they would receive the “best decision and treatment” for their child’s illness. Another component of the clinic’s resources that mattered to the mothers interviewed was “easy access to pediatricians.” Indeed, owing to the ease of access to pediatricians at the clinic, it was not uncommon for some mothers to bring their children to the clinic within 24 hours of illness onset. Easy access to pediatricians also played a significant role in many mothers’ decisions to travel long distances and, in some cases, wait 2-3 hours before being seen by the physician with little or no complaint. Indeed, our findings revealed that what is often viewed as a healthcare barrier or constraint in some settings (i.e., long travelling distances or long waiting times), although important, was insignificant when considered alongside other defining characteristics such as easy access to pediatricians at the clinic.
### 3.4. Adequacy of Outpatient Clinic
The outpatient clinic caters to the needs of people residing in the areas surrounding the clinic as well as people from throughout the country. Although the hours of operation are from 9 a.m. to 3 p.m., most caretakers and their children arrive as early as 6 a.m. to ensure that they are seen as soon as the clinic opens. No prerequisites (such as formal referral letters) are needed to access the clinic’s services. As a result, the clinic is readily accessible to patients from all social classes with varied health problems. The caretakers and children who arrive as early as 6 a.m. begin the task of waiting in an area outside the hospital designated as the outside waiting room. The physicians often arrive a little after 8 a.m., and the clinics begin by 9 a.m. The first points of contact for the caretakers and their children are the nurses, matrons, and orderlies at the clinic, all of whom are women. These women are in charge of ushering the patients from the outside waiting room to the waiting room inside the clinic. The inside waiting area is located outside the physician’s office, and patients sit in the order of their arrival time or the nurse’s arrangement. When it is their turn, they go into the room and begin to narrate their child’s illness. Although there are five rooms inside the clinic, only two are designated for general outpatient consultations, so on many occasions, due to lack of space, two physicians share a consultation room, seeing two separate patients at a time. In the context of providing treatment for malaria, as mentioned earlier, the clinic provides free services for children, including free antimalarial drugs for those diagnosed with malaria and free additional testing with microscopy when necessary. Physicians rarely recommended patients for microscopy as they had large queues of patients waiting for consultations. Their priority was to ensure that all waiting patients were seen by a physician during the scheduled hours of operation. Also, unlike in other countries where nurses perform blood work, this task is performed by physicians in this clinic. This means that physicians often have to weigh the time spent drawing blood from one patient (time in which they could see 3 to 4 patients) against basing the diagnosis on observation and the mother’s explanation and history of the illness. One physician aptly stated the following:

“Malaria tests with microscopy are cumbersome and in the long-run malaria rapid diagnostic tests are not available indefinitely in this clinic. Patients are many and yes when they come with a temperature, before we think of anything, we have to think of malaria.”
### 3.5. Acceptability of Health Care Services
With regard to the acceptability of the health care services provided at the clinic, mothers were asked whether they were satisfied with their consultations (as related to the quality of services provided by physicians and other healthcare workers at the clinic), what they liked best (comments from satisfied mothers), and what the clinic could do to provide better services (comments from dissatisfied mothers). More than half of the mothers interviewed (52.6%) stated that the quality of care provided at the clinic was excellent, 36.1% stated that the services were good, and 10.5% noted that the care provided at the clinic was average. Among mothers who were satisfied with the care that they received, some were of the opinion that the physicians at the clinic were “helpful” and “attentive to their needs.” Dissatisfied mothers cited the “lack of additional tests” prior to prescribing medications and the “absence of free antimalarial drugs” as key potential barriers to adequate case management of child malaria at this clinic.
## 4. Conclusion
The aim of this study was to illustrate the ways in which contextual factors of an outpatient clinic in southwest Nigeria influence effective diagnosis and treatment of malaria. As child malaria diagnosis remains a major challenge in many endemic countries, the findings indicate that malaria control strategies should take contextual factors into account, as they are critical to the effective case management of malaria in children attending health clinics. This is crucial because the success of malaria control strategies depends not only on the development of effective drugs or vaccines or improved vector control, but also on knowledge of the aspects of the context that promote and/or act as barriers to effective diagnosis and treatment. The findings of this study suggest that the availability of antimalarial drugs and laboratory testing services, the affordability of services, access to the clinic and physicians, the adequacy of clinics, and the acceptability of services are important in addressing access to effective case management of malaria in children attending an outpatient clinic.

Contextual features of health care clinics are also important, particularly with the recent advent of malaria rapid diagnostic tests (RDTs) in malaria endemic settings [9–11]. Although malaria RDTs could also be useful for effective diagnosis of child malaria, contextual factors such as the availability, affordability, accessibility, adequacy, and acceptability of RDTs may also constrain physicians’ practice and impoverish their professional judgment. For example, in rural Burkina Faso, Bisoffi and colleagues [20] found that as many as 85% of RDT-negative patients were prescribed antimalarials despite the knowledge that a negative RDT result excludes presumptive treatment of malaria. Also, in Zambia, Hamer and colleagues [21] noted that when rapid diagnostic tests were performed and reported as negative, 35% of patients were still prescribed an antimalarial. Simply put, promising advances in malaria rapid diagnostic tests might be futile if the same vigor is not applied to understanding the contexts in which human behaviors occur [22]. Moreover, as noted by Chandler and colleagues [23], “changing ingrained clinical behaviors (i.e., presumptive diagnosis) may be difficult” if attention is not equally given to the role contextual factors play.

Some potential limitations of this study must be duly acknowledged. There is always the possibility that the physicians may have altered their diagnostic and prescribing behavior to err on the side of diagnosing malaria due to the presence of the research study (i.e., the Hawthorne effect). To minimize this effect, efforts were made not to interfere with consultations, allowing physicians to diagnose and prescribe child malaria treatment according to their routine. The findings of this study may also be limited by selection bias, since we did not compare participants recruited at this outpatient clinic with those who sought care at other clinics. One caution about our population is that it is plausible, for example, for mothers with febrile children in search of answers to their child’s illness to amplify the severity and persistence of the signs and symptoms observed in their children in the hope of receiving additional testing so as to accurately pinpoint the cause of illness. Future studies with mothers of febrile children outside clinical settings are necessary to determine whether this process occurs.
Also, the generalizability of our findings is limited, since this outpatient clinic may not be representative of other outpatient clinics in malaria-endemic countries. Further, the constraint of space in which this study was conducted may have contributed to bias in reporting some of the findings. However, the space constraint also enabled observations of mundane actions or events to be recorded, particularly with regard to differences between physicians’ diagnostic decisions and mothers’ interpretations of their child’s illness.

Study findings have implications for improving effective diagnosis and treatment of child malaria in malaria-endemic countries. If the Millennium Development Goal 6 of reversing malaria incidence by 2015, particularly among children, is to be achieved, it is evidently time to examine the contextual factors that are essential for effective diagnosis and treatment of malaria among children in clinical settings. The results presented in this paper are timely given the increased interest in the factors that influence mis- and over-diagnosis of malaria in clinical settings. However, more research is necessary to assess whether these findings remain valid in different clinical settings (i.e., rural clinics, and private versus government-owned clinics) and with different participants (i.e., mothers in community settings).
---
*Source: 101423-2013-09-24.xml* | 2013 |
# The Discrepancy between Patient and Clinician Reported Function in Extremity Bone Metastases
**Authors:** Stein J. Janssen; Eva A. J. van Rein; Nuno Rui Paulino Pereira; Kevin A. Raskin; Marco L. Ferrone; Francis J. Hornicek; Santiago A. Lozano-Calderon; Joseph H. Schwab
**Journal:** Sarcoma
(2016)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2016/1014248
---
## Abstract
Background. The Musculoskeletal Tumor Society (MSTS) scoring system measures function and is commonly used but criticized because it was developed to be completed by the clinician and not by the patient. We therefore evaluated whether there is a difference between patient and clinician reported function using the MSTS score. Methods. 128 patients with bone metastasis of the lower (n = 100) and upper (n = 28) extremity completed the MSTS score. The MSTS score consists of six domains, scored on a 0 to 5 scale and transformed into an overall score ranging from 0 to 100%, with a higher score indicating better function. The MSTS score was also derived from clinicians’ reports in the medical record. Results. The median age was 63 years (interquartile range [IQR]: 55–71) and the study included 74 (58%) women. We found that the clinicians’ MSTS score (median: 65, IQR: 49–83) overestimated the function as compared to the patient perceived score (median: 57, IQR: 40–70) by 8 points (p < 0.001). Conclusion. Clinician reports overestimate function as compared to the patient perceived score. This is important to acknowledge when informing patients about the expected outcome of treatment and for understanding patients’ perceptions.
---
## Body
## 1. Introduction
Treatment for bone metastatic disease is often palliative and aims to maintain function and quality of life for the remaining life span [1, 2]. Traditionally, studies focused on oncological and surgical outcomes (e.g., survival and local recurrence), but more emphasis has been placed on measuring impairment and disability over the past decades [1, 3–5]. The Musculoskeletal Tumor Society (MSTS) recognized this and developed a system, the MSTS score, to evaluate function in patients with musculoskeletal tumors [3]. The validity and reliability of this tool were found to be acceptable when applied to a sample of patients with malignant musculoskeletal tumors [6]. The scoring system has been criticized because it was developed to be completed by a clinician, instead of measuring function as perceived by the patient [1, 7]; however, the MSTS score is still used because of its simplicity and brevity (it consists of six items) [8, 9]. Studies in other fields have demonstrated discrepancies between patient and physician assessment of physical and mental health [10–13]. It is unclear whether the clinician derived MSTS score is representative of the patients’ perceived function. We therefore sought to evaluate whether there is a difference between patient and clinician reported physical function using the MSTS score in a cohort of patients with bone metastases of the extremities. Secondarily, we compared MSTS domain scores and assessed agreement between the clinician and patient perceived scores.
## 2. Materials and Methods
### 2.1. Study Design
Our institutional review board approved secondary use of prospectively collected data for the purpose of this study, and a waiver of informed consent was obtained. We included data from the first 128 patients who completed a set of physical function questionnaires for two prior studies. These studies compared physical function questionnaires in patients with lower (n = 100) and upper (n = 28) extremity bone metastases, myeloma, or lymphoma [14]. Only English-speaking patients aged 18 years or above who were able to provide informed consent were approached for these studies. Patients were enrolled between June 2014 and September 2015 from two orthopaedic oncology clinics. Patients were included regardless of previous treatment and disease stage [14]. Seventeen patients declined participation in the initial study, and three patients were excluded because of incomplete questionnaires.

An a priori sample size calculation determined that we would need a minimum of 128 patients to find an effect size of 0.25 with an alpha of 0.05 and power of 0.80, using a paired t-test comparing the clinician reported MSTS score with the patient perceived MSTS score.
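As a concrete illustration of this kind of calculation, the following is a minimal sketch of a paired t-test power analysis with the parameters quoted above; it uses statsmodels and is not the authors’ actual code.

```python
# Hedged sketch: minimum sample size for a paired t-test with effect size
# 0.25, two-sided alpha 0.05, and power 0.80. A paired design reduces to a
# one-sample t-test on the within-pair differences, so TTestPower applies.
import math

from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.25, alpha=0.05, power=0.80)
print(math.ceil(n))  # 128, matching the minimum reported above
```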
### 2.2. Outcome Measures
Our primary outcome measure was the Musculoskeletal Tumor Society (MSTS) score, introduced in 1983 and modified in 1993 [3]. This scoring system was developed to be completed by a clinician (physician or physician extender), and it aims to assess physical function in patients with lower and upper extremity tumors. The modified version (1993) of the MSTS score consists of six domains, each scored on a scale from 0 to 5, with a higher score indicating better function. The total score, ranging from 0 to 30, can be transformed to a point scale of 0 to 100 (a small sketch of this transformation appears at the end of this subsection). There are two versions: one for lower extremity tumors and one for upper extremity tumors. These versions have three domains in common (pain, function, and emotional acceptance) and three region specific domains. The region specific domains for the lower extremity are use of supports, walking ability, and gait. The region specific domains for the upper extremity are hand-positioning, dexterity, and lifting ability. Patients completed one of the two versions based on the location of their most disabling bone metastasis.

In addition, patients completed questions about their level of education, marital status, presence of other disabling conditions, prior treatment, and other bone or visceral metastases. Prior treatment and presence of other metastases were also derived from medical records. We extracted age, sex, race, and location of bone metastasis from the medical records.

Two research fellows (SJ and EvR), blinded to the patients’ answers, independently completed the MSTS score based on the clinicians’ reports in the patient’s medical record; we used the report that was written at the time (or within a few days) of survey completion by the patient. Reports completed by the orthopaedic oncologist, medical oncologist, and physical therapist were used to complete the MSTS score. We averaged the scores assigned by the two researchers per domain and for the overall MSTS score. To assess the reliability of extracting these data from medical records, we assessed differences in the overall MSTS score and domain scores between researchers and assessed their interobserver agreement.
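As a concrete illustration of the transformation described above, here is a minimal sketch; the function name and the example domain scores are ours, chosen for illustration, and do not come from the paper.

```python
# Hedged sketch of the MSTS transformation: six domains, each scored 0-5,
# raw total 0-30, rescaled to a 0-100 scale (higher = better function).
def msts_percent(domain_scores):
    if len(domain_scores) != 6 or not all(0 <= s <= 5 for s in domain_scores):
        raise ValueError("expected six domain scores, each between 0 and 5")
    return sum(domain_scores) / 30 * 100

# Illustrative lower-extremity patient: pain 4, function 2, emotional
# acceptance 2, use of supports 1, walking ability 4, gait 3.
print(round(msts_percent([4, 2, 2, 1, 4, 3])))  # 53
```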
### 2.3. Statistical Analysis
We used frequencies with percentages to describe categorical variables and medians with interquartile ranges for continuous variables, as histograms suggested nonnormality.

The nonparametric Wilcoxon signed rank test was used to assess the difference between patient and clinician domain scores and overall MSTS scores, as the data were not normally distributed.

We assessed the relationship between the patient and clinician MSTS and domain scores using both the Spearman rank correlation and the intraclass correlation (ICC). The Spearman rank correlation determines the relationship between two variables (range: −1 to 1): a score of 1 indicates a perfect correlation, 0 indicates no correlation, and −1 indicates a perfect inverse correlation. We used bootstrapping (number of resamples: 1,000) to calculate p values and 95% confidence intervals for the Spearman rank correlation coefficients. The intraclass correlation coefficient also assesses a relationship between two variables but accounts for discrepancies in measurements and therefore measures absolute agreement. We calculated the ICC through a two-way mixed-effects model with absolute agreement for the overall MSTS score and the domain scores. As with the Spearman rank correlation coefficient, an ICC of 1 reflects perfect agreement, whereas 0 reflects no agreement.

Additionally, we assessed differences in domain and total scores between the two researchers using the Wilcoxon signed rank test and assessed their interobserver agreement per domain and overall score using the ICC. A small sketch of these comparisons follows.
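To make the analysis plan concrete, here is a minimal sketch of the paired Wilcoxon comparison and the bootstrapped Spearman correlation described above, using scipy. The paired scores are invented for illustration, this is not the authors’ code, and the ICC step (available, for example, via pingouin.intraclass_corr) is omitted.

```python
# Hedged sketch: Wilcoxon signed rank test on paired patient/clinician
# scores, plus a percentile-bootstrap 95% CI for the Spearman correlation
# (1,000 resamples, as described above). Data below are illustrative only.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

patient = np.array([57, 40, 70, 63, 45, 80, 33, 60], dtype=float)
clinician = np.array([65, 49, 83, 74, 50, 85, 40, 66], dtype=float)

w_stat, w_p = wilcoxon(patient, clinician)  # paired, nonparametric
rho, _ = spearmanr(patient, clinician)

rng = np.random.default_rng(0)
idx = np.arange(len(patient))
boot = []
for _ in range(1000):
    s = rng.choice(idx, size=len(idx), replace=True)
    r, _ = spearmanr(patient[s], clinician[s])
    boot.append(r)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Wilcoxon p = {w_p:.3f}; Spearman rho = {rho:.2f} "
      f"(bootstrap 95% CI {lo:.2f} to {hi:.2f})")
```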
### 2.4. Patient Characteristics
The median age was 63 years (interquartile range [IQR]: 55 to 71) and the study included 74 (58%) women. The majority had a metastatic lesion in the lower extremity (78% [100/128]). Eighty (63%) patients had previous surgery, and 72 (56%) had previous radiotherapy (Table 1). Breast was the most common primary tumor type (26%) (Table 2).

Table 1: Demographics (n = 128). Values are n (%) unless noted otherwise.

| Characteristic | Value |
| --- | --- |
| Age, years, median (IQR) | 63 (55–71) |
| *Sex* | |
| Women | 74 (58) |
| Men | 54 (42) |
| *Race* | |
| Caucasian | 117 (91) |
| African American | 10 (8) |
| Asian | 1 (1) |
| *Education* | |
| High school or less | 41 (32) |
| College or Bachelor’s degree | 53 (41) |
| Graduate or professional degree | 34 (27) |
| *Marital status* | |
| Married | 84 (66) |
| Single | 15 (12) |
| Widowed | 15 (12) |
| Separated/divorced | 9 (7) |
| Living with partner | 5 (4) |
| *Location of metastasis* | |
| Upper extremity | 28 (22) |
| Humerus | 21 (16) |
| Scapula | 5 (4) |
| Clavicle | 1 (1) |
| Radius | 1 (1) |
| Lower extremity | 100 (78) |
| Femur | 71 (55) |
| Acetabulum | 14 (11) |
| Pelvis | 12 (9) |
| Tibia | 2 (2) |
| Fibula | 1 (1) |
| *Other disabling conditions* | |
| Yes | 37 (29) |
| No | 90 (71) |
| *Previous surgery for metastatic lesion* | |
| Yes | 80 (63) |
| No | 48 (38) |
| *Previous radiotherapy for metastatic lesion* | |
| Yes | 72 (56) |
| No | 53 (41) |
| Unknown | 3 (2) |
| *Multiple bones affected* | |
| Yes | 61 (48) |
| No | 55 (43) |
| Unknown | 12 (9) |
| *Visceral organs affected* | |
| Yes | 39 (30) |
| No | 78 (61) |
| Unknown | 11 (9) |

Table 2: Tumor type (n = 128).

| Tumor type | n (%) |
| --- | --- |
| *Bone metastases* | |
| Breast | 33 (26) |
| Renal cell | 17 (13) |
| Prostate | 11 (8.6) |
| Lung | 11 (8.6) |
| Melanoma | 7 (5.5) |
| Leiomyosarcoma | 5 (3.9) |
| Bladder | 3 (2.3) |
| Thyroid | 3 (2.3) |
| Colorectal | 2 (1.6) |
| Hepatocellular | 2 (1.6) |
| Stomach | 1 (0.8) |
| Esophageal | 1 (0.8) |
| Neuroendocrine | 1 (0.8) |
| Sarcoma | 1 (0.8) |
| *Primary bone tumors* | |
| Myeloma | 17 (13) |
| Lymphoma | 13 (10) |
## 2.1. Study Design
Our institutional review board approved secondary use of prospectively collected data for the purpose of this study, and a waiver of informed consent was obtained. We included data from the first 128 patients who completed a set of physical function questionnaires for two prior studies. These studies compared physical function questionnaires in patients with lower (n
=
100) and upper (n
=
28) extremity bone metastases, myeloma, or lymphoma [14]. Only English-speaking patients aged 18 years or above who were able to provide informed consent were approached for these studies. Patients were enrolled between June 2014 and September 2015 from two orthopaedic oncology clinics. Patients were included regardless of previous treatment and disease stage [14]. Seventeen patients declined participation for the initial study, and three patients were excluded because of incomplete questionnaires.An ante hoc sample size calculation determined that we would need a minimum of 128 patients to find an effect size of 0.25 with an alpha of 0.05 and power of 0.80 using a pairedt-test comparing the clinician reported MSTS score with the patient perceived MSTS score.
## 2.2. Outcome Measures
Our primary outcome measure was the Musculoskeletal Tumor Society (MSTS) score, introduced in 1983 and modified in 1993 [3]. This scoring system was developed to be completed by a clinician—physician or physician extender—and it aims to assess physical function in patients with lower and upper extremity tumors. The modified version (1993) of the MSTS score consists of six domains, each scored on a scale from 0 to 5, with a higher score indicating better function. The total score, ranging from 0 to 30, can be transformed to a point scale of 0 to 100. There are two versions: one for lower extremity tumors and one for upper extremity tumors. These versions have three domains in common, pain, function, and emotional acceptance, and three region specific domains. The region specific domains for the lower extremity are use of supports, walking ability, and gait. The region specific domains for the upper extremity are hand-positioning, dexterity, and lifting ability. Patients completed one of the two versions based on the location of their most disabling bone metastasis.In addition, patients completed questions about their level of education, marital status, presence of other disabling conditions, prior treatment, and other bone or visceral metastases. Prior treatment and presence of other metastases were also derived from medical records. We extracted age, sex, race, and location of bone metastasis from the medical records.Two research fellows (SJ and EvR)—blinded to the patients’ answers—independently completed the MSTS score based on the clinicians’ report in the medical record of the patient; we used the report that was written at the time (or within a few days) of survey completion by the patient. Reports completed by the orthopaedic oncologist, medical oncologist, and physical therapist were used to complete the MSTS score. We averaged the scores assigned by the two researchers per domain and for the overall MSTS score. To assess reliability of extracting this data from medical records, we assessed difference in overall MSTS score and domain scores between researchers and assessed their interobserver agreement.
## 2.3. Statistical Analysis
We used frequencies with percentages to describe categorical variables and median with interquartile range for continuous variables as histograms suggested nonnormality.The nonparametric Wilcoxon signed rank test was used to assess the difference between patient and clinician domain scores and overall MSTS scores as data was not normally distributed.We assessed the relationship between the patient and clinician MSTS and domain scores using both Spearman rank correlation and intraclass correlation (ICC). Spearman rank correlation determines the relationship between two variables (range: −1 to 1): a score of 1 indicates a perfect correlation, 0 indicates no correlation, and −1 indicates a perfect inverse correlation. We used bootstrapping (number of resamples: 1,000) to calculatep values and 95% confidence intervals for the Spearman rank correlation coefficients. The intraclass correlation coefficient also assesses a relationship between two variables but accounts for discrepancy in measurements and therefore measures absolute agreement. We calculated the ICC through a two-way mixed-effects model with absolute agreement for the overall MSTS score and the domain scores. As with the Spearman rank correlation coefficient, an ICC of 1 reflects perfect agreement, whereas 0 reflects no agreement.Additionally, we assessed difference in domain and total scores between the two researchers using the Wilcoxon signed rank test and assessed their interobserver agreement per domain and overall score using the ICC.
## 2.4. Patient Characteristics
The median age was 63 years (interquartile range [IQR]: 55 to 71) and the study included 74 (58%) women. The majority had a metastatic lesion in the lower extremity (78% [100/128]). Eighty (63%) patients had previous surgery, and 72 (56%) had previous radiotherapy (Table 1). Breast was the most common primary tumor type (26%) (Table 2).

Table 1: Demographics (n = 128). Age is given as median (interquartile range); all other values are n (%).

| Characteristic | Value |
| --- | --- |
| Age, median (IQR) | 63 (55–71) |
| **Sex** | |
| Women | 74 (58) |
| Men | 54 (42) |
| **Race** | |
| Caucasian | 117 (91) |
| African American | 10 (8) |
| Asian | 1 (1) |
| **Education** | |
| High school or less | 41 (32) |
| College or Bachelor's degree | 53 (41) |
| Graduate or professional degree | 34 (27) |
| **Marital status** | |
| Married | 84 (66) |
| Single | 15 (12) |
| Widowed | 15 (12) |
| Separated/divorced | 9 (7) |
| Living with partner | 5 (4) |
| **Location of metastasis** | |
| Upper extremity | 28 (22) |
| Humerus | 21 (16) |
| Scapula | 5 (4) |
| Clavicle | 1 (1) |
| Radius | 1 (1) |
| Lower extremity | 100 (78) |
| Femur | 71 (55) |
| Acetabulum | 14 (11) |
| Pelvis | 12 (9) |
| Tibia | 2 (2) |
| Fibula | 1 (1) |
| **Other disabling conditions** | |
| Yes | 37 (29) |
| No | 90 (71) |
| **Previous surgery for metastatic lesion** | |
| Yes | 80 (63) |
| No | 48 (38) |
| **Previous radiotherapy for metastatic lesion** | |
| Yes | 72 (56) |
| No | 53 (41) |
| Unknown | 3 (2) |
| **Multiple bones affected** | |
| Yes | 61 (48) |
| No | 55 (43) |
| Unknown | 12 (9) |
| **Visceral organs affected** | |
| Yes | 39 (30) |
| No | 78 (61) |
| Unknown | 11 (9) |

Table 2: Tumor type (n = 128). Values are n (%).

| Tumor type | n (%) |
| --- | --- |
| **Bone metastases** | |
| Breast | 33 (26) |
| Renal cell | 17 (13) |
| Prostate | 11 (8.6) |
| Lung | 11 (8.6) |
| Melanoma | 7 (5.5) |
| Leiomyosarcoma | 5 (3.9) |
| Bladder | 3 (2.3) |
| Thyroid | 3 (2.3) |
| Colorectal | 2 (1.6) |
| Hepatocellular | 2 (1.6) |
| Stomach | 1 (0.8) |
| Esophageal | 1 (0.8) |
| Neuroendocrine | 1 (0.8) |
| Sarcoma | 1 (0.8) |
| **Primary bone tumors** | |
| Myeloma | 17 (13) |
| Lymphoma | 13 (10) |
## 3. Results
### 3.1. Patient Perceived Compared to Clinician MSTS Score
We found that the clinicians' MSTS score overestimated physical function as compared to the patient perceived score. The median clinician MSTS score was 8 points higher (median: 65, IQR: 49 to 83) than the patient perceived score (median: 57, IQR: 40 to 70) (p < 0.001) (Table 3). This difference also existed when analyzing the lower extremity and upper extremity versions separately (Table 3).

Table 3: MSTS score comparison. Values are median (interquartile range); bold indicates a significant difference (two-tailed p value below 0.05).

| | Patient score | Clinician score | p value |
| --- | --- | --- | --- |
| Overall MSTS score | 57 (40–70) | 65 (49–83) | **<0.001** |
| Lower extremity MSTS score | 57 (37–70) | 63 (48–84) | **<0.001** |
| Upper extremity MSTS score | 63 (53–73) | 74 (58–85) | **<0.001** |
| **MSTS common domains** | | | |
| Pain | 4 (2–5) | 3 (3–4) | 0.076 |
| Function | 2 (2–4) | 4 (3–5) | **<0.001** |
| Emotional acceptance | 2 (1–3) | 4 (3–4) | **<0.001** |
| **Lower extremity specific domains** | | | |
| Use of supports | 1 (1–5) | 3 (1–5) | **0.003** |
| Walking ability | 4 (3–4) | 3 (3–4) | 0.102 |
| Gait | 3 (2–4) | 4 (3–5) | **0.006** |
| **Upper extremity specific domains** | | | |
| Hand-positioning | 4 (1–5) | 4 (3–4) | **0.048** |
| Dexterity | 5 (3–5) | 4 (4–5) | 0.890 |
| Lifting ability | 3 (1–4) | 4 (3–4) | **0.002** |
When comparing the three common domains, clinicians scored higher than the patients for function (p < 0.001) and emotional acceptance (p < 0.001); however, there was no difference in the assessment of pain (p = 0.076). When comparing the three lower extremity specific domains, clinicians scored higher for use of supports (p = 0.003) and gait (p = 0.006), with no difference in the assessment of walking ability (p = 0.102). When comparing the three upper extremity specific domains, clinicians scored higher for hand-positioning (p = 0.048) and lifting ability (p = 0.002), with no difference in the assessment of dexterity (p = 0.890).

Agreement between the overall clinician score and the patient perceived score was substantial (ICC: 0.66, 95% CI 0.43–0.79, p < 0.001) (Table 4). We found moderate agreement for the common domains pain (ICC: 0.50) and function (ICC: 0.43) but no agreement for emotional acceptance (ICC: 0.08). Agreement was substantial for the lower extremity specific use of supports domain (ICC: 0.72) and moderate for walking ability (ICC: 0.47) and gait (ICC: 0.48). We found substantial agreement for the upper extremity specific hand-positioning domain (ICC: 0.61), moderate agreement for dexterity (ICC: 0.51), and no agreement for lifting ability (ICC: 0.16). The Spearman rank correlation coefficients were higher than the intraclass correlation coefficients, reflecting the systematic discrepancy between clinician and patient scores (Table 4).
Table 4: Comparison of interobserver reliability: patient score compared with clinician score. Bold indicates a significant correlation (two-tailed p value below 0.05). The 95% confidence intervals for the Spearman coefficients were calculated through bootstrapping (1,000 resamples).

| | Spearman correlation coefficient (95% CI) | p value | Intraclass correlation coefficient (95% CI) | p value |
| --- | --- | --- | --- | --- |
| Overall MSTS score | 0.74 (0.64–0.83) | **<0.001** | 0.66 (0.43–0.79) | **<0.001** |
| Lower extremity MSTS score | 0.71 (0.59–0.82) | **<0.001** | 0.64 (0.42–0.78) | **<0.001** |
| Upper extremity MSTS score | 0.82 (0.68–0.97) | **<0.001** | 0.74 (0.25–0.90) | **<0.001** |
| **MSTS common domains** | | | | |
| Pain | 0.50 (0.35–0.64) | **<0.001** | 0.50 (0.36–0.62) | **<0.001** |
| Function | 0.52 (0.38–0.66) | **<0.001** | 0.43 (0.23–0.59) | **<0.001** |
| Emotional acceptance | 0.16 (−0.02 to 0.35) | 0.073 | 0.08 (−0.06 to 0.21) | 0.105 |
| **Lower extremity specific domains** | | | | |
| Use of supports | 0.74 (0.62–0.86) | **<0.001** | 0.72 (0.60–0.81) | **<0.001** |
| Walking ability | 0.54 (0.39–0.68) | **<0.001** | 0.47 (0.30–0.61) | **<0.001** |
| Gait | 0.49 (0.34–0.63) | **<0.001** | 0.48 (0.29–0.62) | **<0.001** |
| **Upper extremity specific domains** | | | | |
| Hand-positioning | 0.84 (0.72–0.96) | **<0.001** | 0.61 (0.29–0.81) | **<0.001** |
| Dexterity | 0.51 (0.18–0.85) | **0.003** | 0.51 (0.18–0.74) | **0.003** |
| Lifting ability | 0.30 (−0.07 to 0.66) | 0.110 | 0.16 (−0.12 to 0.46) | 0.124 |
### 3.2. Assessing Reliability of Extracting the Clinician MSTS Score from Medical Records
We found no difference in the overall clinician MSTS score derived from medical records between researchers (researcher 1: median 67, IQR 48–90; researcher 2: median 63, IQR 50–82; p = 0.142), nor did we find a difference between researchers for any of the medical record based domain scores. The interobserver agreement between researchers for the overall clinician MSTS score was substantial (ICC: 0.78, 95% CI 0.70–0.84, p < 0.001). These analyses indicate substantial reliability for deriving the clinician MSTS score from the medical record.
## 4. Discussion
The MSTS scoring tool evaluates function in patients with extremity tumors and was developed to be completed by the clinician [3]. It is unclear how this clinician-based score relates to the patient's perceived function. We therefore compared the MSTS score as completed by the patient with a medical record based, clinician reported MSTS score and assessed discrepancies and agreement. We found that the clinicians' MSTS score overestimated physical function as compared to the patient completed MSTS score. This discrepancy was largest for the common function and emotional acceptance domains but was absent for the pain domain.

This study has limitations. First, we based the MSTS score on a review of the information provided by the clinician in the medical records; however, the MSTS score was developed to be completed by a clinician at the time of the consultation. We see this as an important limitation and explored its possible consequences by assessing discrepancies and interobserver agreement between two researchers who independently derived these data from the medical records. There was no discrepancy between the researchers for the overall MSTS score, and their interobserver agreement was substantial; this suggests reproducible assessment of the MSTS score based on the medical record. Previous studies used the same methodology to extract an MSTS score from information in the medical record [15–17]. In addition, the judgment of the two research fellows might have differed from the judgment of the attending surgeon. A future prospective study should therefore compare the patient completed MSTS score with an MSTS score completed by the clinician at the time of the consultation. Second, patients might have misunderstood specific items or answer options, as the scoring system was not developed to be completed by a patient and has not been validated in a patient sample. We considered this a limitation but feel that it did not compromise our results, as we believe that erroneous answers would have occurred in both directions (i.e., better and worse). Third, the MSTS score was developed for the evaluation of functional status in all musculoskeletal tumor types. Patient demographics differ per tumor type, and we studied only a sample of patients with bone metastases; this limits the generalizability of our results to this specific population. Future study might help elucidate the discrepancy between patient and physician perceived function in primary bone tumors.

Previous studies in other fields also demonstrated an overestimation of patients' physical and mental health when estimated by a clinician as compared to the patients' perception [10, 13, 18]. Nelson et al. [10] demonstrated in 1,101 primary care patients that 12% rated themselves as having major physical limitations in the preceding month, while only 4.4% of the patients were rated as such by their primary care physician. The same study demonstrated that 9% rated themselves as having major emotional limitations, while only 5% were rated as such by their physician. Rosenberger et al. [18] demonstrated that physicians overestimated function and underestimated pain in 98 patients who underwent surgical anterior cruciate ligament reconstruction or meniscectomy. In line with these previous studies, we found the largest discrepancy for the assessment of the function and emotional acceptance domains. However, we found no difference for the pain domain.
The pain level in the MSTS score is based on the amount of pain and the degree of disability it causes; this might explain why we did not find a difference in the pain score. Despite the discrepancies, clinicians' estimates do correlate reasonably well with patient scores for the overall MSTS score and the domain scores, except for emotional acceptance and lifting ability. This means that clinicians recognize worse overall function as perceived by the patient; however, the clinician tends to underestimate its impact. Assessment of emotional acceptance by the clinician does not correlate with the patients' perception, which might be explained by the subjectivity and complexity of this measure. Lifting ability is a relatively objective measure, and the absence of correlation between the patient and clinician scores might be a result of the small sample size (28 upper extremity patients).

The discrepancy between the clinicians' assessment and the patients' perception of health and symptoms can have several consequences. First, surgeons have an important role in counseling their patients regarding the expected outcome after treatment, and they need to understand patients' perspectives on outcome in order to educate future patients. For example, patients might be less satisfied if their expectations are not met or when recovery is slower than expected [18]. Second, patients might feel misunderstood or unheard by their physician. A previous study demonstrated that concordance (so-called dyadic agreement) between the patients' and physicians' perceptions of health and symptoms is associated with higher patient satisfaction [19]. Another study demonstrated that patient dissatisfaction leads to less compliance with treatment recommendations and potentially jeopardizes patients' health and outcome [20]. A review of plaintiff depositions demonstrated that delivering information poorly and failing to empathize with the patient's or family's perspective are common causes of medical litigation [21, 22]. Third, a clinician might be biased towards certain treatments; this might compromise the comparison of clinician reported outcomes across treatment options in prospective studies and nonblinded clinical trials. Fourth, overestimating outcomes tends to breed an attitude of complacency and inertia among clinicians, which could preclude further improvement. Fifth, third-party payers may use reported (overestimated) outcomes to dissuade costly innovation and research.

Capturing patient reported outcome measures (questionnaires completed by the patient) with validated instruments, both for research purposes and in day-to-day clinical practice, is key. Previous studies demonstrated that the use of information from patient reported outcome measures leads to better communication and decision making between doctors and patients and improves satisfaction [11, 23, 24]. However, this does not mean that clinician measures are uninformative. Measuring pathophysiology and impairment (e.g., range of motion, strength, and stability), in addition to patient reported outcome measures (e.g., symptoms and disability), will help us better understand patient perceptions and inform patients about prognosis and the outcomes of different treatment options.

In conclusion, clinician reports overestimate function as compared to the patient perceived score. This is important to acknowledge when informing patients about the expected outcome of treatment and when seeking to understand patients' perceptions.
Our study reinforces the need for obtaining patient reported outcomes using validated methods in orthopaedic oncology.
---
*Source: 1014248-2016-09-20.xml*
# Global Structure of Positive Solutions for Some Second-Order Multipoint Boundary Value Problems
**Authors:** Hongyu Li; Junting Zhang
**Journal:** Journal of Function Spaces
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1014250
---
## Abstract
We investigate in this paper the following second-order multipoint boundary value problem: $-(L\varphi)(t)=\lambda f(t,\varphi(t))$, $0\le t\le 1$, $\varphi'(0)=0$, $\varphi(1)=\sum_{i=1}^{m-2}\beta_i\varphi(\eta_i)$. Under some conditions, we obtain the global structure of the positive solution set of this boundary value problem and the behavior of positive solutions with respect to the parameter $\lambda$ by using the global bifurcation method. We also obtain an infinite interval of the parameter $\lambda$ on which positive solutions exist.
---
## Body
## 1. Introduction
In this paper, we shall study the following second-order multipoint boundary value problem:

$$-(L\varphi)(t)=\lambda f(t,\varphi(t)),\quad 0\le t\le 1,\qquad \varphi'(0)=0,\quad \varphi(1)=\sum_{i=1}^{m-2}\beta_i\varphi(\eta_i), \tag{1}$$

where $(L\varphi)(t)=(p(t)\varphi'(t))'+q(t)\varphi(t)$, $\eta_i\in(0,1)$ with $0<\eta_1<\eta_2<\cdots<\eta_{m-2}<1$, $\beta_i\in[0,+\infty)$, and $\lambda$ is a positive parameter.

Multipoint boundary value problems for ordinary differential equations play an important role in physics, applied mathematics, and related fields. The existence and multiplicity of nontrivial solutions (positive solutions, negative solutions, or sign-changing solutions) for multipoint boundary value problems have been studied extensively using fixed point theorems in lattices, fixed point index theory, coincidence degree theory, Leray-Schauder continuation theorems, the upper and lower solution method, and so on (see [1–25] and references therein). On the other hand, some authors have studied the global structure of nontrivial solutions for second-order multipoint boundary value problems (see [26–32] and references therein).

There are few papers on the global structure of nontrivial solutions for the boundary value problem (1). Motivated by [1, 26–32], we investigate the global structure of positive solutions of the boundary value problem (1). In [1], the authors studied only the existence of positive solutions; in this paper, we prove that the set of nontrivial positive solutions of the boundary value problem (1) possesses an unbounded connected component.

This paper is arranged as follows. In Section 2, some notation and lemmas are presented. In Section 3, we prove the main results for the boundary value problem (1). Finally, in Section 4, two examples are given to illustrate the main results obtained in Section 3.
## 2. Preliminaries
Let $E$ be a Banach space, $P\subset E$ a cone, and $A:P\to P$ a completely continuous operator.

Definition 1 (see [33]). Let $\Omega\subset P$ be an open set, $A:\Omega\to P$, and $\lambda_0\in(0,+\infty)$. If, for any $\epsilon>0$, there exists a solution $(\lambda,x)\in\mathbb{R}^+\times\Omega$ of the equation $x=\lambda Ax$ satisfying

$$|\lambda-\lambda_0|<\epsilon,\quad 0<\|x\|<\epsilon, \tag{2}$$

then $\lambda_0$ is called a bifurcation point of the cone operator $A$.

Definition 2 (see [33]). Let $\Omega\subset P$ be an open set, $A:\Omega\to P$, and $\lambda_0\in(0,+\infty)$. If, for any $\epsilon>0$, there exists a solution $(\lambda,x)\in\mathbb{R}^+\times\Omega$ of the equation $x=\lambda Ax$ satisfying

$$|\lambda-\lambda_0|<\epsilon,\quad \|x\|>\frac{1}{\epsilon}, \tag{3}$$

then $\lambda_0$ is called an asymptotic bifurcation point of the cone operator $A$.

Definition 3 (see [34]). Let $T:E\to E$ be a linear operator that maps $P$ into $P$. The linear operator $T$ is $u_0$-positive if there exists $u_0\in P\setminus\{\theta\}$ such that, for any $x\in P\setminus\{\theta\}$, we can find an integer $n$ and real numbers $\alpha_0>0$, $\beta_0>0$ such that $\alpha_0 u_0\le T^n x\le \beta_0 u_0$.

Lemma 4 (see [34]). Let $\Omega(P)$ be an open set of $P$. Assume that the operator $A$ has no fixed points on $\partial\Omega(P)$. If there exist a linear operator $B$ and $u^*\in P\setminus\{\theta\}$ such that

(i) $Ax\ge Bx$, $x\in\partial\Omega(P)$;
(ii) $B^n u^*\ge u^*$ for some $n$,

then $i(A,\Omega(P),P)=0$.

Lemma 5 (see [34]). Let $A:P\to P$ be completely continuous and let $T$ be a completely continuous $u_0$-bounded linear operator. If $Ax\ge Tx$ for any $x\in P$ and $\lambda Ax=x$, then $\lambda\le 1/r(T)$, where $1/r(T)$ is the unique eigenvalue of $T$ corresponding to a positive eigenfunction.

Lemma 6 (see [34]). Let $M$ be a compact metric space and let $A$ and $B$ be disjoint, compact subsets of $M$. If there does not exist a connected subset $C$ of $M$ such that $C\cap A\ne\emptyset$ and $C\cap B\ne\emptyset$, then there exist disjoint compact subsets $M_A$ and $M_B$ such that $A\subset M_A$, $B\subset M_B$, and $M=M_A\cup M_B$.
## 3. Main Results
Let $E=C[0,1]$ with the norm $\|\varphi\|=\max_{t\in[0,1]}|\varphi(t)|$; then $E$ is a Banach space. Let $P=\{\varphi\in E \mid \varphi(t)\ge 0,\ t\in[0,1]\}$. Obviously, $P$ is a normal cone of $E$.

In this paper, we always assume that

(H1) $p(t)\in C^1[0,1]$, $p(t)>0$, $q(t)\in C[0,1]$, $q(t)\le 0$.

Lemma 7 (see [1]). Suppose that (H1) holds. Let $\Phi_1$ and $\Phi_2$ be the solutions of

$$(L\varphi)(t)=0,\quad 0<t<1,\qquad \varphi'(0)=0,\quad \varphi(1)=1, \tag{4}$$

$$(L\varphi)(t)=0,\quad 0<t<1,\qquad \varphi(0)=1,\quad \varphi(1)=0, \tag{5}$$

respectively. Then

(i) $\Phi_1$ is increasing on $[0,1]$ and $\Phi_1>0$, $t\in[0,1]$;
(ii) $\Phi_2$ is decreasing on $[0,1]$ and $\Phi_2>0$, $t\in[0,1)$.

Let

$$G(t,s)=\begin{cases}\dfrac{1}{\rho}\Phi_1(t)\Phi_2(s), & 0\le t\le s\le 1,\\[1ex] \dfrac{1}{\rho}\Phi_1(s)\Phi_2(t), & 0\le s\le t\le 1,\end{cases} \tag{6}$$

where $\rho=-\Phi_1(0)\Phi_2'(0)>0$ by [1]. Let

$$K(t,s)=G(t,s)+D^{-1}\Phi_1(t)\sum_{i=1}^{m-2}\beta_i G(\eta_i,s),\quad 0\le t,s\le 1, \tag{7}$$

where $D=1-\sum_{i=1}^{m-2}\beta_i\Phi_1(\eta_i)$.
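To make the construction of $G$ and $K$ concrete, here is a small numerical sketch in Python for the simplest admissible coefficients $p\equiv 1$, $q\equiv 0$ (so $\Phi_1\equiv 1$, $\Phi_2(t)=1-t$, and $\rho=1$) and hypothetical three-point data $m=3$, $\beta_1=1/2$, $\eta_1=1/2$; these choices are ours, for illustration only.

```python
# Numerical sketch of the kernel construction (6)-(7) for p(t) = 1,
# q(t) = 0, where Phi1(t) = 1, Phi2(t) = 1 - t, rho = 1, and the
# hypothetical multipoint data beta1 = 0.5, eta1 = 0.5 (so D = 0.5).
import numpy as np

beta, eta = [0.5], [0.5]

def G(t, s):
    """Green's function (6): (1/rho) * Phi1(min(t,s)) * Phi2(max(t,s))."""
    return 1.0 - np.maximum(t, s)

D = 1.0 - sum(b * 1.0 for b in beta)        # 1 - sum_i beta_i * Phi1(eta_i)

def K(t, s):
    """Kernel (7), which builds the multipoint condition into G."""
    return G(t, s) + (1.0 / D) * sum(b * G(e, s) for b, e in zip(beta, eta))

# Sanity check: u(t) = int_0^1 K(t,s) f(s) ds should solve -u'' = f with
# u'(0) = 0 and u(1) = beta1 * u(eta1).  For f = 1 the exact solution is
# u(t) = 7/8 - t^2/2.
t = np.linspace(0.0, 1.0, 201)
s = np.linspace(0.0, 1.0, 2001)
u = np.trapz(K(t[:, None], s[None, :]), s, axis=1)
print(np.allclose(u, 7 / 8 - t ** 2 / 2, atol=1e-3))   # True
```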
Define the operators $A$, $B$, and $F$ by

$$(A\varphi)(t)=\int_0^1 K(t,s)\,\tilde p(s)f(s,\varphi(s))\,ds, \tag{8}$$

$$(B\varphi)(t)=\int_0^1 K(t,s)\,\tilde p(s)\varphi(s)\,ds, \tag{9}$$

$$(F\varphi)(t)=f(t,\varphi(t)), \tag{10}$$

where $\tilde p(s)=\dfrac{1}{p(s)}\exp\left(\displaystyle\int_0^s\frac{p'(x)}{p(x)}\,dx\right)$ and $K(t,s)$ is defined by (7).

Obviously, $A=BF$. It is easy to see that the solutions of the boundary value problem (1) are equivalent to the solutions of the equation

$$\varphi=\lambda A\varphi. \tag{11}$$

Let $L=\overline{\{(\lambda,\varphi)\in(0,+\infty)\times P \mid \varphi=\lambda A\varphi,\ \varphi\ne\theta\}}$ be the closure of the nontrivial positive solution set of (11). Then $L$ is also the closure of the nontrivial positive solution set of the boundary value problem (1).
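As a numerical probe of this equivalence, the sketch below iterates $\varphi\mapsto\lambda A\varphi$ for the same toy kernel and the illustrative nonlinearity $f(t,u)=u/(1+u)$, which satisfies the growth assumptions (H3)–(H5) stated below with $\alpha=1$. Here $\lambda=3$ sits above the first eigenvalue $\lambda_1\approx 1.29$ estimated in the sketch after Lemma 8, so a nontrivial positive solution is expected; plain Picard iteration is our choice, not a method from the paper, which argues existence topologically.

```python
# Fixed-point probe of phi = lambda * A(phi) (eq. (11)) for the toy
# kernel (p = 1, q = 0, beta1 = 0.5, eta1 = 0.5) and f(t,u) = u/(1+u).
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)  # trapezoid
G = 1.0 - np.maximum(t[:, None], t[None, :])                 # kernel (6)
K = G + (1.0 - np.maximum(0.5, t))[None, :]                  # kernel (7)
M = K * w[None, :]                                           # discretized B

lam, phi = 3.0, np.ones(n)
for _ in range(1000):
    phi = lam * (M @ (phi / (1.0 + phi)))        # phi <- lambda * B F phi
residual = np.abs(phi - lam * (M @ (phi / (1.0 + phi)))).max()
print(phi.max() > 0.0, residual)                 # positive, residual ~ 0
```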
We give the following assumptions:

(H2) $\sum_{i=1}^{m-2}\beta_i\Phi_1(\eta_i)<1$, where $\Phi_1(t)$ is the solution of (4);
(H3) $f:[0,1]\times\mathbb{R}^+\to\mathbb{R}^+$ is continuous and $f(t,0)=0$ for all $t\in[0,1]$;
(H4) $\liminf_{u\to 0^+}(f(t,u)/u)\ge\alpha>0$ uniformly in $t\in[0,1]$;
(H5) $\limsup_{u\to+\infty}(f(t,u)/u)=0$ uniformly in $t\in[0,1]$.

Lemma 8 (see [1]).
Suppose that (H1)–(H3) are satisfied. Then, for the operator $B$ defined by (9), the spectral radius $r(B)\ne 0$ and $B$ has a positive eigenfunction corresponding to its first eigenvalue $\lambda_1=(r(B))^{-1}$.
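Lemma 8 suggests a simple numerical check: discretize $B$ on a grid and read off $\lambda_1=1/r(B)$ from the largest eigenvalue modulus of the resulting matrix. The sketch below does this for the toy data used above; for that problem one can verify analytically that $\lambda_1$ solves $\cos\sqrt{\lambda}=\tfrac12\cos(\sqrt{\lambda}/2)$, giving $\lambda_1\approx 1.29$.

```python
# Discretized estimate of lambda_1 = 1/r(B) for the toy problem
# (p = 1, so ptilde = 1): B is approximated by the matrix [K(t_i,t_j)w_j].
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)
G = 1.0 - np.maximum(t[:, None], t[None, :])     # Green's function (6)
K = G + (1.0 - np.maximum(0.5, t))[None, :]      # kernel (7)

r_B = np.abs(np.linalg.eigvals(K * w[None, :])).max()
print(1.0 / r_B)                                 # approximately 1.29
```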
Suppose that (H1)–(H4) are satisfied. Then
(i) the operator $A$ defined by (8) has at least one bifurcation point $\lambda^*\in[0,\lambda_1/\alpha]$ corresponding to positive solutions, and $A$ has no bifurcation points in $(\lambda_1/\alpha,+\infty)$ corresponding to positive solutions, where $\lambda_1$ is defined in Lemma 8;
(ii) $L$ possesses an unbounded connected component $C\subset(0,+\infty)\times P$ passing through $(\lambda^*,\theta)$, and $C\cap((\lambda_1/\alpha,+\infty)\times\{\theta\})=\emptyset$, where $\lambda_1$ is defined in Lemma 8.

Proof.
By (H1)–(H3), it is easy to see that $A:P\to P$ and $B:P\to P$ are completely continuous and that $A\theta=\theta$. By Lemma 8, we have $r(B)=1/\lambda_1$.
By (H4), for any $\epsilon>0$ with $\epsilon<\alpha$, there exists $r_\epsilon>0$ such that
$$\frac{f(t,u)}{u}\ge \alpha-\epsilon,\qquad \forall t\in[0,1],\ 0<u\le r_\epsilon, \tag{12}$$
that is (using $f(t,0)=0$),
$$f(t,u)\ge(\alpha-\epsilon)u,\qquad \forall t\in[0,1],\ 0\le u\le r_\epsilon. \tag{13}$$
Let $N_{r_\epsilon}(P)=\{\varphi\in P \mid \|\varphi\|<r_\epsilon\}$. From (8) and (13), for any $\varphi\in\overline{N}_{r_\epsilon}(P)$, we have
$$(A\varphi)(t)\ge(\alpha-\epsilon)\int_0^1 K(t,s)\tilde p(s)\varphi(s)\,ds=(\alpha-\epsilon)(B\varphi)(t)=(T\varphi)(t), \tag{14}$$
where $T=(\alpha-\epsilon)B$. Clearly, $T:P\to P$ is completely continuous and $r(T)=(\alpha-\epsilon)r(B)=(\alpha-\epsilon)/\lambda_1$.
By Lemma 7, (6), and (7), it follows that
$$\Phi_1(t)\,D^{-1}\sum_{i=1}^{m-2}\beta_i G(\eta_i,s)\le K(t,s)\le\left[\frac{1}{\rho}\Phi_2(s)+D^{-1}\sum_{i=1}^{m-2}\beta_i G(\eta_i,s)\right]\Phi_1(t),\qquad \forall t,s\in[0,1]. \tag{15}$$
For any $\varphi\in P$, by (14) and (15), we have
$$(T\varphi)(t)\ge(\alpha-\epsilon)\Phi_1(t)\int_0^1 D^{-1}\sum_{i=1}^{m-2}\beta_i G(\eta_i,s)\,\tilde p(s)\varphi(s)\,ds,\qquad
(T\varphi)(t)\le(\alpha-\epsilon)\Phi_1(t)\int_0^1\left[\frac{1}{\rho}\Phi_2(s)+D^{-1}\sum_{i=1}^{m-2}\beta_i G(\eta_i,s)\right]\tilde p(s)\varphi(s)\,ds. \tag{16}$$
Let $u_0=\Phi_1$. It follows from (16) that $T$ is a $u_0$-positive operator in the sense of Definition 3. By the Krein-Rutman theorem, there exists $\varphi^*\in P\setminus\{\theta\}$ such that
$$T\varphi^*=r(T)\varphi^*. \tag{17}$$
By (14) and (17), we have
$$\lambda A\varphi\ge\lambda T\varphi,\quad \forall\varphi\in\partial N_{r_\epsilon}(P),\qquad
\lambda T\varphi^*=\lambda r(T)\varphi^*\ge\varphi^*,\quad \forall\lambda\ge\frac{\lambda_1}{\alpha-\epsilon}. \tag{18}$$
So, by (18) and Lemma 4, we have
$$i(\lambda A, N_{r_\epsilon}(P),P)=0,\qquad \forall\lambda\ge\frac{\lambda_1}{\alpha-\epsilon}. \tag{19}$$
In the following, we prove that the operator $A$ has at least one bifurcation point in $[0,\lambda_1/(\alpha-\epsilon)]$ and has no bifurcation points in $(\lambda_1/(\alpha-\epsilon),+\infty)$.
We first show that, for any $\bar\epsilon\in(0,r_\epsilon)$, there must exist $\lambda_{\bar\epsilon}\in[0,\lambda_1/(\alpha-\epsilon)]$ and $\varphi_{\bar\epsilon}\in\partial N_{\bar\epsilon}(P)$ such that
$$\varphi_{\bar\epsilon}=\lambda_{\bar\epsilon}A\varphi_{\bar\epsilon}, \tag{20}$$
where $N_{\bar\epsilon}(P)=\{\varphi\in P\mid\|\varphi\|<\bar\epsilon\}$.
Without loss of generality, we may assume that the equation $(\lambda_1/(\alpha-\epsilon))A\varphi=\varphi$ has no solutions on $\partial N_{\bar\epsilon}(P)$. By (19), we get
$$i\left(\frac{\lambda_1}{\alpha-\epsilon}A,\, N_{\bar\epsilon}(P),\,P\right)=0. \tag{21}$$
Obviously,
$$i(0,\, N_{\bar\epsilon}(P),\,P)=1. \tag{22}$$
Set
$$H(t,\varphi)=\varphi-\frac{t\lambda_1}{\alpha-\epsilon}A\varphi,\qquad t\in[0,1]. \tag{23}$$
By (21), (22), and the homotopy invariance of the fixed point index, there exists $t^*\in[0,1]$ such that $H(t^*,\varphi)=\theta$ has a solution $\varphi^*_{\bar\epsilon}\in\partial N_{\bar\epsilon}(P)$; namely, $\varphi^*_{\bar\epsilon}=\lambda^*_{\bar\epsilon}A\varphi^*_{\bar\epsilon}$, where $\lambda^*_{\bar\epsilon}=\lambda_1 t^*/(\alpha-\epsilon)\in[0,\lambda_1/(\alpha-\epsilon)]$. This proves (20).

Choose $1/n<r_\epsilon$. Then there exist $\lambda_n\in[0,\lambda_1/(\alpha-\epsilon)]$ and $\varphi_n\in P$ with $\|\varphi_n\|=1/n$ such that $\varphi_n=\lambda_n A\varphi_n$; moreover, $\varphi_n\ne\theta$ and $\varphi_n\to\theta$ as $n\to\infty$. Since $[0,\lambda_1/(\alpha-\epsilon)]$ is compact, we may assume (passing to a subsequence) that $\lambda_n\to\lambda^*$ as $n\to\infty$. Then $\lambda^*\in[0,\lambda_1/(\alpha-\epsilon)]$ is a bifurcation point of the cone operator $A$.

By (14) and Lemma 5, for any $0<r<r_\epsilon$, the equation $\varphi=\lambda A\varphi$ has no solutions in $(\lambda_1/(\alpha-\epsilon),+\infty)\times\partial N_r(P)$, where $N_r(P)=\{\varphi\in P\mid\|\varphi\|<r\}$. Hence $A$ has no bifurcation points in $(\lambda_1/(\alpha-\epsilon),+\infty)$ corresponding to positive solutions, and $L\cap((\lambda_1/(\alpha-\epsilon),+\infty)\times\{\theta\})=\emptyset$.

Let $G=\{(\lambda,\theta)\mid\lambda\in[0,\lambda_1/(\alpha-\epsilon)],\ \lambda\text{ is a bifurcation point of the cone operator }A\}$. By the above proof, $G\ne\emptyset$. Suppose, for contradiction, that for every $(\lambda,\theta)\in G$ the connected component $C_\lambda$ of $L$ passing through $(\lambda,\theta)$ is bounded; then each $C_\lambda$ is a compact set.

Let $Q_\lambda$ be a bounded open neighborhood of $C_\lambda$ in $[0,+\infty)\times P$. If $\partial Q_\lambda\cap L\ne\emptyset$, then $Z=\overline{Q}_\lambda\cap L$ is a compact metric space, and $\partial Q_\lambda\cap L$ and $C_\lambda$ are disjoint, compact subsets of $Z$. Since $C_\lambda$ is a maximal connected subset of $L$, there exists no connected subset $\tilde C$ of $Z$ such that $\tilde C\cap C_\lambda\ne\emptyset$ and $\tilde C\cap(\partial Q_\lambda\cap L)\ne\emptyset$. By Lemma 6, there exist two compact subsets $Z_1,Z_2$ of $Z$ such that
$$Z=Z_1\cup Z_2,\quad Z_1\cap Z_2=\emptyset,\quad C_\lambda\subset Z_1,\quad \partial Q_\lambda\cap L\subset Z_2. \tag{24}$$

Obviously, the distance $d_0=d(Z_1,Z_2)>0$. Let $Q_\lambda'=\{u\in[0,+\infty)\times P\mid d(u,Z_1)<d_0/3\}$. Then $Q_\lambda'$ is an open neighborhood of $Z_1$. Let $Q_\lambda''=Q_\lambda\cap Q_\lambda'$. Then $\partial Q_\lambda''\cap L=\emptyset$. Let
$$Q_\lambda^*=\begin{cases}Q_\lambda, & \text{if }\partial Q_\lambda\cap L=\emptyset,\\ Q_\lambda'', & \text{if }\partial Q_\lambda\cap L\ne\emptyset.\end{cases} \tag{25}$$

Clearly, $Q_\lambda^*$ is a bounded open set of $[0,+\infty)\times P$ with $\partial Q_\lambda^*\cap L=\emptyset$. Hence $\{Q_\lambda^*\mid(\lambda,\theta)\in G\}$ is an open covering of $G$. Since $G$ is compact, there exist $(\lambda_i,\theta)\in G$ $(i=1,2,\dots,n)$ such that $\{Q_{\lambda_i}^*\mid i=1,2,\dots,n\}$ is also an open covering of $G$. Let $Q^*=\bigcup_{i=1}^{n}Q_{\lambda_i}^*$. Then $Q^*$ is a bounded open set of $[0,+\infty)\times P$ with $G\subset Q^*$ and $\partial Q^*\cap L=\emptyset$.

Take $\tilde\lambda>\lambda_1/(\alpha-\epsilon)$ sufficiently large that $\overline{Q}^*\subset[0,\tilde\lambda)\times P$. For $0<r<r_\epsilon$, let $U_r=[0,\tilde\lambda]\times N_r(P)$, where $N_r(P)=\{\varphi\in P\mid\|\varphi\|<r\}$. Evidently, $U_r$ is an open set of $[0,\tilde\lambda]\times P$ with $\partial U_r=[0,\tilde\lambda]\times\partial N_r(P)$, and $\lambda A\varphi=\varphi$ has no nontrivial solutions on $\partial U_r\setminus Q^*$ when $r$ is sufficiently small.

Let $X=Q^*\cup U_r$, and for $\mu\in[0,\tilde\lambda]$ write $X(\mu)=\{\varphi\in P\mid(\mu,\varphi)\in X\}$ for the slice of $X$ at $\mu$. Then $\partial X\subset\partial Q^*\cup\partial(U_r\setminus Q^*)$. Since $[0,\tilde\lambda]\times\{\theta\}\subset X$ and $\partial Q^*\cap L=\emptyset$, the equation $\varphi=\lambda A\varphi$ has no solutions on $\partial X$. By the general homotopy invariance of the fixed point index, we get
$$i(\tilde\lambda A,\, X(\tilde\lambda),\, P)=i(0\cdot A,\, X(0),\, P)=1. \tag{26}$$

Since $\overline{Q}^*\subset[0,\tilde\lambda)\times P$, we have $Q^*(\tilde\lambda)=(\{\tilde\lambda\}\times P)\cap Q^*=\emptyset$, so
$$X(\tilde\lambda)=(\{\tilde\lambda\}\times P)\cap U_r=U_r(\tilde\lambda)=N_r(P). \tag{27}$$
Therefore, by (19), we have
$$i(\tilde\lambda A,\, X(\tilde\lambda),\, P)=i(\tilde\lambda A,\, N_r(P),\, P)=0, \tag{28}$$
which contradicts (26).

Hence there exists a bifurcation point $\lambda$ such that $L$ possesses an unbounded connected component $C_\lambda\subset(0,+\infty)\times P$ passing through $(\lambda,\theta)$.

By the above proof and the arbitrariness of $\epsilon$, we obtain that (i) the cone operator $A$ has at least one bifurcation point $\lambda^*\in[0,\lambda_1/\alpha]$ (and no bifurcation points in $(\lambda_1/\alpha,+\infty)$) and (ii) $L$ possesses an unbounded connected component $C\subset(0,+\infty)\times P$ passing through $(\lambda^*,\theta)$ with $C\cap((\lambda_1/\alpha,+\infty)\times\{\theta\})=\emptyset$.

Theorem 10.
Suppose that (H1)–(H3) and (H5) are satisfied. Then the operator $A$ has no asymptotic bifurcation points in $[0,+\infty)$.

Proof.
For any $\lambda_0\in[0,+\infty)$, there exists a sufficiently small $\epsilon_0>0$ such that
$$\lambda_0\epsilon_0<\lambda_1, \tag{29}$$
where $\lambda_1$ is defined in Lemma 8.
By (H5), for the above $\epsilon_0>0$, there exists $R>0$ such that
$$\frac{f(t,u)}{u}\le\epsilon_0,\qquad \forall t\in[0,1],\ u\ge R, \tag{30}$$
that is,
$$f(t,u)\le\epsilon_0 u,\qquad \forall t\in[0,1],\ u\ge R. \tag{31}$$
Set $M=\max_{t\in[0,1],\,0\le u\le R}f(t,u)$; then
$$f(t,u)\le\epsilon_0 u+M,\qquad \forall t\in[0,1],\ u\ge 0. \tag{32}$$
Let $G(\lambda_0)=\{\varphi\in P\mid \varphi=\lambda A\varphi,\ 0\le\lambda\le\lambda_0\}$. For any $\bar\varphi\in G(\lambda_0)$, there exists $\bar\lambda\in[0,\lambda_0]$ such that $\bar\varphi=\bar\lambda A\bar\varphi$. By (32), we have
$$\bar\varphi(t)=\bar\lambda(A\bar\varphi)(t)=\bar\lambda\int_0^1 K(t,s)\tilde p(s)f(s,\bar\varphi(s))\,ds
\le \lambda_0\epsilon_0\int_0^1 K(t,s)\tilde p(s)\bar\varphi(s)\,ds+\lambda_0 M\int_0^1 K(t,s)\tilde p(s)\,ds
=(\bar T\bar\varphi)(t)+v_0, \tag{33}$$
where $\bar T=\lambda_0\epsilon_0 B$ and $v_0=\lambda_0 M\int_0^1 K(t,s)\tilde p(s)\,ds$. By (29), $r(\bar T)=\lambda_0\epsilon_0 r(B)=\lambda_0\epsilon_0/\lambda_1<1$.
Since $r(\bar T)<1$, the operator $(I-\bar T)^{-1}=\sum_{j\ge 0}\bar T^j$ exists and maps $P$ into $P$. From (33), $(I-\bar T)\bar\varphi\le v_0$, and hence $\bar\varphi\le(I-\bar T)^{-1}v_0$, so $\|\bar\varphi\|\le\|(I-\bar T)^{-1}v_0\|$. Therefore, $G(\lambda_0)$ is bounded. By the arbitrariness of $\lambda_0$, the operator $A$ has no asymptotic bifurcation points in $[0,+\infty)$.

By Theorems 9 and 10, we have the following theorem.

Theorem 11.
Suppose that (H1)–(H5) are satisfied. Then, for any $\lambda\in(\lambda_1/\alpha,+\infty)$, the boundary value problem (1) has at least one positive solution.

Furthermore, we may take $\alpha=+\infty$ in (H4); that is, we consider the following condition:

(H4′) $\liminf_{u\to 0^+} f(t,u)/u=+\infty$ uniformly in $t\in[0,1]$.

Theorem 12.
Suppose that (H1)–(H3), (H5), and (H4′) are satisfied. Then
(i) the operator $A$ has no asymptotic bifurcation points in $[0,+\infty)$;
(ii) $L$ possesses an unbounded connected component $C\subset(0,+\infty)\times P$ passing through $(0,\theta)$, and $C\cap((0,+\infty)\times\{\theta\})=\emptyset$.

Proof.
Since (H1)–(H3) and (H5) are satisfied, it follows from Theorem 10 that (i) holds.
By (H4′), for sufficiently large $M>0$, there exists $r_M>0$ such that
$$\frac{f(t,u)}{u}\ge M,\qquad \forall t\in[0,1],\ 0<u\le r_M, \tag{34}$$
that is,
$$f(t,u)\ge Mu,\qquad \forall t\in[0,1],\ 0\le u\le r_M. \tag{35}$$
Let $\tilde N(P)=\{\varphi\in P\mid\|\varphi\|<r_M\}$. From (8) and (35), for any $\varphi\in\tilde N(P)$, we have
$$(A\varphi)(t)\ge M\int_0^1K(t,s)\tilde p(s)\varphi(s)\,ds=M(B\varphi)(t)=(\tilde T\varphi)(t), \tag{36}$$
where $\tilde T=MB$. Clearly, $\tilde T:P\to P$ is completely continuous and $r(\tilde T)=Mr(B)=M/\lambda_1$.
Similar to the proof of Theorem 9, we obtain that the operator $A$ has a bifurcation point $\lambda^*\in[0,\lambda_1/M]$ corresponding to positive solutions and that $L$ possesses an unbounded connected component $C\subset(0,+\infty)\times P$ passing through $(\lambda^*,\theta)$, with $C\cap((\lambda_1/M,+\infty)\times\{\theta\})=\emptyset$. Since $M$ can be taken arbitrarily large, (i) and (ii) hold. The proof is complete.

It follows from Theorem 12 that we have the following theorem.

Theorem 13.
Suppose that (H1)–(H3), (H4′), and (H5) are satisfied. Then, for any $\lambda\in(0,+\infty)$, the boundary value problem (1) has at least one positive solution.
## 4. Applications
In this section, two examples are given to illustrate our main results.

Example 14.
Consider the following boundary value problem:
$$-\varphi''(t)=\lambda f(t,\varphi(t)),\quad 0\le t\le 1,\qquad \varphi'(0)=0,\quad \varphi(1)=\frac{1}{2}\varphi\!\left(\frac{1}{2}\right), \tag{37}$$
where
$$f(t,u)=\begin{cases}2u+u^2t, & t\in[0,1],\ u\in[0,10],\\ 100t+10u+10, & t\in[0,1],\ u\in[10,+\infty).\end{cases} \tag{38}$$
By simple calculations, $\lambda_1\approx 6.9497$. The nonlinear term $f$ satisfies the conditions of Theorem 11. Thus, for any $\lambda>3.4749$, the boundary value problem (37) has at least one positive solution by Theorem 11.
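A numerical cross-check of $\lambda_1=(r(B))^{-1}$ can be set up as follows. This sketch is ours: it assumes the closed forms $\Phi_1\equiv 1$, $\Phi_2(t)=1-t$, $\rho=1$ (valid here since $p\equiv 1$, $q\equiv 0$, so $\tilde p\equiv 1$), a midpoint-rule discretization, and an arbitrary grid size `n`.

```python
import numpy as np

# Sketch (ours): approximate lambda_1 = 1/r(B) for problem (37), where
# (B phi)(t) = int_0^1 K(t,s) phi(s) ds.  For p = 1, q = 0 we have
# Phi_1 = 1, Phi_2(t) = 1 - t, rho = 1, so G(t,s) = 1 - max(t,s);
# with beta_1 = 1/2 and eta_1 = 1/2, D = 1/2 and K(t,s) = G(t,s) + G(1/2, s).

n = 2000                                      # grid size (our choice)
t = (np.arange(n) + 0.5) / n                  # midpoint nodes on [0, 1]
G = 1.0 - np.maximum.outer(t, t)              # G(t_i, s_j) = 1 - max(t_i, s_j)
K = G + (1.0 - np.maximum(0.5, t))[None, :]   # add the row G(1/2, s_j)
B = K / n                                     # Nystrom discretization of B

phi = np.ones(n)                              # power iteration for r(B)
for _ in range(1000):
    w = B @ phi
    phi = w / np.linalg.norm(w)
r_B = phi @ (B @ phi)                         # Rayleigh quotient ~ r(B)
print(f"r(B) ~ {r_B:.6f}, lambda_1 ~ {1.0 / r_B:.6f}")
```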
Example 15.

Consider the following boundary value problem:
$$-\varphi''(t)=\lambda f(t,\varphi(t)),\quad 0\le t\le 1,\qquad \varphi'(0)=0,\quad \varphi(1)=\frac{1}{2}\varphi\!\left(\frac{1}{2}\right), \tag{39}$$
where
$$f(t,u)=\begin{cases}tu+u^{1/3}, & t\in[0,1],\ u\in[0,1],\\ t+u, & t\in[0,1],\ u\in[1,+\infty).\end{cases} \tag{40}$$
The nonlinear term $f$ satisfies the conditions of Theorem 13. Thus, for any $\lambda>0$, the boundary value problem (39) has at least one positive solution by Theorem 13.
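For instance, condition (H4′) can be checked directly for (40); this short verification is ours:

```latex
\liminf_{u\to 0^{+}}\frac{f(t,u)}{u}
=\liminf_{u\to 0^{+}}\frac{tu+u^{1/3}}{u}
=\liminf_{u\to 0^{+}}\bigl(t+u^{-2/3}\bigr)=+\infty
\quad\text{uniformly in } t\in[0,1].
```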
---
*Source: 1014250-2017-10-15.xml* | 1014250-2017-10-15_1014250-2017-10-15.md | 14,926 | Global Structure of Positive Solutions for Some Second-Order Multipoint Boundary Value Problems | Hongyu Li; Junting Zhang | Journal of Function Spaces
(2017) | Mathematical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2017/1014250 | 1014250-2017-10-15.xml | ---
## Abstract
We investigate in this paper the following second-order multipoint boundary value problem: $-(L\varphi)(t)=\lambda f(t,\varphi(t))$, $0\le t\le 1$, $\varphi'(0)=0$, $\varphi(1)=\sum_{i=1}^{m-2}\beta_i\varphi(\eta_i)$. Under some conditions, we obtain the global structure of the positive solution set of this boundary value problem and the behavior of positive solutions with respect to the parameter $\lambda$ by using the global bifurcation method. We also obtain an infinite interval of the parameter $\lambda$ on which positive solutions exist.
---
*Source: 1014250-2017-10-15.xml* | 2017 |
# Exponential Stabilization of Neutral-Type Neural Networks with Interval Nondifferentiable and Distributed Time-Varying Delays
**Authors:** W. Weera; P. Niamsup
**Journal:** Abstract and Applied Analysis
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101426
---
## Abstract
The problem of exponential stabilization of neutral-type neural networks
with various activation functions and interval nondifferentiable and distributed time-varying
delays is considered. The interval time-varying delay function is not required
to be differentiable. By employing a new and improved Lyapunov-Krasovskii functional
combined with Leibniz-Newton's formula, the stabilizability criteria are formulated in
terms of linear matrix inequalities. Numerical examples are given to illustrate the
effectiveness of the obtained results.
---
## Body
## 1. Introduction
In recent years, great attention has been paid to the stability analysis of neural networks due to their real-world applications in various systems such as signal processing, pattern recognition, content-addressable memory, and optimization [1–5]. In performing a periodicity or stability analysis of a neural network, the conditions to be imposed on the network are determined by the characteristics of the various activation functions and the network parameters. When neural networks are designed for problem solving, it is desirable that their activation functions are not too restrictive [3, 6, 7]. It is known that time delays cannot be avoided in the hardware implementation of neural networks due to the finite switching speed of amplifiers in electronic neural networks or the finite signal propagation time in biological networks. The existence of time delays may result in instability or oscillation of a neural network. Therefore, over the last decades many researchers have focused on the stability analysis of delayed neural networks with various activation functions under more general conditions [2, 8–11].

The stability criteria for systems with time delays can be classified into two categories: delay independent and delay dependent. Delay-independent criteria do not employ any information on the size of the delay, while delay-dependent criteria make use of such information at different levels. Delay-dependent stability conditions are generally less conservative than delay-independent ones, especially when the delay is small. In many situations, time delays are time-varying continuous functions which vary from 0 to a given upper bound. In addition, the range of time delays may vary in a range for which the lower bound is not restricted to be 0; in this case the delays are called interval time-varying delays. A typical example with interval time delay is the networked control system, which has been widely studied in the recent literature (see, e.g., [2, 11–14]). Therefore, it is of great significance to investigate the stability of systems with interval time-varying delay. Another important type of time delay is distributed delay; the stability analysis of neural networks with distributed delays has been studied extensively in recent years, see [2, 5, 8–11, 15–17].

It is known that exponential stability is more important than asymptotic stability since it provides information on the convergence rate of solutions to equilibrium points. It is particularly important for neural networks, where the exponential convergence rate is used to determine the speed of neural computations. Therefore, it is important to establish exponential stability and to estimate the exponential convergence rate for delayed neural networks. Consequently, many researchers have considered the exponential stability analysis problem for delayed neural networks, and several results on this topic have been reported in the literature [3, 9, 13, 14, 17].

In practical control designs, due to system uncertainty, failure modes, or systems with various modes of operation, the simultaneous stabilization problem often has to be taken into account. This problem is concerned with designing a single controller which can simultaneously stabilize a set of systems. Recently, the exponential stability and stabilization problems for time-delay systems have been studied by many researchers, see [8, 12, 18, 19].
Along these lines, many results on the stabilization problem of neural networks have been reported in the literature (see [2, 9, 15, 16, 20, 21]). In [21], robust stabilization criteria are provided via the design of a memoryless state feedback controller for time-delay dynamical neural networks with nonlinear perturbation; however, the time delay is required to be constant. In [16], global robust stabilizing control is presented for a neural network with time-varying delay whose lower bound is restricted to be 0. For neural networks with interval time-varying delay, global stability analysis with control input is considered in [2, 9, 12]. Nonetheless, in most studies the time-varying delays are required to be differentiable [2, 9, 15, 16]. Therefore, these methods have a conservatism which can be improved upon.

It is noted that these stability conditions are either difficult to test or conservative to some extent. It is natural and important that systems contain some information about the derivative of the past state in order to further describe and model the dynamics of such complex neural reactions. Such systems are called neutral-type systems: they contain both state delay and state-derivative delay, the so-called neutral delay. The phenomenon of neutral delay often appears in the study of heat exchangers, distributed networks containing lossless transmission lines, partial element equivalent circuits, and population ecology; see [1, 10, 11, 22, 23] and the references cited therein.

Based on the above discussion, we consider the problem of exponential stabilization of neutral-type neural networks with interval and distributed time-varying delays. Various activation functions are considered in the system, and the restriction on the differentiability of the interval time-varying delays is removed, which means that a fast interval time-varying delay is allowed. Based on the construction of improved Lyapunov-Krasovskii functionals combined with the Leibniz-Newton formula and some appropriate estimation of integral terms, new delay-dependent sufficient conditions for the exponential stabilization of the system are derived in terms of LMIs without introducing any free-weighting matrices. The new stability conditions are much less conservative and more general than some existing results. Numerical examples are given to illustrate the effectiveness and reduced conservativeness of our theoretical results. To the best of our knowledge, our results are among the first on the exponential stabilization of neutral-type neural networks with discrete, neutral, and distributed delays.

The rest of this paper is organized as follows. In Section 2, we give the notation, definitions, propositions, and lemmas which will be used in the proof of the main results. Delay-dependent sufficient conditions for the exponential stabilization of neutral-type neural networks with various activation functions, interval and distributed time-varying delays, and designs of memoryless feedback controls are presented in Section 3. Numerical examples illustrating the obtained results are given in Section 4. The paper ends with conclusions in Section 5 and the cited references.
## 2. Preliminaries
The following notation will be used in this paper: $\mathbb{R}^+$ denotes the set of all real nonnegative numbers; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space with the vector norm $\|\cdot\|$; $M^{n\times r}$ denotes the space of all matrices of dimension $n\times r$. $A^T$ denotes the transpose of the matrix $A$; $A$ is symmetric if $A=A^T$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A)=\max\{\operatorname{Re}\lambda : \lambda\in\lambda(A)\}$. $x_t:=\{x(t+s):s\in[-h,0]\}$, $\|x_t\|=\sup_{s\in[-h,0]}\|x(t+s)\|$; $C([0,t],\mathbb{R}^n)$ denotes the set of all $\mathbb{R}^n$-valued continuous functions on $[0,t]$; $L_2([0,t],\mathbb{R}^m)$ denotes the set of all $\mathbb{R}^m$-valued square-integrable functions on $[0,t]$.

A matrix $A$ is called semipositive definite ($A\ge 0$) if $\langle Ax,x\rangle\ge 0$ for all $x\in\mathbb{R}^n$; $A$ is positive definite ($A>0$) if $\langle Ax,x\rangle>0$ for all $x\ne 0$; $A>B$ means $A-B>0$. The symmetric term in a matrix is denoted by $*$.

Consider the following neural networks with mixed time-varying delays and control input:
$$\dot x(t)=-(A+\Delta A(t))x(t)+(W_0+\Delta W_0)f(x(t))+(W_1+\Delta W_1)g(x(t-h(t)))+(W_2+\Delta W_2)\int_{t-k(t)}^{t}h(x(s))\,ds+B_0\dot x(t-\eta(t))+Bu(t),$$
$$x(t)=\phi(t),\quad t\in[-d,0],\quad d=\max\{h_2,k,\eta\}, \tag{2.1}$$
where $x(t)\in\mathbb{R}^n$ is the state of the neural network, $u(\cdot)\in L_2([0,t],\mathbb{R}^m)$ is the control, $n$ is the number of neurons, and
$$f(x(t))=[f_1(x_1(t)),\dots,f_n(x_n(t))]^T,\quad g(x(t))=[g_1(x_1(t)),\dots,g_n(x_n(t))]^T,\quad h(x(t))=[h_1(x_1(t)),\dots,h_n(x_n(t))]^T \tag{2.2}$$
are the activation functions; $A=\operatorname{diag}(\bar a_1,\bar a_2,\dots,\bar a_n)$, $\bar a_i>0$, represents the self-feedback term, and $W_0$, $W_1$, $W_2$ denote the connection weights, the discretely delayed connection weights, and the distributively delayed connection weights, respectively. In this paper, we consider various activation functions and assume that the activation functions $f(\cdot)$, $g(\cdot)$, $h(\cdot)$ are Lipschitzian with Lipschitz constants $a_i,b_i,c_i>0$:
$$|f_i(\xi_1)-f_i(\xi_2)|\le a_i|\xi_1-\xi_2|,\qquad |g_i(\xi_1)-g_i(\xi_2)|\le b_i|\xi_1-\xi_2|,\qquad |h_i(\xi_1)-h_i(\xi_2)|\le c_i|\xi_1-\xi_2|,\qquad i=1,2,\dots,n,\ \forall\xi_1,\xi_2\in\mathbb{R}. \tag{2.3}$$
The time-varying delay functions $h(t)$, $k(t)$, $\eta(t)$ satisfy the conditions
$$0\le h_1\le h(t)\le h_2,\qquad 0\le k(t)\le k,\qquad 0\le\eta(t)\le\eta,\quad \dot\eta(t)\le\delta<1. \tag{2.4}$$
It is worth noting that the time delay is assumed to be a continuous function belonging to a given interval, which means that lower and upper bounds for the time-varying delay are available and the lower bound is not restricted to be zero. The initial functions $\phi(t)\in C^1([-d,0],\mathbb{R}^n)$ are equipped with the norm
$$\|\phi\|=\sup_{t\in[-d,0]}\sqrt{\|\phi(t)\|^2+\|\dot\phi(t)\|^2}. \tag{2.5}$$
The uncertainties satisfy the following conditions:
$$\Delta A(t)=E_aF_a(t)H_a,\quad \Delta W_0(t)=E_0F_0(t)H_0,\quad \Delta W_1(t)=E_1F_1(t)H_1,\quad \Delta W_2(t)=E_2F_2(t)H_2, \tag{2.6}$$
where $E_i,H_i$, $i=a,0,1,2$, are given constant matrices with appropriate dimensions and $F_i(t)$, $i=a,0,1,2$, are unknown real matrices with Lebesgue measurable elements satisfying
$$F_i^T(t)F_i(t)\le I,\quad i=a,0,1,2,\ \forall t\ge 0. \tag{2.7}$$

Definition 2.1.
The zero solution of system (2.1) is exponentially stabilizable if there exists a feedback control $u(t)=Kx(t)$, $K\in\mathbb{R}^{m\times n}$, such that the resulting closed-loop system
$$\dot x(t)=-[A-BK]x(t)+W_0f(x(t))+W_1g(x(t-h(t)))+W_2\int_{t-k(t)}^{t}h(x(s))\,ds+B_0\dot x(t-\eta(t)) \tag{2.8}$$
is $\alpha$-stable.

Definition 2.2.
Given $\alpha>0$, the zero solution of system (2.1) with $u(t)=0$ is $\alpha$-stable if there exists a positive number $N>0$ such that every solution $x(t,\phi)$ satisfies
$$\|x(t,\phi)\|\le Ne^{-\alpha t}\|\phi\|,\quad\forall t\ge 0. \tag{2.9}$$

We introduce the following well-known technical propositions and a lemma, which will be used in the proof of our results.

Proposition 2.3 (Cauchy inequality).
For any symmetric positive definite matrix $N\in M^{n\times n}$ and $x,y\in\mathbb{R}^n$, one has
$$\pm 2x^Ty\le x^TNx+y^TN^{-1}y. \tag{2.10}$$

Proposition 2.4 (see [24]).
For any symmetric positive definite matrix $M>0$, scalar $\gamma>0$, and vector function $\omega:[0,\gamma]\to\mathbb{R}^n$ such that the integrations concerned are well defined, the following inequality holds:
$$\left(\int_0^\gamma\omega(s)\,ds\right)^T M\left(\int_0^\gamma\omega(s)\,ds\right)\le\gamma\int_0^\gamma\omega^T(s)M\omega(s)\,ds. \tag{2.11}$$
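For the reader's convenience, we sketch why (2.11) holds (this remark is ours, not part of [24]): writing $\|u\|_M^2=u^TMu$, the triangle and Cauchy-Schwarz inequalities give

```latex
\Bigl\|\int_0^{\gamma}\omega(s)\,ds\Bigr\|_M^{2}
\le\Bigl(\int_0^{\gamma}\|\omega(s)\|_M\,ds\Bigr)^{2}
\le\Bigl(\int_0^{\gamma}1^{2}\,ds\Bigr)
   \Bigl(\int_0^{\gamma}\|\omega(s)\|_M^{2}\,ds\Bigr)
=\gamma\int_0^{\gamma}\omega^{T}(s)M\omega(s)\,ds.
```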
Proposition 2.5 ([24], Schur complement lemma).

Given constant symmetric matrices $X,Y,Z$ with appropriate dimensions satisfying $X=X^T$, $Y=Y^T>0$, we have $X+Z^TY^{-1}Z<0$ if and only if
$$\begin{pmatrix}X & Z^T\\ Z & -Y\end{pmatrix}<0\qquad\text{or}\qquad\begin{pmatrix}-Y & Z\\ Z^T & X\end{pmatrix}<0. \tag{2.12}$$
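A scalar sanity check of Proposition 2.5 (our illustration): with $X=-1$, $Z=1$, $Y=2$,

```latex
X+Z^{T}Y^{-1}Z=-1+\tfrac{1}{2}=-\tfrac{1}{2}<0,
\qquad
\begin{pmatrix}X & Z^{T}\\ Z & -Y\end{pmatrix}
=\begin{pmatrix}-1 & 1\\ 1 & -2\end{pmatrix}<0
\quad(\mathrm{trace}=-3<0,\ \det=1>0).
```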
Lemma 2.6 (see [25]).

Given matrices $Q=Q^T$, $H$, $E$, and $R=R^T>0$ with appropriate dimensions, we have
$$Q+HFE+E^TF^TH^T<0 \tag{2.13}$$
for all $F$ satisfying $F^TF\le R$ if and only if there exists $\epsilon>0$ such that
$$Q+\epsilon HH^T+\epsilon^{-1}E^TRE<0. \tag{2.14}$$
## 3. Main Results
### 3.1. Exponential Stabilization for Nominal Interval Time-Varying Delay Systems
The nominal system is given by
$$\dot x(t)=-Ax(t)+W_0f(x(t))+W_1g(x(t-h(t)))+W_2\int_{t-k(t)}^{t}h(x(s))\,ds+B_0\dot x(t-\eta(t))+Bu(t),\qquad x(t)=\phi(t),\quad t\in[-d,0]. \tag{3.1}$$
First, we present delay-dependent exponential stabilizability conditions for the nominal interval time-varying delay system (3.1), that is, system (2.1) with $\Delta A(t)=\Delta W_0(t)=\Delta W_1(t)=\Delta W_2(t)=0$. Let us set
$$\lambda_1=\lambda_{\min}(P^{-1}),\qquad \lambda_2=\lambda_{\max}(P^{-1})+2h_2\lambda_{\max}(P^{-1}QP^{-1})+2h_2^2\lambda_{\max}(P^{-1}RP^{-1})+\eta\lambda_{\max}(P^{-1}Q_1P^{-1})+2\lambda_{\max}(HD_2^{-1}H)+(h_2-h_1)^2\lambda_{\max}(P^{-1}UP^{-1}). \tag{3.2}$$

Assumption 3.1.
All the eigenvalues of the matrix $B_0$ are inside the unit circle.

Theorem 3.2.
Given $\alpha>0$, the system (3.1) is $\alpha$-exponentially stabilizable if there exist symmetric positive definite matrices $P,Q,R,U,Q_1$ and three diagonal matrices $D_i$, $i=0,1,2$, such that the following LMIs hold:
$$\mathcal{M}_1=M-\begin{bmatrix}0&0&0&-I&0&I\end{bmatrix}^T e^{-2\alpha h_2}U\begin{bmatrix}0&0&0&-I&0&I\end{bmatrix}<0, \tag{3.3}$$
$$\mathcal{M}_2=M-\begin{bmatrix}0&0&I&0&0&-I\end{bmatrix}^T e^{-2\alpha h_2}U\begin{bmatrix}0&0&I&0&0&-I\end{bmatrix}<0, \tag{3.4}$$
$$\mathcal{M}_3=\begin{bmatrix}-0.1BB^T-0.1(e^{-2\alpha h_1}+e^{-2\alpha h_2})R & 2kPH & 2PF\\ * & -2kD_2 & 0\\ * & * & -2D_0\end{bmatrix}<0, \tag{3.5}$$
$$\mathcal{M}_4=\begin{bmatrix}-0.1e^{-2\alpha h_2}U & 2PG\\ * & -2D_1\end{bmatrix}<0, \tag{3.6}$$
$$M=\begin{bmatrix}M_{11}&M_{12}&M_{13}&M_{14}&M_{15}&0\\ *&M_{22}&0&0&M_{25}&0\\ *&*&M_{33}&0&0&M_{36}\\ *&*&*&M_{44}&0&M_{46}\\ *&*&*&*&M_{55}&0\\ *&*&*&*&*&M_{66}\end{bmatrix}, \tag{3.7}$$
where
$$\begin{aligned}
M_{11}&=[-A+\alpha I]P+P[-A+\alpha I]^T-0.9BB^T+2Q+W_0D_0W_0^T+W_1D_1W_1^T+ke^{2\alpha k}W_2D_2W_2^T-0.9e^{-2\alpha h_1}R-0.9e^{-2\alpha h_2}R,\\
M_{12}&=B_0P,\quad M_{13}=e^{-2\alpha h_1}R,\quad M_{14}=e^{-2\alpha h_2}R,\quad M_{15}=-PA^T-0.5BB^T,\\
M_{22}&=-(1-\delta)e^{-2\alpha\eta}Q_1,\quad M_{25}=PB_0^T,\quad M_{33}=-e^{-2\alpha h_1}Q-e^{-2\alpha h_1}R-e^{-2\alpha h_2}U,\quad M_{36}=e^{-2\alpha h_2}U,\\
M_{44}&=-e^{-2\alpha h_2}Q-e^{-2\alpha h_2}R-e^{-2\alpha h_2}U,\quad M_{46}=e^{-2\alpha h_2}U,\\
M_{55}&=h_1^2R+h_2^2R+(h_2-h_1)^2U+Q_1-2P+W_0D_0W_0^T+W_1D_1W_1^T+ke^{2\alpha k}W_2D_2W_2^T,\\
M_{66}&=-1.9e^{-2\alpha h_2}U.
\end{aligned} \tag{3.8}$$
Moreover, the memoryless feedback control is
$$u(t)=-0.5B^TP^{-1}x(t),\quad t\ge 0, \tag{3.9}$$
and the solution x(t,ϕ) of the system satisfies
$$\|x(t,\phi)\|\le\sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|,\quad\forall t\ge 0. \tag{3.10}$$

Proof.

Let $Y=P^{-1}$ and $y(t)=Yx(t)$. Using the feedback control (3.9), we consider the following Lyapunov-Krasovskii functional:
$$V(t,x_t)=\sum_{i=1}^{8}V_i, \tag{3.11}$$
where
$$\begin{aligned}
V_1&=x^T(t)Yx(t), & V_2&=\int_{t-h_1}^{t}e^{2\alpha(s-t)}x^T(s)YQYx(s)\,ds,\\
V_3&=\int_{t-h_2}^{t}e^{2\alpha(s-t)}x^T(s)YQYx(s)\,ds, & V_4&=h_1\int_{-h_1}^{0}\!\!\int_{t+s}^{t}e^{2\alpha(\tau-t)}\dot x^T(\tau)YRY\dot x(\tau)\,d\tau\,ds,\\
V_5&=h_2\int_{-h_2}^{0}\!\!\int_{t+s}^{t}e^{2\alpha(\tau-t)}\dot x^T(\tau)YRY\dot x(\tau)\,d\tau\,ds, & V_6&=(h_2-h_1)\int_{-h_2}^{-h_1}\!\!\int_{t+s}^{t}e^{2\alpha(\tau-t)}\dot x^T(\tau)YUY\dot x(\tau)\,d\tau\,ds,\\
V_7&=\int_{t-\eta(t)}^{t}e^{2\alpha(s-t)}\dot x^T(s)YQ_1Y\dot x(s)\,ds, & V_8&=2\int_{-k}^{0}\!\!\int_{t+s}^{t}e^{2\alpha(\tau-t)}h^T(x(\tau))D_2^{-1}h(x(\tau))\,d\tau\,ds.
\end{aligned} \tag{3.12}$$
It is easy to check that
$$\lambda_1\|x(t)\|^2\le V(t,x_t)\le\lambda_2\|x_t\|^2,\quad\forall t\ge 0. \tag{3.13}$$
Taking the derivative of $V(t,x_t)$ along the solutions of system (3.1), we have
(3.14)V̇1=2xT(t)Yẋ(t),=2yT(t)[-Ax(t)+W0f(x(t))+W1g(x(t-h(t)))+W2∫t-k(t)th(x(s))ds+B0ẋ(t-η(t))-0.5BBTP-1x(t)∫t-k(t)t]=2yT(t)[-APy(t)+W0f(x(t))+W1g(x(t-h(t)))+W2∫t-k(t)th(x(s))ds+B0Pẏ(t-η(t))-0.5BBTy(t)∫t-k(t)t](3.15)=yT(t)[-AP-PAT]y(t)+2yT(t)W0f(x(t))+2yT(t)W1g(x(t-h(t)))+2yT(t)W2∫t-k(t)th(x(s))ds+2yT(t)B0Pẏ(t-η(t))-yT(t)BBTy(t)+2αyT(t)Py(t)-2αV1,V̇2=yT(t)Qy(t)-e-2αh1yT(t-h1)Qy(t-h1)-2αV2,V̇3=yT(t)Qy(t)-e-2αh2yT(t-h2)Qy(t-h2)-2αV3,V̇4≤h12ẏT(t)Rẏ(t)-h1e-2αh1∫t-h1tẏT(s)Rẏ(s)ds-2αV4,V̇5≤h22ẏT(t)Rẏ(t)-h2e-2αh2∫t-h2tẏT(s)Rẏ(s)ds-2αV5,V̇6≤(h2-h1)2ẏT(t)Uẏ(t)-(h2-h1)e-2αh2∫t-h2t-h1ẏT(s)Uẏ(s)ds-2αV6,V̇7≤ẏT(t)Q1ẏ(t)-(1-δ)e-2αηẏT(t-η(t))Q1ẏ(t-η(t))-2αV7,V̇8≤2khT(x(t))D2-1h(x(t))-2e-2αk∫t-kthT(x(s))D2-1h(x(s))ds-2αV8.
Using the condition (2.3) and since the matrices Di-1>0,i=0,1,2 are diagonal, we have
(3.16)khT(x(t))D2-1h(x(t))≤kxT(t)HD2-1Hx(t)=kyT(t)PHD2-1HPy(t),fT(x(t))D0-1f(x(t))≤xT(t)FD0-1Fx(t)=yT(t)PFD0-1FPy(t),gT(x(t-h(t)))D1-1g(x(t-h(t)))≤xT(t-h(t))GD1-1Gx(t-h(t)),=yT(t-h(t))PGD1-1GPy(t-h(t)),
and using (2.3) and Proposition 2.3, we obtain the following estimates:
(3.17)2yT(t)W0f(x(t))≤yT(t)W0D0W0Ty(t)+fT(x(t))D0-1f(x(t))≤yT(t)W0D0W0Ty(t)+xT(t)FD0-1Fx(t)≤yT(t)W0D0W0Ty(t)+yT(t)PFD0-1FPy(t),2yT(t)W1g(x(t-h(t)))≤yT(t)W1D1W1Ty(t)+gT(x(t-h(t)))D1-1g(x(t-h(t)))≤yT(t)W1D1W1Ty(t)+xT(t-h(t))GD1-1Gx(t-h(t))≤yT(t)W1D1W1Ty(t)+yT(t-h(t))PGD1-1GPy(t-h(t)),2yT(t)W2∫t-k(t)th(x(s))ds≤ke2αkyT(t)W2D2W2Ty(t)+k-1e-2αk(∫t-k(t)th(x(s)ds))TD2-1(∫t-k(t)th(x(s))ds)≤ke2αkyT(t)W2D2W2Ty(t)+e-2αk∫t-kthT(x(s))D2-1h(x(s))ds.
Applying Proposition 2.4 and the Leibniz-Newton formula, we have
(3.18)-h1∫t-h1tẏT(s)Rẏ(s)ds≤-[∫t-h1tẏ(s)]TR[∫t-h1tẏ(s)]≤-[y(t)-y(t-h1)]TR[y(t)-y(t-h1)]=-yT(t)Ry(t)+2yT(t)Ry(t-h1)-yT(t-h1)Ry(t-h1),-h2∫t-h2tẏT(s)Rẏ(s)ds≤-[∫t-h2tẏ(s)]TR[∫t-h2tẏ(s)]≤-[y(t)-y(t-h2)]TR[y(t)-y(t-h2)]=-yT(t)Ry(t)+2yT(t)Ry(t-h2)-yT(t-h2)Ry(t-h2).
Note that
(3.19)-(h2-h1)∫t-h2t-h1ẏT(s)Uẏ(s)ds=-(h2-h1)∫t-h2t-h(t)ẏT(s)Uẏ(s)ds-(h2-h1)∫t-h(t)t-h1ẏT(s)Uẏ(s)ds=-(h2-h(t))∫t-h2t-h(t)ẏT(s)Uẏ(s)ds-(h(t)-h1)∫t-h2t-h(t)ẏT(s)Uẏ(s)ds-(h(t)-h1)∫t-h(t)t-h1ẏT(s)Uẏ(s)ds-(h2-h(t))∫t-h(t)t-h1ẏT(s)Uẏ(s)ds.
Using Proposition 2.4 gives
(3.20)-(h2-h(t))∫t-h2t-h(t)ẏT(s)Uẏ(s)ds≤-[∫t-h2t-h(t)ẏ(s)ds]TU[∫t-h2t-h(t)ẏ(s)ds]≤-[y(t-h(t))-y(t-h2)]TU[y(t-h(t))-y(t-h2)],(3.21)-(h(t)-h1)∫t-h(t)t-h1ẏT(s)Uẏ(s)ds≤-[∫t-h(t)t-h1ẏ(s)ds]TU[∫t-h(t)t-h1ẏ(s)ds]≤-[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))].
Let $\beta=(h_2-h(t))/(h_2-h_1)\in[0,1]$. Then
(3.22)-(h2-h(t))∫t-h(t)t-h1ẏT(s)Uẏ(s)ds=-β∫t-h(t)t-h1(h2-h1)ẏT(s)Uẏ(s)ds≤-β∫t-h(t)t-h1(h(t)-h1)ẏT(s)Uẏ(s)ds≤-β[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))],(3.23)-(h(t)-h1)∫t-h2t-h(t)ẏT(s)Uẏ(s)ds=-(1-β)∫t-h2t-h(t)(h2-h1)ẏT(s)Uẏ(s)ds≤-(1-β)∫t-h2t-h(t)(h2-h(t))ẏT(s)Uẏ(s)ds≤-(1-β)[y(t-h(t))-y(t-h2)]T×U[y(t-h(t))-y(t-h2)].
Therefore from (3.20)–(3.23), we obtain
(3.24)-(h2-h1)∫t-h2t-h1ẏT(s)Uẏ(s)ds≤-[y(t-h(t))-y(t-h2)]TU[y(t-h(t))-y(t-h2)]-[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))]-β[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))]-(1-β)[y(t-h(t))-y(t-h2)]T×U[y(t-h(t))-y(t-h2)].
By using the following identity relation:
$$-P\dot y(t)-APy(t)+W_0f(x(t))+W_1g(x(t-h(t)))+W_2\int_{t-k(t)}^{t}h(x(s))\,ds+B_0P\dot y(t-\eta(t))-0.5BB^Ty(t)=0, \tag{3.25}$$
we have
$$-2\dot y^T(t)P\dot y(t)-2\dot y^T(t)APy(t)+2\dot y^T(t)W_0f(x(t))+2\dot y^T(t)W_1g(x(t-h(t)))+2\dot y^T(t)W_2\int_{t-k(t)}^{t}h(x(s))\,ds+2\dot y^T(t)B_0P\dot y(t-\eta(t))-\dot y^T(t)BB^Ty(t)=0. \tag{3.26}$$
By using Propositions 2.3 and 2.4, we have
(3.27)2ẏT(t)W0f(x(t))≤ẏT(t)W0D0W0Tẏ(t)+fT(x(t))D0-1f(x(t))≤ẏT(t)W0D0W0Tẏ(t)+yT(t)PFD0-1FPy(t),(3.28)2ẏT(t)W1g(x(t-h(t)))≤ẏT(t)W1D1W1Tẏ(t)+gT(x(t-h(t)))D1-1g(x(t-h(t)))≤ẏT(t)W1D1W1Tẏ(t)+yT(t-h(t))PGD1-1GPy(t-h(t)),(3.29)2ẏT(t)W2∫t-k(t)th(x(s))ds≤ke2αkẏT(t)W2D2W2Tẏ(t)+e-2αk∫t-kthT(x(s))D2-1h(x(s))ds.
From (3.15)–(3.29), we obtain
(3.30)V̇(t,xt)+2αV(t,xt)≤yT(t)[-AP-PAT+2αP-BBT+2Q+2kPHD2-1HP+W0D0W0T+W1D1W1T+ke2αkW2D2W2T-e-2αh1R-e-2αh2R+PFD1-1FP]y(t)+2yT(t)B0Pẏ(t-η(t))+yT(t-h1)[-e-2αh1Q-e-2αh1R-e-2αh2U]y(t-h1),+(t-h2)[-e-2αh2Q-e-2αh2R-e-2αh2U]y(t-h2),+ẏT(t)[h12R+h22R+(h2-h1)2U+Q1-2P+W0D0W0T+W1D1W1T+ke2αkW2D2W2T]ẏ(t)-(1-δ)e-2αηẏT(t-η(t))Q1ẏ(t-η(t))+yT(t-h(t))[2PGD1-1GP-2e-2αh2U]y(t-h(t))+2e-2αh1yT(t)Ry(t-h1)+2e-2αh2yT(t)Ry(t-h2)+2e-2αh2yT(t-h(t))Uy(t-h2)+2e-2αh2yT(t-h(t))Uy(t-h1)-2ẏT(t)APy(t)+2ẏT(t)B0Pẏ(t-η(t))-ẏT(t)BBTy(t)-e-2αh2β[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))]-e-2αh2(1-β)[y(t-h(t))-y(t-h2)]TU[y(t-h(t))-y(t-h2)]=ξT(t)Ωξ(t)-e-2αh2β[y(t-h1)-y(t-h(t))]TU×[y(t-h1)-y(t-h(t))]-e-2αh2(1-β)[y(t-h(t))-y(t-h2)]T×U[y(t-h(t))-y(t-h2)]=ξT(t)[(1-β)M1+βM2]ξ(t)+yT(t)M3y(t)+yT(t-h(t))M4y(t-h(t)),
where
(3.31)M3=-0.1BBT-0.1(e-2αh1+e-2αh2)R+2kPHD2-1HP+2PFD0-1FP,M4=-0.1e-2αh2U+2PGD1-1GP,ζ(t)=[y(t),ẏ(t-η(t)),y(t-h1),y(t-h2),ẏ(t),y(t-h(t))].
Since $0\le\beta\le 1$, $(1-\beta)\mathcal{M}_1+\beta\mathcal{M}_2$ is a convex combination of $\mathcal{M}_1$ and $\mathcal{M}_2$; therefore, $(1-\beta)\mathcal{M}_1+\beta\mathcal{M}_2<0$ holds for all such $\beta$ if and only if $\mathcal{M}_1<0$ and $\mathcal{M}_2<0$. Applying the Schur complement lemma (Proposition 2.5), the inequalities $M_3<0$ and $M_4<0$ are equivalent to $\mathcal{M}_3<0$ and $\mathcal{M}_4<0$, respectively. Thus, from (3.3)–(3.6) and (3.30), we obtain
$$\dot V(t,x_t)\le -2\alpha V(t,x_t),\quad\forall t\ge 0. \tag{3.32}$$
Integrating both sides of (3.32) from 0 to t, we obtain
$$V(t,x_t)\le V(\phi)e^{-2\alpha t},\quad\forall t\ge 0. \tag{3.33}$$
Furthermore, taking condition (3.13) into account, we have
$$\lambda_1\|x(t,\phi)\|^2\le V(x_t)\le V(\phi)e^{-2\alpha t}\le\lambda_2 e^{-2\alpha t}\|\phi\|^2, \tag{3.34}$$
then
$$\|x(t,\phi)\|\le\sqrt{\frac{\lambda_2}{\lambda_1}}\,e^{-\alpha t}\|\phi\|,\quad t\ge 0. \tag{3.35}$$
Therefore, the nominal system (3.1) is $\alpha$-exponentially stabilizable. The proof is complete.
### 3.2. Exponential Stabilization for Interval Time-Varying Delay Systems
Based on Theorem 3.2, we derive robust $\alpha$-exponential stabilizability conditions for the uncertain control system with interval time-varying delay (2.1) in terms of LMIs.

Theorem 3.3.
Given $\alpha>0$, the system (2.1) is $\alpha$-exponentially stabilizable if there exist symmetric positive definite matrices $P,Q,R,U,Q_1$, three diagonal matrices $D_i$, $i=0,1,2$, and scalars $\epsilon_i>0$, $i=1,2,\dots,6$, such that the following LMIs hold:
$$\mathcal{W}_1=W-\begin{bmatrix}0&0&0&-I&0&I\end{bmatrix}^T e^{-2\alpha h_2}U\begin{bmatrix}0&0&0&-I&0&I\end{bmatrix}<0, \tag{3.36}$$
$$\mathcal{W}_2=W-\begin{bmatrix}0&0&I&0&0&-I\end{bmatrix}^T e^{-2\alpha h_2}U\begin{bmatrix}0&0&I&0&0&-I\end{bmatrix}<0, \tag{3.37}$$
$$\mathcal{W}_3=\begin{bmatrix}\Delta & 4kPH & 2PF & PH_a^T & PH_a^T & PFH_0^T & PFH_0^T\\ * & -4kD_2 & 0 & 0 & 0 & 0 & 0\\ * & * & -2D_0 & 0 & 0 & 0 & 0\\ * & * & * & -\epsilon_1 I & 0 & 0 & 0\\ * & * & * & * & -\epsilon_4 I & 0 & 0\\ * & * & * & * & * & -\epsilon_2 I & 0\\ * & * & * & * & * & * & -\epsilon_5 I\end{bmatrix}<0, \tag{3.38}$$
$$\mathcal{W}_4=\begin{bmatrix}-0.1e^{-2\alpha h_2}U & 2PG & PGH_1^T & PGH_1^T\\ * & -2D_1 & 0 & 0\\ * & * & -\epsilon_3 I & 0\\ * & * & * & -\epsilon_6 I\end{bmatrix}<0, \tag{3.39}$$
$$W=\begin{bmatrix}W_{11}&W_{12}&W_{13}&W_{14}&W_{15}&0\\ *&W_{22}&0&0&W_{25}&0\\ *&*&W_{33}&0&0&W_{36}\\ *&*&*&W_{44}&0&W_{46}\\ *&*&*&*&W_{55}&0\\ *&*&*&*&*&W_{66}\end{bmatrix},$$
where
$$\begin{aligned}
\Delta&=-0.1BB^T-0.1(e^{-2\alpha h_1}+e^{-2\alpha h_2})R,\\
W_{11}&=[-A+\alpha I]P+P[-A+\alpha I]^T-0.9BB^T+2Q+W_0D_0W_0^T+W_1D_1W_1^T+ke^{2\alpha k}W_2D_2W_2^T\\
&\quad-0.9e^{-2\alpha h_1}R-0.9e^{-2\alpha h_2}R+\epsilon_1E_a^TE_a+\epsilon_3E_1^TE_1+\epsilon_2E_0^TE_0+ke^{2\alpha k}E_2^TH_2^TD_2H_2E_2,\\
W_{12}&=B_0P,\quad W_{13}=e^{-2\alpha h_1}R,\quad W_{14}=e^{-2\alpha h_2}R,\quad W_{15}=-PA^T-0.5BB^T,\\
W_{22}&=-(1-\delta)e^{-2\alpha\eta}Q_1,\quad W_{25}=PB_0^T,\quad W_{33}=-e^{-2\alpha h_1}Q-e^{-2\alpha h_1}R-e^{-2\alpha h_2}U,\quad W_{36}=e^{-2\alpha h_2}U,\\
W_{44}&=-e^{-2\alpha h_2}Q-e^{-2\alpha h_2}R-e^{-2\alpha h_2}U,\quad W_{46}=e^{-2\alpha h_2}U,\\
W_{55}&=h_1^2R+h_2^2R+(h_2-h_1)^2U+Q_1-2P+W_0D_0W_0^T+W_1D_1W_1^T+ke^{2\alpha k}W_2D_2W_2^T\\
&\quad+\epsilon_4E_a^TE_a+\epsilon_5E_0^TE_0+\epsilon_6E_1^TE_1+ke^{2\alpha k}E_2^TH_2^TD_2H_2E_2,\\
W_{66}&=-1.9e^{-2\alpha h_2}U.
\end{aligned} \tag{3.40}$$

Proof.
Choose the Lyapunov-Krasovskii functional as in (3.11), but with $V_8$ replaced by $V_8=4\int_{-k}^{0}\int_{t+s}^{t}e^{2\alpha(\tau-t)}h^T(x(\tau))D_2^{-1}h(x(\tau))\,d\tau\,ds$. The theorem can then be proved by an argument similar to that in the proof of Theorem 3.2, replacing $A$, $W_0$, $W_1$, and $W_2$ by $A+E_aF_a(t)H_a$, $W_0+E_0F_0(t)H_0$, $W_1+E_1F_1(t)H_1$, and $W_2+E_2F_2(t)H_2$, respectively. We have the following:
(3.41)V̇(t,xt)+2αV(t,xt)≤yT(t)[(-A+EaFa(t)Ha)P+P(-A+EaFa(t)Ha)T-BBT+2αP+2Q-e-2αh1R-e-2αh2RyT]y(t)+2yT(t)(W0+E0F0(t)H0)f(x(t))+2yT(t)(W1+E1F1(t)H1)g(x(t-h(t)))+2yT(t)(W2+E2F2(t)H2)×∫t-k(t)th(x(s))ds+2yT(t)B0Pẏ(t-η(t))-e-2αh1yT(t-h1)Qy(t-h1)-e-2αh2yT(t-h2)Qy(t-h2)+h12ẏT(t)Rẏ(t)+h22ẏT(t)Rẏ(t)+(h2-h1)2ẏT(t)Uẏ(t)+e-2αh12yT(t)Ry(t-h1)-e-2αh1yT(t-h1)Ry(t-h1)+e-2αh22yT(t)Ry(t-h2)-e-2αh2yT(t-h2)Ry(t-h2)-e-2αh2[y(t-h(t))-y(t-h2)]TU[y(t-h(t)-y(t-h2))]T-e-2αh2[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))]T-βe-2αh2[y(t-h1)-y(t-h(t))]TU[y(t-h1)-y(t-h(t))]T-e-2αh2(1-β)[y(t-h(t))-y(t-h2)]TU[y(t-h(t))-y(t-h2)]T+ẏT(t)Q1ẏ(t)-(1-δ)e-2αηẏT(t-η(t))Q1ẏ(t-η(t))+4kyT(t)PHD2-1HPy(t)-4e-2αk∫t-kthT(x(s))D2-1h(x(s))ds-2ẏT(t)Pẏ(t)-2ẏT(t)(A+EaFa(t)Ha)Py(t)+2ẏT(t)(W0+E0F0(t)H0)f(x(t))+2ẏT(t)(W1+E1F1(t)H1)g(x(t-h(t)))+2ẏT(t)(W2+E2F2(t)H2)∫t-k(t)th(x(s))ds+2ẏT(t)B0Pẏ(t-η(t))-ẏT(t)BBTy(t)=ξT(t)[(1-β)M1+βM2]ξ(t)+yT(t)M3y(t)+yT(t-h(t))M4y(t-h(t)).
Applying Proposition 2.3 and Lemma 2.6, the following estimations hold:
(3.42)yT(t)[(-A+EaFa(t)H)a)P+P(-AT+HaTFaT(t)EaT)]y(t)≤yT(t)[-PAT-AP]y(t)+ϵ1yT(t)EaTEay(t)+ϵ1-1yT(t)PHaTHaPy(t),2yT(t)[W0+E0F0(t)H0]f(x(t))=2yT(t)W0f(x(t))+2yT(t)E0F0(t)H0f(x(t))≤yT(t)W0D0W0Ty(t)+yT(t)PFD0-1FPy(t)+ϵ2yT(t)E0TE0y(t)+ϵ2-1yT(t)PFH0TH0FPy(t),2yT(t)(W1+E1F1(t)H1)g(x(t-h(t)))=2yT(t)W1g(x(t-h(t)))+2yT(t)E1F1(t)H1g(x(t-h(t)))≤yT(t)W1D1W1Ty(t)+yT(t-h(t))PGD1-1GPy(t-h(t))+ϵ3yT(t)E1TE1y(t)+ϵ3-1yT(t-h(t))PGH1TH1GPyT(t-h(t)),2yT(t)(W2+E2F2(t)H2)∫t-k(t)th(x(s))ds=2yT(t)W2∫t-k(t)th(x(s))ds+2yT(t)E2F2(t)H2∫t-k(t)th(x(s))ds≤ke2αkyT(t)W2D2W2Ty(t)+e-2αk∫t-kthT(x(s))D2-1h(x(s))ds+ke2αkyT(t)E2TH2D2H2TE2y(t)+k-1e-2αk[∫t-k(t)th(x(s))ds]TH2TH2-TD2-1H2-1H2×[∫t-k(t)th(x(s))ds]≤ke2αkyT(t)W2D2W2Ty(t)+e-2αk∫t-kthT(x(s))D2-1h(x(s))ds+ke2αkyT(t)E2TH2D2H2TE2y(t)+e-2αk∫t-kthT(x(s))D2-1h(x(s))ds-2ẏT(t)(A+EaFa(t)Ha)Py(t)≤-2ẏT(t)APy(t)-2ẏT(t)EaFa(t)HaPy(t)≤-2ẏT(t)APy(t)+ϵ4ẏT(t)EaTEaẏ(t)+ϵ4-1yT(t)PHaTHaPy(t),2ẏT(t)(W0+E0F0(t)H0)f(x(t))=2ẏT(t)[W0+E0F0(t)H0]f(x(t))≤ẏT(t)W0D0W0Tẏ(t)+yT(t)PFD0-1FPy(t)+ϵ5ẏT(t)E0TE0ẏ(t)+ϵ5-1yT(t)PFH0TH0FPy(t),2ẏT(t)(W1+E1F1(t)H1)g(x(t-h(t)))≤ẏT(t)W1D1W1Tẏ(t)+yT(t-h(t))PGD1-1G×Py(t-h(t))+ϵ6ẏT(t)E1TE1ẏ(t)+ϵ6-1yT(t-h(t))PGH1TH1GPyT(t-h(t)),2ẏT(t)(W2+E2F2(t)H2)≤ke2αkẏT(t)W2D2W2Tẏ(t)+2e-2αk∫t-kthT(x(s))D2-1h(x(s))ds+ke2αkẏT(t)E2TH2TD2H2E2ẏ(t).Remark 3.4.
In [10, 13, 14], the exponential stability of neutral-type neural networks with time-varying delays was investigated; however, distributed delays were not considered there. The stability conditions in [13, 26–28] are not applicable to our work, since we consider more general activation functions than they do. Therefore, our stability conditions are less conservative than some other existing results.

Remark 3.5.
In this paper, the restriction that the state delay is differentiable is not required, which allows the state delay to be fast time-varying. Meanwhile, this restriction is required in some existing results; see [13, 14, 26–28].
## 4. Numerical Examples
In this section, we provide two examples to show the effectiveness of the results in Theorems 3.2 and 3.3.

Example 4.1.

Consider the neural networks with interval time-varying delay and control input with the following parameters:
$$\dot x(t)=-Ax(t)+W_0f(x(t))+W_1g(x(t-h(t)))+Bu(t), \tag{4.1}$$
where
$$A=\begin{bmatrix}-0.2&0\\ 1&2\end{bmatrix},\quad W_0=\begin{bmatrix}0.4&0.1\\ 0.1&-0.2\end{bmatrix},\quad W_1=\begin{bmatrix}0.3&0.1\\ 0.5&0.2\end{bmatrix},\quad F=\begin{bmatrix}0.3&0\\ 0&0.5\end{bmatrix},\quad G=\begin{bmatrix}0.1&0\\ 0&0.4\end{bmatrix},\quad B=\begin{bmatrix}0.3\\ 0.1\end{bmatrix}. \tag{4.2}$$
It is worth noting that the delay function $h(t)=0.1+0.1|\sin t|$ is nondifferentiable; therefore, the methods used in [2, 9] are not applicable to this system. We have $h_1=0.1$, $h_2=0.2$. Given $\alpha=0.2$ and any initial function $\phi(t)\in C^1([-0.2,0],\mathbb{R}^2)$, using the Matlab LMI toolbox, we obtain
(4.3)P=[0.0370 0.0010; 0.0010 0.2938],Q=[0.0008 0.0029; 0.0029 0.0250],U=[0.0153 0.0080; 0.0080 0.6201],R=[0.0377 0.0055; 0.0055 0.8173],D0=[0.0353 0; 0 0.2833],D1=[0.0215 0; 0 0.5025].
Thus, the system (4.1) is 0.2-exponentially stabilizable and the value λ2/λ1=1.6469, so the solution of the closed-loop system satisfies
(4.4)‖x(t,ϕ)‖≤1.6469e-0.2t‖ϕ‖,∀t∈R+.Example 4.2.
Consider the neural networks with mixed interval time-varying delays and control input with the following parameters:(4.5)ẋ(t)=-(A+ΔA(t))x(t)+(W0+ΔW0)f(x(t))+(W1+ΔW1)g(x(t-h(t)))+(W2+ΔW2)∫t-k(t)th(x(s))ds+B0ẋ(t-η(t))+Bu(t),
where
(4.6)A=[0.15 0; 0 1],W0=[0.5 0.12; 0.1 -0.3],W1=[0.2 0.1; 0.1 0.2],W2=[0.1 0.2; 0.5 0.1],B0=[0.15 0; 0 0.15],F=[0.4 0; 0 0.5],G=[0.1 0; 0 0.2],H=[0.5 0; 0 0.3],B=[0.1; 0],Ha=H0=H1=H2=Ea=E0=E1=E2=[0.1 0; 0 0.1].
It is worth noting that the delay functions h(t)=0.2+0.2|sin t| and k(t)=|cos t| are nondifferentiable, and η(t)=0.2sin²(t). Therefore, the methods used in [13, 14] are not applicable to this system. We have h1=0.2, h2=0.4, k=0.1, δ=0.1, η=0.2. Given α=0.1 and any initial function ϕ(t)∈C1([-0.4,0],ℝ2). Using the Matlab LMI toolbox, we obtain ϵ1=0.0173, ϵ2=0.0128, ϵ3=0.0111, ϵ4=0.0263, ϵ5=0.0209, ϵ6=0.0192,
(4.7)P=[0.0061 0.0002; 0.0002 0.0228],Q=[0.0003 0.0005; 0.0001 0.0031],Q1=[0.0005 0.0001; 0.0001 0.0024],U=[0.0028 0.0004; 0.0004 0.00382],R=[0.0052 0.0008; 0.0008 0.0543],D0=[0.0068 0; 0 0.0304],D1=[0.0038 0; 0 0.0145],D2=[0.0433 0; 0 0.0275].
Thus, the system (4.5) is 0.1-exponentially stabilizable and the value λ2/λ1=2.2939, so the solution of the closed-loop system satisfies
(4.8)‖x(t,ϕ)‖≤2.2939e-0.1t‖ϕ‖,∀t∈R+.
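The computations above use the Matlab LMI toolbox. As a hedged illustration of how this kind of feasibility problem can be set up with open-source tools, the sketch below solves only a simplified, delay-free core of the condition in Theorem 3.2 (the leading terms of M11, with the gain recovered as in (3.9)); the choice of cvxpy and the SCS solver is ours, not the paper's, and the full LMIs (3.3)-(3.7) would add the delay-dependent blocks and the variables Q, R, U, Q1, and Di.

```python
# A minimal sketch, not the paper's full LMIs (3.3)-(3.7): we only check
# feasibility of the delay-free core
#     (-A + alpha*I) P + P (-A + alpha*I)^T - B B^T < 0,   P > 0,
# and recover the stabilizing gain K = 0.5 B^T P^{-1} as in (3.9).
import cvxpy as cp
import numpy as np

A = np.array([[-0.2, 0.0],
              [1.0, 2.0]])     # A from (4.2), as read from the flattened text
B = np.array([[0.3],
              [0.1]])          # B from (4.2)
alpha = 0.2
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Acl = -A + alpha * np.eye(n)
margin = 1e-4 * np.eye(n)      # small margin to enforce strict inequalities
constraints = [
    P >> margin,
    Acl @ P + P @ Acl.T - B @ B.T << -margin,
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)
if prob.status == cp.OPTIMAL:
    K = 0.5 * B.T @ np.linalg.inv(P.value)   # u(t) = -K x(t), cf. (3.9)
    print("gain K =", K)
```

Since the pair (-A, B) above is controllable, such a P exists for any α > 0, and the solver reports feasibility.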
## 5. Conclusions
In this paper, we have investigated the exponential stabilization of neutral-type neural networks with various activation functions and interval nondifferentiable and distributed time-varying delays. The interval time-varying delay function need not be differentiable, which allows the time delay to be fast time-varying. By constructing a set of improved Lyapunov-Krasovskii functionals combined with the Leibniz-Newton formula, the proposed stability criteria have been formulated in terms of linear matrix inequalities. Numerical examples illustrate the effectiveness of the results.
---
*Source: 101426-2012-03-12.xml*
# Evaluation of the Microbial Load and Heavy Metal Content of Two Polyherbal Antimalarial Products on the Ghanaian Market
**Authors:** Bernard K. Turkson; Merlin L. K. Mensah; George H. Sam; Abraham Y. Mensah; Isaac K. Amponsah; Edmund Ekuadzi; Gustav Komlaga; Emmanuel Achaab
**Journal:** Evidence-Based Complementary and Alternative Medicine
(2020)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2020/1014273
---
## Abstract
The use of herbal products has increased and become more popular globally; however, studies are limited, and questions have been raised about the quality and safety of these herbal products. Herbal products, with the hope that they are nontoxic, may serve as alternatives for overcoming the problems of multidrug-resistant pathogens. Medicinal plants used as raw materials for production may have quality and safety issues due to proximity to wastewater and the application of fungicides and pesticides, which may be deposited on plant surfaces or absorbed by the plant system. Therefore, possible contamination of some Ghanaian herbal products cannot be ignored, as it may severely affect human life in the course of treatment. Aim. To evaluate the microbial load and the presence of toxic heavy metals in Mist Amen Fevermix and Edhec Malacure, two polyherbal products used in the treatment of uncomplicated malaria in Ghana. Methods. A Thermo Elemental M5 Atomic Absorption Spectrophotometer (AAS) fitted with a graphite furnace and an autosampler was used to determine the heavy metal contents of the herbal products. The herbal samples were evaluated for microbial load using the appropriate culture media. Results and Analysis. Mist Amen Fevermix and Edhec Malacure complied with the safety limits for all of the microbial counts and contamination tests evaluated. The following metals were present in Mist Amen Fevermix and Edhec Malacure Mixture: Fe, Ni, K, Zn, Hg, Cu, Mn, Cr, Cd, Pb, and Na. Ni was below the detectable limit in Edhec Malacure. Conclusion. Mist Amen Fevermix and Edhec Malacure may be considered safe with respect to the parameters evaluated. The products contained heavy metals, but all were within the acceptable limits established by the FAO/WHO. The levels of microbial contamination were below the maximum acceptable limits.
---
## Body
## 1. Introduction
The use of herbal products for treating various diseases, including malaria, predates recorded history. Herbal therapies are mainstream for nearly all illnesses [1]. Herbal product use has increased globally; such products are in high demand in both developed and developing countries for primary health care purposes. This is a result of their attributed wide range of biological activities, perceived safety, ready availability, and lower cost. Due to the increased occurrence of falsified medicines, the incidence of adverse drug reactions, drug resistance in various diseases, and the economic burden of orthodox medicines, public, academic, and government interest in herbal products as acceptable and effective alternatives has grown exponentially [2]. Most distributors and consumers consider herbal products to be safe, although toxic heavy metals and microbial contamination in finished herbal products have been a concern. Medicinal herbal products have been reported to be contaminated with toxic heavy metals and with microorganisms found in the soil and plants where they were grown [3]. Heavy metal and microbial contamination of herbal products poses a threat to their quality and safety. The shortcomings of herbal products typically include the unhygienic conditions under which they are produced [4, 5]. The contaminants that present serious health hazards are pathogenic bacteria such as Salmonella spp., Escherichia coli, Staphylococcus aureus, and Shigella spp. [6]. Inappropriate cleaning, unsuitable transportation, and poor storage conditions render medicinal plant materials vulnerable to infestation and expose them to considerable microbial contamination during production, leading to deterioration in quality. This may give rise to the risk of mycotoxin production, especially aflatoxin, which has been proven to be mutagenic, carcinogenic, teratogenic, neurotoxic, nephrotoxic, and immunosuppressive [7, 8]. Quality and safety parameters of herbal medicines based on heavy metal content and microbial load have been an important concern for health authorities and health professionals. Contamination of these herbal products reduces their effectiveness and also poses serious health hazards to consumers. Heavy metals, if consumed, can accumulate in different organs of the body, leading to unwanted side effects [6, 9]. Therefore, it is important to evaluate the heavy metal content and microbial load of herbal products based on relevant scientific guidelines [10]. In Ghana, several herbal products are readily available on the market, and about 75% of the population relies on herbal medicines for their primary health care needs, with about 9% of the population depending on herbal products for the treatment of malaria [11]. However, studies on these products, especially regarding heavy metal content, are limited. It is therefore important to evaluate the toxic heavy metal and microbial content, with the aim of establishing the levels of microbes and heavy metals present in Mist Amen Fevermix and Edhec Malacure, two finished herbal products formulated and used in Ghana for the management of uncomplicated malaria.
## 2. Materials and Methods
### 2.1. Reagents, Glassware, and Instrumentation
Two bottles each of Mist Amen Fevermix and Edhec Malacure were used. A Thermo Elemental M5 Atomic Absorption Spectrophotometer (AAS) (Model ICE3000; Thermo Scientific, USA) fitted with a graphite furnace and an autosampler was available at the Department of Soil Science laboratory, Faculty of Agriculture, Kwame Nkrumah University of Science and Technology (KNUST). Concentrated HNO3, concentrated HCl, H2O, nutrient agar, MacConkey agar, Sabouraud agar, Salmonella agar, Shigella agar, potato dextrose agar, a laboratory incubator (Gallenkamp), an oven (Gallenkamp), an electrical balance (Mettler Toledo), and general laboratory glassware were obtained from the central stores of the Department of Microbiology, KNUST, Ghana.
### 2.2. Samples
Two bottles each of Mist Amen Fevermix and Edhec Malacure were obtained from the herbal medicine unit of the Tafo Government Hospital, Kumasi, Ghana. Mist Amen Fevermix is a finished herbal product registered with the FDA, Ghana. It is produced by Amen Scientific Herbal Hospital, is on the recommended essential herbal medicines list of the Ministry of Health, and is used in the herbal medicine units in Ghana. Edhec Malacure Mixture is a finished herbal product, also registered with the FDA, Ghana. Edhec Malacure is produced by Edu Herbal Clinic, Mankessim, Central Region, Ghana. It is not on the recommended essential herbal medicines list; however, it is sold on the market.
### 2.3. Heavy Metals Determination
The elemental constituents of the herbal products were determined using the Thermo Elemental M5 Atomic Absorption Spectrophotometer (AAS) (Model ICE3000; Thermo Scientific, USA), fitted with a graphite furnace and an autosampler, at the Department of Soil Science laboratory, Faculty of Agriculture, Kwame Nkrumah University of Science and Technology (KNUST). Eleven elements were analyzed in each product. An aliquot of 1 mL from each of two samples of Mist Amen Fevermix and Edhec Malacure was placed in a 250 mL beaker, and 5 mL of a freshly prepared acid mixture of concentrated HNO3, concentrated HCl, and H2O in the ratio 1.5 : 0.5 : 0.5 was added. The mixture was gently heated on a hot plate maintained at 150°C until the sample had completely dissolved to give a clear solution. During the digestion process, the inner walls of the beaker were washed with deionized water to prevent sample loss. After digestion, the Mist Amen Fevermix and Edhec Malacure digests were made up to 50 mL with deionized water and analyzed. Multielement standard solutions of all the elements involved were prepared by diluting 1000 mg/L stock solutions with 5% nitric acid solution [10]. Samples were analyzed in duplicate and the average was calculated.
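As a worked illustration of the dilution arithmetic in this procedure (a hedged sketch, not the authors' code; the density assumption below is ours, not stated in the paper):

```python
# A hedged sketch (not from the paper): back-calculating the metal
# concentration in the undiluted liquid product from an AAS digest
# reading. The 50x factor follows from the 1 mL aliquot being made up
# to 50 mL after digestion; treating mg/L as ~mg/kg assumes a product
# density near 1 g/mL, which the paper does not state.

def product_concentration(digest_mg_per_L: float,
                          aliquot_mL: float = 1.0,
                          final_volume_mL: float = 50.0) -> float:
    """Concentration in the undiluted product, in mg/L (~mg/kg)."""
    dilution_factor = final_volume_mL / aliquot_mL  # 50x here
    return digest_mg_per_L * dilution_factor

# Example: a digest reading of 0.0005 mg/L corresponds to ~0.025 mg/kg
# in the product, the order of magnitude of several Table 1 entries.
print(product_concentration(0.0005))  # 0.025
```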
### 2.4. Microbial Load Determination
The microbial load of Mist Amen Fevermix and Edhec Malacure was determined using nutrient agar, MacConkey agar, Sabouraud agar, Salmonella agar, Shigella agar, and potato dextrose agar, which were bought from Lab Chem Medical Supplies, Kumasi. The microbial load of Mist Amen Fevermix and Edhec Malacure was assessed in the microbiology laboratory of the Department of Microbiology, Kwame Nkrumah University of Science and Technology (KNUST). Samples were analyzed in duplicate and the average was used.
### 2.5. Preparation of Culture Media
Mist Amen Fevermix and Edhec Malacure were shaken vigorously to ensure uniform distribution of any microorganisms present. An aliquot of 5 mL of each of the two antimalarial products was pipetted into 95 mL of sterile distilled water to prepare the stock sample, which was subjected to tenfold serial dilution in sterile test tubes: an aliquot of 1 mL of each stock sample was aseptically transferred into and mixed with 9 mL of sterile distilled water. All media used were prepared according to the manufacturer's specifications. For the total viable bacterial and coliform counts, the appropriate dilutions were transferred into sterile duplicate plates, and 20 mL each of nutrient agar and MacConkey agar were added and mixed separately. For the fungal count, 1 mL each of Mist Amen Fevermix and Edhec Malacure was streaked over duplicate plates of prepared dried potato dextrose agar. Plates were incubated at 32°C for 48 hours for total bacterial counts, at 37°C for 24 hours for coliform counts, and at 25°C for 5 days for fungal counts. Colonies were counted for the total bacterial and fungal counts [12]; the colony-count arithmetic behind the figures reported in Section 3 is sketched below.
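As a worked illustration of how plate counts translate into the cfu/mL figures reported in Section 3 (a hedged sketch with illustrative colony counts, not the authors' raw data):

```python
# A minimal sketch with illustrative numbers: estimating cfu/mL from
# duplicate plate counts after serial dilution. The colony counts below
# are chosen only so the result matches Table 2's reported average.

def cfu_per_mL(colony_counts, dilution_factor: float,
               plated_volume_mL: float = 1.0) -> float:
    """Mean colony count x dilution factor / volume plated."""
    mean_count = sum(colony_counts) / len(colony_counts)
    return mean_count * dilution_factor / plated_volume_mL

# Duplicate plates with 121 and 133 colonies at an overall 10-fold
# dilution give 1.27e3 cfu/mL, Table 2's total aerobic viable count.
print(cfu_per_mL([121, 133], dilution_factor=10))  # 1270.0
```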
## 3. Results
The results for the heavy metals tested in Mist Amen Fevermix and Edhec Malacure are stated in Table 1. The results showed that all the heavy metals in Mist Amen Fevermix and Edhec Malacure were within the permissible limits, and for some metals permissible limits have not yet been established [13, 14].
Table 1
Elemental analysis of Mist Amen Fevermix and Edhec Malacure Mixture.
| Metal | Mist Amen Fevermix A | B | AVE | SD | Edhec Malacure A | B | AVE | SD | Permissible limit (mg/kg) |
|---|---|---|---|---|---|---|---|---|---|
| As | 0.062 | 0.086 | 0.074 | 0.012 | 0.007 | 0.003 | 0.005 | 0.002 | 5.0 [13] |
| Cu | 0.010 | 0.016 | 0.013 | 0.003 | 5.235 | 3.532 | 4.3835 | 0.8515 | Not set [14] |
| Cd | 0.009 | 0.005 | 0.007 | 0.002 | 0.08 | 0.02 | 0.05 | 0.03 | 0.3 [13] |
| Fe | 0.09 | 0.066 | 0.078 | 0.012 | 23.559 | 26.721 | 25.14 | 1.581 | Not set [14] |
| Hg | 0.013 | 0.009 | 0.011 | 0.002 | 0.00084 | 0.00122 | 0.00103 | 0.00019 | 0.5 [13] |
| Mn | 0.22 | 0.35 | 0.285 | 0.065 | 2.22184 | 2.3965 | 2.30917 | 0.08733 | Not set [14] |
| Ni | 0.008 | 0.002 | 0.005 | 0.003 | BDL | BDL | BDL | BDL | 1.683 [13] |
| Pb | 0.001 | 0.017 | 0.009 | 0.008 | 0.00025 | 0.00269 | 0.00147 | 0.00122 | 10 [13] |
| Zn | 0.102 | 0.076 | 0.089 | 0.013 | 0.422 | 0.438 | 0.430 | 0.008 | 27.4 [13] |
| K | 3.97 | 3.69 | 3.83 | 0.14 | 305.172 | 406.322 | 355.747 | 50.575 | Not set [14] |
| Na | 0.37 | 0.88 | 0.625 | 0.255 | 41.215 | 38.9956 | 40.1053 | 1.1097 | Not set [14] |

A and B: samples of Mist Amen Fevermix and Edhec Malacure; AVE: average; SD: standard deviation; BDL: below detectable limit.

The results of the microbial load evaluated in Mist Amen Fevermix and Edhec Malacure are recorded in Tables 2 and 3 below. The results showed that the microbial contamination in Mist Amen Fevermix and Edhec Malacure was below the maximum acceptable limits.
Table 2
Microbial load of Mist Amen Fevermix.
| Test | A | B | AVE | SD | Acceptable limit |
|---|---|---|---|---|---|
| Total aerobic viable count (NA; 37°C; 24 hrs), ≤1 × 10⁵ cfu/mL | 1.21 × 10³ | 1.33 × 10³ | 1.27 × 10³ | 0.06 | Not more than 1.0 × 10⁷ cfu/mL |
| Test for Salmonella, Shigella (BSA; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Absent |
| Test for Escherichia coli (MAC; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Not more than 1.0 × 10² cfu/mL |
| Test for Pseudomonas (cetrimide PCA; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Absent |
| Test for yeasts and molds (PDC/SAB; 25°C; 5 days) | 1.01 × 10³ | 1.17 × 10³ | 1.09 × 10³ | 0.08 | Not more than 1.0 × 10⁵ cfu/mL |

A and B: samples of Mist Amen Fevermix; AVE: average; SD: standard deviation.
Microbial load of Edhec Malacure.
| Test | A | B | AVE | SD | Acceptable limit |
|---|---|---|---|---|---|
| Total aerobic viable count (NA; 37°C; 24 hrs), ≤1 × 10⁵ cfu/mL | 2.11 × 10³ | 2.23 × 10³ | 2.17 × 10³ | 0.06 | Not more than 1.0 × 10⁷ cfu/mL |
| Test for Salmonella, Shigella (BSA; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Absent |
| Test for Escherichia coli (MAC; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Not more than 1.0 × 10² cfu/mL |
| Test for Pseudomonas (cetrimide PCA; 37°C; 48 hrs), Nil/L | 0 | 0 | 0 | 0 | Absent |
| Test for yeast and molds (PDC/SAB; 25°C; 5 days) | 1.23 × 10³ | 2.43 × 10³ | 1.83 × 10³ | 0.6 | Not more than 1.0 × 10⁵ cfu/mL |

A and B: samples of Edhec Malacure; AVE: average; SD: standard deviation.
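The comparison against permissible limits summarized in Table 1 amounts to a simple per-metal check. The sketch below (ours, with values transcribed from Table 1 for the Mist Amen Fevermix averages; metals with no WHO limit set [14] are omitted) flags any exceedance of the FAO/WHO limits:

```python
# A hedged sketch of the compliance check summarized in Table 1: flag
# any average concentration exceeding its FAO/WHO permissible limit.
# Values transcribed from Table 1 (Mist Amen Fevermix averages, mg/kg).
limits_mg_per_kg = {"As": 5.0, "Cd": 0.3, "Hg": 0.5,
                    "Ni": 1.683, "Pb": 10.0, "Zn": 27.4}
mist_avg_mg_per_kg = {"As": 0.074, "Cd": 0.007, "Hg": 0.011,
                      "Ni": 0.005, "Pb": 0.009, "Zn": 0.089}

for metal, limit in limits_mg_per_kg.items():
    value = mist_avg_mg_per_kg[metal]
    verdict = "within limit" if value <= limit else "EXCEEDS limit"
    print(f"{metal}: {value} mg/kg (limit {limit}) -> {verdict}")
```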
## 4. Discussion
There has been increased use of herbal products globally, including in Ghana, where herbal medicine services have been integrated into the health-care delivery system since 2011; this appears to have led to increased demand for herbal medicines. Despite the upsurge in usage, herbal products have not been subjected to rigorous quality assurance to benefit patrons and satisfy critics of these herbal remedies.

National limits for toxic heavy metals and microbial contamination in herbal products differ from country to country and depend on the herb type and on whether the product is a raw material or a finished herbal product [10]. Even though the Ghana Standards Authority has not established permissible limits for toxic heavy metals and microbial load in herbal products, the Food and Drugs Authority of Ghana has the mandate to withdraw any herbal product proven to be unsafe for consumption, based on reports of suspected adverse reactions or of problems related to hazards or harms [15]. The increased use of herbal products in Ghana warrants that the products used be free of toxic heavy metals and microbial contamination and be of good quality and safe.

The elemental analysis showed that Mist Amen Fevermix and Edhec Malacure Mixture contain the heavy metals presented in Table 1. The heavy metals detected were all within the permissible limits set by FAO/WHO for herbal products [16, 17]; nickel, however, was below the detectable limit in the Edhec Malacure Mixture. The concentrations of iron, copper, potassium, and sodium were higher in Edhec Malacure Mixture than in Mist Amen Fevermix, but no regulatory limits have been established for these metals in herbal medicines by the WHO [14]. These observations are in line with previous studies, which found that the heavy metal contents of Ghanaian, Egyptian, Indian, and Pakistani herbal preparations and medicinal plants are relatively low [18, 19]. The levels of the elements analyzed in Mist Amen Fevermix were within FAO/WHO limits [20], a finding that is very important for the quality control of the products because of the long-term safety implications for users who may be exposed to excessive amounts of these elements [13]. Although the concentrations of the metals in the test products were low and within the permissible limits, dosages above the critical limit may lead to heavy metal toxicity.

Contamination of herbal products by microorganisms can be attributed to many causes, including environmental pollution and soil contamination [21]. The microbiological results in Tables 2 and 3 show that Salmonella, Shigella, Pseudomonas, and E. coli were not detected in the study products; however, a total aerobic viable count of 1.27 × 10³ cfu/mL was detected in Mist Amen Fevermix and of 2.17 × 10³ cfu/mL in Edhec Malacure. These microbial counts are below the maximum permissible limit of 1.0 × 10⁵ cfu/mL. The yeast and mold counts were 1.09 × 10³ cfu/mL in Mist Amen Fevermix and 1.83 × 10³ cfu/mL in Edhec Malacure Mixture. Overall, the microbes present in Mist Amen Fevermix and Edhec Malacure Mixture were below the acceptable maximum limit of 1.0 × 10⁷ cfu/mL. This observation is in line with previous studies in which pathogenic bacteria such as Salmonella spp. and Shigella were not isolated from some herbal preparations [22]; their absence from Mist Amen Fevermix and Edhec Malacure Mixture is consistent with good manufacturing practices. However, some herbal antidiabetic preparations formulated in Bangladesh were found to be contaminated with microorganisms, which poses a potential risk to human health; care should therefore be taken at every step of preparing herbal products to assure safety [23].
## 5. Conclusion
Evaluation of the heavy metal contents and microbial loads of Mist Amen Fevermix and Edhec Malacure revealed that both products contain toxic heavy metals, but at levels within the acceptable limits. The microbial loads of both herbal products were also within acceptable limits. This indicates that the products may be safe for use in the treatment of uncomplicated malaria infection. These data could serve as a reference for the field of herbal preparations in Ghana.
---
*Source: 1014273-2020-05-16.xml* | 1014273-2020-05-16_1014273-2020-05-16.md | 21,650 | Evaluation of the Microbial Load and Heavy Metal Content of Two Polyherbal Antimalarial Products on the Ghanaian Market | Bernard K. Turkson; Merlin L. K. Mensah; George H. Sam; Abraham Y. Mensah; Isaac K. Amponsah; Edmund Ekuadzi; Gustav Komlaga; Emmanuel Achaab | Evidence-Based Complementary and Alternative Medicine
(2020) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2020/1014273 | 1014273-2020-05-16.xml | ---
*Source: 1014273-2020-05-16.xml* | 2020 |
# Realization of New Electronically Controllable Grounded and Floating Simulated Inductance Circuits Using Voltage Differencing Differential Input Buffered Amplifiers
**Authors:** Dinesh Prasad; D. R. Bhaskar; K. L. Pushkar
**Journal:** Active and Passive Electronic Components
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101432
---
## Abstract
A new active circuit is proposed for the realisation of lossless grounded and floating inductances employing Voltage Differencing Differential Input Buffered Amplifiers (VD-DIBAs). The proposed grounded simulated inductance circuit employs two VD-DIBAs and a single grounded capacitor, whereas the floating simulated inductance circuit employs three VD-DIBAs and a grounded capacitor. The circuit for grounded inductance does not require any realization conditions, whereas in the case of the floating inductance, only the equality of two transconductances is needed. Sample results demonstrating the applications of the new simulated inductors using VD-DIBAs have been given to confirm the workability of the new circuits.
---
## Body
## 1. Introduction
Several synthetic grounded and floating inductance circuits using different active elements such as operational amplifiers (op-amps) [1–5], current conveyors (CCs) [6–13], current controlled conveyors (CCCIIs) [14, 15], current feedback operational amplifiers (CFOAs) [16], operational mirrored amplifiers (OMAs) [17], differential voltage current conveyors (DVCCIIs) [18], current differencing buffered amplifiers (CDBAs) [19–21], current differencing transconductance amplifiers (CDTAs) [22, 23], and operational transconductance amplifiers (OTAs) [24] have been reported in the literature. Recently, various active building blocks have been introduced in [25]; the VD-DIBA is one of them. Although some applications of VD-DIBAs, such as the realization of an all-pass section [26], have been reported in the literature, to the best knowledge and belief of the authors, no grounded/floating inductance simulation circuits using VD-DIBAs have been reported in the open literature so far. The purpose of this paper is, therefore, to propose new VD-DIBA-based lossless grounded and floating inductance simulation circuits.
## 2. The Proposed New Configurations
The schematic symbol and equivalent model of the VD-DIBA are shown in Figures 1(a) and 1(b) [26]. The model includes two controlled sources: the current source controlled by the differential voltage $(V_{V+}-V_{V-})$, with the transconductance $g_m$, and the voltage source controlled by the differential voltage $(V_Z-V_V)$, with unity voltage gain. The VD-DIBA can be described by the following set of equations:

$$\begin{pmatrix} I_{V+} \\ I_{V-} \\ I_Z \\ I_V \\ V_W \end{pmatrix}=\begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ g_m & -g_m & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 \end{pmatrix}\begin{pmatrix} V_{V+} \\ V_{V-} \\ V_Z \\ V_V \\ I_W \end{pmatrix}. \tag{1}$$
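As a quick numeric illustration (not part of the original paper), the ideal terminal relations in (1) reduce to $I_Z=g_m(V_{V+}-V_{V-})$ and $V_W=V_Z-V_V$, with all other terminal currents zero; the sketch below codes these directly, using the transconductance value quoted later in Section 4.

```python
# A tiny sketch of the ideal VD-DIBA terminal relations in (1).
def vddiba(v_plus, v_minus, v_z, v_v, gm=258.091e-6):
    i_z = gm * (v_plus - v_minus)  # transconductance stage: I_Z = gm (V+ - V-)
    v_w = v_z - v_v                # unity-gain differential buffer: V_W = V_Z - V_V
    return i_z, v_w

i_z, v_w = vddiba(v_plus=0.10, v_minus=0.05, v_z=0.3, v_v=0.1)
print(f"I_Z = {i_z:.3e} A, V_W = {v_w:.2f} V")  # I_Z = 1.290e-05 A, V_W = 0.20 V
```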
The proposed configurations are shown in Figures 2 and 3, respectively.

Figure 1: (a) Schematic symbol and (b) equivalent model of VD-DIBA [26].

Figure 2: Proposed grounded inductance simulation configuration.

Figure 3: Proposed floating inductance simulation configuration.

A routine analysis of the circuit shown in Figure 2 results in the following expression for the input impedance:

$$Z_{in}(s)=\frac{4sC}{g_{m1}g_{m2}}. \tag{2}$$
The circuit, thus, simulates a grounded inductance with the value

$$L_{eq}=\frac{4C}{g_{m1}g_{m2}}. \tag{3}$$
On the other hand, an analysis of the circuit shown in Figure 3 yields

$$\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}=\frac{g_{m1}g_{m2}}{4sC}\begin{bmatrix} +1 & -1 \\ -1 & +1 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix},\quad \text{with } g_{m1}=g_{m3}, \tag{4}$$

which proves that the circuit simulates a floating lossless inductance with the value

$$L_{eq}=\frac{4C}{g_{m1}g_{m2}}. \tag{5}$$
Note that ensuring $g_{m1}=g_{m3}$ requires only the equality of the two corresponding DC bias currents of the VD-DIBAs, which can be easily implemented in practice.
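A quick numeric check (a sketch, using the component values quoted later in Section 4) shows the inductance value realised by (3) and (5):

```python
# L_eq = 4C / (gm1 * gm2), with the Section 4 component values.
C = 16.65e-12            # F
gm1 = gm2 = 258.091e-6   # A/V
L_eq = 4 * C / (gm1 * gm2)
print(f"L_eq = {L_eq * 1e3:.3f} mH")  # ~1.000 mH
```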
## 3. Nonideal Analysis and Sensitivity Performance
Let $R_z$ and $C_z$ denote the parasitic resistance and parasitic capacitance of the Z terminal, and take into account the nonidealities of the VD-DIBA, namely, $V_W=(\beta^+V_Z-\beta^-V_V)$, where $\beta^+=1-\varepsilon_1$ ($\varepsilon_1\ll 1$) and $\beta^-=1-\varepsilon_2$ ($\varepsilon_2\ll 1$) are the voltage tracking errors of the VD-DIBA. For the circuit shown in Figure 2, the input impedance becomes

$$Z_{in}(s)=\frac{4\left\{s(C+C_z)+1/R_z\right\}}{4\left\{s^2C_z^2+1/R_z^2+2sC_z/R_z+s^2CC_z+sC/R_z\right\}+g_{m1}g_{m2}\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)}. \tag{6}$$

The non-ideal equivalent circuit of the grounded inductor is shown in Figure 4, with

$$L=\frac{4(C+C_z)}{4/R_z^2+\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}},\quad C'=C_z,\quad R'=\frac{(C+C_z)R_z}{C+2C_z},$$
$$D=\frac{C_z}{(C+C_z)R_z},\quad C''=C+2C_z,\quad R''=\frac{4R_z}{4+R_z^2\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}}. \tag{7}$$

The sensitivities of $L$ with respect to the active and passive elements are

$$S_C^L=\frac{C}{C+C_z},\quad S_{C_z}^L=\frac{C_z}{C+C_z},\quad S_{R_z}^L=\frac{2}{4+\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)R_z^2 g_{m1}g_{m2}},$$
$$S_{\beta_1^+}^L=\frac{\beta_1^+\left(2-\beta_2^+\right)g_{m1}g_{m2}}{4/R_z^2+\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}},\quad S_{\beta_2^+}^L=\frac{\beta_2^+\left(2-\beta_1^+\right)g_{m1}g_{m2}}{4/R_z^2+\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}},$$
$$S_{g_{m1},g_{m2}}^L=-\frac{\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}}{4/R_z^2+\left(2-\beta_1^+\right)\left(2-\beta_2^+\right)g_{m1}g_{m2}}. \tag{8}$$
For the circuit shown in Figure 3, the input-output current-voltage relationship is given by

$$\begin{bmatrix} I_1 \\ I_2 \end{bmatrix}=X\begin{bmatrix} \dfrac{sC_zR_z+1}{XR_z}+1 & -1 \\ -1 & \dfrac{sC_zR_z+1}{XR_z}+1 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix},\quad \text{with } \beta_1^+=\beta_3^+, \tag{9}$$

where $X=\dfrac{g_{m1}g_{m2}R_z\beta_1^+\left(2-\beta_2^+\right)}{4\left\{s(C+C_z)R_z+1\right\}}$.

Figure 4: Non-ideal equivalent circuit of the grounded inductor of Figure 2.

The non-ideal equivalent circuit of the floating inductor is shown in Figure 5, with

$$L=\frac{4(C+C_z)}{\beta_1^+\left(2-\beta_2^+\right)g_{m1}g_{m2}},\qquad R=\frac{4}{\beta_1^+\left(2-\beta_2^+\right)g_{m1}g_{m2}R_z}. \tag{10}$$
The sensitivities of $L$ with respect to the active and passive elements are

$$S_C^L=\frac{C}{C+C_z},\quad S_{C_z}^L=\frac{C_z}{C+C_z},\quad S_{\beta_1^+}^L=-1,\quad S_{g_{m1},g_{m2}}^L=-1,\quad S_{\beta_2^+}^L=\frac{\beta_2^+}{2-\beta_2^+}. \tag{11}$$
Assuming $g_{m1}=g_{m2}=258.091\ \mu\text{A/V}$, $C_z=0$, $R_z=\infty$, $C=16.65\ \text{pF}$, and $\beta_1^+=\beta_2^+=1$, these sensitivities are found to be (1, 0, 0, 1, 1, −1) and (1, 0, −1, −1, 1) for (8) and (11), respectively. Thus, all the passive and active sensitivities of both proposed circuits are low.

Figure 5: Non-ideal equivalent circuit of the floating inductor of Figure 3.
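The sensitivity evaluation can also be checked symbolically. The sketch below (an illustration, not the authors' derivation) applies the classical definition $S_x^L=(x/L)(\partial L/\partial x)$ to the reconstructed non-ideal inductance of (7) and reproduces the ideal-case values (1, 0, 0, 1, 1, −1) quoted above.

```python
import sympy as sp

C, Cz, Rz, b1, b2, g1, g2 = sp.symbols("C C_z R_z beta_1 beta_2 g_m1 g_m2",
                                       positive=True)

# Non-ideal grounded inductance L from (7), as reconstructed above.
L = 4 * (C + Cz) / (4 / Rz**2 + (2 - b1) * (2 - b2) * g1 * g2)

ideal = {Cz: 0, b1: 1, b2: 1}  # C_z = 0 and beta_1+ = beta_2+ = 1

for x in (C, Cz, Rz, b1, b2, g1):
    S = sp.simplify(x / L * sp.diff(L, x))        # classical sensitivity S_x^L
    print(x, sp.limit(S.subs(ideal), Rz, sp.oo))  # evaluate with R_z -> infinity
```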
## 4. Verification of the Workability of the New Proposed Grounded/Floating Inductance Configurations
The workability of the proposed simulated inductors has been verified by realizing a band pass filter (BPF) and a band reject filter (BRF), respectively. Figure 6 shows the schematic for the realization of a BPF using the new simulated grounded inductor.

Figure 6: BPF realized by the new grounded simulated inductor.

The transfer function realized by this configuration is given by

$$\frac{V_0}{V_{in}}=\frac{s\left(1/R_1C_1\right)}{s^2+s\left(1/R_1C_1\right)+g_{m1}g_{m2}/4C_1C_2}. \tag{12}$$
Figure 7 shows the schematic for the realization of a BRF using the proposed floating inductor circuit.

Figure 7: BRF realized by the new floating simulated inductor.

The transfer function realized by this configuration is given by

$$\frac{V_0}{V_{in}}=\frac{s^2+g_{m1}g_{m2}/4C_1C_2}{s^2+s\left(R_1g_{m1}g_{m2}/4C_1\right)+g_{m1}g_{m2}/4C_1C_2}. \tag{13}$$
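A quick numeric check of (12) and (13) with the component values used in Section 4 gives the expected pole frequency of both filters; the sketch below is illustrative only.

```python
import math

gm = 258.091e-6        # A/V
C1 = C2 = 16.65e-12    # F
R1_bpf = 5.479e3       # ohm, R1 of the BPF in Figure 6

w0 = math.sqrt(gm * gm / (4 * C1 * C2))   # rad/s, shared by BPF and BRF
f0 = w0 / (2 * math.pi)
bw = 1 / (2 * math.pi * R1_bpf * C1)      # BPF -3 dB bandwidth from (12)
print(f"f0 = {f0/1e6:.2f} MHz, BPF bandwidth = {bw/1e6:.2f} MHz, Q = {f0/bw:.2f}")
# f0 ~ 1.23 MHz, bandwidth ~ 1.74 MHz, Q ~ 0.71
```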
The performance of the new simulated inductors was evaluated by PSPICE simulations. Since the VD-DIBA is not a commercially available IC, a possible implementation of the VD-DIBA using commercially available devices is shown in Figure 8. A CMOS-based OTA (as shown in Figure 17) and the CFOA SPICE macromodel of the AD844 were used to realise VD-DIBAs as per the schematic of Figure 8 and to determine the frequency responses of the grounded and floating simulated inductors. The following values were used: for the grounded inductor, C = 16.65 pF and gm1 = gm2 = 258.091 μA/V; for the floating inductor, C = 16.65 pF and gm1 = gm3 = 258.091 μA/V. From the frequency responses of the simulated grounded inductor (Figure 9) and the simulated floating inductor (Figure 10), it has been observed that the inductance value remains constant up to 1 MHz in both cases.

Figure 8: A possible implementation of VD-DIBA.

Figure 9: Frequency response of simulated grounded inductor.

Figure 10: Frequency response of simulated floating inductor.

To verify the theoretical analysis, the application circuits shown in Figures 6 and 7 were simulated using the same CMOS-based OTA (Figure 17) and AD844 macromodel realisation of the VD-DIBA (Figure 8). The component values used were: for Figure 6, C1 = C2 = 16.65 pF and R1 = 5.479 kΩ; for Figure 7, C1 = C2 = 16.65 pF and R1 = 10.959 kΩ. The VD-DIBAs were biased with ±1 V DC power supplies with IB1 = IB2 = IB3 = 32 μA, where IB1, IB2, and IB3 are the biasing currents that set the transconductances. Figures 11 and 12 show the simulated magnitude responses of the BP and BR filters, respectively, and Figures 13 and 14 show their phase responses. Figure 15 shows the step response of the filter of Figure 6, which confirms the stability of the implemented filter. Figure 16 shows the simulated spectrum of the output signal Vo in Figure 6, where the total harmonic distortion (THD) is found to be 2.98%. A comparison of the performance characteristics of the VD-DIBA with the OTA (CA3080) and the CMOS OTA (as shown in Figure 17) is given in Table 1. These results, thus, confirm the validity of the proposed grounded/floating simulated inductance circuits in applications.

Table 1

| Parameters | VD-DIBA | OTA (CA3080) | CMOS OTA (Figure 17) |
|---|---|---|---|
| Input voltage linear range | −120 mV to 120 mV | −25 mV to 25 mV | −109 mV to 109 mV |
| −3 dB bandwidth | 121 MHz | 2 MHz | 119 MHz |
| Power consumption | 62.5 mW | 30 mW | 0.194 mW |

Figure 11: Frequency response of BPF using simulated grounded inductor.

Figure 12: Frequency response of BRF using simulated floating inductor.

Figure 13: Phase response of BPF using simulated grounded inductor.

Figure 14: Phase response of BRF using simulated floating inductor.

Figure 15: Step response of the filter of Figure 6.

Figure 16: Simulated spectrum of the output signal Vo in Figure 6.

Figure 17: CMOS implementation of OTA.
## 5. Concluding Remarks
Among the various modern active building blocks, the VD-DIBA is emerging as a quite flexible and versatile building block for analog circuit design. However, the use of the VD-DIBA in the realization of grounded/floating inductors had not been known earlier. This paper has filled this void by introducing new VD-DIBA-based grounded and floating inductor configurations, thereby adding new application circuits to the existing repertoire of VD-DIBA-based circuits. The workability of the new propositions has been confirmed by SPICE simulations.
---
*Source: 101432-2011-06-07.xml* | 101432-2011-06-07_101432-2011-06-07.md | 10,001 | Realization of New Electronically Controllable Grounded and Floating Simulated Inductance Circuits Using Voltage Differencing Differential Input Buffered Amplifiers | Dinesh Prasad; D. R. Bhaskar; K. L. Pushkar | Active and Passive Electronic Components
(2011) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2011/101432 | 101432-2011-06-07.xml | ---
*Source: 101432-2011-06-07.xml* | 2011 |
# Immune Infiltration-Related ceRNA Network Revealing Potential Biomarkers for Prognosis of Head and Neck Squamous Cell Carcinoma
**Authors:** Shuai Zhao; Mengle Peng; Zhongquan Wang; Jingjing Cao; Xinyu Zhang; Ruijing Yu; Tao Huang; Wenping Lian
**Journal:** Disease Markers
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1014347
---
## Abstract
Background. Head and neck squamous cell carcinoma (HNSCC) is a frequently lethal malignancy with considerably high mortality. The tumor microenvironment (TME) has been identified as a critical participant in cancer development, treatment, and prognosis. However, competing endogenous RNA (ceRNA) networks built on the immune/stromal scores of HNSCC patients need to be further illustrated. Therefore, our study aimed to provide clues for the search for promising prognostic markers of the TME in HNSCC. Materials and Methods. The ESTIMATE algorithm was used to calculate immune scores and stromal scores of the enrolled HNSCC patients. Differentially expressed genes (DEGs), lncRNAs (DELs), and miRNAs (DEMs) were identified by comparing expression between high and low immune/stromal scores. Then, a ceRNA network and a protein-protein interaction (PPI) network were constructed for selecting hub regulators. In addition, survival analysis was performed to assess the association between immune scores, stromal scores, and the differentially expressed RNAs in the ceRNA network and the overall survival (OS) of HNSCC patients. The GSE65858 dataset from the Gene Expression Omnibus (GEO) database was then used for verification. At last, the differences in clinical characteristics and immune cell infiltration between the different expression groups of IL10RA, PRF1, and IL2RA were analyzed. Results. Survival analysis showed a better OS in the high immune score group, and we then constructed a ceRNA network composed of 97 DEGs, 79 DELs, and 22 DEMs. Within the ceRNA network, FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, miR-3065-3p, and lncRNAs including CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 were closely correlated with the OS of HNSCC patients. In particular, using the data from GSE65858, we successfully verified that IL10RA, PRF1, and IL2RA were not only significantly upregulated in patients with high immune scores, but their high expression was also associated with longer survival time. In addition, stratified analysis showed that PRF1 and IL2RA might be involved in the mechanism of tumor progression. Conclusion. In conclusion, we constructed a ceRNA network related to the TME of HNSCC, which provides candidates for therapeutic intervention and prognosis evaluation.
---
## Body
## 1. Introduction
Head and neck cancer is a frequently lethal malignancy, with approximately 800,000 new cases every year [1, 2]. The head and neck squamous cell carcinoma (HNSCC) subtype accounts for almost 95% of head and neck cancers [3]. Despite significant advances in different therapy methods, such as chemotherapy, monoclonal antibody therapies, immunotherapy, and cytokine therapy [4, 5], the mortality of HNSCC is considerably high, mainly due to the heterogeneity, aggressiveness, and late diagnosis of HNSCC [6]. Thus, studies on the molecular mechanisms of HNSCC to discover effective biomarkers and targeted therapies that precisely predict prognosis are necessary.

Currently, the tumor microenvironment (TME), consisting of extracellular matrix, stromal cells, and tumor-infiltrating immune cells, is known to be involved in cancer development, distant metastasis, and immune escape [7]. Bidirectional communication between tumor cells and their microenvironment drives continual change over the evolution of tumors, and various tumor-secreted factors, such as oncoproteins and oncopeptides, RNA species (such as mRNAs, miRNAs, and lncRNAs), lipids, and DNA fragments, are known to participate in this communication [8, 9]. The biological alterations present in the TME provide target molecules that facilitate prognosis evaluation and anticancer therapies [9, 10]. LncRNAs and miRNAs are common types of noncoding RNAs that play multiple roles in normal physiology and pathological processes [11]. The competing endogenous RNA (ceRNA) hypothesis posits that RNA transcripts communicate with each other by competing for shared miRNAs, which acts as a widespread form of posttranscriptional regulation of gene expression [12, 13]. So far, several studies have investigated the prognostic value of ceRNA networks in HNSCC, and the differentially expressed RNAs involved were all obtained from comparisons of HNSCC cases and normal samples. For instance, Pan et al. constructed a ceRNA network in HNSCC patients and identified some miRNAs (hsa-mir-99a, hsa-mir-337, and hsa-mir-137) and mRNAs (NOSTRIN, TIMP4, GRB14, HOXB9, CELSR3, and ADGRD2) that might be prognostic biomarkers in HNSCC [14]. Zhou et al. constructed a ceRNA-related signature and speculated that the interactions among KCNQ1OT1, hsa-miR-148a-3p, ITGA5, and naive B cells might closely correlate with the initiation and progression of HNSCC [15]. Wang et al. investigated the role of the immune microenvironment in the development and prognosis of HPV-negative HNSCC tumors by constructing a ceRNA network [16]. Yang et al. identified five lncRNAs (MIR4435-2HG, CASC9, LINC01980, STARD4-AS1, and MIR99AHG) remarkably associated with the OS of HNSCC patients and one lncRNA (PART1) with superior performance in differentiating HNSCC tissues from non-HNSCC normal tissues [17]. However, ceRNA networks built on the immune/stromal scores of HNSCC patients remain to be further illustrated.

In this study, we first divided HNSCC patients into two groups according to their immune/stromal scores computed with the ESTIMATE algorithm. Differentially expressed genes (DEGs), lncRNAs (DELs), and miRNAs (DEMs) were identified between the high- and low-score groups. Then, Kaplan-Meier survival analysis was performed to explore the relationship between immune/stromal scores and overall survival (OS). In light of the better OS of patients with high immune scores, a ceRNA network was constructed using the DEGs, DELs, and DEMs from the high and low immune score groups.
In addition, a PPI network of the DEGs was constructed to select hub genes, and survival analysis was performed to evaluate the prognostic roles of the RNAs included in the ceRNA network. Furthermore, the differential expression and prognostic value of the survival-related RNAs were verified using the GSE65858 dataset from the Gene Expression Omnibus (GEO) database. Finally, analyses of the clinical relevance and immune cell infiltration for IL10RA, PRF1, and IL2RA were conducted.
## 2. Materials and Methods
### 2.1. Data Acquisition
The RNA-sequencing (FPKM) data and clinical characteristics of 468 HNSCC patients were obtained from the TCGA database (https://tcga-data.nci.nih.gov/tcga/). Patients with other malignant tumors were excluded from our study, and samples that simultaneously possessed mRNA, miRNA, and lncRNA expression data were included. One HNSCC cohort from the GEO database (GSE65858), with 270 HNSCC patients, was used for validation.
### 2.2. Stromal and Immune Scores Based on the ESTIMATE Algorithm
Immune scores and stromal scores were calculated by using the estimate R package (version 4.0.3) [18]. According to the median score of infiltrating immune/stromal cells, HNSCC patients were divided into two groups. Furthermore, Kaplan-Meier survival analysis was performed to illustrate the relationship between the OS and the immune/stromal scores of HNSCC patients using the survival package of R.
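The grouping-plus-survival step can be sketched in a few lines; the sketch below is an illustration in Python with the lifelines package rather than the authors' R workflow, and the file and column names are hypothetical.

```python
# Median split on immune score and Kaplan-Meier comparison of OS.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hnscc_immune_scores.csv")  # hypothetical: immune_score, os_days, os_event

median = df["immune_score"].median()
high = df[df["immune_score"] >= median]
low = df[df["immune_score"] < median]

kmf = KaplanMeierFitter()
for label, grp in (("high immune score", high), ("low immune score", low)):
    kmf.fit(grp["os_days"], event_observed=grp["os_event"], label=label)
    kmf.plot_survival_function()

res = logrank_test(high["os_days"], low["os_days"],
                   event_observed_A=high["os_event"],
                   event_observed_B=low["os_event"])
print(f"log-rank P = {res.p_value:.4f}")
```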
### 2.3. Identification of DEGs, DELs, and DEMs
The DEGs, DELs, and DEMs between the two groups were determined with the limma package of R. The DEGs and DEMs were selected with P < 0.05, false discovery rate (FDR) < 0.05, and log2 fold change (FC) > 1.5. When determining the DELs, P < 0.05, FDR < 0.05, and log2 FC > 1.2 were used as cutoff values because there were so few candidate lncRNAs. Furthermore, the heatmap packages were applied to generate the heatmaps of DEGs, DELs, and DEMs.
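For illustration only, the cutoff filtering described above can be expressed as a small pandas sketch applied to a hypothetical limma-style result table (the file name and column names are assumptions):

```python
import pandas as pd

# Hypothetical limma output: one row per feature, with logFC, P.Value, adj.P.Val.
results = pd.read_csv("limma_immune_high_vs_low.csv", index_col=0)

def differentially_expressed(table: pd.DataFrame, lfc_cutoff: float) -> pd.DataFrame:
    # P < 0.05, FDR < 0.05, log2 FC above the cutoff
    # (1.5 for DEGs/DEMs, 1.2 for DELs).
    return table[(table["P.Value"] < 0.05)
                 & (table["adj.P.Val"] < 0.05)
                 & (table["logFC"].abs() > lfc_cutoff)]

degs = differentially_expressed(results, 1.5)
print(len(degs), "differentially expressed features")
```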
### 2.4. Functional Analysis of DEGs
Gene Ontology (GO) enrichment analyses by molecular function (MF), cellular component (CC), and biological process (BP), as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of the DEGs, were conducted using the ggplot2, enrichplot, and clusterProfiler packages of R. P values less than 0.05 were considered significantly enriched.
### 2.5. ceRNA Network Construction
Considering the prognostic relevance of the immune/stromal scores in HNSCC patients, we selected the grouping with the better P value for further analysis. miRanda, TargetScan, and miRWalk were used to predict miRNA-mRNA interactions, and miRanda and PITA were used to predict miRNA-lncRNA interactions. Then, the intersection was taken between the predicted target mRNAs/lncRNAs and the previously identified DEGs/DELs. Furthermore, only DEMs that negatively regulated the expression of DELs and DEGs were retained to construct the ceRNA network, which was visualized via Cytoscape v3.8.0.
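The intersection and filtering logic can be sketched as below; the predicted interaction pairs are illustrative placeholders (in practice they would come from miRanda/TargetScan/miRWalk and miRanda/PITA), and the example identifiers are borrowed from the Results section only to make the sketch self-contained.

```python
import pandas as pd

# Hypothetical predicted interactions.
mirna_mrna = pd.DataFrame({"miRNA": ["miR-148a-3p", "miR-3065-3p"],
                           "mRNA":  ["IL2RA", "PRF1"]})
mirna_lnc = pd.DataFrame({"miRNA":  ["miR-148a-3p"],
                          "lncRNA": ["CXCR2P1"]})

dems = {"miR-148a-3p", "miR-3065-3p"}  # differentially expressed miRNAs
degs = {"IL2RA", "PRF1", "IL10RA"}     # differentially expressed genes
dels = {"CXCR2P1", "HNRNPA1P21"}       # differentially expressed lncRNAs

# Keep only predicted pairs in which both partners are differentially expressed.
edges_mrna = mirna_mrna[mirna_mrna["miRNA"].isin(dems) & mirna_mrna["mRNA"].isin(degs)]
edges_lnc = mirna_lnc[mirna_lnc["miRNA"].isin(dems) & mirna_lnc["lncRNA"].isin(dels)]

# In the ceRNA model the miRNA should be anti-correlated with its targets;
# pairs changing in the same direction as the miRNA would be dropped at this step.
print(edges_mrna, edges_lnc, sep="\n")
```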
### 2.6. PPI Network and Survival Analysis
By using the STRING database, a PPI network of the DEGs included in the ceRNA network was constructed and then visualized with Cytoscape. Kaplan-Meier analysis was also performed to investigate the relationship between the expression of the DEGs, DELs, and DEMs in the ceRNA network and the OS of HNSCC patients. P < 0.05 was recognized as a statistically significant difference.
### 2.7. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
Based on clinical characteristics (age, sex, tumor stage, TNM stage, grade, smoking, radiation, and therapy), HNSCC patients were stratified into distinct subgroups. A chi-square test was performed to determine the differences in clinical characteristics between the different expression groups of IL10RA, PRF1, and IL2RA. In addition, QUANTISEQ (https://icbi.i-med.ac.at/software/quantiseq/doc/) was employed to assess differences in immune cell infiltration.
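As a sketch of this step (not the authors' exact pipeline; file and column names are hypothetical), the stratified comparison can be run with a standard chi-square test:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical clinical table: one row per patient, with an expression group
# (e.g., high/low IL10RA, split at the median) and a clinical characteristic.
clin = pd.read_csv("hnscc_clinical.csv")

# Contingency table of expression group vs. tumor stage.
table = pd.crosstab(clin["IL10RA_group"], clin["tumor_stage"])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3g}")
```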
## 3. Results
### 3.1. Clinical Characteristics of HNSCC Patients
A total of 468 HNSCC patients were eventually enrolled in our study; their clinicopathological characteristics obtained from the TCGA database are summarized in Table S1. Patient age ranged from 19 to 90 years; 346 (73.9%) were male and 122 (26.1%) were female. The median survival time was 625 days (range, 2 to 6417 days).
### 3.2. Immune Scores and Stromal Scores of HNSCC Patients
Immune scores and stromal scores were used to infer the levels of infiltrating stromal and immune cells in tumor tissues. The 468 HNSCC patients were categorized into lower and upper halves based on the median immune/stromal scores; the immune scores ranged from −1088.39 to 2912.77, and the stromal scores from −2092.21 to 1989.27 (Table S2). Kaplan-Meier survival curves showed that patients with higher immune scores and stromal scores had longer survival times than those with lower scores, although these differences were not statistically significant (P=0.0639 and P=0.8799, respectively) (Figures 1(a) and 1(e)).
Figure 1
Gene expression profiles and survival analysis based on immune scores and stromal scores. (a) Kaplan-Meier survival curves of high (red line) and low (blue line) immune scores. Immune scores for the heatmaps of (b) DEGs, (c) DELs, and (d) DEMs. (e) Kaplan-Meier survival curves of high (red line) and low (blue line) stromal scores. Stromal scores for the heatmaps of (f) DEGs, (g) DELs, and (h) DEMs.
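For readers wishing to reproduce this scoring step, the following sketch shows typical usage of the estimate package, which operates on files. The file names, the `platform = "illumina"` choice, and the `clin` object are assumptions of the example rather than details confirmed by the paper.

```r
library(estimate)
library(survival)

# The estimate package is file-based: write the expression matrix, filter to
# the common gene set, then score. File names and platform are illustrative.
write.table(expr_mrna, "expr.txt", sep = "\t", quote = FALSE, col.names = NA)
filterCommonGenes(input.f = "expr.txt", output.f = "expr.gct", id = "GeneSymbol")
estimateScore(input.ds = "expr.gct", output.ds = "scores.gct",
              platform = "illumina")

# The .gct output has two header lines, then rows StromalScore/ImmuneScore/...
scores <- read.table("scores.gct", skip = 2, header = TRUE, sep = "\t",
                     row.names = 1, check.names = FALSE)
immune <- unlist(scores["ImmuneScore", -1])          # drop Description column
grp    <- ifelse(immune > median(immune), "high", "low")
fit    <- survfit(Surv(time, status) ~ grp,
                  data = cbind(clin, grp = grp[rownames(clin)]))
plot(fit, col = c("red", "blue"))  # KM curves as in Figures 1(a) and 1(e)
```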
### 3.3. Identification of DEGs, DELs, and DEMs
A total of 569 DEGs, 185 DELs, and 31 DEMs were identified between the high and low immune score groups, and 384 DEGs, 186 DELs, and 50 DEMs were obtained between the high and low stromal score groups. Heatmaps of the DEGs, DELs, and DEMs in these two comparisons were generated and are shown in Figure 1.
### 3.4. GO and KEGG Enrichment Analyses of the DEGs
Enrichment analysis of the 953 DEGs identified in the previous section was performed to reveal their potential functions. The top GO terms of the upregulated DEGs in the immune score groups included “antigen binding,” “external side of plasma membrane,” and “lymphocyte mediated immunity” in MF, CC, and BP, respectively. For the downregulated DEGs, the top GO terms included “aldo-keto reductase (NADP) activity” in MF, “apical part of cell” in CC, and “cellular ketone metabolic process” in BP. The KEGG pathways enriched by the upregulated DEGs mainly involved “allograft rejection” and “viral protein interaction with cytokine and cytokine receptor,” and the main KEGG term enriched by the downregulated DEGs was “metabolic pathways” (Figures 2(a)–2(d)).
Figure 2
Gene expression profiles based on GO and KEGG for immune scores and stromal scores. Immune scores for GO terms for (a) upregulated DEGs and (b) downregulated DEGs. Enrichment of pathways for (c) upregulated DEGs and (d) downregulated DEGs. Stromal scores for GO terms for (e) upregulated DEGs and (f) downregulated DEGs. Enrichment of pathways for (g) upregulated DEGs and (h) downregulated DEGs.
In addition, the top GO terms of the upregulated DEGs in the stromal score groups included “extracellular matrix structural constituent,” “collagen-containing extracellular matrix,” and “external encapsulating structure organization” in MF, CC, and BP, respectively. For the downregulated DEGs, the top GO terms included “enzyme inhibitor activity” in MF, “cornified envelope” in CC, and “epidermis development” in BP. The KEGG pathways associated with the upregulated DEGs mainly involved pathways related to “cornification,” and the KEGG terms associated with the downregulated DEGs included “estrogen signaling pathway” (Figures 2(e)–2(h)).
### 3.5. ceRNA Network
Although there were no significant differences in OS between patients with high and low immune/stromal scores (P=0.0639 and P=0.8799), this does not mean that the DEGs, DELs, and DEMs identified between these groups lack prognostic value. We therefore chose the DEGs, DELs, and DEMs of the immune score groups, which showed relatively better survival separation, for construction of the ceRNA network (Figure 1(a)). The resulting ceRNA network contained 926 edges connecting 97 DEGs, 79 DELs, and 22 DEMs (Figure 3(a)). Notably, hsa-miR-149-5p, hsa-miR-17-5p, hsa-miR-3065-3p, hsa-miR-767-5p, and hsa-miR-96-5p were the top five nodes, suggesting that they might be master regulators in the network.
Figure 3
The ceRNA network and PPI network. (a) The ceRNA network. The diamond, rectangle, and oval shapes represent DELs, DEMs, and DEGs, respectively. (b) The PPI network of DEGs. The lines indicate interactions between the nodes.
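Ranking nodes by connectivity, as done here to identify the top five miRNAs, can be sketched with igraph; `edges` is assumed to be the two-column interaction table of the ceRNA network exported earlier.

```r
library(igraph)

# edges: the two-column ceRNA interaction table exported above (illustrative)
g   <- graph_from_data_frame(edges, directed = FALSE)
deg <- sort(degree(g), decreasing = TRUE)
head(deg, 5)  # the most connected nodes, e.g. the top-ranked miRNAs
```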
### 3.6. PPI Network Construction and Survival Analysis
The PPI network constructed with the 97 DEGs in the ceRNA network contained 69 nodes and 203 edges. Twelve genes (FOXP3, IL10RA, CD274, CXCL9, IRF1, STAT5A, CXCL12, PRF1, IL2RA, MMP9, CSF1, and PTGS2) were prominent for having many connections with other genes (Figure 3(b)). Furthermore, survival analysis of the DEGs, DELs, and DEMs involved in the ceRNA network was performed. The survival curves of five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA), two DEMs (miR-148a-3p and miR-3065-3p), and four DELs (CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2) are exhibited in Figures 4(a)–4(k). High expression levels of FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 and low expression levels of miR-3065-3p were associated with longer OS in HNSCC patients (Figure 4).
Figure 4
Kaplan-Meier survival curves of DEGs, DELs, and DEMs involved in the ceRNA network.
### 3.7. Validation Using One Additional Independent Cohort
To verify whether the eleven prognostic biomarkers above were differentially expressed and of prognostic significance in another independent HNSCC cohort, we downloaded GSE65858 from the GEO database for validation. However, because this dataset lacked sufficient sequencing data for miRNAs and lncRNAs, we could only perform the differential expression and survival analyses of the five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA) between the high and low immune score groups. As shown in Figure 5, IL10RA, PRF1, and IL2RA were not only significantly upregulated in patients with high immune scores, but their high expression was also associated with longer survival times, consistent with the results in the TCGA cohort.
Figure 5
Verification of the differential expression and survival analyses in the GEO database. (a) Differential expression levels of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in the high and low immune score groups. (b) Kaplan-Meier survival curves of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in HNSCC patients.
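Fetching the validation cohort can be sketched with GEOquery as below; the downstream steps then mirror the TCGA analysis sketched earlier.

```r
library(GEOquery)
library(Biobase)

# getGEO returns a list of ExpressionSets; GSE65858 is the cohort used here
gse   <- getGEO("GSE65858", GSEMatrix = TRUE)[[1]]
expr  <- exprs(gse)   # probe/gene-level expression matrix
pheno <- pData(gse)   # sample annotations, including survival fields

# From here, the ESTIMATE scoring, median split, limma comparison, and
# Kaplan-Meier analysis sketched earlier can be repeated on this cohort.
```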
### 3.8. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
The distribution of clinical variables across the corresponding expression subgroups was visualized (Figures 6(a)–6(c)). The composition of T status, tumor stage, and clinical grade differed significantly between the PRF1 expression groups, and for IL2RA there was a significant difference in clinical grade, indicating that PRF1 and IL2RA might be involved in the mechanism of tumor progression. By contrast, we observed no significant differences in clinical characteristics between the IL10RA expression groups.
Figure 6
Analysis of the clinical characteristics and immune cell infiltration for IL10RA, PRF1, and IL2RA. The heatmaps of clinical characteristics in the different expression groups of (a) IL10RA, (b) PRF1, and (c) IL2RA. The differences in immune cell infiltration in the distinct expression groups of (d) IL10RA, (e) PRF1, and (f) IL2RA.
In addition, the immune infiltration analysis showed that patients with high IL10RA, PRF1, and IL2RA expression exhibited greater infiltration of immune cells such as CD8+ T cells, cytotoxic lymphocytes, NK cells, B cells, monocytes, and myeloid dendritic cells (Figures 6(d)–6(f)). This provided further evidence that these three genes are elevated in patients in the high immune score group.
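Comparing cell-type fractions between expression groups reduces to a rank-sum test per cell type. In this sketch, `frac` (a cell type × patient matrix of quanTIseq fractions, assumed precomputed) and `grp` are placeholders, and the Wilcoxon test is one reasonable choice for such group comparisons; the paper does not name the test used.

```r
# Illustrative inputs: frac is a cell type x patient matrix of quanTIseq
# fractions; grp labels each patient "high" or "low" for a given gene.
pvals <- apply(frac, 1, function(f)
  wilcox.test(f[grp == "high"], f[grp == "low"])$p.value)
sort(pvals)  # cell types whose infiltration differs most between the groups
```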
## 4. Discussion
In this study, 468 HNSCC patients were divided into two groups based on immune/stromal scores calculated with the ESTIMATE algorithm. ESTIMATE, a method that infers the fraction of immune and stromal cells in tumor samples from gene expression [18], quantifies the level of immune/stromal cells in the TME in the form of a score. Survival analysis of the high- and low-score groups showed that patients with higher immune scores had relatively longer survival times, though the difference was not statistically significant. Recent studies have asserted that infiltrating immune cells play crucial roles in tumor relapse, metastasis, therapy, and prognosis [19–22], and immune cell infiltration in HNSCC has been shown to be involved in m6A methylation, alternative splicing, increased tumor mutation burden, and prognosis [23–25]. Therefore, a deeper understanding of immune cells and tumors at the molecular level is urgently needed.

A ceRNA network composed of 97 DEGs, 79 DELs, and 22 DEMs was constructed, with hsa-miR-149-5p, hsa-miR-17-5p, hsa-miR-3065-3p, hsa-miR-767-5p, and hsa-miR-96-5p as the top five nodes. The PPI network then identified 12 hub genes (FOXP3, IL10RA, CD274, CXCL9, IRF1, STAT5A, CXCL12, PRF1, IL2RA, MMP9, CSF1, and PTGS2), which might play important roles in the network. Among these genes, only PTGS2 was downregulated in the high immune score patients. KEGG analysis showed that PTGS2 was significantly involved in the terms “arachidonic acid metabolism,” “metabolic pathway,” and “chemical carcinogenesis.” PTGS2, also known as COX-2, is an enzyme critical for PGE2 synthesis and is associated with enhanced cancer cell survival, growth, migration, and invasion [26, 27]. PTGS2 is also associated with prognosis in multiple cancers [28]. In addition, in some tumors, tumor-derived PTGS2 serves an essential role in immune evasion by inducing PGE2, allowing the tumor to escape elimination by type I interferon and/or T cells [26]. In the ceRNA network, PTGS2 was regulated by miR-148a-3p, and high expression of miR-148a-3p was significantly associated with increased OS in HNSCC patients (P=0.0296; Figure 4). Accordingly, miR-148a-3p has been identified as a tumor suppressor in colorectal cancer, and its downregulation is associated with immune suppression [29]. The roles of miR-148a-3p/PTGS2 in the TME remain to be further investigated.

Of the upregulated hub genes in the PPI network, increased levels of IL10RA, PRF1, and IL2RA were significantly associated with longer survival times of HNSCC patients in both the TCGA and GEO databases. GO and KEGG analyses showed that IL2RA was significantly associated with the “cytokine-cytokine receptor interaction” pathway. IL-2 interacts with IL2RA, which stimulates Tregs to express the transcription factors STAT5 and Foxp3, both of which play essential roles in Treg development and homeostasis [30, 31]. Tregs in the TME are plastic, endowing them with dual functionality [32]. Tregs are negatively correlated with OS in a majority of tumors [33]. However, they appear to be associated with improved OS in head and neck cancers [34], which is consistent with our finding that high expression levels of IL2RA were associated with favorable survival in HNSCC patients. IL10 is an essential regulator of immune homeostasis and notably serves this role through binding to its cell surface receptor, IL10RA [35].
GO and KEGG analyses showed that IL10RA was significantly associated with the “cytokine-mediated signaling pathway,” “cytokine-cytokine receptor interaction,” and “Jak-STAT signaling pathway” terms. Song et al. suggested that high expression of IL10RA in HNSCC had better prognostic value, which is consistent with our findings [36]. The expression of PRF1 has been used to assess tumor-infiltrating lymphocytes in the tumor microenvironment and is related to the response of patients treated with anti-CTLA-4 and anti-PD-1/PD-L1 therapy [37, 38]. Furthermore, Yang et al. asserted that a high expression level of PRF1 provides an appropriate microenvironment for anti-CTLA-4 and anti-PD-1/PD-L1 therapy in type I and II ovarian cancer [39]. Similarly, in this study, overexpression of PRF1 was identified in HNSCC patients with high immune scores, and its overexpression was associated with increased OS. These findings, combined with the results of the clinical relevance and immune cell infiltration analyses for IL10RA, PRF1, and IL2RA, emphasize that PRF1 and IL2RA might be involved in the mechanism of tumor progression and provide further evidence for the overexpression of the three genes in the high immune score group.

Interestingly, in the ceRNA network, miR-3065-3p and the lncRNAs CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 were significantly correlated with OS in HNSCC patients in the TCGA database; the relevant interactions within the network include the following: miR-3065-3p/IL2RA; CXCR2P1/miR-210-3p/FOXP3; HNRNPA1P21/miR-767-5p/IL10RA; CTA-384D8.36/miR-149-5p/STAT5A; CTA-384D8.36/miR-149-5p/PRF1; and IGHV1OR15-2/miR-744-3p/IL2RA. MiR-3065-3p was downregulated in high immune score patients, and patients with low miR-3065-3p expression had improved overall survival. Overexpression of the lncRNAs CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 was correlated with longer overall survival in HNSCC patients. Among these, CXCR2P1 has been speculated to be related to the immune checkpoints PD-1, PD-L1, and CTLA4, which are crucial for successful cancer immunotherapy [40, 41], but direct evidence is still lacking. Herein, the CXCR2P1/miR-210-3p/FOXP3 axis identified in the ceRNA network may provide clues for future studies of the biological functions of CXCR2P1 in the TME of HNSCC. For HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2, this is the first report of their prognostic roles in HNSCC. Although these prognosis-related miRNAs and lncRNAs could not be verified because of the lack of RNA sequencing and survival data in the GEO database, their biological functions in the TME of HNSCC should not be ignored and should be studied further when sufficient clinical samples and information become available. In addition, another limitation of the present study should be taken into consideration: we inferred the levels of infiltrating stromal and immune cells in tumor tissues using the ESTIMATE algorithm rather than by direct analysis of the infiltrating cells themselves, which provides relevant information but may not exactly reflect the actual cellular content of the TME.

In conclusion, we estimated the levels of infiltrating stromal and immune cells in the TME of HNSCC patients and constructed a ceRNA network in which FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, miR-3065-3p, CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 were significantly correlated with the OS of HNSCC patients.
In addition, the expression patterns and prognostic roles of IL10RA, PRF1, and IL2RA were verified in an independent GEO cohort. These findings might provide novel targets in the TME of HNSCC and contribute to therapeutic intervention and prognosis evaluation.
---
*Source: 1014347-2022-09-02.xml* | 1014347-2022-09-02_1014347-2022-09-02.md | 37,171 | Immune Infiltration-Related ceRNA Network Revealing Potential Biomarkers for Prognosis of Head and Neck Squamous Cell Carcinoma | Shuai Zhao; Mengle Peng; Zhongquan Wang; Jingjing Cao; Xinyu Zhang; Ruijing Yu; Tao Huang; Wenping Lian | Disease Markers
(2022) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1014347 | 1014347-2022-09-02.xml | ---
## Abstract
Background. Head and neck squamous cell carcinoma (HNSCC) is a frequently lethal malignancy, and the mortality is considerably high. The tumor microenvironment (TME) has been identified as a critical participation in cancer development, treatment, and prognosis. However, competing endogenous RNA (ceRNA) networks grouping with immune/stromal scores of HNSCC patients need to be further illustrated. Therefore, our study aimed to provide clues for searching promising prognostic markers of TME in HNSCC. Materials and Methods. ESTIMATE algorithm was used to calculate immune scores and stromal scores of the enrolled HNSCC patients. Differentially expressed genes (DEGs), lncRNAs (DELs), and miRNAs (DEMs) were identified by comparing the expression difference between high and low immune/stromal scores. Then, a ceRNA network and protein-protein interaction (PPI) network were constructed for selecting hub regulators. In addition, survival analysis was performed to access the association between immune scores, stromal scores, and differentially expressed RNAs in the ceRNA network and the overall survival (OS) of HNSCC patients. Then, the GSE65858 datasets from Gene Expression Omnibus (GEO) database was used for verification. At last, the difference between the clinical characteristics and immune cell infiltration in different expression groups of IL10RA, PRF1, and IL2RA was analyzed. Results. Survival analysis showed a better OS in the high immune score group, and then we constructed a ceRNA network composed of 97 DEGs, 79 DELs and 22 DEMs. Within the ceRNA network, FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, miR-3065-3p, and lncRNAs, including CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2, were closely correlated with the OS of HNSCC patients. Especially, using the data from GSE65858, we successfully verified that IL10RA, PRF1, and IL2RA were not only significantly upregulated in patients high immune scores, but also their high expressions were associated with longer survival time. In addition, stratified analysis showed that PRF1 and IL2RA might be involved in the mechanism of tumor progress. Conclusion. In conclusion, we constructed a ceRNA network related to the TME of HNSCC, which provides candidates for therapeutic intervention and prognosis evaluation.
---
## Body
## 1. Introduction
Head and neck cancer is a frequently lethal malignancy, with approximately 800,000 new cases every year [1, 2]. The head and neck squamous cell carcinoma (HNSCC) subtype accounts for almost 95% of head and neck cancers [3]. Despite significant advances in different therapy methods, such as chemotherapy, monoclonal antibody therapies, immunotherapy, and cytokine therapy [4, 5], the mortality of HNSCC is considerably high, mainly due to the heterogeneity, aggressiveness, and late diagnosis of HNSCC [6]. Thus, studies on the molecular mechanisms of HNSCC to discover effective biomarkers and targeted therapy to precisely predict prognosis are necessary.Currently, the tumor microenvironment (TME), consisting of extracellular matrix, stromal cells, and tumor-infiltrating immune cells, is known to be involved in cancer development, distant metastasis, and immune escape [7]. Bidirectional communication between tumor cells and their microenvironment causes the continual change over the evolution of tumors, and various tumor-secreted factors, such as oncoproteins and oncopeptides, RNA species (such as mRNAs, miRNAs, and lncRNAs), lipids, and DNA fragments, are known to participate in this communication [8, 9]. The biological alterations present in the TME provide target molecules that facilitate prognosis evaluation and anticancer therapies [9, 10]. LncRNAs and miRNAs are common types of noncoding RNAs that play multiple roles in normal physiology and pathological processes [11]. The competing endogenous RNA (ceRNA) hypothesis figures that RNA transcripts communicate with each other by competing for shared miRNAs, which act as a widespread form of posttranscriptional regulation of gene expression [12, 13]. So far, several studies have investigated prognostic value of ceRNA networks in HNSCC, and the differential expressed RNAs involved were all obtained from comparisons of HNSCC cases and normal samples. For instance, Pan et al. constructed a ceRNA network in HNSCC patients and identified some miRNAs (hsa-mir-99a, hsa-mir-337, and hsa-mir-137) and mRNAs (NOSTRIN, TIMP4, GRB14, HOXB9, CELSR3, and ADGRD2) that might be prognostic biomarkers in HNSCC [14]. Zhou et al. constructed a ceRNA-related signature and speculated that the interactions among KCNQ1OT1, hsa-miR-148a-3p, ITGA5, and naive B cells might closely correlate with the initiation and progression of HNSCC [15]. Wang et al. investigated the role of the immune microenvironment in the development and prognosis of HPV-negative HNSCC tumors by constructing a ceRNA network [16]. Yang et al. identified five lncRNAs (MIR4435-2HG, CASC9, LINC01980, STARD4-AS1, and MIR99AHG) with remarkable association with OS of HNSCC patients and one lncRNA (PART1) with a superior performance in differentiating HNSCC tissues from non-HNSCC normal tissues [17]. However, ceRNA networks grouping with immune/stromal scores of HNSCC patients need to be further illustrated.In this study, we firstly divided HNSCC patients into two groups according to the immune/stromal scores with the ESTIMATE algorithm. Differentially expressed genes (DEGs), lncRNAs (DELs), and miRNAs (DEMs) were identified between the high- and low-score groups. Then, Kaplan-Meier survival analysis was performed to explore the relationship between immune/stromal scores and overall survival (OS). In light of the better OS of patients with high immune scores, a ceRNA network was constructed using the DEGs, DELs, and DEMs from the high and low immune score groups. 
In addition, a PPI network of DEGs was constructed to select hub genes, and survival analysis was performed to evaluate the prognostic roles of these RNAs included in the ceRNA network. Furthermore, the different expression and prognostic value of the survival-related RNAs were verified using the GSE65858 dataset from Gene Expression Omnibus (GEO) database. Finally, analysis of the clinical relevance and immune cell infiltration for IL10RA, PRF1, and IL2RA were conducted.
## 2. Materials and Methods
### 2.1. Data Acquisition
The RNA-sequencing (FPKM) and clinical characteristics of 468 HNSCC patients were obtained from the TCGA Database (https://tcga-data.nci.nih.gov/tcga/). Patients with other malignant tumors were excluded from our study, and samples that possessed the mRNA, miRNA, and lncRNA expression data simultaneously were included. One HNSCC cohort of GEO database (GSE65858) with 270 HNSCC patients was used for validation.
### 2.2. Stromal and Immune Scores Based on the ESTIMATE Algorithm
Immune scores and stromal scores were calculated by using the estimate R package (version 4.0.3) [18]. According to the median score of infiltrating immune/stromal cells, HNSCC patients were divided into two groups. Furthermore, Kaplan-Meier survival analysis was performed to illustrate the relationship between the OS and the immune/stromal scores of HNSCC patients using the survival package of R.
### 2.3. Identification of DEGs, DELs, and DEMs
The DEGs, DELs, and DEMs between the two groups were determined with the limma package of R. The DEGs and DEMs were selected withP<0.05, false discovery rate FDR<0.05, and log2foldchangeFC>1.5. When determining the DELs, P<0.05, FDR<0.05, and log2FC>1.2 were used as cutoff values because there were so few candidate lncRNAs. Furthermore, the heatmap packages were applied to generate the heatmaps of DEGs, DELs, and DEMs.
### 2.4. Functional Analysis of DEGs
Gene ontology (GO) categories by molecular function (MF) and cellular component (CC) and biological process (BP), as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of DEGs were conducted by using ggplot2, enrichplot, and clusterProfiler package of R.P values less than 0.05 were considered significantly enriched.
### 2.5. ceRNA Network Construction
Considering the prognostic relevance of immune/stromal scores in HNSCC patients, we selected the groups with a betterP value for further analysis. MiRanda, TargetScan, and miRWalk were used to predict miRNA-mRNA interactions, and miRanda and PITA were used to predict miRNA-lncRNA interactions. Then, the intersection was taken between the target mRNAs/lncRNAs and the previously identified DEGs/DELs. Furthermore, DEMs that negatively regulated the expression of DEL and DEGs were retained to construct the ceRNA network and visualized via Cytoscape v3.8.0.
### 2.6. PPI Network and Survival Analysis
By using the STRING database, a PPI network of DEGs included in the ceRNA network was constructed and then visualized with Cytoscape. And Kaplan-Meier analysis was performed to investigate the relationship between the expression of DEGs, DELs, and DEMs in the ceRNA network and OS of HNSCC patients.P<0.05 was recognized as a statistically significant difference.
### 2.7. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
Based on clinical characteristics (age, sex, tumor stage, TNM stage, grade, smoking, radiation, and therapy), HNSCC patients were stratified into distinct subgroups. A Chi-square test was performed to determine the difference in clinical characteristics between different expression groups of IL10RA, PRF1, and IL2RA. Besides, QUANTISEQ (https://icbi.i-med.ac.at/software/quantiseq/doc/) was employed to access difference in immune cells infiltration.
## 2.1. Data Acquisition
The RNA-sequencing (FPKM) and clinical characteristics of 468 HNSCC patients were obtained from the TCGA Database (https://tcga-data.nci.nih.gov/tcga/). Patients with other malignant tumors were excluded from our study, and samples that possessed the mRNA, miRNA, and lncRNA expression data simultaneously were included. One HNSCC cohort of GEO database (GSE65858) with 270 HNSCC patients was used for validation.
## 2.2. Stromal and Immune Scores Based on the ESTIMATE Algorithm
Immune scores and stromal scores were calculated by using the estimate R package (version 4.0.3) [18]. According to the median score of infiltrating immune/stromal cells, HNSCC patients were divided into two groups. Furthermore, Kaplan-Meier survival analysis was performed to illustrate the relationship between the OS and the immune/stromal scores of HNSCC patients using the survival package of R.
## 2.3. Identification of DEGs, DELs, and DEMs
The DEGs, DELs, and DEMs between the two groups were determined with the limma package of R. The DEGs and DEMs were selected withP<0.05, false discovery rate FDR<0.05, and log2foldchangeFC>1.5. When determining the DELs, P<0.05, FDR<0.05, and log2FC>1.2 were used as cutoff values because there were so few candidate lncRNAs. Furthermore, the heatmap packages were applied to generate the heatmaps of DEGs, DELs, and DEMs.
## 2.4. Functional Analysis of DEGs
Gene ontology (GO) categories by molecular function (MF) and cellular component (CC) and biological process (BP), as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses of DEGs were conducted by using ggplot2, enrichplot, and clusterProfiler package of R.P values less than 0.05 were considered significantly enriched.
## 2.5. ceRNA Network Construction
Considering the prognostic relevance of immune/stromal scores in HNSCC patients, we selected the groups with a betterP value for further analysis. MiRanda, TargetScan, and miRWalk were used to predict miRNA-mRNA interactions, and miRanda and PITA were used to predict miRNA-lncRNA interactions. Then, the intersection was taken between the target mRNAs/lncRNAs and the previously identified DEGs/DELs. Furthermore, DEMs that negatively regulated the expression of DEL and DEGs were retained to construct the ceRNA network and visualized via Cytoscape v3.8.0.
## 2.6. PPI Network and Survival Analysis
By using the STRING database, a PPI network of DEGs included in the ceRNA network was constructed and then visualized with Cytoscape. And Kaplan-Meier analysis was performed to investigate the relationship between the expression of DEGs, DELs, and DEMs in the ceRNA network and OS of HNSCC patients.P<0.05 was recognized as a statistically significant difference.
## 2.7. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
Based on clinical characteristics (age, sex, tumor stage, TNM stage, grade, smoking, radiation, and therapy), HNSCC patients were stratified into distinct subgroups. A Chi-square test was performed to determine the difference in clinical characteristics between different expression groups of IL10RA, PRF1, and IL2RA. Besides, QUANTISEQ (https://icbi.i-med.ac.at/software/quantiseq/doc/) was employed to access difference in immune cells infiltration.
## 3. Results
### 3.1. Clinical Characteristics of HNSCC Patients
468 HNSCC patients were eventually enrolled in our study, and the clinicopathological characteristics obtained from the TCGA database were summarized in TableS1. The age ranged from 19 to 90 years, and 346 (73.9%) were male and 122 (26.1%) were female. The median survival time was 625 days, ranged from 2 to 6417 days.
### 3.2. Immune Scores and Stromal Scores of HNSCC Patients
Immune scores and stromal scores were used to infer the level of infiltrating stromal and immune cells in tumor tissues. The 468 HNSCC patients were categorized into lower and upper halves based on the median immune/stromal scores. And the immune scores ranged from − 1088.39 to 2912.77, the stromal scores ranged from − 2092.21 to 1989.27 (TableS2).Kaplan-Meier survival curves showed that patients with higher stromal scores and immune scores had longer survival times than those with lower scores, although these differences in survival were not statistically significant (P=0.0639 and P=0.8799, respectively) (Figures 1(a) and 1(e)).Figure 1
Gene expression profiles and survival analysis based on immune scores and stromal scores. (a) Kaplan-Meier survival curves of high (red line) and low (blue line) immune scores. Immune scores for the heatmaps of (b) DEGs, (c) DELs, and (d) DEMs. (e) Kaplan-Meier survival curves of high (red line) and low (blue line) stromal scores. Stromal scores for the heatmaps (f) DEGs, (g) DELs, and (h) DEMs.
(a)(b)(c)(d)(e)(f)(g)(h)
### 3.3. Identification of DEGs, DELs, and DEMs
A total of 569 DEGs, 185 DELs, and 31 DEMs were identified between the high and low immune score groups, and 384 DEGs, 186 DELs, and 50 DEMs were obtained between the high and low stromal score groups. Heatmaps of the DEGs, DELs, and DEMs in these two comparisons were generated and are shown in Figure1.
### 3.4. GO and KEGG Enrichment Analyses of the DEGs
Enrichment analysis of the 953 DEGs identified in the previous section was performed to reveal their potential functions. GO terms of upregulated DEGs in the immune score groups included “antigen binding,” “external side of plasma membrane,” and “lymphocyte mediated immunity” in MF, CC, and BP, respectively. For downregulated DEGs, the top GO terms included “aldo-keto redustase (NADP) activity” in MF, “apical part of cell” in CC, and “cellular ketone metabolic process” in BP. The enriched KEGG pathways of those upregulated DEGs were mainly involved in “allograft rejection” and “viral protein interaction with cytokine and cytokine receptor,” and the main KEGG term enriched by the downregulated DEGs was “metabolic pathways” (Figures2(a)–2(d)).Figure 2
Gene expression profiles based on GO and KEGG for immune scores and stromal scores. Immune scores for GO terms for (a) upregulated DEGs and (b) downregulated DEGs. Enrichment of pathways for (c) upregulated DEGs and (d) downregulated DEGs. Stromal scores for GO terms for (e) upregulated DEGs and (f) downregulated DEGs. Enrichment of pathways for (g) upregulated DEGs and (h) downregulated DEGs.
(a)(b)(c)(d)(e)(f)(g)(h)In addition, the top GO terms of the upregulated DEGs in the stromal score groups included “extracellular matrix structural constituent,” “collagen-containing extracellular matrix,” and “external encapsulating structure organization” in MF, CC, and BP, respectively. For the downregulated DEGs, the top GO terms included “enzyme inhibitor activity” in MF, “cornified envelope” in CC, and “epidermis development” in BP. The KEGG pathways associated with the upregulated DEGs mainly involved pathways related to “cornification,” and the KEGG terms associated with the downregulated DEGs included “estrogen signaling pathway” (Figures2(e)–2(h)).
### 3.5. ceRNA Network
There were no significant differences in OS between patients with high and low immune/stromal scores (P=0.0639 and P=0.8799), while it did not mean that the DEGs, DELs, and DEMs between the two groups had no prognostic values. Thus, we chose DEGs, DELs, and DEMs of immune score groups, which had a relatively better survival for the construction of ceRNA network (Figure 1(a)). Finally, the ceRNA network contained 926 edges composed of 97 DEGs, 79 DELs, and 22 DEMs were constructed (Figure 3(a)). Especially, hsa-miR-149-5p, has-miR-17-5p, hsa-miR-3065-3p, hsa-miR-767-5p, and hsa-miR-96-5p were the top 5 nodes, suggesting that they might be master regulators in the network.Figure 3
The ceRNA network and PPI network. (a) The ceRNA network. The diamond, rectangle, and oval shape represent DELs, DEMs, and DEGs, respectively. (b) The PPI network of DEGs. The lines indicate interactions between the RNAs.
(a)(b)
### 3.6. PPI Network Construction and Survival Analysis
The PPI network constructed with the 97 DEGs in the ceRNA network contained 69 nodes and 203 edges. Twelve genes (FOXP3, IL10RA, CD274, CXCL9, IRF1, STAT5A, CXCL12, PRF1, IL2RA, MMP9, CSF1, and PTGS2) were prominent for having many connections with other genes (Figure3(b)). Furthermore, survival analysis of DEGs, DELs, and DEMs involved in the ceRNA network was performed. The survival curves of five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA), two DEMs (miR-148a-3p and miR-3065-3p), and four DELs (CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2) are exhibited in Figures 4(a)–4(k). High expression levels of FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 and low expression levels of miR-3065-3p were associated with longer OS in HNSCC patients (Figure 4).Figure 4
Kaplan-Meier survival curves of DEGs, DELs, and DEMs involved in the ceRNA network.
(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)
### 3.7. Validation Using One Additional Independent Cohort
To verify whether the eleven prognostic biomarkers above were differentially expressed and of prognostic significance in another independent HNSCC cohort, we downloaded GSE65858 from the GEO database for validation. However, for lacking sufficient RNA sequencing data of miRNAs and lncRNAs, we only successfully performed the differential expression and survival analysis of the five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA) between high and low immune score groups. As shown in Figure5, IL10RA, PRF1, and IL2RA were not only significantly upregulated in patients high immune scores, but also their high expressions were associated with longer survival time, which were consistent with the results in the TCGA cohort.Figure 5
Verification of the different expression and survival analyses in GEO database. (a) Different expression levels of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in high and low immune score groups. (b) Kaplan-Meier survival curves of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in HNSCC patients.
(a)(b)
### 3.8. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
The distribution of clinical variables with corresponding expression subgroups was visualized (Figures6(a)–6(c)). The results showed that the composition of T status, tumor stage, and clinical grade were significantly distinct between different PRF1 expression groups. And for the clinical relevance of IL2RA expression patterns, there was of significant difference in clinical grade, indicating that PRF1 and IL2RA might be involved in the mechanism of tumor progress. Unfortunately, we noticed no significant difference in clinical components between different IL10RA expression groups.Figure 6
Analysis of the clinical characteristics and immune cell infiltration for IL10RA, PRF1, and IL2RA. The heatmaps of clinical characteristics in different expression groups of (a) IL10RA, (b) PRF1, and (c) IL2RA. The differences of immune cells in distinct expression groups of (d) IL10RA, (e) PRF1, and (f) IL2RA.
(a)(b)(c)(d)(e)(f)In addition, the immune infiltrating analysis showed that patients with high IL10RA, PRF1, and IL2RA expression exhibited high immune cells infiltration, such as CD8+ T cell, cytotoxic lymphocytes, NK cell, B cell, monocyte, and myeloid dendritic cell (Figures6(d)–6(f)). This was also another evidence for that these three genes were elevated in patients high immune score groups.
## 3.1. Clinical Characteristics of HNSCC Patients
468 HNSCC patients were eventually enrolled in our study, and the clinicopathological characteristics obtained from the TCGA database were summarized in TableS1. The age ranged from 19 to 90 years, and 346 (73.9%) were male and 122 (26.1%) were female. The median survival time was 625 days, ranged from 2 to 6417 days.
## 3.2. Immune Scores and Stromal Scores of HNSCC Patients
Immune scores and stromal scores were used to infer the level of infiltrating stromal and immune cells in tumor tissues. The 468 HNSCC patients were categorized into lower and upper halves based on the median immune/stromal scores. And the immune scores ranged from − 1088.39 to 2912.77, the stromal scores ranged from − 2092.21 to 1989.27 (TableS2).Kaplan-Meier survival curves showed that patients with higher stromal scores and immune scores had longer survival times than those with lower scores, although these differences in survival were not statistically significant (P=0.0639 and P=0.8799, respectively) (Figures 1(a) and 1(e)).Figure 1
Gene expression profiles and survival analysis based on immune scores and stromal scores. (a) Kaplan-Meier survival curves of high (red line) and low (blue line) immune scores. Immune scores for the heatmaps of (b) DEGs, (c) DELs, and (d) DEMs. (e) Kaplan-Meier survival curves of high (red line) and low (blue line) stromal scores. Stromal scores for the heatmaps (f) DEGs, (g) DELs, and (h) DEMs.
(a)(b)(c)(d)(e)(f)(g)(h)
## 3.3. Identification of DEGs, DELs, and DEMs
A total of 569 DEGs, 185 DELs, and 31 DEMs were identified between the high and low immune score groups, and 384 DEGs, 186 DELs, and 50 DEMs were obtained between the high and low stromal score groups. Heatmaps of the DEGs, DELs, and DEMs in these two comparisons were generated and are shown in Figure1.
## 3.4. GO and KEGG Enrichment Analyses of the DEGs
Enrichment analysis of the 953 DEGs identified in the previous section was performed to reveal their potential functions. GO terms of upregulated DEGs in the immune score groups included “antigen binding,” “external side of plasma membrane,” and “lymphocyte mediated immunity” in MF, CC, and BP, respectively. For downregulated DEGs, the top GO terms included “aldo-keto redustase (NADP) activity” in MF, “apical part of cell” in CC, and “cellular ketone metabolic process” in BP. The enriched KEGG pathways of those upregulated DEGs were mainly involved in “allograft rejection” and “viral protein interaction with cytokine and cytokine receptor,” and the main KEGG term enriched by the downregulated DEGs was “metabolic pathways” (Figures2(a)–2(d)).Figure 2
Gene expression profiles based on GO and KEGG for immune scores and stromal scores. Immune scores for GO terms for (a) upregulated DEGs and (b) downregulated DEGs. Enrichment of pathways for (c) upregulated DEGs and (d) downregulated DEGs. Stromal scores for GO terms for (e) upregulated DEGs and (f) downregulated DEGs. Enrichment of pathways for (g) upregulated DEGs and (h) downregulated DEGs.
(a)(b)(c)(d)(e)(f)(g)(h)In addition, the top GO terms of the upregulated DEGs in the stromal score groups included “extracellular matrix structural constituent,” “collagen-containing extracellular matrix,” and “external encapsulating structure organization” in MF, CC, and BP, respectively. For the downregulated DEGs, the top GO terms included “enzyme inhibitor activity” in MF, “cornified envelope” in CC, and “epidermis development” in BP. The KEGG pathways associated with the upregulated DEGs mainly involved pathways related to “cornification,” and the KEGG terms associated with the downregulated DEGs included “estrogen signaling pathway” (Figures2(e)–2(h)).
## 3.5. ceRNA Network
There were no significant differences in OS between patients with high and low immune/stromal scores (P=0.0639 and P=0.8799), while it did not mean that the DEGs, DELs, and DEMs between the two groups had no prognostic values. Thus, we chose DEGs, DELs, and DEMs of immune score groups, which had a relatively better survival for the construction of ceRNA network (Figure 1(a)). Finally, the ceRNA network contained 926 edges composed of 97 DEGs, 79 DELs, and 22 DEMs were constructed (Figure 3(a)). Especially, hsa-miR-149-5p, has-miR-17-5p, hsa-miR-3065-3p, hsa-miR-767-5p, and hsa-miR-96-5p were the top 5 nodes, suggesting that they might be master regulators in the network.Figure 3
The ceRNA network and PPI network. (a) The ceRNA network. The diamond, rectangle, and oval shape represent DELs, DEMs, and DEGs, respectively. (b) The PPI network of DEGs. The lines indicate interactions between the RNAs.
(a)(b)
## 3.6. PPI Network Construction and Survival Analysis
The PPI network constructed with the 97 DEGs in the ceRNA network contained 69 nodes and 203 edges. Twelve genes (FOXP3, IL10RA, CD274, CXCL9, IRF1, STAT5A, CXCL12, PRF1, IL2RA, MMP9, CSF1, and PTGS2) were prominent for having many connections with other genes (Figure3(b)). Furthermore, survival analysis of DEGs, DELs, and DEMs involved in the ceRNA network was performed. The survival curves of five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA), two DEMs (miR-148a-3p and miR-3065-3p), and four DELs (CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2) are exhibited in Figures 4(a)–4(k). High expression levels of FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 and low expression levels of miR-3065-3p were associated with longer OS in HNSCC patients (Figure 4).Figure 4
Kaplan-Meier survival curves of DEGs, DELs, and DEMs involved in the ceRNA network.
(a)(b)(c)(d)(e)(f)(g)(h)(i)(j)(k)
## 3.7. Validation Using One Additional Independent Cohort
To verify whether the eleven prognostic biomarkers above were differentially expressed and of prognostic significance in another independent HNSCC cohort, we downloaded GSE65858 from the GEO database for validation. However, for lacking sufficient RNA sequencing data of miRNAs and lncRNAs, we only successfully performed the differential expression and survival analysis of the five DEGs (FOXP3, IL10RA, STAT5A, PRF1, and IL2RA) between high and low immune score groups. As shown in Figure5, IL10RA, PRF1, and IL2RA were not only significantly upregulated in patients high immune scores, but also their high expressions were associated with longer survival time, which were consistent with the results in the TCGA cohort.Figure 5
Verification of the different expression and survival analyses in GEO database. (a) Different expression levels of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in high and low immune score groups. (b) Kaplan-Meier survival curves of FOXP3, IL10RA, STAT5A, PRF1, and IL2RA in HNSCC patients.
(a)(b)
## 3.8. Analysis of the Clinical Relevance and Immune Cell Infiltration for IL10RA, PRF1, and IL2RA
The distribution of clinical variables with corresponding expression subgroups was visualized (Figures6(a)–6(c)). The results showed that the composition of T status, tumor stage, and clinical grade were significantly distinct between different PRF1 expression groups. And for the clinical relevance of IL2RA expression patterns, there was of significant difference in clinical grade, indicating that PRF1 and IL2RA might be involved in the mechanism of tumor progress. Unfortunately, we noticed no significant difference in clinical components between different IL10RA expression groups.Figure 6
Analysis of the clinical characteristics and immune cell infiltration for IL10RA, PRF1, and IL2RA. The heatmaps of clinical characteristics in different expression groups of (a) IL10RA, (b) PRF1, and (c) IL2RA. The differences of immune cells in distinct expression groups of (d) IL10RA, (e) PRF1, and (f) IL2RA.
(a)(b)(c)(d)(e)(f)In addition, the immune infiltrating analysis showed that patients with high IL10RA, PRF1, and IL2RA expression exhibited high immune cells infiltration, such as CD8+ T cell, cytotoxic lymphocytes, NK cell, B cell, monocyte, and myeloid dendritic cell (Figures6(d)–6(f)). This was also another evidence for that these three genes were elevated in patients high immune score groups.
## 4. Discussion
In this study, 468 HNSCC patients were divided into two groups based on immune/stromal scores using the ESTIMATE algorithm. ESTIMATE, a method that infers the fraction of immune and stromal cells in tumor samples based on gene expression [18], enables the quantification of the level of immune/stromal cells in TME in the form of a score. Survival analysis of the high- and low-score groups showed that patients with higher immune scores had relatively longer survival time, though the results showed no significant difference. Recent studies have asserted that infiltrating immune cells play crucial roles in tumor relapse, metastasis, therapy and prognosis [19–22]. And immune cell infiltration in HNSCC has been revealed to be involved in m6A methylation, alternative splicing, increased tumor mutation burden, and prognosis [23–25]. Therefore, a deeper understanding of immune cells and tumors at the molecular level is urgent.The ceRNA network composed of 97 DEGs, 79 DELs, and 22 DEMs were constructed, and hsa-miR-149-5p, has-miR-17-5p, hsa-miR-3065-3p, hsa-miR-767-5p, and hsa-miR-96-5p were the top 5 nodes. Then, the PPI network identified 12 hub genes (FOXP3, IL10RA, CD274, CXCL9, IRF1, STAT5A, CXCL12, PRF1, IL2RA, MMP9, CSF1, and PTGS2), which might play important roles in the network. Among these genes, only PTGS2 was downregulated in the high immune score patients. KEGG analysis showed that PTGS2 was significantly involved in the terms “arachidonic acid metabolism,” “metabolic pathway,” and “chemical carcinogenesis.” PTGS2, also known as COX-2, is an enzyme critical for PGE2 that is associated with the enhancement of cancer cell survival, growth, migration, and invasion [26, 27]. PTGS2 is also associated with prognosis in multiple cancers [28]. In addition, for some tumors, tumor-derived PTGS2 serves an essential role in tumor immune evasion by inducing PGE2 to successfully evade elimination induced by type I interferon and/or T cells [26]. In the ceRNA network, PTGS2 was regulated by miR-148a-3p, and high expression of miR-148a-3p was significantly associated with increased OS in HNSCC patients (P=0.0296; Figure 4). Accordingly, miR-148a-3p has been identified as a tumor suppressor in colorectal cancer, and its downregulation is associated with immune suppression [29]. The roles of miR-148a-3p/PTGS2 in the TME remain to be further investigated.Of the upregulated hub genes in the PPI network, increased levels of IL10RA, PRF1, and IL2RA were significantly associated with longer survival time of HNSCC patients in both TCGA and GEO database. GO and KEGG analyses showed that IL2RA was significantly associated with “cytokine-cytokine receptor interaction” pathway. IL-2 interacts with IL2RA, which stimulates Tregs to express the transcription factors STAT5 and Foxp3, which play an essential role in Treg development and homeostasis [30, 31]. Tregs in the TME are plastic, endowing them with dual functionality [32]. Tregs are negatively correlated with OS in a majority of tumors [33]. However, they appear to be associated with improved OS in head and neck cancers [34], which is consistent with our finding that high expression levels of IL2RA were associated with favorable survival in HNSCC patients. IL10 is an essential regulator in immune homeostasis and notably serves this role through binding to its cell surface receptor, IL10RA [35]. 
GO and KEGG analyses showed that IL10RA was significantly associated with the “cytokine-mediated signaling pathway,” “cytokine-cytokine receptor interaction,” and “Jak-STAT signaling pathway.” Song et al. suggested that high expression of IL10RA in HNSCC had better prognostic value, which is consistent with our findings [36]. The expression of PRF1 has been used to assess tumor-infiltrating lymphocytes in the tumor microenvironment and was related to the response of patients treated with anti-CTLA-4 and anti-PD-1/PD-L1 therapy [37, 38]. Furthermore, Yang et al. asserted that a high expression level of PRF1 provided an appropriate microenvironment for anti-CTLA-4 and anti-PD-1/PD-L1 therapy in type I and II ovarian cancer [39]. Similarly, in this study, overexpression of PRF1 was identified in HNSCC patients with a high immune score, and this overexpression was associated with increased OS. These findings, combined with the results of the clinical relevance and immune cell infiltration analyses for IL10RA, PRF1, and IL2RA, suggest that PRF1 and IL2RA might be involved in the mechanism of tumor progression and also support the overexpression of the three genes in the high immune score group.

Interestingly, in the ceRNA network, miR-3065-3p and the lncRNAs CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 were significantly correlated with OS in HNSCC patients in the TCGA database; the relevant interactions within the network include the following: miR-3065-3p/IL2RA, CXCR2P1/miR-210-3p/FOXP3, HNRNPA1P21/miR-767-5p/IL10RA, CTA-384D8.36/miR-149-5p/STAT5A, CTA-384D8.36/miR-149-5p/PRF1, and IGHV1OR15-2/miR-744-3p/IL2RA. MiR-3065-3p was downregulated in high immune score patients, and patients with low miR-3065-3p expression had improved overall survival. Overexpression of the lncRNAs CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 was correlated with longer overall survival of HNSCC patients. Among these, CXCR2P1 has been speculated to be related to the immune checkpoints PD-1, PD-L1, and CTLA4, which are crucial for successful cancer immunotherapy and contribute to the immune response [40, 41], but direct evidence is still lacking. Herein, the CXCR2P1/miR-210-3p/FOXP3 axis identified in the ceRNA network may provide clues for future studies of the biological functions of CXCR2P1 in the TME of HNSCC. The prognostic roles of HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 are reported here in HNSCC for the first time. Although these prognosis-related miRNAs and lncRNAs could not be verified, owing to the lack of RNA sequencing and survival data in the GEO database, their biological functions in the TME of HNSCC should not be ignored, and they should be studied further when sufficient clinical samples and information become available. Another limitation of the present study should also be taken into consideration: we inferred the level of infiltrating stromal and immune cells in tumor tissues by using the ESTIMATE algorithm rather than by direct analysis of the infiltrating cells themselves, which provides relevant information but does not necessarily reflect the actual cell content of the TME.

In conclusion, we estimated the level of infiltrating stromal and immune cells in the TME of HNSCC patients and constructed a ceRNA network, in which FOXP3, IL10RA, STAT5A, PRF1, IL2RA, miR-148a-3p, miR-3065-3p, CXCR2P1, HNRNPA1P21, CTA-384D8.36, and IGHV1OR15-2 were significantly correlated with the OS of HNSCC patients.
In addition, the expression and survival associations of IL10RA, PRF1, and IL2RA were verified in an independent GEO cohort. These findings might provide novel targets in the TME of HNSCC and contribute to therapeutic intervention and prognosis evaluation.
---
*Source: 1014347-2022-09-02.xml* | 2022 |
# Single and Binary Adsorption Systems of Rhodamine B and Methylene Blue onto Alkali-Activated Vietnamese Diatomite
**Authors:** Pham Dinh Du; Huynh Thanh Danh
**Journal:** Adsorption Science & Technology
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1014354
---
## Abstract
Diatomite was mildly modified with a sodium hydroxide solution. The resulting material was characterized by using energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and nitrogen adsorption-desorption isotherms. The treated diatomite has a high specific surface area (77.8 m2/g) and a high concentration of isolated silanol groups on the surface; therefore, its adsorption capacity increases drastically in both the single and binary adsorption systems for rhodamine B and methylene blue. The binary system is more effective than the single system, with methylene blue being adsorbed more than rhodamine B. The adsorption process is spontaneous, fits well with the Langmuir isothermal model, and depends significantly on pH.
---
## Body
## 1. Introduction
Dyes are widely used in numerous applications, such as the textile, paper, plastic, and dye industries [1]. The amount of dyes produced annually worldwide is estimated at over 7×105 tons, and more than 100,000 commercially available dyes with different physical and chemical properties are in use [2–4]. Various dyes and their decomposition products are toxic and carcinogenic, thus posing a danger to aquatic organisms [1, 5]. Therefore, dye removal from wastewater is essential.

A large number of dyes have a complex aromatic ring structure and are difficult to degrade biologically [4, 6]. Therefore, it is necessary to reduce their concentration in wastewaters prior to biological treatment. Chemical oxidation has been extensively studied for dye removal from wastewaters [1, 7–9]. However, oxidation often produces intermediate products that can cause secondary pollution. Meanwhile, the adsorption technique has proven to be a simple, efficient, and attractive way to remove nonbiodegradable pollutants (including dyes) from wastewaters [5, 10, 11].

To remove dyes from complex aqueous solutions, a variety of adsorbents have been used, such as banyan aerial roots [12], peat [2], bentonite [13], mesoporous silica nanoparticles [14], clay [15, 16], activated banana peel carbon [17], silica extracted from rice husk [18], and zeolite [19], and some of them exhibit high performance. However, the search for new, effective, cheap, and environmentally friendly adsorbents is ongoing.

Diatomite is a low-density, small-particle sedimentary rock consisting mainly of amorphous silica (SiO2·nH2O) derived from diatoms. Diatomite encompasses a variety of structures and has high porosity (up to 80%), a large specific surface area, and multiple hydroxyl groups on the surface [3, 6, 10, 11, 20]. These properties make diatomite a potential adsorbent for the pollutants present in industrial wastewaters, including dyes. Besides, diatomite is abundant in nature, cheap, and environmentally friendly [11]. Several studies have dealt with the applicability of natural diatomite in the adsorption field [6, 10, 11, 21–23]. Other studies have focused on diatomite surface modification with metals or organic functional groups to improve adsorption efficiency or expand its applications [1, 7–9, 24–32]. In some studies, natural diatomite is treated thermally [3, 10, 20, 33, 34], with acids [3, 20, 35–37], or with alkalis [4, 37, 38] to enhance its application performance. Other studies use diatomite as a raw material to manufacture other products [14, 35, 39–42]. Diatomite purified by calcining was also investigated by Yuan et al. [43]. They discovered that, as the temperature increases, the condensation of surface silanol groups occurs; hydrogen-bonded hydroxyl groups condense more easily than isolated hydroxyl groups, and Brønsted acid centers also condense at high temperatures. This condensation reduces the adsorption capacity of calcined diatomite toward basic dyes. Treatment with acids (normally at high concentrations: 5 M H2SO4 [3], 5 M HCl [35], 1–5 M HCl [36], 10% HCl [20], and 1 M H2SO4 [37]) is difficult to perform and easily causes secondary pollution. Therefore, numerous studies have focused only on diatomite purification because it is cheap, easy to operate, and environmentally friendly [4].
When purified with alkali, diatomite retains its surface hydroxyl groups, which are excellent adsorption centers for many metals as well as dyes.

Since most industrial wastewaters contain different pollutants, it is important to investigate the effect of multicomponent systems on the adsorption capacity. Various studies have investigated the simultaneous removal of different pollutants from aqueous solutions [2, 5, 44] to assess the competitiveness of adsorbates. In this study, natural diatomite is activated by treatment with a low-concentration sodium hydroxide solution to enhance the adsorption capacity for rhodamine B (RB) and methylene blue (MB) in single and binary systems. The equilibrium isotherms and thermodynamic parameters of the adsorption processes are studied. In addition, the effect of the solution pH on the adsorption efficiency of RB or MB in the single system is also investigated.
## 2. Materials and Methods
### 2.1. Materials
Natural diatomite was obtained from Phu Yen province, Vietnam. Natural diatomite was washed several times with water, dried at 100°C, sieved, and stored in closed containers for further tests. The product is called purified diatomite.

Sodium hydroxide (NaOH), hydrochloric acid (HCl), and potassium chloride (KCl) were purchased from Guangdong (China). Methylene blue (Guangdong, China) and rhodamine B (HiMedia, India) dyes were used as adsorbates. A summary of the main characteristics of these dyes is given in Table 1 [3, 5, 22, 27, 37].

Table 1
Main characteristics of the dyes used in this study.
| Dye | RB | MB |
|---|---|---|
| Type | Basic violet 10, C.I. 45170, cationic | Basic blue 9, C.I. 52015, cationic |
| Phase | Solid | Solid |
| Molecular formula | C28H31O3N2Cl | C16H18N3SCl |
| Molecular weight (g/mol) | 479.03 | 319.85 |
| Chemical structure | (not reproduced) | (not reproduced) |
### 2.2. Activation of Diatomite
Purified diatomite was activated with NaOH to enhance the adsorption capacity. The purified diatomite sample was immersed in a 5% NaOH solution at a ratio of 1 : 10 (w/w) and stirred at 100°C for 2 h to remove impurities and organics. Then, the solid was filtered, washed several times with distilled water, dried at 100°C, and sieved. The obtained alkali-activated diatomite was stored in closed containers for further tests.
### 2.3. Characterization
The chemical analysis of diatomite was performed by using energy-dispersive X-ray spectroscopy (EDX, JEOL JED-2300, Japan) at different sites of the material. The powder X-ray diffraction (XRD) patterns were recorded on a VNU-D8 Advance diffractometer (Bruker, Germany) with Cu Kα radiation (λ = 1.5406 Å). Fourier-transform infrared (FT-IR) spectra were measured on a Jasco FT/IR-4600 spectrometer (Japan) in the range of 4000–400 cm−1. The morphology of diatomite was observed with scanning electron microscopy (SEM) using a JMS-5300LV microscope (Japan). Nitrogen adsorption/desorption isotherm measurements were conducted using a TriStar 3000 analyzer; samples were pretreated by heating at 250°C for 5 h under N2 before the measurements.
### 2.4. Point of Zero Charge
The point of zero charge (pHPZC) of the adsorbent was determined following the methods of Mahmood et al. [45], Jing et al. [46], and Du and Hoai [47]. To a series of 100 mL Erlenmeyer flasks, 50 mL of a 0.01 M KCl solution was added. The initial pH (pHi) of the solutions was adjusted between 2 and 12 by adding a 0.1 M HCl or 0.1 M NaOH solution. Then, 0.1 g of the adsorbent was added to each flask, and the mixtures were shaken for 48 h. The final pH (pHf) of the solutions was measured. The difference between the final and initial pH values (ΔpH = pHf − pHi) was plotted against pHi. The point of intersection of the curve with the abscissa, at which ΔpH = 0, gives pHPZC.
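A minimal numerical sketch of this zero-crossing step is given below; the pH readings are hypothetical illustration values, not the measured data of this study, and only show the linear interpolation used to locate ΔpH = 0.

```python
# Minimal sketch: locate pHpzc as the zero crossing of delta_pH = pH_f - pH_i.
# The pH readings below are hypothetical illustration data, not measured values.
import numpy as np

pH_i = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # adjusted initial pH
pH_f = np.array([2.9, 5.6, 7.8, 8.9, 9.4, 11.2])    # measured final pH (hypothetical)
delta = pH_f - pH_i

# Find the interval where delta_pH changes sign, then interpolate linearly to zero.
k = np.flatnonzero(np.sign(delta[:-1]) != np.sign(delta[1:]))[0]
x0, x1, y0, y1 = pH_i[k], pH_i[k + 1], delta[k], delta[k + 1]
pH_pzc = x0 - y0 * (x1 - x0) / (y1 - y0)
print(f"pHpzc ~ {pH_pzc:.1f}")
```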
### 2.5. Adsorption
#### 2.5.1. Adsorption Experiments
Adsorption experiments were carried out with a typical batch approach in a 250 mL round flask with a reflux condenser. In each experiment, 0.02 g of the adsorbent was stirred with 100 mL of a solution containing RB (or MB, or a mixture of RB and MB) at a specific concentration, and the temperature of the reactor was fixed at 30 or 45°C. After a certain interval, 5 mL of the solution was withdrawn and centrifuged to remove the adsorbent, and the concentration of the remaining solution was determined. The concentration of the dyes was determined with the UV-Vis method on a UVD-3000 spectrophotometer (Labomed, USA) at λmax = 554 nm for RB and λmax = 664 nm for MB (Figure 1). The adsorbed capacity (qt or qe) and removal efficiency (R) of the dye adsorbed onto the adsorbent were calculated according to the following equations:

$$q_t = \frac{(C_0 - C_t)\,V}{m}\quad(\text{mol·g}^{-1}),\tag{1}$$

$$q_e = \frac{(C_0 - C_e)\,V}{m}\quad(\text{mol·g}^{-1}),\tag{2}$$

$$R = \frac{C_0 - C_e}{C_0}\times 100\%,\tag{3}$$

where C0 and Ct are the concentrations of the dyes in the solution (mol·L−1) at time t = 0 and t = t, respectively; Ce is the concentration of the dyes in the solution (mol·L−1) at equilibrium; V is the volume of the solution (L); and m is the weight of the dry adsorbent (g).

Figure 1
UV-Vis absorption spectra for aqueous solutions of (a) RB, (b) MB, and (c) both of the dyes along with alkali-activated diatomite at various adsorption times.

The influence of the initial pH (3, 5, 7, 9, and 11) was also studied in the single systems with a similar procedure.
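As a minimal sketch, Equations (1)–(3) can be evaluated as follows; the concentrations are hypothetical stand-ins for measured UV-Vis values, while V and m match the batch conditions above.

```python
# Minimal sketch of Equations (1)-(3); concentrations are hypothetical.
def q(C0, C, V=0.1, m=0.02):
    """Adsorbed capacity (mol/g), Eqs. (1)/(2); C in mol/L, V in L, m in g."""
    return (C0 - C) * V / m

def removal(C0, Ce):
    """Removal efficiency R in percent, Eq. (3)."""
    return (C0 - Ce) / C0 * 100

C0, Ce = 3.13e-5, 0.25e-5   # hypothetical initial/equilibrium dye concentrations (mol/L)
print(f"qe = {q(C0, Ce):.2e} mol/g, R = {removal(C0, Ce):.1f}%")
```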
#### 2.5.2. Isothermal Models
In this work, the Langmuir and Freundlich two-parameter models were used to analyze the adsorption equilibrium data.

The Langmuir model is based on the assumption that the adsorption is a monolayer; that is, the adsorbates form a monolayer, and all the sorption sites on the adsorbent surface have the same affinity for the adsorbates. The Langmuir isotherm equation [48] is as follows:

$$q_e = \frac{q_m K_L C_e}{1 + K_L C_e},\tag{4}$$

where qm is the maximum monolayer adsorption capacity of the adsorbent (mol·g−1) and KL is the Langmuir constant (L·mol−1); the other parameters are described above. The Langmuir constant is a measure of the affinity between the adsorbate and the adsorbent and relates to the free energy of adsorption [5]. The most commonly used linear form of the Langmuir equation [2–5, 13, 18, 20, 32, 42, 49–51] is

$$\frac{C_e}{q_e} = \frac{1}{q_m}\,C_e + \frac{1}{K_L q_m}.\tag{5}$$

The plot of Ce/qe versus Ce is a straight line with slope 1/qm and intercept 1/(KL·qm).

The Freundlich expression is an exponential equation and therefore assumes that, as the adsorbate concentration increases, the concentration of the adsorbate on the adsorbent surface also increases. The Freundlich isotherm is expressed by the following empirical equation [48]:

$$q_e = K_F\,C_e^{1/n},\tag{6}$$

where n is the heterogeneity factor and KF is the Freundlich constant (mol(1−1/n)·L1/n·g−1). Both n and KF depend on temperature; n indicates the extent of the adsorption, and KF expresses the degree of nonlinearity between the solution concentration and the adsorption.

The linear form of the Freundlich equation is

$$\log q_e = \log K_F + \frac{1}{n}\,\log C_e.\tag{7}$$

The plot of log qe versus log Ce is a straight line with slope 1/n and intercept log KF.
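As an illustration of how the linear forms (5) and (7) are used, the sketch below fits both models to hypothetical equilibrium data by least squares; the Ce and qe arrays are assumptions, not the measured data of this study.

```python
# Minimal sketch: fit the linear Langmuir (5) and Freundlich (7) forms
# to hypothetical equilibrium data and recover the model parameters.
import numpy as np

Ce = np.array([0.5e-5, 1.0e-5, 2.0e-5, 4.0e-5, 8.0e-5])   # mol/L, hypothetical
qe = np.array([1.3e-4, 1.9e-4, 2.4e-4, 2.7e-4, 2.9e-4])   # mol/g, hypothetical

# Langmuir: Ce/qe = (1/qm)*Ce + 1/(KL*qm)  ->  qm = 1/slope, KL = slope/intercept
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm, KL = 1 / slope, slope / intercept
print(f"Langmuir: qm = {qm:.2e} mol/g, KL = {KL:.2e} L/mol")

# Freundlich: log qe = log KF + (1/n)*log Ce  ->  n = 1/slope, KF = 10**intercept
slope_f, intercept_f = np.polyfit(np.log10(Ce), np.log10(qe), 1)
n, KF = 1 / slope_f, 10 ** intercept_f
print(f"Freundlich: n = {n:.2f}, KF = {KF:.2e}")
```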
#### 2.5.3. Thermodynamic Parameters
To determine whether the adsorption process occurs spontaneously, the thermodynamic parameters have to be studied. At equilibrium, the Gibbs free energy of adsorption (ΔG°) is an important quantity for determining the spontaneity of the process and is calculated according to the following equation:

$$\Delta G^\circ = -R\,T\,\ln K_e,\tag{8}$$

where Ke is the thermodynamic equilibrium constant, R is the universal gas constant (8.314 J·mol−1·K−1), and T is the absolute temperature in kelvin.

For the adsorption process, Ke can be determined in a number of ways, depending on the experimental conditions, such as the equilibrium constant $K_C = (C_0 - C_e)/C_e$ [3, 16, 18, 20, 38], the distribution coefficient $K_d = q_e/C_e$ [12–14, 34, 37, 42, 50–52], and the Langmuir constant KL [4–6, 53, 54]. In this study, the adsorption constant in the Langmuir isotherm (KL) was used to determine the thermodynamic parameters (ΔG°, ΔH°, and ΔS°) by using the following equations [5]:

$$\Delta G^\circ = -R\,T\,\ln K_L,\tag{9}$$

$$\Delta H^\circ = -R\,\frac{T_2 T_1}{T_2 - T_1}\,\ln\frac{K_{L1}}{K_{L2}},\tag{10}$$

$$\Delta S^\circ = \frac{\Delta H^\circ - \Delta G^\circ}{T},\tag{11}$$

where KL1 and KL2 are the Langmuir adsorption constants at T1 and T2, ΔH° is the enthalpy change, and ΔS° is the entropy change in a given process.
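A short sketch of Equations (9)–(11) follows: taking the Langmuir constants later fitted for RB on purified diatomite (Table 4) as KL1 and KL2, it reproduces the thermodynamic values reported in Table 8 up to rounding.

```python
# Minimal sketch of Equations (9)-(11), using the Langmuir constants for RB
# on purified diatomite (Table 4) to reproduce Table 8 (up to rounding).
import math

R = 8.314                      # J/(mol*K)
T1, T2 = 303.15, 318.15        # 30 and 45 deg C in kelvin
KL1, KL2 = 2.45e5, 3.49e5      # L/mol, Langmuir constants at T1 and T2

dG1 = -R * T1 * math.log(KL1)                              # Eq. (9) at 30 C
dG2 = -R * T2 * math.log(KL2)                              # Eq. (9) at 45 C
dH = -R * (T2 * T1 / (T2 - T1)) * math.log(KL1 / KL2)      # Eq. (10)
dS = (dH - dG1) / T1                                       # Eq. (11)

print(f"dG(30C) = {dG1/1000:.2f} kJ/mol, dG(45C) = {dG2/1000:.2f} kJ/mol")
print(f"dH = {dH/1000:.2f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
# ~ -31.3 and -33.8 kJ/mol, +18.9 kJ/mol, +165.5 J/(mol*K), close to Table 8
```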
## 3. Results and Discussion
### 3.1. Characterization of Purified and Alkali-Activated Diatomite Samples
As can be seen from Table 2, both the purified and alkali-activated diatomite samples mainly consist of O, Si, Al, and Fe. Alkali-activated diatomite has lower O and Si contents than purified diatomite, probably because of the removal of organic constituents and the dissolution of SiO2 during alkali treatment. This decrease entails a relative increase in the Fe and Al contents.

Table 2
Elemental composition of the diatomite samples (w%, EDX).

| Element | Purified diatomite | Alkali-activated diatomite |
|---|---|---|
| O | 52.72 ± 1.48 | 49.06 ± 1.27 |
| Mg | 0.53 ± 0.06 | 0.53 ± 0.04 |
| Al | 10.36 ± 0.87 | 11.59 ± 0.03 |
| Si | 30.56 ± 0.59 | 27.50 ± 1.42 |
| K | 0.20 ± 0.09 | 1.09 ± 0.83 |
| Ca | 0.21 ± 0.05 | 0.17 ± 0.03 |
| Ti | 0.91 ± 0.17 | 1.23 ± 0.10 |
| Fe | 4.50 ± 0.10 | 6.02 ± 0.27 |
| Na | — | 1.77 ± 0.24 |
| Cl | — | 1.03 ± 0.19 |
| Total | 100 | 100 |

Both the purified and alkali-activated diatomite samples have an amorphous structure (Figure 2(a)). The broad peaks at 20–25° are typical for amorphous SiO2 [1, 32, 37, 39, 51]. The absence of a peak around 27° indicates that the diatomite in our study does not contain quartz crystals, unlike other types of diatomite [1, 26, 30, 32, 35, 37, 39, 40].

Figure 2
XRD pattern (a) and FT-IR spectra (b) of the diatomite samples.
The FT-IR spectra of the diatomite samples are similar (Figure 2(b)). The broad absorption bands at 3450 cm−1 and 1641 cm−1 correspond to adsorbed H2O, including interlayer water and water hydrogen-bonded to surface hydroxyl groups. A broad band centered at 1101–1031 cm−1 and two bands at 789 cm−1 and 465 cm−1 correspond to the asymmetric stretching, symmetric stretching, and bending vibrations of Si-O-Si bonds, respectively [1, 20]. The peaks observed at 3699 cm−1 and 3623 cm−1 are assigned to surface hydroxyl groups in diatomite. The peak at 3699 cm−1 is attributed to the isolated hydroxyl (Si-OH) on the surface of diatomite [1, 43, 55], while the peak at 3623 cm−1 belongs to the O-H stretching vibration of the aluminol groups (≡AlOH) [55]. Alkali-activated diatomite has high-intensity peaks for O–H stretching, indicating that more isolated hydroxyl groups are present on the surface. The peak at 536 cm−1 corresponds to the stretching vibration of Fe-O [1]. The peak at 1380 cm−1 is attributed to some organic substances [20]. The intensity of this peak is lower in alkali-activated diatomite than in purified diatomite, indicating the removal of organic substances from purified diatomite during NaOH treatment.

The SEM images show that purified diatomite consists of circular cylinders with a diameter of about 5–7 μm and small pores on the surface (Figures 3(a) and 3(b)). However, these cylinders are partly shattered, causing the pores to become smaller and even blocked. The alkali-activated diatomite retains its multipore structure, and the pores on the surface become larger after treatment (Figures 3(c) and 3(d)). This change may be the result of the formation of soluble silicates (SiO32−) from SiO2 [4]. Another reason for this change is probably the removal of organic constituents, leading to the increase in the pore size and hence the increase in the surface area of alkali-activated diatomite.

Figure 3
SEM images. (a, b) Purified diatomite. (c, d) Alkali-activated diatomite.
Figure 4 shows the nitrogen adsorption-desorption isotherms and pore size distributions of the diatomite samples. The diatomite exhibits a type II isotherm and an H3-type hysteresis loop, indicating the presence of macroporous structures with nonuniform size and/or shape [56]. Thus, the morphology of the diatomite consists of a variety of shapes (Figure 3). However, the pore size distribution curves of the diatomite samples demonstrate a uniform pore size with an average diameter of 4.3 nm. The textural properties of the samples are presented in Table 3. According to the Brunauer-Emmett-Teller analysis, the purified diatomite exhibits a large specific surface area of 55.4 m2/g. This value is consistent with that reported by Son et al. [6] (51 m2/g) for Phu Yen's diatomite and is much higher than those published in previous works [4, 8, 11, 20, 24, 26, 32, 38, 55] (1.0–27 m2/g). It can be seen from Table 3 that the specific surface area of alkali-activated diatomite (77.8 m2/g) is significantly larger than that of purified diatomite. This increase in the surface area results from the removal of organic impurities during the alkali treatment.

Figure 4
Nitrogen adsorption-desorption isotherms (a) and pore size distributions (b) of the diatomite samples.
Table 3
Textural properties of the diatomite samples.

| Sample | SBET (m2·g−1) | Smic (m2·g−1) | Sext (m2·g−1) | Vmic (cm3·g−1) | Vtot (cm3·g−1) |
|---|---|---|---|---|---|
| Purified diatomite | 55.4 | 19.2 | 36.2 | 0.0088 | 0.0623 |
| Alkali-activated diatomite | 77.8 | 19.7 | 59.1 | 0.0089 | 0.0924 |

The point of zero charge of purified diatomite is 5.7 (Figure 5). This pHPZC is similar to those published in the literature [6, 11, 21, 22]. However, the pHPZC of alkali-activated diatomite (8.9) is much greater than that of purified diatomite. This increase is probably due to the formation of isolated hydroxyl groups on the surface of the material during alkali treatment.

Figure 5
Determination of the point of zero charge of the diatomite samples.
### 3.2. Isothermal Studies
#### 3.2.1. Adsorption in Single Systems
(1) Adsorption of RB onto Purified and Alkali-Activated Diatomite Samples. Figures 6 and 7 present the Freundlich and Langmuir isothermal models for the adsorption of RB dye onto the diatomite samples at 30 and 45°C. The isothermal parameters obtained from the experimental data and the respective correlation coefficients are listed in Table 4. It can be seen that the experimental points fit the models well, with high correlation coefficients (0.9104–0.9955). Table 4 also shows that the maximal RB adsorption capacity of the alkali-activated diatomite sample is greater than that of the purified diatomite sample. Thus, the activation of diatomite with sodium hydroxide enhances the adsorption of the basic dye RB on diatomite. This enhancement can be attributed to the larger number of silanol groups formed on the surface, as well as the larger specific surface area of the material.

Figure 6
Freundlich isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
Figure 7
Langmuir isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
Table 4
Isotherm parameters for adsorption of RB onto the diatomite.

| Sample | Temperature (°C) | n (Freundlich) | KF (Freundlich) | R2 (Freundlich) | qmax (mol·g−1, Langmuir) | KL (Langmuir) | R2 (Langmuir) |
|---|---|---|---|---|---|---|---|
| Purified diatomite | 30 | 2.80 | 8.02×10−3 | 0.9738 | 2.20×10−4 | 2.45×105 | 0.9880 |
| Purified diatomite | 45 | 2.97 | 6.05×10−3 | 0.9104 | 1.93×10−4 | 3.49×105 | 0.9955 |
| Alkali-activated diatomite | 30 | 2.09 | 38.98×10−3 | 0.9833 | 3.00×10−4 | 1.53×105 | 0.9857 |
| Alkali-activated diatomite | 45 | 1.98 | 47.23×10−3 | 0.9874 | 2.97×10−4 | 1.23×105 | 0.9856 |

The results presented above show that alkali-activated diatomite is superior to purified diatomite in terms of chemical and physical properties and adsorption capacity. Therefore, in the following sections, only the alkali-activated diatomite sample is used for the adsorption of dyes from aqueous solutions.

(2) Adsorption of MB onto Alkali-Activated Diatomite. The MB adsorption isotherms onto alkali-activated diatomite were also investigated and analyzed according to the linear Freundlich and Langmuir equations. The analysis results are shown in Figure 8 and Table 5. In this case, the Langmuir model is significantly more suitable for describing the adsorption data than the Freundlich model (R2 = 0.9874–0.9915 as opposed to R2 = 0.7353–0.8676); that is, the adsorption mainly occurs in a monolayer. The maximum adsorption capacity of MB on alkali-activated diatomite is 7.14×10−4 and 6.90×10−4 mol·g−1 at 30 and 45°C, respectively.

Figure 8
Plots of the isothermal equations in the linear form for MB adsorption onto alkali-activated diatomite at different temperatures in the single systems: (a) Freundlich and (b) Langmuir.
Table 5
Isotherm parameters for adsorption of MB onto alkali-activated diatomite in the single systems.

| Temperature (°C) | n (Freundlich) | KF (Freundlich) | R2 (Freundlich) | qmax (mol·g−1, Langmuir) | KL (Langmuir) | R2 (Langmuir) |
|---|---|---|---|---|---|---|
| 30 | 6.59 | 2.64×10−3 | 0.8676 | 7.14×10−4 | 1.69×105 | 0.9874 |
| 45 | 6.34 | 2.72×10−3 | 0.7353 | 6.90×10−4 | 1.77×105 | 0.9915 |
#### 3.2.2. Adsorption onto Alkali-Activated Diatomite in Binary Systems
As in the single systems, the adsorption of the dyes in the binary system at 30 or 45°C also follows the Langmuir isothermal model, with R2 values approaching 1 (Figures 9 and 10). The isothermal data in Table 6 also show that the maximum adsorption capacity of alkali-activated diatomite for MB is higher than that for RB, as in the single systems (Tables 4 and 5). Specifically, the ratio of the maximum adsorption capacities of the dyes in the binary system (MB/RB = 4.55×10−4/1.40×10−4 ≈ 3.3 at 30°C and MB/RB = 5.59×10−4/1.80×10−4 ≈ 3.1 at 45°C) is higher than that in the single systems (MB/RB = 7.14×10−4/3.00×10−4 ≈ 2.4 at 30°C and MB/RB = 6.90×10−4/2.97×10−4 ≈ 2.3 at 45°C). This proves that there is competitive adsorption in the binary system, where MB molecules adsorb preferentially onto alkali-activated diatomite compared with RB molecules. This enhanced adsorption might result from the smaller size of the MB molecule: MB molecules diffuse into the pores of diatomite more easily than RB molecules, thus occupying the adsorption sites on the adsorbent surface before the RB molecules do. Similar results are also reported by Eftekhari et al. [5]. (A short computational sketch of these selectivity ratios, and of the conversion to the mg·g−1 values used in Table 7, is given after Table 7.)

Figure 9
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 30°C: (a) Freundlich and (b) Langmuir.
Figure 10
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 45°C: (a) Freundlich and (b) Langmuir.
Table 6
Isotherm parameters for the adsorption onto alkali-activated diatomite in the binary systems at different temperatures.

| Dye | Temperature (°C) | n (Freundlich) | KF (Freundlich) | R2 (Freundlich) | qmax (mol·g−1, Langmuir) | KL (Langmuir) | R2 (Langmuir) |
|---|---|---|---|---|---|---|---|
| RB | 30 | 14.68 | 0.25×10−3 | 0.5081 | 1.40×10−4 | 2.27×105 | 0.9983 |
| RB | 45 | 3.53 | 1.88×10−3 | 0.8578 | 1.80×10−4 | 0.37×105 | 0.9905 |
| MB | 30 | 13.00 | 0.93×10−3 | 0.6898 | 4.55×10−4 | 5.78×105 | 0.9949 |
| MB | 45 | 7.29 | 2.06×10−3 | 0.9288 | 5.59×10−4 | 4.36×105 | 0.9950 |

Table 7 compares the adsorption capacities of the diatomite samples for RB and MB in this study with those of other adsorbents published in the literature. It can be seen that alkali-activated diatomite has a much higher adsorption capacity than all the other adsorbents. Therefore, alkali-activated diatomite might serve as a promising adsorbent for the removal of dyes from aqueous solutions.

Table 7
Maximum adsorption capacity of RB and MB of different adsorbents.
| Adsorbent | RB capacity (mg·g−1) | MB capacity (mg·g−1) | References |
|---|---|---|---|
| Alkali-activated diatomite | 143.9–142.1∗; 67.0–86.1∗∗ | 228.4–220.7∗; 145.6–178.9∗∗ | The present work |
| Purified diatomite | 105.1–92.2∗ | — | The present work |
| Diatomite treated with H2SO4 (1 M) | — | 127 | [37] |
| Purified diatomite | — | 72 | [37] |
| Diatomite treated with sulfuric acid | — | 126.6 (30°C) | [3] |
| Diatomite treated with sodium hydroxide | — | 27.86 (25°C) | [4] |
| Sodium alginate/silicone dioxide | 148.23 | | [49] |
| Tagaran natural clay | 131.8 (20°C) | | [16] |
| Zeolite 4A | 44.35 | | [19] |
| AlMCM-41 | 41.9 (25°C)∗ | 66.5 (25°C)∗ | [5] |
| α-Ag2WO4/SBA-15 | 150 | — | [50] |
| Co and N comodified mesoporous carbon composites | 141 (25°C) | — | [57] |
| Modified banyan aerial roots | 115.23 | — | [12] |
| Biosorbent prepared from inactivated Aspergillus oryzae cells | 98.59 (293 K) | — | [52] |
| L-Asp capped Fe3O4 NPs | 10.44 | — | [19] |
| Silica extracted from rice husk | 6.0–6.87 | — | [18] |

∗In single systems and ∗∗in binary systems at 30 and 45°C.
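As noted above, the mg·g−1 values reported for the present work in Table 7 follow from the molar capacities in Tables 4–6 and the molecular weights in Table 1. A minimal sketch of this conversion and of the MB/RB selectivity ratios, using the 30°C values for alkali-activated diatomite:

```python
# Sketch: convert qmax (mol/g, Tables 4-6) into mg/g via the molecular
# weights in Table 1, and compute the MB/RB selectivity ratios at 30 C.
MW = {"RB": 479.03, "MB": 319.85}                     # g/mol, Table 1

qmax = {                                              # mol/g, alkali-activated diatomite
    ("RB", "single"): 3.00e-4, ("MB", "single"): 7.14e-4,
    ("RB", "binary"): 1.40e-4, ("MB", "binary"): 4.55e-4,
}

for (dye, system), q in qmax.items():
    # RB single ~143.7, MB single ~228.4, RB binary ~67.1, MB binary ~145.5 mg/g
    print(f"{dye} ({system}): {q * MW[dye] * 1000:.1f} mg/g")

for system in ("single", "binary"):
    ratio = qmax[("MB", system)] / qmax[("RB", system)]
    print(f"MB/RB selectivity ({system}): {ratio:.1f}")   # ~2.4 single, ~3.3 binary
```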
### 3.3. Thermodynamic Studies
The spontaneity of the adsorption process and the interactions at the liquid/solid interface can be explained by using the thermodynamic parameters (ΔG°, ΔH°, and ΔS°). If ΔG° < 0, the adsorption process is spontaneous; otherwise, adsorption does not occur on its own. If ΔH° < 0, the adsorption process is exothermic, and vice versa. If ΔS° > 0, it is possible to infer that the adsorbent affinity for the dye increases, leading to an increase in the randomness of the adsorbates at the liquid/solid interface [4, 42]; in contrast, if ΔS° < 0, more adsorbate molecules adhere to the adsorbent surface [13, 16, 37].

The thermodynamic parameters of RB and MB adsorption onto the diatomite samples are calculated from Equations (9)–(11). The ΔG° of adsorption is negative for both the single and binary systems (Tables 8 and 9), indicating the spontaneity of the adsorption processes. The values of ΔH° and ΔS° differ between the adsorption processes, indicating that the adsorption process is complex; both physical and chemical adsorption mechanisms are possible.

Table 8
Thermodynamic parameters for adsorption of RB onto purified diatomite in single systems.
| Temperature (°C) | ΔG° (kJ·mol−1) | ΔH° (kJ·mol−1) | ΔS° (J·mol−1·K−1) |
|---|---|---|---|
| 30 | −31.26 | 18.90 | 165.53 |
| 45 | −33.74 | | |

Table 9
Thermodynamic parameters for the adsorption of the dyes onto alkali-activated diatomite.
| Dye | Temperature (°C) | ΔG° (kJ·mol−1), single | ΔH° (kJ·mol−1), single | ΔS° (J·mol−1·K−1), single | ΔG° (kJ·mol−1), binary | ΔH° (kJ·mol−1), binary | ΔS° (J·mol−1·K−1), binary |
|---|---|---|---|---|---|---|---|
| RB | 30 | −30.08 | −11.96 | 59.79 | −31.07 | −96.42 | −215.69 |
| RB | 45 | −30.98 | | | −27.83 | | |
| MB | 30 | −30.32 | 2.48 | 108.27 | −33.42 | −15.04 | 60.66 |
| MB | 45 | −31.94 | | | −34.33 | | |
### 3.4. Effect of Solution pH
Figure 11 shows the effect of the solution pH on the adsorption of RB and MB onto alkali-activated diatomite in the single systems. The pH of the solution was adjusted between 3 and 11 with a 0.1 M HCl or 0.1 M NaOH solution.

Figure 11
Removal efficiency of the dye onto alkali-activated diatomite at different initial solution pHs in the single systems: (a) RB and (b) MB (adsorbent dosage 0.2 g·L−1, initial RB concentration 2.09×10−5 mol·L−1, initial MB concentration 3.13×10−5 mol·L−1, and 30°C).
As can be seen from Figure 11(a), the adsorption efficiency of RB reaches 94% after 240 min of contact at pH 3. At higher pH, this efficiency decreases drastically, falling to 37% at pH 5–9 and even lower (30%) at pH 11. RB has a carboxylic group in its molecule, and this group dissociates at higher solution pH. This dissociation renders the molecule negative, resulting in electric repulsion between RB and the negative surface of the adsorbent at high pH. This result is consistent with that of Eftekhari et al. [5].

For MB (Figure 11(b)), the adsorption efficiency reaches 100% after a short contact time (60 min) at pH 3–9. The efficiency decreases only at pH 11, to around 60%.

The point of zero charge of alkali-activated diatomite is 8.9 (Figure 5). Theoretically, the surface of the material is positively charged when pH < 8.9 and negatively charged when pH > 8.9. This means that, when the pH of the dye solution increases, the adsorption efficiency should increase because the negatively charged diatomite surface attracts the dye cations. However, in both of our cases, the adsorption efficiency decreases with pH, especially at pH 11. This demonstrates that the adsorption process is complex and that the electrostatic interaction mechanism alone is not suitable to describe the adsorption of RB and MB onto alkali-activated diatomite.
## 3.4. Effect of Solution pH
Figure11 shows the effect of solution pH on the adsorption of RB and MB onto alkali-activated diatomite in the single system. The pH of the solution was adjusted between 3 and 11 with a 0.1 M HCl or 0.1 M NaOH solution.Figure 11
Removal efficiency of the dye onto alkali-activated diatomite at different initial solution pHs in the single systems: (a) RB and (b) MB (adsorbent dosage 0.2 g·L-1, initial RB concentration 2.09×10−5mol·L−1, initial MB concentration 3.13×10−5mol·L−1, and 30°C).
(a)(b)As can be seen from Figure11(a), the adsorption efficiency of RB reaches 94% after 240 min of contact at pH 3. At higher pH, this efficiency decreases drastically, reaching 37% up to pH 5–9 and even lower (30%) at pH 11. We know that RB has a carboxylic group in its molecule, and this group dissociates at higher pHs of the solution. This dissociation renders the molecule negative, resulting in the electric repulsion between RB and the negative surface of the adsorbent at high pH. This result is consistent with that of Eftekhari et al. [5].For MB (Figure11(b)), the adsorption efficiency reaches 100% after a short time (60 min) of contact at pH 3–9. The efficiency only decreases to around 60% at pH 11.The zero charge point of alkali-activated diatomite is 8.9 (Figure5). Theoretically, the surface of the material is positively charged when pH<8.9 and negatively charged when pH>8.9. This means that when the pH of the dye solution increases, the adsorption efficiency should increase because the negatively charged diatomite surface attracts the dye cations. However, in both of our cases, the adsorption efficiency decreases with pH, especially at pH 11. This demonstrates that the adsorption process is complex, and the electrostatic interaction mechanism is not suitable to describe the adsorption of RB and MB onto alkali-activated diatomite.
## 4. Conclusions
Alkali-activated diatomite was applied to adsorb RB and MB in single and binary systems. Treatment with sodium hydroxide increases the surface area of the diatomite from 55.4 m2/g to 77.8 m2/g and creates a large number of free silanol groups on the surface of the material, thereby enhancing its capacity to adsorb RB and MB. The equilibrium adsorption data of RB and MB onto alkali-activated diatomite fit the Langmuir model in both the single and binary systems. MB has a higher affinity for the adsorbent than RB, and the binary system is more effective than the single system. The adsorption process is spontaneous, and the removal efficiency of both MB and RB depends significantly on pH.
---
# Single and Binary Adsorption Systems of Rhodamine B and Methylene Blue onto Alkali-Activated Vietnamese Diatomite

**Authors:** Pham Dinh Du; Huynh Thanh Danh
**Journal:** Adsorption Science & Technology
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1014354

---
## Abstract
Diatomite was slightly modified with a sodium hydroxide solution. The resulting material was characterized by using energy-dispersive X-ray spectroscopy (EDX), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and nitrogen adsorption-desorption isotherms. The treated diatomite has a high specific surface area (77.8 m2/g) and a high concentration of isolated silanol groups on the surface; therefore, its adsorption capacity increases drastically in both the single and binary adsorption systems for rhodamine B and methylene blue. The binary system is more effective than the single system, with methylene blue being adsorbed more than rhodamine B. The adsorption process is spontaneous, fits well with the Langmuir isothermal model, and depends significantly on pH.
---
## Body
## 1. Introduction
Dyes are widely used in numerous industries, such as the textile, paper, plastics, and dyestuff industries [1]. The amount of dyes produced annually worldwide is estimated at over 7×105 tons, and more than 100,000 commercially available dyes with different physical and chemical properties are in use [2–4]. Various dyes and their decomposition products are toxic and carcinogenic, thus posing a danger to aquatic organisms [1, 5]. Therefore, dye removal from wastewater is essential.

A large number of dyes have a complex aromatic ring structure and are difficult to degrade biologically [4, 6]. Therefore, it is necessary to reduce their concentration in wastewaters prior to biological treatment. Chemical oxidation has been extensively studied for dye removal from wastewaters [1, 7–9]. However, oxidation often produces intermediate products that can cause secondary pollution. Meanwhile, the adsorption technique has proven to be a simple, efficient, and attractive way to remove nonbiodegradable pollutants (including dyes) from wastewaters [5, 10, 11].

To remove dyes from complex aqueous solutions, a variety of adsorbents have been used, such as banyan aerial roots [12], peat [2], bentonite [13], mesoporous silica nanoparticles [14], clay [15, 16], activated banana peel carbon [17], silica extracted from rice husk [18], and zeolite [19], and some of them exhibit high performance. Nevertheless, the search for new, effective, cheap, and environmentally friendly adsorbents is ongoing.

Diatomite is a low-density, small-particle sedimentary rock consisting mainly of amorphous silica (SiO2·nH2O) derived from diatoms. Diatomite encompasses a variety of structures and has high porosity (up to 80%), a large specific surface area, and multiple hydroxyl groups on the surface [3, 6, 10, 11, 20]. These properties make diatomite a potential adsorbent for the pollutants present in industrial wastewaters, including dyes. Moreover, diatomite is abundant in nature, cheap, and environmentally friendly [11]. Several studies have dealt with the applicability of natural diatomite in the adsorption field [6, 10, 11, 21–23]. Other studies have focused on diatomite surface modification with metals or organic functional groups to improve adsorption efficiency or expand its applications [1, 7–9, 24–32]. In some studies, natural diatomite is treated thermally [3, 10, 20, 33, 34], with acids [3, 20, 35–37], or with alkalis [4, 37, 38] to enhance its performance. Other studies use diatomite as a raw material to manufacture other products [14, 35, 39–42]. Diatomite purified by calcination was also investigated by Yuan et al. [43]. They discovered that as the temperature increases, the surface silanol groups condense; hydrogen-bonded hydroxyl groups condense more easily than isolated ones, and Brønsted acid centers also condense at high temperatures. This condensation reduces the adsorption capacity of calcined diatomite toward basic dyes. Acid treatment (normally at high concentrations: 5 M H2SO4 [3], 5 M HCl [35], 1-5 M HCl [36], 10% HCl [20], and 1 M H2SO4 [37]) is difficult to carry out and can itself introduce secondary pollution. Therefore, numerous studies have focused only on diatomite purification because it is cheap, easy to operate, and environmentally friendly [4].
When purified with alkali, diatomite retains its surface hydroxyl groups, which are excellent adsorption centers for many metals as well as dyes.

Since most industrial wastewaters contain different pollutants, it is important to investigate the effect of multicomponent systems on the adsorption capacity. Various studies have examined the simultaneous removal of different pollutants from aqueous solutions [2, 5, 44] to assess the competitiveness of adsorbates. In this study, natural diatomite is activated by treatment with a low-concentration sodium hydroxide solution to enhance the adsorption capacity for rhodamine B (RB) and methylene blue (MB) in single and binary systems. The equilibrium isotherms and thermodynamic parameters of the adsorption processes are studied. In addition, the effect of the solution pH on the adsorption efficiency of RB or MB in the single system is also investigated.
## 2. Materials and Methods
### 2.1. Materials
Natural diatomite was obtained from Phu Yen province, Vietnam. It was washed several times with water, dried at 100°C, sieved, and stored in closed containers for further tests. The product is called purified diatomite.

Sodium hydroxide (NaOH), hydrochloric acid (HCl), and potassium chloride (KCl) were purchased from Guangdong (China). Methylene blue (Guangdong, China) and rhodamine B (HiMedia, India) dyes were used as adsorbates. A summary of the main characteristics of these dyes is given in Table 1 [3, 5, 22, 27, 37].

Table 1
Main characteristics of the dyes used in this study.
| Dye | RB | MB |
|---|---|---|
| Type | Basic violet 10, C.I. 45170, cationic | Basic blue 9, C.I. 52015, cationic |
| Phase | Solid | Solid |
| Molecular formula | C28H31O3N2Cl | C16H18N3SCl |
| Molecular weight (g/mol) | 479.03 | 319.85 |
### 2.2. Activation of Diatomite
Purified diatomite was activated with NaOH to enhance the adsorption capacity. The purified diatomite sample was immersed in a 5% NaOH solution at a ratio of 1 : 10 (w/w) and stirred at 100°C for 2 h to remove impurities and organics. Then, the solid was filtered, washed several times with distilled water, dried at 100°C, and sieved. The obtained alkali-activated diatomite was stored in closed containers for further tests.
### 2.3. Characterization
The chemical analysis of diatomite was performed by using energy-dispersive X-ray spectroscopy (EDX, JEOL JED-2300, Japan) at different sites of the material. The powder X-ray diffraction (XRD) patterns were recorded on a VNU-D8 Advance diffractometer (Bruker, Germany) with Cu Kα radiation (λ = 1.5406 Å). Fourier-transform infrared (FT-IR) spectra were measured on a Jasco FT/IR-4600 spectrometer (Japan) over the range 4000-400 cm−1. The morphology of diatomite was observed with scanning electron microscopy (SEM, JMS-5300LV, Japan). Nitrogen adsorption/desorption isotherm measurements were conducted using a TriStar 3000 analyzer. Samples were pretreated by heating at 250°C for 5 h under N2 before the measurements.
### 2.4. Point of Zero Charge
The point of zero charge (pHPZC) of the adsorbent was determined following the methods of Mahmood et al. [45], Jing et al. [46], and Du and Hoai [47]. To a series of 100 mL Erlenmeyer flasks, 50 mL of a 0.01 M KCl solution was added. The initial pH (pHi) of the solutions was adjusted, ranging from 2 to 12, by adding a 0.1 M HCl or 0.1 M NaOH solution. Then, 0.1 g of the adsorbent was added to each flask, and the mixtures were shaken for 48 h. The final pH (pHf) of the solutions was measured. The difference between the final and initial pHs (ΔpH = pHf − pHi) was plotted against pHi. The point of intersection of the curve with the abscissa, at which ΔpH = 0, gives pHPZC.
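In practice, pHPZC can be read off numerically rather than graphically: the zero crossing of the ΔpH versus pHi curve is located by linear interpolation between the two bracketing measurements. Below is a minimal Python sketch; the function is generic, and the ΔpH readings are hypothetical values chosen so that the crossing falls near 8.9, the value later reported for alkali-activated diatomite.

```python
import numpy as np

def ph_pzc(ph_i, delta_ph):
    """Estimate pH_PZC: the pH_i at which delta_pH = pH_f - pH_i crosses zero,
    found by linear interpolation between the two bracketing points."""
    ph_i = np.asarray(ph_i, dtype=float)
    delta_ph = np.asarray(delta_ph, dtype=float)
    for k in range(len(ph_i) - 1):
        y0, y1 = delta_ph[k], delta_ph[k + 1]
        if y0 == 0.0:
            return float(ph_i[k])
        if y0 * y1 < 0.0:  # sign change: the curve crosses the abscissa here
            return float(ph_i[k] - y0 * (ph_i[k + 1] - ph_i[k]) / (y1 - y0))
    raise ValueError("delta_pH does not cross zero in the measured range")

# Hypothetical readings over pH_i = 2-12 (not the measured data of this study)
ph_i = [2, 4, 6, 8, 10, 12]
delta_ph = [1.9, 1.6, 1.2, 0.5, -0.6, -2.1]
print(f"pH_PZC = {ph_pzc(ph_i, delta_ph):.1f}")  # ~8.9
```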
### 2.5. Adsorption
#### 2.5.1. Adsorption Experiments
Adsorption experiments were carried out with a typical batch approach in a 250 mL round flask with a reflux condenser. In each experiment, 0.02 g of the adsorbent was stirred with 100 mL of a solution containing RB (or MB, or a mixture of RB and MB) at a specific concentration, and the temperature of the reactor was fixed at 30 or 45°C. After a certain interval, 5 mL of the solution was withdrawn and centrifuged to remove the adsorbent, and the concentration of the remaining solution was determined. The concentration of the dyes was determined with the UV-Vis method on a UVD-3000 (Labomed, USA) at λmax = 554 nm for RB and λmax = 664 nm for MB (Figure 1). The adsorbed capacity (qt or qe) and removal efficiency (R) of the dye adsorbed onto the adsorbent were calculated according to the following equations:

$$q_t = \frac{(C_0 - C_t) \times V}{m} \quad (\mathrm{mol \cdot g^{-1}}), \tag{1}$$

$$q_e = \frac{(C_0 - C_e) \times V}{m} \quad (\mathrm{mol \cdot g^{-1}}), \tag{2}$$

$$R = \frac{C_0 - C_e}{C_0} \times 100\%, \tag{3}$$

where C0 and Ct are the concentrations of the dyes in the solution (mol·L-1) at time t = 0 and t = t, respectively; Ce is the concentration of the dyes in the solution (mol·L-1) at equilibrium; V is the volume of the solution (L); and m is the weight of the dry adsorbent (g).

Figure 1
UV-Vis absorption spectra for aqueous solutions of (a) RB, (b) MB, and (c) both of the dyes along with alkali-activated diatomite at various adsorption times.
The influence of initial pH (3, 5, 7, 9, and 11) was also studied in a single system with a similar procedure.
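For concreteness, Equations (1)–(3) reduce to a few lines of arithmetic. The sketch below is a minimal Python illustration; the concentration values are hypothetical placeholders, with the 0.02 g per 100 mL dosage taken from the batch procedure above.

```python
def capacity(c0, c, volume_l, mass_g):
    """q = (C0 - C) * V / m in mol per gram of dry adsorbent (Eqs. (1)-(2))."""
    return (c0 - c) * volume_l / mass_g

def removal(c0, ce):
    """R = (C0 - Ce) / C0 * 100% (Eq. (3))."""
    return (c0 - ce) / c0 * 100.0

# 0.02 g of adsorbent in 100 mL (0.1 L), as in the batch runs above;
# the concentrations are hypothetical, not measured values
c0, ce = 3.13e-5, 0.40e-5  # mol/L
print(f"qe = {capacity(c0, ce, volume_l=0.1, mass_g=0.02):.2e} mol/g")  # 1.37e-04
print(f"R  = {removal(c0, ce):.1f} %")                                  # 87.2
```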
#### 2.5.2. Isothermal Models
In this work, the Langmuir and Freundlich two-parameter models were used to analyze the adsorption equilibrium data.

The Langmuir model is based on the assumption that the adsorption is a monolayer; that is, the adsorbates form a monolayer, and all the sorption sites on the adsorbent surface have the same affinity for the adsorbates. The Langmuir isotherm equation [48] is as follows:

$$q_e = \frac{q_m \times K_L \times C_e}{1 + K_L \times C_e}, \tag{4}$$

where qm is the maximum monolayer adsorption capacity of the adsorbent (mol·g-1) and KL is the Langmuir constant (L·mol-1). The other parameters are described above. The Langmuir constant is a measure of the affinity between the adsorbate and the adsorbent and relates to the free energy of adsorption [5]. The most commonly used linear form of the Langmuir equation [2–5, 13, 18, 20, 32, 42, 49–51] is

$$\frac{C_e}{q_e} = \frac{1}{q_m} \times C_e + \frac{1}{K_L \times q_m}. \tag{5}$$

The plot of Ce/qe versus Ce is a straight line with the slope 1/qm and intercept 1/(KL·qm).

The Freundlich expression is an exponential equation and therefore assumes that as the adsorbate concentration increases, the concentration of the adsorbate on the adsorbent surface also increases. The Freundlich isotherm is expressed by the following empirical equation [48]:

$$q_e = K_F \times C_e^{1/n}, \tag{6}$$

where n is the heterogeneity factor and KF is the Freundlich constant (mol(1-1/n)·L1/n·g-1). n and KF depend on temperature; n indicates the extent of the adsorption, and KF expresses the degree of nonlinearity between the solution concentration and the adsorption.

The linear form of the Freundlich equation is

$$\log q_e = \log K_F + \frac{1}{n} \times \log C_e. \tag{7}$$

The plot of log qe versus log Ce is a straight line with the slope 1/n and intercept log KF.
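Both linearizations amount to a straight-line fit whose slope and intercept are converted back to the model parameters: qm = 1/slope and KL = slope/intercept for Equation (5), and 1/n = slope and KF = 10^intercept for Equation (7). A minimal sketch follows, assuming synthetic (Ce, qe) data generated from a known Langmuir curve rather than the measured values:

```python
import numpy as np

def langmuir_fit(ce, qe):
    """Fit Ce/qe = (1/qm)*Ce + 1/(KL*qm) (Eq. (5)); return qm, KL, R^2."""
    x = np.asarray(ce, dtype=float)
    y = x / np.asarray(qe, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return 1.0 / slope, slope / intercept, r2

def freundlich_fit(ce, qe):
    """Fit log qe = log KF + (1/n)*log Ce (Eq. (7)); return n, KF, R^2."""
    x = np.log10(np.asarray(ce, dtype=float))
    y = np.log10(np.asarray(qe, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return 1.0 / slope, 10.0 ** intercept, r2

# Synthetic equilibrium data from a known Langmuir curve (qm = 3e-4 mol/g, KL = 1.5e5 L/mol)
ce = np.array([2e-6, 5e-6, 1e-5, 2e-5, 5e-5])
qe = 3e-4 * 1.5e5 * ce / (1.0 + 1.5e5 * ce)
qm, kl, r2 = langmuir_fit(ce, qe)
print(f"qm = {qm:.2e} mol/g, KL = {kl:.2e} L/mol, R2 = {r2:.4f}")
```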
#### 2.5.3. Thermodynamic Parameters
To determine whether the adsorption process occurs spontaneously or not, we have to study the thermodynamic parameters. At equilibrium, the Gibbs free energy of adsorption (ΔG°) is an important quantity for determining the spontaneity of the process itself and is calculated according to the following equation:

$$\Delta G^{\circ} = -R \times T \times \ln K_e, \tag{8}$$

where Ke is the thermodynamic equilibrium constant, R is the universal gas constant (8.314 J·mol-1·K-1), and T is the absolute temperature in Kelvin.

For the adsorption process, Ke can be determined in a number of ways, depending on the experimental conditions, such as the equilibrium constant KC = (C0 − Ce)/Ce [3, 16, 18, 20, 38], the distribution coefficient Kd = qe/Ce [12–14, 34, 37, 42, 50–52], and the Langmuir constant KL [4–6, 53, 54].

In this study, the adsorption constant in the Langmuir isotherm (KL) was used to determine the thermodynamic parameters (ΔG°, ΔH°, and ΔS°) for the adsorption by using the following equations [5]:

$$\Delta G^{\circ} = -R \times T \times \ln K_L, \tag{9}$$

$$\Delta H^{\circ} = -R \times \frac{T_2 \times T_1}{T_2 - T_1} \times \ln \frac{K_{L1}}{K_{L2}}, \tag{10}$$

$$\Delta S^{\circ} = \frac{\Delta H^{\circ} - \Delta G^{\circ}}{T}, \tag{11}$$

where KL1 and KL2 are the adsorption Langmuir constants at T1 and T2, ΔH° is the enthalpy change, and ΔS° is the entropy change in a given process.
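Equations (9)–(11) need only the Langmuir constants at the two working temperatures. The sketch below feeds in the KL values fitted for RB on purified diatomite (Table 4); its output agrees with the Table 8 entries to within rounding.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def thermo_params(kl1, kl2, t1, t2):
    """dG at each temperature (Eq. (9)), dH from the two KL values (Eq. (10)),
    and dS at t1 (Eq. (11)); energies in J/mol, entropy in J/(mol K)."""
    dg1 = -R * t1 * np.log(kl1)
    dg2 = -R * t2 * np.log(kl2)
    dh = -R * (t2 * t1 / (t2 - t1)) * np.log(kl1 / kl2)
    ds = (dh - dg1) / t1
    return dg1, dg2, dh, ds

# KL for RB on purified diatomite at 30 and 45 C (Table 4)
dg30, dg45, dh, ds = thermo_params(2.45e5, 3.49e5, 303.15, 318.15)
print(f"dG(30 C) = {dg30 / 1e3:6.2f} kJ/mol")  # ~ -31.3 (Table 8: -31.26)
print(f"dG(45 C) = {dg45 / 1e3:6.2f} kJ/mol")  # ~ -33.8 (Table 8: -33.74)
print(f"dH       = {dh / 1e3:6.2f} kJ/mol")    # ~  18.9 (Table 8:  18.90)
print(f"dS       = {ds:6.1f} J/(mol K)")       # ~ 165.6 (Table 8: 165.53)
```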
## 3. Results and Discussion
### 3.1. Characterization of Purified and Alkali-Activated Diatomite Samples
As can be seen from Table 2, both the purified and alkali-activated diatomite samples consist mainly of O, Si, Al, and Fe. Alkali-activated diatomite has a lower O and Si content than purified diatomite, probably because of the removal of organic constituents and the dissolution of SiO2 during alkali treatment. This decrease correspondingly raises the relative content of Fe and Al.

Table 2
Elemental composition of the diatomite samples (w%, EDX).
| Element | Purified diatomite | Alkali-activated diatomite |
|---|---|---|
| O | 52.72 ± 1.48 | 49.06 ± 1.27 |
| Mg | 0.53 ± 0.06 | 0.53 ± 0.04 |
| Al | 10.36 ± 0.87 | 11.59 ± 0.03 |
| Si | 30.56 ± 0.59 | 27.50 ± 1.42 |
| K | 0.20 ± 0.09 | 1.09 ± 0.83 |
| Ca | 0.21 ± 0.05 | 0.17 ± 0.03 |
| Ti | 0.91 ± 0.17 | 1.23 ± 0.10 |
| Fe | 4.50 ± 0.10 | 6.02 ± 0.27 |
| Na | — | 1.77 ± 0.24 |
| Cl | — | 1.03 ± 0.19 |
| Total | 100 | 100 |

Both the purified and alkali-activated diatomite samples have an amorphous structure (Figure 2(a)). The broad peaks at 20-25° are typical for amorphous SiO2 [1, 32, 37, 39, 51]. The absence of the peak around 27° indicates that the diatomite in our study does not contain quartz crystals like other types of diatomite [1, 26, 30, 32, 35, 37, 39, 40].

Figure 2
XRD pattern (a) and FT-IR spectra (b) of the diatomite samples.
The FT-IR spectra of the diatomite samples are similar (Figure 2(b)). The broad absorption bands at 3450 cm-1 and 1641 cm-1 correspond to adsorbed H2O, including interlayer water and water hydrogen-bonded to surface hydroxyl groups. A broad band centered at 1101-1031 cm-1 and two bands at 789 cm-1 and 465 cm-1 correspond to the asymmetric stretching, symmetric stretching, and bending vibrations of Si-O-Si bonds, respectively [1, 20]. The peaks observed at 3699 cm-1 and 3623 cm-1 are assigned to surface hydroxyl groups in diatomite. The peak at 3699 cm-1 is attributed to the isolated hydroxyl (Si-OH) on the surface of diatomite [1, 43, 55], while the peak at 3623 cm-1 belongs to the O-H stretching vibration of the aluminol groups (≡AlOH) [55]. Alkali-activated diatomite has high-intensity peaks for O-H stretching, indicating that more isolated hydroxyl groups are present on the surface. The peak at 536 cm-1 corresponds to the stretching vibration of Fe-O [1]. The peak at 1380 cm-1 is attributed to some organic substances [20]. The intensity of this peak is lower in the alkali-activated diatomite than in the purified diatomite sample, indicating the removal of organic substances from purified diatomite during NaOH treatment.

The SEM images show that purified diatomite consists of circular cylinders about 5-7 μm in diameter, with small pores on the surface (Figures 3(a) and 3(b)). However, these cylinders are partly shattered, causing the pores to become smaller and even blocked. The alkali-activated diatomite retains its multipore structure, and the pores on the surface become larger after treatment (Figures 3(c) and 3(d)). This change may result from the formation of soluble silicates (SiO32−) from SiO2 [4]. Another likely reason is the removal of organic constituents, which enlarges the pores and hence increases the surface area of alkali-activated diatomite.

Figure 3
SEM images. (a, b) Purified diatomite. (c, d) Alkali-activated diatomite.
Figure 4 shows the nitrogen adsorption-desorption isotherms and pore size distributions of the diatomite samples. The diatomite exhibits a type II isotherm and an H3-type hysteresis loop, indicating the presence of macroporous structures with nonuniform size and/or shape [56]. Accordingly, the morphology of the diatomite comprises a variety of shapes (Figure 3). However, the pore size distribution curves of the diatomite samples demonstrate a uniform pore size with an average diameter of 4.3 nm. The textural properties of the samples are presented in Table 3. According to the Brunauer-Emmett-Teller analysis, the purified diatomite exhibits a large specific surface area of 55.4 m2/g. This value is consistent with that reported by Son et al. [6] (51 m2/g) for Phu Yen's diatomite and is much higher than those published in previous works [4, 8, 11, 20, 24, 26, 32, 38, 55] (1.0-27 m2/g). It can be seen from Table 3 that the specific surface area of alkali-activated diatomite (77.8 m2/g) is significantly larger than that of purified diatomite. This increase in the surface area results from the removal of organic impurities during the alkali treatment.

Figure 4
Nitrogen adsorption-desorption isotherms (a) and pore size distributions (b) of the diatomite samples.
Table 3
Textural properties of the diatomite samples.
| Sample | SBET (m2·g-1) | Smic (m2·g-1) | Sext (m2·g-1) | Vmic (cm3·g-1) | Vtot (cm3·g-1) |
|---|---|---|---|---|---|
| Purified diatomite | 55.4 | 19.2 | 36.2 | 0.0088 | 0.0623 |
| Alkali-activated diatomite | 77.8 | 19.7 | 59.1 | 0.0089 | 0.0924 |

The point of zero charge of purified diatomite is 5.7 (Figure 5). This pHPZC is similar to that published in the literature [6, 11, 21, 22]. However, the pHPZC of alkali-activated diatomite (8.9) is much greater than that of purified diatomite. This increase is probably due to the formation of isolated hydroxyl groups on the surface of the material during alkali treatment.

Figure 5
Determination of the point of zero charge of the diatomite samples.
### 3.2. Isothermal Studies
#### 3.2.1. Adsorption in Single Systems
(1) Adsorption of RB onto Purified and Alkali-Activated Diatomite Samples. Figures 6 and 7 present the Freundlich and Langmuir isothermal models for the adsorption of RB dye onto the diatomite samples at 30 and 45°C. The isothermal parameters obtained from the experimental data and the respective correlation coefficients are listed in Table 4. It can be seen that the experimental points fit the models well, with high correlation coefficients (0.9104-0.9955). Table 4 also shows that the maximum RB adsorption capacity of the alkali-activated diatomite sample is greater than that of the purified diatomite sample. Thus, the activation of diatomite with sodium hydroxide enhances the adsorption of the basic dye RB on diatomite. This enhancement can be attributed to the larger number of silanol groups formed on the surface, as well as the larger specific surface area of the material.

Figure 6
Freundlich isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
Figure 7
Langmuir isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
Table 4
Isotherm parameters for adsorption of RB onto the diatomite.
| Sample | Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R2 | Langmuir qmax (mol·g−1) | Langmuir KL | Langmuir R2 |
|---|---|---|---|---|---|---|---|
| Purified diatomite | 30 | 2.80 | 8.02×10−3 | 0.9738 | 2.20×10−4 | 2.45×105 | 0.9880 |
| Purified diatomite | 45 | 2.97 | 6.05×10−3 | 0.9104 | 1.93×10−4 | 3.49×105 | 0.9955 |
| Alkali-activated diatomite | 30 | 2.09 | 38.98×10−3 | 0.9833 | 3.00×10−4 | 1.53×105 | 0.9857 |
| Alkali-activated diatomite | 45 | 1.98 | 47.23×10−3 | 0.9874 | 2.97×10−4 | 1.23×105 | 0.9856 |

The results presented above show that alkali-activated diatomite is superior to purified diatomite in terms of chemical and physical properties and adsorption capacity. Therefore, in the following sections, only the alkali-activated diatomite sample is used for the adsorption of dyes from aqueous solutions.

(2) Adsorption of MB onto Alkali-Activated Diatomite. The MB adsorption isotherms onto alkali-activated diatomite were also investigated and analyzed according to the linear Freundlich and Langmuir equations. The analysis results are shown in Figure 8 and Table 5. In this case, the Langmuir model is significantly more suitable for describing the adsorption data than the Freundlich model (R2 = 0.9874-0.9915 as opposed to R2 = 0.7353-0.8676); that is, the adsorption mainly occurs in a monolayer. The maximum adsorption capacity of MB on alkali-activated diatomite is 7.14×10−4 and 6.90×10−4 mol·g−1 at 30 and 45°C, respectively.

Figure 8
Plots of the isothermal equations in the linear form for MB adsorption onto alkali-activated diatomite at different temperatures in the single systems: (a) Freundlich and (b) Langmuir.
Table 5
Isotherm parameters for adsorption of MB onto alkali-activated diatomite in the single systems.
| Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R2 | Langmuir qmax (mol·g−1) | Langmuir KL | Langmuir R2 |
|---|---|---|---|---|---|---|
| 30 | 6.59 | 2.64×10−3 | 0.8676 | 7.14×10−4 | 1.69×105 | 0.9874 |
| 45 | 6.34 | 2.72×10−3 | 0.7353 | 6.90×10−4 | 1.77×105 | 0.9915 |
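With qm and KL in hand, Equation (4) predicts the expected uptake at any equilibrium concentration. A short sketch using the 30°C MB parameters from Table 5 follows; the chosen Ce is an arbitrary illustrative value, not a measured one.

```python
def langmuir_qe(qm, kl, ce):
    """Equilibrium uptake predicted by the Langmuir isotherm (Eq. (4))."""
    return qm * kl * ce / (1.0 + kl * ce)

# MB on alkali-activated diatomite at 30 C (Table 5): qm = 7.14e-4 mol/g, KL = 1.69e5 L/mol
qe = langmuir_qe(qm=7.14e-4, kl=1.69e5, ce=1.0e-5)  # Ce = 1e-5 mol/L, illustrative
print(f"qe = {qe:.2e} mol/g = {qe * 319.85 * 1e3:.0f} mg/g")  # ~4.49e-04 mol/g, ~144 mg/g
```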
#### 3.2.2. Adsorption onto Alkali-Activated Diatomite in Binary Systems
As in the single systems, the adsorption of the dyes in the binary system at 30 or 45°C also follows the Langmuir isothermal model, with the R2 values approaching 1 (Figures 9 and 10). The isothermal data in Table 6 also show that the maximum adsorption capacity of alkali-activated diatomite for MB is higher than that for RB, as in the single systems (Tables 4 and 5). Specifically, the ratio of the maximum adsorption capacities of the dyes in the binary system (MB/RB = 4.55×10−4/1.40×10−4 ≈ 3.3 at 30°C and MB/RB = 5.59×10−4/1.80×10−4 ≈ 3.1 at 45°C) is higher than that in the single system (MB/RB = 7.14×10−4/3.00×10−4 ≈ 2.4 at 30°C and MB/RB = 6.90×10−4/2.97×10−4 ≈ 2.3 at 45°C). This indicates competitive adsorption in the binary system, in which MB molecules adsorb onto alkali-activated diatomite preferentially over RB molecules. This preference might result from the smaller size of the MB molecule: MB molecules diffuse into the pores of diatomite more easily than RB molecules, thus occupying the adsorption sites on the adsorbent surface before the RB molecules do. Similar results are also reported by Eftekhari et al. [5].

Figure 9
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 30°C: (a) Freundlich and (b) Langmuir.
Figure 10
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 45°C: (a) Freundlich and (b) Langmuir.
Table 6
Isotherm parameters for the adsorption onto alkali-activated diatomite in the binary systems at different temperatures.
| Dye | Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R2 | Langmuir qmax (mol·g−1) | Langmuir KL | Langmuir R2 |
|---|---|---|---|---|---|---|---|
| RB | 30 | 14.68 | 0.25×10−3 | 0.5081 | 1.40×10−4 | 2.27×105 | 0.9983 |
| RB | 45 | 3.53 | 1.88×10−3 | 0.8578 | 1.80×10−4 | 0.37×105 | 0.9905 |
| MB | 30 | 13.00 | 0.93×10−3 | 0.6898 | 4.55×10−4 | 5.78×105 | 0.9949 |
| MB | 45 | 7.29 | 2.06×10−3 | 0.9288 | 5.59×10−4 | 4.36×105 | 0.9950 |

Table 7 compares the adsorption capacities of the diatomite samples for RB and MB in this study with those of other adsorbents published in the literature. It can be seen that alkali-activated diatomite has a higher adsorption capacity than most of the other adsorbents. Therefore, alkali-activated diatomite might serve as a promising adsorbent for the removal of dyes from aqueous solutions.

Table 7
Maximum adsorption capacity of RB and MB of different adsorbents.
| Adsorbent | RB capacity (mg·g-1) | MB capacity (mg·g-1) | References |
|---|---|---|---|
| Alkali-activated diatomite | 143.9-142.1∗; 67.0-86.1∗∗ | 228.4-220.7∗; 145.6-178.9∗∗ | The present work |
| Purified diatomite | 105.1-92.2∗ | — | The present work |
| Diatomite treated with H2SO4 (1 M) | — | 127 | [37] |
| Purified diatomite | — | 72 | [37] |
| Diatomite treated with sulfuric acid | — | 126.6 (30°C) | [3] |
| Diatomite treated with sodium hydroxide | — | 27.86 (25°C) | [4] |
| Sodium alginate/silicone dioxide | 148.23 | — | [49] |
| Tagaran natural clay | 131.8 (20°C) | — | [16] |
| Zeolite 4A | 44.35 | — | [19] |
| AlMCM-41 | 41.9 (25°C)∗ | 66.5 (25°C)∗ | [5] |
| α-Ag2WO4/SBA-15 | 150 | — | [50] |
| Co and N comodified mesoporous carbon composites | 141 (25°C) | — | [57] |
| Modified banyan aerial roots | 115.23 | — | [12] |
| Biosorbent prepared from inactivated Aspergillus oryzae cells | 98.59 (293 K) | — | [52] |
| L-Asp capped Fe3O4 NPs | 10.44 | — | [19] |
| Silica extracted from rice husk | 6.0-6.87 | — | [18] |

∗In single systems and ∗∗in binary systems at 30 and 45°C.
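As a consistency check, the mg·g-1 entries for the diatomite samples in Table 7 are simply the molar capacities of Tables 4–6 scaled by the dye molar masses from Table 1; for example, for RB on alkali-activated diatomite in the single system at 30°C:

$$q_{\max} = 3.00 \times 10^{-4}\ \mathrm{mol \cdot g^{-1}} \times 479.03\ \mathrm{g \cdot mol^{-1}} \approx 143.7\ \mathrm{mg \cdot g^{-1}},$$

which matches the 143.9 mg·g-1 listed above to within the rounding of qmax.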
### 3.3. Thermodynamic Studies
The spontaneity of the adsorption process and the interactions at the liquid/solid interface can be explained by using the thermodynamic parameters (ΔG°, ΔH°, and ΔS°). If ΔG° < 0, the adsorption process is spontaneous; otherwise, adsorption does not occur on its own. If ΔH° < 0, the adsorption process is exothermic, and vice versa. If ΔS° > 0, it can be inferred that the adsorbent's affinity for the dye increases, leading to an increase in the randomness of the adsorbates at the liquid/solid interface [4, 42]; in contrast, if ΔS° < 0, more adsorbate molecules adhere to the adsorbent surface, decreasing the randomness at the interface [13, 16, 37].

The thermodynamic parameters of RB and MB adsorption onto the diatomite samples are calculated from Equations (9)–(11). The ΔG° of adsorption is negative for both the single and binary systems (Tables 8 and 9), indicating the spontaneity of the adsorption processes. The values of ΔH° and ΔS° differ between the adsorption processes, indicating that the adsorption is complex; both physical and chemical adsorption mechanisms are possible.

Table 8
Thermodynamic parameters for adsorption of RB onto purified diatomite in single systems.
| Temperature (°C) | ΔG° (kJ·mol-1) | ΔH° (kJ·mol-1) | ΔS° (J·mol-1·K-1) |
|---|---|---|---|
| 30 | -31.26 | 18.90 | 165.53 |
| 45 | -33.74 | 18.90 | 165.53 |

Table 9
Thermodynamic parameters for the adsorption of the dyes onto alkali-activated diatomite.
| Dye | Temperature (°C) | Single system ΔG° (kJ·mol-1) | Single system ΔH° (kJ·mol-1) | Single system ΔS° (J·mol-1·K-1) | Binary system ΔG° (kJ·mol-1) | Binary system ΔH° (kJ·mol-1) | Binary system ΔS° (J·mol-1·K-1) |
|---|---|---|---|---|---|---|---|
| RB | 30 | -30.08 | -11.96 | 59.79 | -31.07 | -96.42 | -215.69 |
| RB | 45 | -30.98 | -11.96 | 59.79 | -27.83 | -96.42 | -215.69 |
| MB | 30 | -30.32 | 2.48 | 108.27 | -33.42 | -15.04 | 60.66 |
| MB | 45 | -31.94 | 2.48 | 108.27 | -34.33 | -15.04 | 60.66 |
### 3.4. Effect of Solution pH
Figure 11 shows the effect of solution pH on the adsorption of RB and MB onto alkali-activated diatomite in the single system. The pH of the solution was adjusted between 3 and 11 with a 0.1 M HCl or 0.1 M NaOH solution.

Figure 11
Removal efficiency of the dyes onto alkali-activated diatomite at different initial solution pHs in the single systems: (a) RB and (b) MB (adsorbent dosage 0.2 g·L-1, initial RB concentration 2.09×10−5 mol·L−1, initial MB concentration 3.13×10−5 mol·L−1, and 30°C).
As can be seen from Figure 11(a), the adsorption efficiency of RB reaches 94% after 240 min of contact at pH 3. At higher pH, this efficiency decreases drastically, falling to about 37% over pH 5–9 and even lower (30%) at pH 11. RB has a carboxyl group in its molecule, and this group dissociates at higher solution pH. This dissociation gives the molecule a net negative charge, resulting in electrostatic repulsion between RB and the negatively charged surface of the adsorbent at high pH. This result is consistent with that of Eftekhari et al. [5].

For MB (Figure 11(b)), the adsorption efficiency reaches 100% after a short time (60 min) of contact at pH 3–9. The efficiency only decreases, to around 60%, at pH 11.

The point of zero charge of alkali-activated diatomite is 8.9 (Figure 5). Theoretically, the surface of the material is positively charged when pH < 8.9 and negatively charged when pH > 8.9. This means that as the pH of the dye solution increases, the adsorption efficiency should increase because the negatively charged diatomite surface attracts the dye cations. However, in both of our cases, the adsorption efficiency decreases with pH, especially at pH 11. This demonstrates that the adsorption process is complex and that the electrostatic interaction mechanism alone cannot describe the adsorption of RB and MB onto alkali-activated diatomite.
## 3.1. Characterization of Purified and Alkali-Activated Diatomite Samples
As can be seen from Table2, both the purified and alkali-activated diatomite samples mainly consist of O, Si, Al, and Fe. Alkali-activated diatomite has a lower O and Si content than purified diatomite, and this is probably due to the removal of organic constituents and the dissolution of SiO2 during alkali treatment. This decrease entrains the increase in the content of Fe and Al.Table 2
Elemental composition of the diatomite samples (w%, EDX).
ElementPurified diatomiteAlkali-activated diatomiteO52.72±1.4849.06±1.27Mg0.53±0.060.53±0.04Al10.36±0.8711.59±0.03Si30.56±0.5927.50±1.42K0.20±0.091.09±0.83Ca0.21±0.050.17±0.03Ti0.91±0.171.23±0.10Fe4.50±0.106.02±0.27Na—1.77±0.24Cl—1.03±0.19Total100100Both the purified and alkali-activated diatomite samples have an amorphous structure (Figure2(a)). The broad peaks at 20-25° are typical for amorphous SiO2 [1, 32, 37, 39, 51]. The absence of the peak around 27° indicates that the diatomite in our study does not contain quartz crystals like other types of diatomite [1, 26, 30, 32, 35, 37, 39, 40].Figure 2
XRD pattern (a) and FT-IR spectra (b) of the diatomite samples.
(a)(b)The FT-IR spectra of the diatomite samples are similar (Figure2(b)). The broad absorption bands at 3450 cm-1 and 1641 cm-1 correspond to the adsorbed H2O, including interlayer water and hydrogen-bonded water with surface hydroxyl groups. A broad band centered at 1101-1031 cm-1 and two bands at 789 cm-1 and 465 cm-1 correspond to the asymmetric stretching vibration, symmetric stretching, and bending vibration of Si-O-Si bonds, respectively [1, 20]. The peaks observed at 3699 cm-1 and 3623 cm-1 are assigned to surface hydroxyl groups in diatomite. The peak at 3699 cm-1 is attributed to the isolated hydroxyl (Si-OH) on the surface of diatomite [1, 43, 55], while the peak at 3623 cm-1 belongs to O-H stretching vibration of the aluminol groups (≡AlOH) [55]. Alkali-activated diatomite has high-intensity peaks for O–H stretching, indicating that more isolated hydroxyl groups are present on the surface. The peak at 536 cm-1 corresponds to the stretching vibration of Fe-O [1]. The peak at 1380 cm-1 is attributed to some organic substances [20]. The intensity of this peak is lower in alkali-activated diatomite than in purified diatomite samples, indicating the removal of organic substances from purified diatomite during NaOH treatment.The SEM images show that purified diatomite consists of circular cylinders of a diameter of about 5-7μm, with small pores on the surface (Figures 3(a) and 3(b)). However, these cylinders are partly shattered, causing the pores to become smaller and even blocked. The alkali-activated diatomite retains its multipore structure, and the pores on the surface become larger after treatment (Figures 3(c) and 3(d)). This change may be the result of the formation of soluble silicates SiO32− from SiO2 [4]. Another reason for this change is probably the removal of organic constituents, leading to the increase in the pore size and hence the increase of the surface area of alkali-activated diatomite.Figure 3
SEM images. (a, b) Purified diatomite. (c, d) Alkali-activated diatomite.
(a)(b)(c)(d)Figure4 shows the nitrogen adsorption-desorption isotherms and pore size distribution of the diatomite samples. The diatomite exhibits a type II isotherm and an H3-type hysteresis loop, indicating the presence of macroporous structures with nonuniform size and/or shape [56]. Thus, the morphology of the diatomite consists of a variety of shapes (Figure 3). However, the pore size distribution curves of the diatomite samples demonstrate a uniform pore size with an average diameter of 4.3 nm. The textural properties of the samples are presented in Table 3. According to the Brunauer-Emmett-Teller analysis, the purified diatomite exhibits a large specific surface area of 55.4 m2/g. This value is consistent with that reported by Son et al. [6] (51 m2/g) for Phu Yen’s diatomite and is much higher than that published in previous works [4, 8, 11, 20, 24, 26, 32, 38, 55] (1.0-27 m2/g). It can be seen from Table 3 that the specific surface area of alkali-activated diatomite (77.8 m2/g) is significantly larger than that of purified diatomite. This increase in the surface area results from the removal of organic impurities during the alkali treatment.Figure 4
Nitrogen adsorption-desorption isotherms (a) and pore size distributions (b) of the diatomite samples.
(a)(b)Table 3
Textural properties of the diatomite samples.
SampleSBET (m2·g-1)Smic (m2·g-1)Sext (m2·g-1)Vmic (cm3·g-1)Vtot (cm3·g-1)Purified diatomite55.419.236.20.00880.0623Alkali-activated diatomite77.819.759.10.00890.0924The zero charge point of purified diatomite is 5.7 (Figure5). This pHPZC is similar to that published in the literature [6, 11, 21, 22]. However, the pHPZC of alkali-activated diatomite (8.9) is much greater than that of purified diatomite. This increase is probably due to the formation of isolated hydroxyl groups on the surface of the material during alkali treatment.Figure 5
Determination of the point of zero charge of the diatomite samples.
## 3.2. Isothermal Studies
### 3.2.1. Adsorption in Single Systems
(1) Adsorption of RB onto Purified and Alkali-Activated Diatomite Samples. Figures 6 and 7 present the Freundlich and Langmuir isothermal models for the adsorption of RB dye onto the diatomite samples at 30 and 45°C. The isothermal parameters obtained from the experimental data and the respective correlation coefficients are listed in Table 4. It can be seen that the experimental points fit the models well with high correlation coefficients (0.9104-0.9955). Table 4 also shows that the maximal RB adsorption capacity of the alkali-activated diatomite sample is greater than that of the purified diatomite sample. Thus, the activation of diatomite with sodium hydroxide enhances the adsorption of the RB basic dye on diatomite. This enhancement can be attributed to a larger number of the silanol groups formed on the surface, as well as the larger specific surface area of the material.Figure 6
Freundlich isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
(a)(b)Figure 7
Langmuir isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
(a)(b)Table 4
Isotherm parameters for adsorption of RB onto the diatomite.
SampleTemperature (°C)FreundlichLangmuirnKFR2qmax (mol·g–1)KLR2Purified diatomite302.808.02×10−30.97382.20×10−42.45×1050.9880452.976.05×10−30.91041.93×10−43.49×1050.9955Alkali-activated diatomite302.0938.98×10−30.98333.00×10−41.53×1050.9857451.9847.23×10−30.98742.97×10−41.23×1050.9856The results presented above show that alkali-activated diatomite is superior to purified diatomite in terms of chemical and physical properties and adsorption capacity. Therefore, in the following sections, only an alkali-activated diatomite sample is used for the adsorption of dyes from aqueous solutions.(2) Adsorption of MB onto Alkali-Activated Diatomite. The MB adsorption isotherms onto alkali-activated diatomite were also investigated and analyzed according to the linear Freundlich and Langmuir equations. Analysis results are shown in Figure 8 and Table 5. In this case, the Langmuir model is significantly more suitable to describe the adsorption data than the Freundlich model (R2=0.9874-0.9915 as opposed to R2=0.7353-0.8676). That is, the adsorption mainly occurs in a monolayer. The maximum adsorption capacity of MB on alkali-activated diatomite is 7.14×10−4 and 6.90×10−4 (mol·g–1) at 30 and 45°C, respectively.Figure 8
Plots of the isothermal equations in the linear form for MB adsorption onto alkali-activated diatomite at different temperatures in the single systems: (a) Freundlich and (b) Langmuir.
(a)(b)Table 5
Isotherm parameters for adsorption of MB onto alkali-activated diatomite in the single systems.
Temperature (°C)FreundlichLangmuirnKFR2qmax (mol·g–1)KLR2306.592.64×10−30.86767.14×10−41.69×1050.9874456.342.72×10−30.73536.90×10−41.77×1050.9915
### 3.2.2. Adsorption onto Alkali-Activated Diatomite in Binary Systems
Like in the single system, in the binary system, the adsorption of the dyes at 30 or 45°C also follows the Langmuir isothermal model with theR2 values approaching 1 (Figures 9 and 10). The isothermal data in Table 6 also show that the maximum adsorption capacity of alkali-activated diatomite for MB is higher than that for RB, which is similar to the single systems (Tables 4 and 5). Specifically, the ratio of the maximum adsorption capacity of the dyes in the binary system (MB/RB=4.55×10−4/1.40×10−4≈3.3 times at 30°C, and MB/RB=4.55×10−4/1.40×10−4≈3.1 times at 45°C) is higher than that in the single system (MB/RB=7.14×10−4/3.00×10−4≈2.4 times at 30°C, and MB/RB=6.90×10−4/2.97×10−4≈2.3 times at 45°C). This proves that there is competitive adsorption in the binary system, where MB molecules preferentially adsorb onto alkali-activated diatomite compared with RB molecules. This enhanced adsorption might result from the smaller size of the MB molecule. MB molecules more easily diffuse into the pores of diatomite than RB molecules, thus occupying the adsorption sites on the adsorbent surface before the RB molecules do. Similar results are also reported by Eftekhari et al. [5].Figure 9
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 30°C: (a) Freundlich and (b) Langmuir.
(a)(b)Figure 10
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 45°C: (a) Freundlich and (b) Langmuir.
(a)(b)Table 6
Isotherm parameters for the adsorption onto alkali-activated diatomite in the binary systems at different temperatures.
DyeTemperature (°C)FreundlichLangmuirnKFR2qmax (mol·g–1)KLR2RB3014.680.25×10−30.50811.40×10−42.27×1050.9983453.531.88×10−30.85781.80×10−40.37×1050.9905MB3013.000.93×10−30.68984.55×10−45.78×1050.9949457.292.06×10−30.92885.59×10−44.36×1050.9950Table7 compares the adsorption capacity of the diatomite samples for RB and MB in this study and that of other adsorbents published in the literature. It can be seen that alkali-activated diatomite has a much higher adsorption capacity than all other adsorbents. Therefore, alkali-activated diatomite might serve as a promising adsorbent for the removal of dyes from aqueous solutions.Table 7
Maximum adsorption capacity of RB and MB of different adsorbents.
AdsorbentAdsorption capacity (mg·g-1)ReferencesRBMBAlkali-activated diatomite143.9-142.1∗67.0-86.1∗∗228.4-220.7∗145.6-178.9∗∗The present workPurified diatomite105.1-92.2∗—The present workDiatomite was treated with H2SO4 (1 molar)—127[37]Purified diatomite—72[37]Diatomite was treated with sulfuric acid—126.6 (30°C)[3]Diatomite was treated with sodium hydroxide—27.86 (25°C)[4]Sodium alginate/silicone dioxide148.23[49]Tagaran natural clay131.8 (20°C)[16]Zeolite 4A44.35[19]AlMCM-4141.9 (25°C)∗66.5 (25°C)∗[5]α-Ag2WO4/SBA-15150—[50]Co and N comodified mesoporous carbon composites141 (25°C)—[57]Modified banyan aerial roots115.23—[12]Biosorbent prepared from inactivatedAspergillus oryzae cells98.59 (293 K)—[52]L-Asp capped Fe3O4 NPs10.44—[19]Silica extracted from rice husk6.0-6.87—[18]∗In single systems and ∗∗in binary systems at 30 and 45°C.
## 3.2.1. Adsorption in Single Systems
(1) Adsorption of RB onto Purified and Alkali-Activated Diatomite Samples. Figures 6 and 7 present the Freundlich and Langmuir isothermal models for the adsorption of RB dye onto the diatomite samples at 30 and 45°C. The isothermal parameters obtained from the experimental data and the respective correlation coefficients are listed in Table 4. It can be seen that the experimental points fit the models well with high correlation coefficients (0.9104-0.9955). Table 4 also shows that the maximal RB adsorption capacity of the alkali-activated diatomite sample is greater than that of the purified diatomite sample. Thus, the activation of diatomite with sodium hydroxide enhances the adsorption of the RB basic dye on diatomite. This enhancement can be attributed to a larger number of the silanol groups formed on the surface, as well as the larger specific surface area of the material.Figure 6
Freundlich isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
(a)(b)Figure 7
Langmuir isothermal model for RB adsorption onto diatomite: (a) 30°C and (b) 45°C.
(a)(b)Table 4
Isotherm parameters for adsorption of RB onto the diatomite.
| Sample | Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R² | Langmuir qmax (mol·g⁻¹) | Langmuir KL | Langmuir R² |
|---|---|---|---|---|---|---|---|
| Purified diatomite | 30 | 2.80 | 8.02×10⁻³ | 0.9738 | 2.20×10⁻⁴ | 2.45×10⁵ | 0.9880 |
| Purified diatomite | 45 | 2.97 | 6.05×10⁻³ | 0.9104 | 1.93×10⁻⁴ | 3.49×10⁵ | 0.9955 |
| Alkali-activated diatomite | 30 | 2.09 | 38.98×10⁻³ | 0.9833 | 3.00×10⁻⁴ | 1.53×10⁵ | 0.9857 |
| Alkali-activated diatomite | 45 | 1.98 | 47.23×10⁻³ | 0.9874 | 2.97×10⁻⁴ | 1.23×10⁵ | 0.9856 |

The results presented above show that alkali-activated diatomite is superior to purified diatomite in terms of chemical and physical properties and adsorption capacity. Therefore, in the following sections, only the alkali-activated diatomite sample is used for the adsorption of dyes from aqueous solutions.

(2) Adsorption of MB onto Alkali-Activated Diatomite. The MB adsorption isotherms onto alkali-activated diatomite were also investigated and analyzed according to the linear Freundlich and Langmuir equations. The analysis results are shown in Figure 8 and Table 5. In this case, the Langmuir model is significantly more suitable for describing the adsorption data than the Freundlich model (R² = 0.9874–0.9915 as opposed to R² = 0.7353–0.8676). That is, the adsorption mainly occurs in a monolayer. The maximum adsorption capacity of MB on alkali-activated diatomite is 7.14×10⁻⁴ and 6.90×10⁻⁴ mol·g⁻¹ at 30 and 45°C, respectively.Figure 8
Plots of the isothermal equations in the linear form for MB adsorption onto alkali-activated diatomite at different temperatures in the single systems: (a) Freundlich and (b) Langmuir.
(a)(b)Table 5
Isotherm parameters for adsorption of MB onto alkali-activated diatomite in the single systems.
| Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R² | Langmuir qmax (mol·g⁻¹) | Langmuir KL | Langmuir R² |
|---|---|---|---|---|---|---|
| 30 | 6.59 | 2.64×10⁻³ | 0.8676 | 7.14×10⁻⁴ | 1.69×10⁵ | 0.9874 |
| 45 | 6.34 | 2.72×10⁻³ | 0.7353 | 6.90×10⁻⁴ | 1.77×10⁵ | 0.9915 |
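As a computational illustration of how the parameters in Tables 4 and 5 are extracted, the sketch below fits the linearized Langmuir and Freundlich forms by ordinary least squares. The equilibrium data (Ce, qe) here are invented for illustration only; they are not the measured values from this study.

```python
import numpy as np

# Illustrative equilibrium data: Ce in mol/L, qe in mol/g (assumed, not measured).
Ce = np.array([2e-6, 5e-6, 1e-5, 2e-5, 4e-5])
qe = np.array([1.8e-4, 3.2e-4, 4.5e-4, 5.6e-4, 6.3e-4])

# Langmuir linear form: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1.0 / slope
KL = slope / intercept          # since slope = 1/qmax and intercept = 1/(KL*qmax)
print(f"Langmuir: qmax = {qmax:.2e} mol/g, KL = {KL:.2e} L/mol")

# Freundlich linear form: ln(qe) = ln(KF) + (1/n) * ln(Ce)
s, i = np.polyfit(np.log(Ce), np.log(qe), 1)
n, KF = 1.0 / s, np.exp(i)
print(f"Freundlich: n = {n:.2f}, KF = {KF:.2e}")
```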
## 3.2.2. Adsorption onto Alkali-Activated Diatomite in Binary Systems
As in the single systems, the adsorption of the dyes in the binary system at 30 or 45°C also follows the Langmuir isothermal model, with the R² values approaching 1 (Figures 9 and 10). The isothermal data in Table 6 also show that the maximum adsorption capacity of alkali-activated diatomite for MB is higher than that for RB, as in the single systems (Tables 4 and 5). Specifically, the ratio of the maximum adsorption capacities of the dyes in the binary system (MB/RB = 4.55×10⁻⁴/1.40×10⁻⁴ ≈ 3.3 at 30°C, and MB/RB = 5.59×10⁻⁴/1.80×10⁻⁴ ≈ 3.1 at 45°C) is higher than that in the single system (MB/RB = 7.14×10⁻⁴/3.00×10⁻⁴ ≈ 2.4 at 30°C, and MB/RB = 6.90×10⁻⁴/2.97×10⁻⁴ ≈ 2.3 at 45°C). This indicates competitive adsorption in the binary system, in which MB molecules adsorb preferentially onto alkali-activated diatomite over RB molecules. This enhanced adsorption might result from the smaller size of the MB molecule: MB molecules diffuse into the pores of diatomite more easily than RB molecules and thus occupy the adsorption sites on the adsorbent surface before the RB molecules do. Similar results are also reported by Eftekhari et al. [5].Figure 9
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 30°C: (a) Freundlich and (b) Langmuir.
(a)(b)Figure 10
Plots of the isothermal equations in the linear form for the adsorption onto alkali-activated diatomite in the binary systems at 45°C: (a) Freundlich and (b) Langmuir.
(a)(b)Table 6
Isotherm parameters for the adsorption onto alkali-activated diatomite in the binary systems at different temperatures.
| Dye | Temperature (°C) | Freundlich n | Freundlich KF | Freundlich R² | Langmuir qmax (mol·g⁻¹) | Langmuir KL | Langmuir R² |
|---|---|---|---|---|---|---|---|
| RB | 30 | 14.68 | 0.25×10⁻³ | 0.5081 | 1.40×10⁻⁴ | 2.27×10⁵ | 0.9983 |
| RB | 45 | 3.53 | 1.88×10⁻³ | 0.8578 | 1.80×10⁻⁴ | 0.37×10⁵ | 0.9905 |
| MB | 30 | 13.00 | 0.93×10⁻³ | 0.6898 | 4.55×10⁻⁴ | 5.78×10⁵ | 0.9949 |
| MB | 45 | 7.29 | 2.06×10⁻³ | 0.9288 | 5.59×10⁻⁴ | 4.36×10⁵ | 0.9950 |

Table 7 compares the adsorption capacity of the diatomite samples for RB and MB in this study with that of other adsorbents published in the literature. Alkali-activated diatomite has a higher adsorption capacity than most of the other adsorbents. Therefore, alkali-activated diatomite might serve as a promising adsorbent for the removal of dyes from aqueous solutions.Table 7
Maximum adsorption capacity of RB and MB of different adsorbents.
| Adsorbent | RB (mg·g⁻¹) | MB (mg·g⁻¹) | References |
|---|---|---|---|
| Alkali-activated diatomite | 143.9–142.1*, 67.0–86.1** | 228.4–220.7*, 145.6–178.9** | The present work |
| Purified diatomite | 105.1–92.2* | — | The present work |
| Diatomite treated with H₂SO₄ (1 M) | — | 127 | [37] |
| Purified diatomite | — | 72 | [37] |
| Diatomite treated with sulfuric acid | — | 126.6 (30°C) | [3] |
| Diatomite treated with sodium hydroxide | — | 27.86 (25°C) | [4] |
| Sodium alginate/silicone dioxide | | 148.23 | [49] |
| Tagaran natural clay | | 131.8 (20°C) | [16] |
| Zeolite 4A | | 44.35 | [19] |
| AlMCM-41 | 41.9 (25°C)* | 66.5 (25°C)* | [5] |
| α-Ag₂WO₄/SBA-15 | 150 | — | [50] |
| Co and N comodified mesoporous carbon composites | 141 (25°C) | — | [57] |
| Modified banyan aerial roots | 115.23 | — | [12] |
| Biosorbent prepared from inactivated Aspergillus oryzae cells | 98.59 (293 K) | — | [52] |
| L-Asp capped Fe₃O₄ NPs | 10.44 | — | [19] |
| Silica extracted from rice husk | 6.0–6.87 | — | [18] |

*In single systems and **in binary systems, at 30 and 45°C.
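The mg·g⁻¹ values in Table 7 follow from the molar capacities of Tables 4 and 5 multiplied by the dye molar mass. The quick check below assumes standard literature molar masses for the two dyes (not given in the text), so small rounding differences against Table 7 are expected.

```python
# Assumed molar masses: rhodamine B ≈ 479.0 g/mol, methylene blue ≈ 319.85 g/mol.
M_RB, M_MB = 479.0, 319.85

qmax_RB = 3.00e-4   # mol/g, RB on alkali-activated diatomite, single system, 30°C (Table 4)
qmax_MB = 7.14e-4   # mol/g, MB on alkali-activated diatomite, single system, 30°C (Table 5)

print(qmax_RB * M_RB * 1000)   # ≈ 143.7 mg/g, matching Table 7's 143.9
print(qmax_MB * M_MB * 1000)   # ≈ 228.4 mg/g, matching Table 7's 228.4
```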
## 3.3. Thermodynamic Studies
The spontaneity of the adsorption process and the interactions at the liquid/solid interface can be explained using the thermodynamic parameters (ΔG°, ΔH°, and ΔS°). If ΔG° < 0, the adsorption process is spontaneous; otherwise, adsorption does not occur on its own. If ΔH° < 0, the adsorption process is exothermic, and vice versa. If ΔS° > 0, the affinity of the adsorbent for the dye increases, leading to an increase in the randomness of the adsorbates at the liquid/solid interface [4, 42]; in contrast, if ΔS° < 0, more adsorbate molecules adhere to the adsorbent surface [13, 16, 37]. The thermodynamic parameters of RB and MB adsorption onto the diatomite samples are calculated from Equations (9)–(11). The ΔG° of adsorption is negative for both the single and binary systems (Tables 8 and 9), indicating that the adsorption processes are spontaneous. The values of ΔH° and ΔS° differ between the adsorption processes, indicating that the adsorption process is complex; both physical and chemical adsorption mechanisms are possible.Table 8
Thermodynamic parameters for adsorption of RB onto purified diatomite in single systems.
| Temperature (°C) | ΔG° (kJ·mol⁻¹) | ΔH° (kJ·mol⁻¹) | ΔS° (J·mol⁻¹·K⁻¹) |
|---|---|---|---|
| 30 | −31.26 | 18.90 | 165.53 |
| 45 | −33.74 | 18.90 | 165.53 |

Table 9
Thermodynamic parameters for the adsorption of the dyes onto alkali-activated diatomite.
| Dye | Temperature (°C) | ΔG° single (kJ·mol⁻¹) | ΔH° single (kJ·mol⁻¹) | ΔS° single (J·mol⁻¹·K⁻¹) | ΔG° binary (kJ·mol⁻¹) | ΔH° binary (kJ·mol⁻¹) | ΔS° binary (J·mol⁻¹·K⁻¹) |
|---|---|---|---|---|---|---|---|
| RB | 30 | −30.08 | −11.96 | 59.79 | −31.07 | −96.42 | −215.69 |
| RB | 45 | −30.98 | −11.96 | 59.79 | −27.83 | −96.42 | −215.69 |
| MB | 30 | −30.32 | 2.48 | 108.27 | −33.42 | −15.04 | 60.66 |
| MB | 45 | −31.94 | 2.48 | 108.27 | −34.33 | −15.04 | 60.66 |
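Although Equations (9)–(11) are not reproduced here, the entries of Table 8 are consistent with the standard relations ΔG° = −RT ln K, the van't Hoff equation for ΔH°, and ΔS° = (ΔH° − ΔG°)/T. The sketch below applies these assumed relations to the Langmuir constants of RB on purified diatomite from Table 4 and reproduces Table 8 to within rounding.

```python
import math

R = 8.314  # J·mol⁻¹·K⁻¹

# Langmuir constants for RB on purified diatomite (Table 4); KL in L·mol⁻¹.
K1, T1 = 2.45e5, 303.15  # 30 °C
K2, T2 = 3.49e5, 318.15  # 45 °C

dG1 = -R * T1 * math.log(K1) / 1000               # kJ·mol⁻¹
dG2 = -R * T2 * math.log(K2) / 1000
dH  = R * math.log(K2 / K1) / (1/T1 - 1/T2) / 1000  # van't Hoff, kJ·mol⁻¹
dS  = (dH - dG1) * 1000 / T1                       # J·mol⁻¹·K⁻¹

print(f"ΔG°(30°C) = {dG1:.2f} kJ/mol")   # ≈ -31.3 (Table 8: -31.26)
print(f"ΔG°(45°C) = {dG2:.2f} kJ/mol")   # ≈ -33.8 (Table 8: -33.74)
print(f"ΔH° = {dH:.2f} kJ/mol, ΔS° = {dS:.2f} J/(mol·K)")  # ≈ 18.9 and 165.5
```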
## 3.4. Effect of Solution pH
Figure 11 shows the effect of solution pH on the adsorption of RB and MB onto alkali-activated diatomite in the single system. The pH of the solution was adjusted between 3 and 11 with a 0.1 M HCl or 0.1 M NaOH solution.Figure 11
Removal efficiency of the dye onto alkali-activated diatomite at different initial solution pHs in the single systems: (a) RB and (b) MB (adsorbent dosage 0.2 g·L⁻¹, initial RB concentration 2.09×10⁻⁵ mol·L⁻¹, initial MB concentration 3.13×10⁻⁵ mol·L⁻¹, and 30°C).
(a)(b)As can be seen from Figure 11(a), the adsorption efficiency of RB reaches 94% after 240 min of contact at pH 3. At higher pH, this efficiency decreases drastically, to about 37% at pH 5–9 and even lower (30%) at pH 11. RB has a carboxylic group in its molecule, and this group dissociates at higher solution pH. This dissociation renders the molecule negative, resulting in electrostatic repulsion between RB and the negative surface of the adsorbent at high pH. This result is consistent with that of Eftekhari et al. [5].

For MB (Figure 11(b)), the adsorption efficiency reaches 100% after a short contact time (60 min) at pH 3–9. The efficiency only decreases to around 60% at pH 11.

The point of zero charge of alkali-activated diatomite is 8.9 (Figure 5). Theoretically, the surface of the material is positively charged when pH < 8.9 and negatively charged when pH > 8.9. This means that, as the pH of the dye solution increases, the adsorption efficiency should increase, because the negatively charged diatomite surface attracts the dye cations. However, in both of our cases, the adsorption efficiency decreases with pH, especially at pH 11. This demonstrates that the adsorption process is complex, and the electrostatic interaction mechanism alone is not suitable to describe the adsorption of RB and MB onto alkali-activated diatomite.
## 4. Conclusions
Alkali-activated diatomite was applied to adsorb RB and MB in single and binary systems. The treatment with sodium hydroxide increases the surface area of the diatomite from 55.4 m²/g to 77.8 m²/g and creates a large number of free silanol groups on the surface of the material, which increases the material's ability to adsorb RB and MB. The adsorption equilibrium data of RB and MB onto alkali-activated diatomite fit the Langmuir model in both the single and binary systems. MB has a higher affinity for the adsorbent than RB, and this preference is even more pronounced in the binary system. The adsorption process is spontaneous, and the removal efficiency of both MB and RB depends significantly on pH.
---
*Source: 1014354-2021-07-05.xml* | 2021 |
# Design of Intelligent Self-Tuning GA ANFIS Temperature Controller for Plastic Extrusion System
**Authors:** S. Ravi; M. Sudha; P. A. Balakrishnan
**Journal:** Modelling and Simulation in Engineering
(2011)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2011/101437
---
## Abstract
This paper develops a GA ANFIS controller design method for temperature control in a plastic extrusion system. Temperature control of a plastic extrusion system suffers from problems such as long settling times, coupling effects, large time constants, and undesirable overshoot. The system is generally nonlinear, and the temperature of the plastic extrusion system may vary over a wide range under disturbances. The system is designed with three controllers. The proposed GA ANFIS controller is a powerful approach to retrieving adaptiveness in the case of a nonlinear system. In this research the control methods are simulated using Simulink. The methodology and efficiency of the proposed method are compared with those of the traditional methods, and the results obtained from the GA ANFIS controller show improved performance in terms of time-domain specifications, set-point tracking, and disturbance rejection with optimum stability.
---
## Body
## 1. Introduction
Temperature control in a plastic extrusion machine is an important factor in producing high-quality products. Plastic extrusion is a well-known process widely used in the polymer industry. The extruder consists of a large barrel divided into three temperature zones, namely, the barrel, adapter, and die zones. Each temperature zone uses a number of heaters in order to provide different temperature ranges. The overall structure of the plastic extruder is shown in Figure 1. The polymer is fed into the hopper in solid pellet form and passes through the temperature zones, where it is heated and melted. The melted polymer is pushed forward by a powerful screw and passes through the molding mechanism at the die. The quality of the extrudates depends on a uniform temperature distribution, the physical properties of the raw material, and so forth. The temperature section of a PVC extrusion plant is shown in Figure 2. Highly efficient plastic extrudates can be obtained only when the temperature in all the zones is precisely controlled [1]. The implementation of PID controllers requires retuning their three-term parameters to ensure that the dynamic behavior of the extruder remains satisfactory as the specific heat, thermal conductivity, and ambient temperature vary with time. PID controllers are used for almost all industrial processes. However, a PID controller performs well only over a particular operating range, and it is necessary to retune it if the operating range changes. PID controllers do not provide satisfactory results for nonlinear and dead-time processes [2].Figure 1
Overall structure of the plastic extrusion system.Figure 2
Temperature section of PVC plant.

In addition, the flow of heat from one temperature zone to another may cause a poor transient response of the heating process under set-point and load variations. Modeling and controlling complex real-world systems is difficult, especially when implementation issues are considered. Even when a relatively accurate model of a dynamic system can be developed, it is often too complex to use in controller development, because many conventional control design techniques require restrictive assumptions about the plant model and the control to be designed (e.g., linearity). Ignoring these assumptions results in a number of unknown variables that the controller design techniques are unable to handle. This is because process industry machines, unlike humans, lack the ability to solve problems using imprecise information. To emulate this ability, fuzzy logic and fuzzy sets are introduced. Fuzzy controllers, unlike PID controllers, are robust: their performance is less sensitive to parametric variations, and a fuzzy controller can be designed without knowing the mathematical model of the system. Fuzzy logic controllers have been reported successful for a number of complex and nonlinear processes [3]. Fuzzy controllers can operate over a wide range and are capable of maintaining set-point temperature levels and reducing overshoots. The genetic algorithm-based neurofuzzy controller combines the advantages of the neural and fuzzy approaches, and such controllers are used for intelligent decision-making systems. A genetic algorithm uses a direct analogy of natural evolution to perform global optimization in order to solve highly complex problems. It presumes that a potential solution of a problem is an individual that can be represented by a set of parameters. Neural networks and fuzzy logic represent two distinct methodologies for dealing with uncertainty. Neural networks can model complex nonlinear relationships and are well suited for classification into predetermined classes; their output is not limited to zero error, but the least-squares error is minimized. The training time required for a neural network is large [4], and the training data must be chosen carefully to cover the entire range over which the different variables are expected to vary. Neural networks and fuzzy logic are different technologies for specifying mathematical relationships among the numerous variables of a complex dynamic process; both perform mappings with some degree of imprecision and can be used to control nonlinear systems. Hence, by strengthening neurofuzzy controllers with genetic algorithms, the search for and attainment of optimal solutions becomes easier and faster. The benefits of harnessing the capabilities of genetic algorithms are substantial, and research efforts on optimizing the solutions are challenging. The combination of a genetic algorithm and a neurofuzzy controller is normally shortened to GA-ANFIS, and this intelligent hybrid controller is compared with conventional PID and fuzzy controllers. The Matlab/Simulink software forms part of the modeling and design tools employed in this research.
## 2. Temperature System in Plastic Extrusion Model
The step response method is based on transient response tests. Many industrial processes have step responses that are monotonic after an initial transient. Such a system can be approximated by the transfer function in (1), where k is the static gain, τ is the apparent time delay, and T is the apparent time constant; G(s) is the transfer function of the plant. The transfer function of the plastic extrusion line is given in (2); the plastic extrusion model uses the parameters k = 0.92, T = 144 seconds, and τ = 10 seconds [5], and the temperature generally varies from 50°C to 200°C:

$$G(s) = \frac{k}{1 + sT}\,e^{-s\tau}, \tag{1}$$

$$G(s) = \frac{0.92}{1 + 144s}\,e^{-10s}. \tag{2}$$
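For illustration, the unit-step response of the first-order-plus-dead-time model in (2) can be computed directly from its analytic solution. The sketch below (the time grid and horizon are arbitrary choices) reproduces the S-shaped reaction curve used for tuning in the next section.

```python
import numpy as np
import matplotlib.pyplot as plt

# FOPDT model of the extruder temperature zone, Eq. (2):
# G(s) = k * exp(-tau*s) / (1 + T*s), with k = 0.92, T = 144 s, tau = 10 s.
k, T, tau = 0.92, 144.0, 10.0

t = np.linspace(0, 1000, 2001)  # time grid in seconds (illustrative)
# Analytic unit-step response: zero during the dead time, then a first-order rise.
y = np.where(t < tau, 0.0, k * (1.0 - np.exp(-(t - tau) / T)))

plt.plot(t, y)
plt.xlabel("time (s)")
plt.ylabel("normalized temperature response")
plt.title("Step response of the FOPDT extruder model")
plt.show()
```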
## 3. PID Control
The PID control is designed to specify a desired nominal operating point for the temperature control of the plastic extrusion model and to regulate it, so that the system stays close to the nominal operating point in the case of sudden disturbances, set-point variations, and noise. The proportional gain (Kp), integral time constant (Ti), and derivative time constant (Td) of the PID controller are designed using the Ziegler-Nichols tuning method as shown in Table 1. By applying a step test to (1), the S-shaped reaction curve is obtained, from which the process is characterized by two constants: the delay time L = 10 seconds and the time constant = 50 seconds. The delay time and time constant are determined by drawing a tangent line at the inflection point of the S-shaped curve and taking its intersections with the time axis and with the steady-state level of the output response c(t). From the Ziegler-Nichols tuning rule, the suggested settings (Kp, Ti, Td) are obtained [6]. The optimal setting values Kp, Ti, and Td for the temperature control of the plastic extrusion model are obtained by finding the minimum values of the integral square error, integral time square error, integral time average error, and integral average error shown in Table 2. The minimum setting values of Kp, Ki, and Kd are shown in Table 3. The Simulink model of the PID control block is shown in Figure 3.Table 1
Ziegler-Nichols tuning rules.
| Type of controller | Kp | Ti | Td |
|---|---|---|---|
| P | T/L | ∞ | 0 |
| PI | 0.9T/L | L/0.3 | 0 |
| PID | 1.2T/L | 2L | 0.5L |

Table 2
Minimum setting values of ISE, ITSE, ITAE, and IAE.
| Integral square error | Integral time square error | Integral time average error | Integral average error |
|---|---|---|---|
| 8.054e+6 | 5.971e+10 | 1.962e+9 | 2.768e+5 |

Table 3
Minimum setting values Kp, Ki, and Kd.
| Kp | Ti (s) | Td | Ki | Kd |
|---|---|---|---|---|
| 30 | 200 | 5 | 0.315e+10 | 11.52 |

Figure 3
Simulink model of PID controller.
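A minimal sketch of the Ziegler-Nichols reaction-curve rules of Table 1, applied to the identified constants L = 10 s and T = 50 s. These rules only give initial settings; the values in Table 3 were subsequently refined by minimizing the error integrals of Table 2, so the printed gains differ from the tabulated ones.

```python
def zn_pid(T, L):
    """Ziegler-Nichols open-loop (reaction-curve) PID settings from Table 1."""
    Kp = 1.2 * T / L
    Ti = 2.0 * L
    Td = 0.5 * L
    return Kp, Ti, Td

T, L = 50.0, 10.0                 # constants identified from the S-shaped curve
Kp, Ti, Td = zn_pid(T, L)
Ki, Kd = Kp / Ti, Kp * Td         # parallel-form integral and derivative gains
print(f"Kp={Kp:.1f}, Ti={Ti:.0f} s, Td={Td:.0f} s, Ki={Ki:.2f}, Kd={Kd:.0f}")
```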
## 4. Fuzzy Controller and Its Membership Function
Fuzzy logic provides an effective feedback control system that is easy to implement. A fuzzy controller consists of a fuzzifier, a rule base, an inference engine, and a defuzzifier [7]. The fuzzifier converts the numerical input values into fuzzy values, which, along with the rule base, are fed into the inference engine to produce the control value. In the fuzzy rule base, the rules are formulated according to the requirements of the problem. The control values are not in a usable form, so they are converted into numerical output values by the defuzzifier. The plastic extrusion temperature controller uses the two-dimensional fuzzy controller model shown in Figure 4. It has two input variables, the error e and the change in error ce, and one output variable u. To keep the computations relatively simple, this research uses triangular membership functions. The computational structure of the FLC scheme is composed of the rule base and the membership functions. The fuzzy control rules are formulated in IF-THEN form. The rule base stores the rules governing the input-output relationship of the proposed control logic [8]. The inputs to the fuzzy controller, the error e(k) and the change in error Δe(k), are computed from the reference value r(k), where k denotes the discrete time. The fuzzy controller output u(k) is based on the error and the error change. Table 4 summarizes the 25 rules of the proposed fuzzy logic control algorithm. Each universe of discourse is divided into five fuzzy subsets, namely, NB, NS, Z, PS, and PB. The input variables (e, ce) are shown in Figures 5 and 6. The inference mechanism is used for evaluating the linguistic descriptions. The fuzzy control rules are described using linguistic variables; for example, if the error e is NS and the increasing change in error ce is PB, then the output is PS, which is used to control the temperature rise. The output fuzzy variable u is shown in Figure 7.Table 4
Proposed fuzzy rules.
| e \ ce | NB | NS | Z | PS | PB |
|---|---|---|---|---|---|
| NB | NB | NB | NB | NS | Z |
| NS | NB | NS | NS | Z | PS |
| Z | NB | NS | Z | PS | PB |
| PS | NS | Z | PS | PS | PB |
| PB | Z | PS | PB | PB | PB |

Figure 4
FLC controller-based plastic extrusion system.Figure 5
Fuzzy controller input variable “e”.Figure 6
Fuzzy controller input variable “ce”.Figure 7
Fuzzy controller output variable “u”.The inference result of each rule consists of two parts, the weighing factorwi of the individual rule and the degree of change of temperature C. According to the rule, it is written as follow:
(3)zi=min(μe(e0),μce(ce0)),Ci=wiCi,
where zi denotes the change in control signal inferred by the ith rule and C is noted from the rule table, which shows the mapping from the product space of e and ce to Ci [9, 10].The defuzzification process is after collecting all the singleton rules; it defuzzifies the result so that a crisp value control signal is obtained and the change of the control signal is computed using center of gravity method as given in (4). The simulink model block of fuzzy control is shown in Figure 8:(4)z=δuk=∑i=1Nzi∑i=1Nwi.Figure 8
Simulink model of fuzzy controller.
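A minimal sketch of one inference step of the controller described above: triangular membership functions, min firing strengths as in (3), the rule base of Table 4, and center-of-gravity defuzzification as in (4). The universe limits and the output centers Ci below are illustrative assumptions, not the values used in the Simulink model.

```python
import numpy as np

LABELS = ["NB", "NS", "Z", "PS", "PB"]
CENTERS = {"NB": -1.0, "NS": -0.5, "Z": 0.0, "PS": 0.5, "PB": 1.0}  # assumed

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mu(x):
    """Membership degree of x in each of the five sets (0.5-wide triangles)."""
    return {lab: tri(x, CENTERS[lab] - 0.5, CENTERS[lab], CENTERS[lab] + 0.5)
            for lab in LABELS}

# Rule table from Table 4: RULES[e_label][j] -> output label for ce = LABELS[j].
RULES = {
    "NB": ["NB", "NB", "NB", "NS", "Z"],
    "NS": ["NB", "NS", "NS", "Z", "PS"],
    "Z":  ["NB", "NS", "Z", "PS", "PB"],
    "PS": ["NS", "Z", "PS", "PS", "PB"],
    "PB": ["Z", "PS", "PB", "PB", "PB"],
}

def fuzzy_output(e, ce):
    """Min firing strength per rule (Eq. (3)), center of gravity (Eq. (4))."""
    me, mce = mu(e), mu(ce)
    num = den = 0.0
    for el in LABELS:
        for j, cel in enumerate(LABELS):
            w = min(me[el], mce[cel])          # rule firing strength w_i
            num += w * CENTERS[RULES[el][j]]   # z_i = w_i * C_i
            den += w
    return num / den if den else 0.0

print(fuzzy_output(0.3, -0.1))  # crisp change of the control signal
```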
## 5. GA ANFIS Model
The genetic algorithm technique is employed to tune the ANFIS controller. The genetic algorithm was inspired by the mechanism of natural selection, a biological process in which stronger individuals are likely to be the winners in a competitive environment. A genetic algorithm uses a direct analogy of such natural evolution to perform global optimization in order to solve highly complex problems. It presumes that a potential solution of a problem is an individual that can be represented by a set of parameters. These parameters are regarded as the genes of a chromosome and can be structured as a string of concatenated values. The form of the variable representation is defined by the encoding scheme; the variables can be represented by binary values, real numbers, or other forms, depending on the application data, and their range, the search space, is usually defined by the problem. Genetic algorithms have been successfully applied to many different problems. The tuning approach uses Matlab M-files and functions to manipulate the ANFIS system and scaling gains, run the Simulink-based simulation, check the resulting performance, and repeatedly modify the system in search of the optimal solution. The GA optimization algorithm was run for 1000 epochs. Fuzzy logic and neural networks are natural complementary tools in building intelligent systems. Neural networks are computational structures that perform well when dealing with new data, while fuzzy logic deals with reasoning, using linguistic information acquired from domain experts. Fuzzy systems lack the ability to learn and cannot adjust themselves to a new environment; neural networks can learn, but they are opaque to the user. A neural network merged with a fuzzy system forms one integrated system and offers a promising approach to building intelligent systems. Integrated systems can combine the parallel computation and learning abilities of neural networks with the human knowledge representation and explanation abilities of fuzzy systems. The neural network used here is a feedforward network; it has one input and one output layer with a linear saturation function and two hidden layers using the tan-sigmoidal function. Fuzzy inference systems are also known as fuzzy rule-based systems, containing a number of fuzzy IF-THEN rules. GA ANFIS is used in the form of a Takagi-Sugeno model to integrate the best features of fuzzy systems and neural networks. GA ANFIS is also used to represent prior knowledge as a set of constraints, to reduce the optimization search space obtained from the fuzzy system, and to adapt backpropagation to the structured network through the neural network [11, 12]. To train the GA ANFIS controller, the input-output characterization or the desired output of the plant is generally sufficient. For better performance, the two systems are hybridized: the error and the derivative of the error are given as inputs to the system, and the neural network's output is given to the fuzzy logic. The neural network decides which of the five fuzzy sets is selected; the set with the maximum membership is chosen. The genetic learning algorithm tunes the membership functions of a Sugeno-type fuzzy inference system using the training input-output data. These modeling methods can be applied to both static and dynamic systems. If the output of the model at one moment is applied as its input at the next moment, the model is called a dynamic or recurrent model.
In other words, in recurrent models, the output of the model at the current moment is influenced by the output of the model at previous moments. The GA ANFIS system rule base is shown in Table 5; the proposed algorithm summarizes 25 rules. In the GA ANFIS controller training algorithm, each epoch is composed of a forward pass and a backward pass [13]. In the forward pass, a training set of input patterns is presented to the GA ANFIS controller, the neuron outputs are calculated on a layer-by-layer basis, and the rule consequent parameters are identified by the least-squares estimator. The GA ANFIS system consists of the components of a conventional fuzzy system, but the computations at each stage are performed by hidden neurons, and the neural network's learning capacity is provided to enhance the system knowledge [14]. The multilayer fuzzy neural network model for the fuzzy tuning rules is given in Figure 9, which shows the internal layers of the ANFIS model. The optimal values of the neurofuzzy controller are found by using the genetic algorithm. All possible sets of controller parameter values are individuals whose values are adjusted to minimize the objective function. For the GA ANFIS controller design, it is ensured that the estimated controller settings result in a stable closed-loop system.Table 5
Proposed GA ANFIS control rules.
| e \ ce | NB | NS | Z | PS | PB |
|---|---|---|---|---|---|
| NB | MF1 | MF2 | MF3 | MF4 | MF5 |
| NS | MF6 | MF7 | MF8 | MF9 | MF10 |
| Z | MF11 | MF12 | MF13 | MF14 | MF15 |
| PS | MF16 | MF17 | MF18 | MF19 | MF20 |
| PB | MF21 | MF22 | MF23 | MF24 | MF25 |

Figure 9
Internal layer of GA ANFIS model.
### 5.1. Initialization of Parameters
To start the genetic algorithm, certain parameters need to be defined. These include the population size, the bit length of the chromosome, the number of iterations, and the selection, crossover, and mutation types. The selection of these parameters decides, to a great extent, the ability of the designed controller. The range of the tuning parameters is considered to be between 0 and 10. The initializing values are detailed as follows:(i)
population type: double vector,(ii)
selection function: tournament selection,(iii)
tournament size: 2,(iv)
reproduction crossover function: 0.8,(v)
crossover function: scattered,(vi)
migration direction: forward,(vii)
mutation function: constraint-dependent default value.

In each generation, the genetic operators are applied to selected individuals from the current population in order to create a new population. Generally, the three main genetic operators of reproduction, crossover, and mutation are employed. By using different probabilities for applying these operators, the speed of convergence can be controlled. Crossover and mutation operators must be carefully designed, since their choice contributes highly to the performance of the whole genetic algorithm [15].Reproduction
A part of the new population can be created by simply copying selected individuals from the present population without changing them. The new population may thus also contain already developed solutions. There are a number of other selection methods available, and it is up to the user to select the appropriate one for each process. A reproduction crossover fraction of 0.8 is used.Crossover
New individuals are generally created as offspring of two parents (i.e., crossover is a binary operator). One or more so-called crossover points are selected (usually at random) within the chromosome of each parent, at the same place in each. The parts delimited by the crossover points are then interchanged between the parents, and the individuals resulting in this way are the offspring. Beyond one-point and multiple-point crossover, other crossover types exist; the so-called arithmetic crossover, for example, generates an offspring as a component-wise linear combination of the parents. In later phases of evolution, it is more desirable to keep individuals intact, so it is a good idea to use an adaptively changing crossover rate: higher rates in early phases and a lower rate at the end of the genetic algorithm.Mutation
A new individual is created by making modifications to one selected individual. The modifications can consist of changing one or more values in the representation or of adding/deleting parts of the representation. In a genetic algorithm, mutation is a source of variability, and too great a mutation rate results in less efficient evolution, except in the case of particularly simple problems. Hence, mutation should be used sparingly, because it is a random search operator; with high mutation rates, the algorithm becomes little more than a random search. Moreover, different mutation operators may be used at different stages. At the beginning, mutation operators producing bigger jumps in the search space might be preferred [16]; later, when the solution is close, a mutation operator producing slighter shifts in the search space could be favored. Figures 10 and 11 show the input variables (e, ce) of the GA ANFIS controller. The output variable u is shown in Figure 12. The Simulink model block of the GA ANFIS controller is shown in Figure 13; a minimal code sketch of the GA tuning loop follows the figures below.Figure 10
GA ANFIS controller input variable “e”.Figure 11
GA ANFIS controller input variable “ce”.Figure 12
GA ANFIS controller output variable “u”.Figure 13
Simulink model of GA ANFIS controller.
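A minimal sketch of the GA tuning loop with the settings listed above (tournament selection of size 2, scattered crossover with fraction 0.8, parameter range 0-10). The fitness function here is a stand-in; in the paper, the objective would be the simulated closed-loop error (e.g., ISE) of the ANFIS-controlled plant, and the mutation below is a simple uniform resampling rather than Matlab's constraint-dependent default.

```python
import random

BOUNDS = (0.0, 10.0)      # tuning-parameter range stated above
N_PARAMS, POP, GENS = 6, 20, 100

def fitness(ind):
    # Placeholder objective: in practice, run the Simulink model and return ISE.
    return sum((x - 5.0) ** 2 for x in ind)

def tournament(pop):
    """Tournament selection of size 2: the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b

def scattered_crossover(p1, p2):
    """Each gene is taken from either parent according to a random binary mask."""
    return [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]

def mutate(ind, rate=0.1):
    """Resample a gene uniformly within bounds with small probability."""
    return [random.uniform(*BOUNDS) if random.random() < rate else x for x in ind]

pop = [[random.uniform(*BOUNDS) for _ in range(N_PARAMS)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        if random.random() < 0.8:          # crossover fraction 0.8
            child = scattered_crossover(tournament(pop), tournament(pop))
        else:                              # reproduction: copy a selected individual
            child = tournament(pop)[:]
        new_pop.append(mutate(child))
    pop = new_pop

best = min(pop, key=fitness)
print("best parameters:", [round(x, 2) for x in best])
```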
## 6. Simulation Results
The system is a multistage, coupled system, so four set points are taken for the system. Matlab Simulink is used for the simulation with the first-order plant model. Four set-point temperatures (70°C, 100°C, 150°C, and 200°C), changing at different times, are applied over 14000 seconds. The results of the PID controller for the temperature set points are shown in Figure 14; the performance of the system with the PID controller is rather oscillatory, and it takes more time to settle at the reference temperature compared with the other types of controllers. The simulation output for the fuzzy controller is given in Figure 15. The FLC is designed using 5 linguistic levels and 25 rules. The FLC gives better results than the PID controller, settling quickly at the reference temperature with a less oscillatory response. The output of the integrated GA ANFIS controller with the Takagi-Sugeno fuzzy model is shown in Figure 16; it eliminates the oscillatory output at the different temperature set points, identifies process variations quickly, and provides good control for set-point changes and sudden disturbances. The outputs of the PID, FLC, and proposed GA ANFIS controllers are depicted in Figure 17. The results show that the proposed controller output settles at the reference temperature very quickly and eliminates the overshoot problem. The GA-optimized ANFIS controller shows an outstanding performance in terms of achieving the desired value with very small rise and settling times.Figure 14
PID control simulated output at different temperature set points.Figure 15
FLC control simulated output at different temperature set points.Figure 16
GA ANFIS controller simulated output at different temperature set points.Figure 17
Comparison results of PID, FLC, and GA ANFIS controllers.
## 7. Results and Discussion
In this paper, the GA ANFIS controller acts as a replacement for the previously existing controllers owing to its unique characteristics. The merits of the GA ANFIS can be observed as follows. The GA ANFIS has improved control quality. The aim of controlling the heated barrel is to reach the set points during startup as soon as possible, while avoiding large overshoots, and to maintain the current temperature set value. Table 6 gives the various timing specifications for the three controllers. From the analysis, on the basis of the delay time, the GA ANFIS controller is 8.5 times faster than the PID controller and 1.25 times faster than the fuzzy controller. In the rise-time analysis, the GA ANFIS controller is 5 times faster than PID and 1.6 times faster than fuzzy. The peak-time results show that the GA ANFIS controller is 6.66 times faster than PID and 1.66 times faster than fuzzy. Considering the settling time, the GA ANFIS controller is 1.15 times faster than PID and 1.09 times faster than fuzzy. Set-point tracking and disturbance rejection are achieved by the proposed method.Table 6
Timing specification of PID, FLC, and GA ANFIS controllers.
| Timing specification | PID | Fuzzy | GA ANFIS |
|---|---|---|---|
| Delay time (Td) | 170 s | 25 s | 20 s |
| Rise time (Tr) | 250 s | 80 s | 50 s |
| Peak time (Tp) | 400 s | 100 s | 60 s |
| Settling time (Ts) | 1900 s | 1800 s | 1650 s |
| Peak overshoot (%) | 21% | 0 | 0 |
## 8. Conclusion
We have chosen the GA ANFIS controller because of its capability to reject sudden input disturbances and maintain the set-point temperature in the plastic extrusion system. The simulation results clearly show that the GA ANFIS controller improves on the timing specifications of the fuzzy and PID controllers. This paper demonstrates the effectiveness of an intelligent controller on a nonlinear system, particularly for temperature control in a plastic extrusion system. The comparison of the performance of the three controllers reveals that the GA ANFIS controller is superior to the other controllers. From the results obtained, the proposed controller handles set-point changes well with good stability. With the aid of the supervisory technique, the proposed controller identifies process variations quickly and provides good controller performance for set-point changes and sudden disturbances. Therefore, the GA ANFIS controller should prove efficacious, especially for plastic extrusion temperature control systems.
---
*Source: 101437-2011-09-08.xml* | 101437-2011-09-08_101437-2011-09-08.md | 28,987 | Design of Intelligent Self-Tuning GA ANFIS Temperature Controller for Plastic Extrusion System | S. Ravi; M. Sudha; P. A. Balakrishnan | Modelling and Simulation in Engineering
(2011) | Engineering & Technology | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2011/101437 | 101437-2011-09-08.xml | ---
## Abstract
This paper develops a GA ANFIS controller design method for temperature control in plastic extrusion system. Temperature control of plastic extrusion system suffers problems related to longer settling time, couple effects, large time constants, and undesirable overshoot. The system is generally nonlinear and the temperature of the plastic extrusion system may vary over a wide range of disturbances. The system is designed with three controllers. The proposed GA ANFIS controller is the most powerful approach to retrieve the adaptiveness in the case of nonlinear system. In this research the control methods are simulated using simulink. Relatively the methodology and efficiency of the proposed method are compared with those of the traditional methods and the results obtained from GA ANFIS controller give improved performance in terms of time domain specification, set point tracking, and disturbance rejection with optimum stability.
---
## Body
## 1. Introduction
The temperature control in plastic extrusion machine is an important factor to produce high-quality products. Plastic extrusion is a well-known process and widely used in polymerization industry. The extrusion consists of large barrel divided into three temperature zones, namely, barrel, adapter, and die zone, respectively. The temperature zone uses more number of heaters in order to provide different temperature ranges. The overall structure of the plastic extrusion is shown in Figure1. The polymer is fed into the hopper in solid pellet forms and it passes through the temperature zones where it is heated and melted. The melted polymer material is pushed forward by a powerful screw and it passes through the molding mechanism from the die. The quality of extrudates depends on uniform temperature distribution, physical property of raw material, and so forth. The temperature section of PVC extrusion plant is shown in Figure 2. High efficient plastic extrudates can be obtained only when temperature in all the zones is precisely controlled [1]. The implementation of PID controllers retunes their three-term parameters so as to ensure that the dynamic behavior of extruder performance is satisfactory along with the specific heat, thermal conductivity, and ambient temperature which vary with time. PID controllers are used for almost all industrial processes. However, PID controller performs well only at a particular operating range and it is necessary to retune the PID controller if the operating range is changed. The PID controllers do not provide contented results for nonlinear and dead time process [2].Figure 1
Overall structure of the plastic extrusion system.Figure 2
Temperature section of PVC plant.In addition with that, the flow of heat from one temperature zone to another may cause bad transient response for the heating process under set point and load variation. The difficult task of modeling and controlling complex real world systems is difficult especially when implementation issues are considered. If a relatively accurate model of a dynamic system can be developed, it is often too complex to use in controller development, especially because many conventional control design techniques require restrictive assumptions, for the plant model and for the control to be designed (e.g., linearity). Not taking into account these assumptions result in a number of unknown variables which the controller design techniques are unable to handle. This is because process industry machines, unlike humans, lack the ability to solve problems using imprecise information. To emulate this ability fuzzy logic and fuzzy sets are introduced. Fuzzy controllers are not like PID; they are robust. Their performances are less sensitive to parametric variations. The fuzzy controller can be designed without knowing the mathematical model of system. Fuzzy logic controllers have been reported successful for a number of complex and nonlinear processes [3]. Fuzzy can operate for wide range and is capable of maintaining set point temperature levels and reducing overshoots. The genetic algorithm-based neurofuzzy controller has the integral advantages of neural and fuzzy approaches and they are used for intelligent decision making systems. Genetic algorithm uses a direct analogy of such natural evolution to do global optimization in order to solve highly complex problems. It presumes that the potential solution of a problem is individual and can be represented by a set of parameters. Neural networks and fuzzy logic represent two distinct methodologies to deal with uncertainty. Neural networks can model complex nonlinear relationships and are quietably suited for classification phenomenon of predetermined classes. The output is not limited to zero error but minimization of least square errors occurs. Training time required is large for neural network [4]. Training data has to be chosen carefully, to cover the entire range over which different variables are expected. Neural networks and fuzzy logic are different technologies which can be used to accomplish the specification of mathematical relationships. Among numerous variables in a complex dynamic process, these perform mappings with some degree of imprecision in different ways which are used to control nonlinear systems. Hence by strengthening the neurofuzzy controllers with genetic algorithms the searching and attainment of optimal solutions will be easier and faster. The benefits of harnessing the capabilities of genetic algorithms are huge, research efforts on optimizing the solutions are challenging. The combination of genetic algorithm and neuro fuzzy controllers is normally shortened as GA-ANFIS and this intelligent hybrid controller is compared with that of the conventional PID and fuzzy controller. The Matlab/Simulink software forms part of the modeling and design tool employed in this research.
## 2. Temperature System in Plastic Extrusion Model
Step response method is based on transient response tests. Many industrial processes have step responses of the system in which the step response is monotonous after an initial time. A system with step response can be approximated by the transfer function as in (1) where “k” is the static gain, “τ” is the apparent time delay, and “T” is the apparent time constant. G(s) is the transfer function of the plant. The transfer function of plastic extrusion pipeline described is given in (2), the plastic extrusion model uses the parameters k=0.92, T=144 seconds, τ=10 seconds [5], and the temperature generally varies from 50°C to 200°C:(1)G(s)=k1+sTe-sτ,(2)G(s)=0.921+144se-10s.
## 3. PID Control
The PID control is designed to ensure the specifying desired nominal operating point for temperature control of plastic extrusion model and regulating it, so that it stays closer to the nominal operating point in the case of sudden disturbances, set point variations, and noise. The proportional gain (Kp), integral time constant (Ti), and derivative time constant (Td) of the PID control settings are designed using Zeigler-Nichols tuning method as shown in Table 1. By applying the step test to (1) the S-shaped curve is obtained and there is identified the temperature control method characterized by two constants as delay time L=10 seconds and time constant = 50 seconds. The delay time and time constant are determined by drawing a tangent line at the inflection point of the S-shaped curve and determined by the intersections of the tangent line with the time axis and line output response c(t). From Zeigler-Nicholas tuning rule, the suggested optimal set (Kp), and (Ti), (Td) values are obtained [6]. The optimal setting values (Kp), (Ti), and (Td) obtained for temperature control of plastic extrusion model are obtained by finding the minimum values of integral square error, integral time square error, integral time average error, and integral average error shown in Table 2. The minimum setting values of Kp, Ki, and Kd shown in Table 3. The simulink model of block of PID control is shown in Figure 3.Table 1
Ziegler-Nichols tuning rules.
Type of controllerKpTiTdPT/L∞0PI0.9T/LL/0.30PID1.2T/L2L0.5LTable 2
Minimum setting values of ISE, ITSE, ITAE, and IAE.
Integral square errorIntegral time square errorIntegral time average errorIntegral average error8.054e+65.971e+101.962e+92.768e+5Table 3
Minimum setting valuesKp,Ki,andKd.
KpTi (s)TdKiKd3020050.315e+1011.52Figure 3
Simulink model of PID controller.
## 4. Fuzzy Controller and Its Membership Function
Fuzzy logic is more effective feedback control system and easier to implement. Fuzzy controller consists of a fuzzifier, rule base, an inference engine, and a defuzzifier [7]. The numerical input values of the fuzzifier are converted into fuzzy values, along with the rule base which are fed into the inference engine which produces control value. In fuzzy rule base, various rules are fostered according to their respective problem requirements. The control values are not in usable form; henceforth they are converted to numerical output values using the defuzzifier. The plastic extrusion temperature controller uses two-dimensional fuzzy controller models which are shown in Figure 4. It has two input variables, error “e”, change in error “ce”, and one output variable “u”. For computations to be relatively simple, the research uses triangular membership function. The computational structure of FLC scheme is composed of the steps rule base and membership function. The fuzzy control rules were formulated in the IF-THEN rules form. The rule base stores the rules governing the input and output relationship of proposed control logic [8]. The inputs to the fuzzy controller error e(k) and change in error Δe(k) computed from the reference value r(k). The kdenotes the discrete time. The fuzzy controller output u(k) is based on error and error change. Table 4 summarizes the 25 rules for the proposed control algorithm for fuzzy logic. Each universe of discourse is divided into five fuzzy subsystems, namely, NB, NS, Z, PS, and PB. The input and change in input variable (e, ce) are shown in Figures 5 and 6. The inference mechanism is used for evaluating linguistic descriptions. The fuzzy control rules have been described using linguistic variables; for example, if error e is NS and the increasing change in error ce is PB, then the output is PS which is used to control the temperature rise. The output variable of fuzzy set u is shown in Figure 7.Table 4
Proposed fuzzy rules.
eceNBNSZPSPBNBNBNBNBNSZNSNBNSNSZPSZNBNSZPSPBPSNSZPSPSPBPBZPSPBPBPBFigure 4
FLC controller-based plastic extrusion system.Figure 5
Fuzzy controller input variable “e”.Figure 6
Fuzzy controller input variable “ce”.Figure 7
Fuzzy controller output variable “u”.The inference result of each rule consists of two parts, the weighing factorwi of the individual rule and the degree of change of temperature C. According to the rule, it is written as follow:
(3)zi=min(μe(e0),μce(ce0)),Ci=wiCi,
where zi denotes the change in control signal inferred by the ith rule and C is noted from the rule table, which shows the mapping from the product space of e and ce to Ci [9, 10].The defuzzification process is after collecting all the singleton rules; it defuzzifies the result so that a crisp value control signal is obtained and the change of the control signal is computed using center of gravity method as given in (4). The simulink model block of fuzzy control is shown in Figure 8:(4)z=δuk=∑i=1Nzi∑i=1Nwi.Figure 8
Simulink model of fuzzy controller.
## 5. GA ANFIS Model
The genetic algorithm technique employed to tune the ANFIS controller. Genetic algorithm was inspired by the mechanism of natural selection, a biological process in which stronger individual is likely to be the winners in a competing environment. Genetic algorithm uses a direct analogy of such natural evolution to do global optimization in order to solve highly complex problems. It presumes that the potential solution of a problem is individual and can be represented by a set of parameters. These parameters are regarded as the genes of a chromosome and can be structured by a string of concatenated values. The form of variables representation is defined by the encoding scheme. The variables can be represented by binary, real numbers, or other forms, depending on the application data. Its range, the search space, is usually defined by the problem. Genetic algorithm has been successfully applied to many different problems. The tuning approach employs the use of matlab M-files and functions to manipulate the ANFIS system and scaling gains, run the simulink-based simulation, checking the resulting performance, and continuously modify the system for a number of times in search for optimal solution. The GA optimization algorithm was run for 1000 epochs. Fuzzy logic and neural networks are natural complementary tools in building intelligent systems. Neural networks are computational structures that perform well, when dealing with new data, while fuzzy logic deals with reasoning, using linguistic information acquired from domain experts. Fuzzy systems lack the ability to learn and cannot adjust themselves to a new environment. Neural networks can learn and they are opaque to the user. The neural network merged with a fuzzy system, forms one integrated system. It offers a promising approach to build intelligent systems. Integrated systems can combine the parallel computation and learning abilities of neural networks, with the human knowledge representation and explanation abilities of fuzzy systems. The neural network, uses feed forward network; the number of input and output layers used for this system is one with linear saturation function. The hidden layer used for this system is two and using tansigmoidal function. Fuzzy inference systems are also known as fuzzy rule-based systems, containing a number of fuzzy IF-THEN rules. GA ANFIS is used in the form of Takaji sugeno model to integrate the best features of fuzzy systems and neural networks. GA ANFIS is also used in representation of prior knowledge into a set of constraints, to reduce optimization search space obtained from fuzzy and adaptation of back propagation to structured network through neural network [11, 12]. To train the GA ANFIS controller generally input-output characterization or desired output of the plant is sufficient. For better performance two systems hybridized. The error and derivative error are given as an input to the system and the neural network’s output is given to the fuzzy logic. Neural network will decide which fuzzy set is selected out of five fuzzy sets. The maximum membership set is selected. The genetic learning algorithm tunes the membership functions of a Sugeno type fuzzy inference system using the training input-output data. These modeling methods can be applied to both static and dynamic systems. If the output of the model at a moment is applied as its input at the next moment, the model is called dynamic model or recurrent model. 
In other words, in recurrent models, the output of the model at the existing moment is influenced by the out-put of the model, at previous moments. The GA ANFIS system rule base is shown in Table 5. The proposed algorithm summarizes 25 rules. In a GA ANFIS controller training algorithm, each epoch is composed of a forward pass and backward pass [13]. In the forward pass, a training set of input patterns is presented to the GA ANFIS controller, the neurons outputs are calculated on the layer- by-layer basis, and the rules consequent parameters are identified by the least squares estimator. The GA ANFIS system consists of the components of a conventional fuzzy system. But, these computations at each stage are performed by hidden neurons and the neural network learning capacity is provided to enhance the system knowledge [14]. The multi-layer fuzzy neural network model for fuzzy tuning rules is given in Figure 9. It shows the diagram for the internal layers of ANFIS model. The optimal value of the neuro fuzzy controller is found by using genetic algorithm. All possible sets of controller parameter values are particles whose values are adjusted to minimize the objective function. For the GA ANFIS controller design, it is ensured that the controller settings estimated result in a stable closed loop system.Table 5
Proposed GA ANFIS control rules.
eceNBNSZPSPBNBMF1MF2MF3MF4MP5NSMF6MF7MF8MF9MF10ZMF11MF12MF13MF14MF15PSMF16MF17MF18MF19MF20PBMF21MF22MF23MF24MF25Figure 9
Internal layer of GA ANFIS model.
### 5.1. Initialization of Parameters
To start with genetic algorithm, certain parameters need to be defined. These include population size, bit length of chromosome, number of iterations, selection, crossover, and mutation types. Selection of these parameters decides, to a great extend, the ability of the designed controller. The range of the tuning parameters is considered between 0 and 10. Initializing values are detailed as follows:(i)
population type: double vector,(ii)
selection function: tournament selection,(iii)
tournament size: 2,(iv)
reproduction crossover function: 0.8,(v)
crossover function: scattered,(vi)
migration direction: forward,(vii)
mutation function: constraint dependent default value.In each generation, the genetic operators are applied to selected individuals from the current population in order to create a new population. Generally, the three main genetic operators of reproduction, crossover, and mutation are employed. By using different probabilities for applying these operators, the speed of convergence can be controlled. Crossover and mutation operators must be carefully designed, since their choice highly contributes to the performance of the whole genetic algorithm [15].Reproduction
A part of the new population can be created by simply copying without changing selected individuals from the present population. Also new population has the possibility of selection by already developed solutions. There are a number of other selection methods available and it is up to the user to select the appropriate one for each process. Reproduction crossover fraction is using 0.8.Crossover
New individuals are generally created as offspring of two parents (i.e., crossover is a binary operator). One or more so-called crossover points are selected (usually at random) within the chromosome of each parent, at the same place in each. The parts delimited by the crossover points are then interchanged between the parents; the individuals resulting in this way are the offspring. Beyond one-point and multiple-point crossover, other crossover types exist; the so-called arithmetic crossover, for example, generates an offspring as a component-wise linear combination of the parents. In later phases of evolution it is more desirable to keep individuals intact, so it is a good idea to use an adaptively changing crossover rate: higher rates in the early phases and a lower rate at the end of the genetic algorithm.

Mutation
A new individual is created by making modifications to one selected individual. The modifications can consist of changing one or more values in the representation or of adding/deleting parts of the representation. In a genetic algorithm, mutation is a source of variability, and too great a mutation rate results in less efficient evolution, except in the case of particularly simple problems. Mutation should therefore be used sparingly, because it is a random search operator; with high mutation rates, the algorithm becomes little more than a random search. Moreover, different mutation operators may be used at different stages. At the beginning, mutation operators producing bigger jumps in the search space might be preferred [16]. Later on, when the solution is close, a mutation operator producing smaller shifts in the search space could be favored. Figures 10 and 11 show the input and change-in-input variables (e, ce) of the GA ANFIS controller. The output variable of the fuzzy set, "u", is shown in Figure 12. The Simulink model block of the GA ANFIS controller section is shown in Figure 13. A code sketch of the selection, crossover, and mutation operators described above follows the figure captions.
Figure 10
GA ANFIS controller input variable "e".
Figure 11
GA ANFIS controller input variable "ce".
Figure 12
GA ANFIS controller output variable "u".
Figure 13
Simulink model of GA ANFIS controller.
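The tuning loop described above can be sketched as follows. This is a simplified, self-contained Python approximation of the GA settings listed in Section 5.1 (double-vector population, tournament selection with tournament size 2, scattered crossover with fraction 0.8, and uniform mutation); the fitness function is a stand-in, since the real objective would be a closed-loop performance index returned by the Simulink simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params: np.ndarray) -> float:
    """Placeholder cost: in the paper this would be a closed-loop
    performance index obtained by simulating the plant with the
    candidate membership/gain parameters. Here: a toy quadratic bowl."""
    return float(np.sum((params - 3.0) ** 2))

POP, NVARS, GENS = 20, 6, 100       # population size, genes, generations
LO, HI = 0.0, 10.0                  # tuning range 0..10, as in the paper
CROSS_FRAC, MUT_RATE = 0.8, 0.05    # crossover fraction 0.8, small mutation

pop = rng.uniform(LO, HI, (POP, NVARS))  # "double vector" population

def tournament(costs: np.ndarray) -> int:
    """Tournament selection with tournament size 2."""
    a, b = rng.integers(0, POP, 2)
    return a if costs[a] < costs[b] else b

for _ in range(GENS):
    costs = np.array([fitness(ind) for ind in pop])
    children = [pop[np.argmin(costs)].copy()]      # keep the best individual
    while len(children) < POP:
        p1, p2 = pop[tournament(costs)], pop[tournament(costs)]
        if rng.random() < CROSS_FRAC:
            mask = rng.random(NVARS) < 0.5         # "scattered" crossover
            child = np.where(mask, p1, p2)
        else:
            child = p1.copy()                      # reproduction (plain copy)
        mutate = rng.random(NVARS) < MUT_RATE      # uniform random mutation
        child = np.where(mutate, rng.uniform(LO, HI, NVARS), child)
        children.append(child)
    pop = np.array(children)

print(pop[np.argmin([fitness(ind) for ind in pop])])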
## 6. Simulation Results
The system is a multistage, coupled system, so four set points are taken for it. MATLAB Simulink is used for the simulation, with a first-order plant model. Four set-point temperatures (70°C, 100°C, 150°C, and 200°C), changing at different times over 14000 seconds, are used. The response of the PID controller to the temperature set points is shown in Figure 14; the performance of the system with the PID controller is clearly oscillatory, and it takes more time to settle at the reference temperature than with the other types of controller. The simulated output for the fuzzy controller is given in Figure 15. The FLC is designed using 5 linguistic levels and 25 rules. The FLC gives better results than the PID controller, settling quickly at the reference temperature with a less oscillatory response. The output of the integrated GA ANFIS controller with the Takagi-Sugeno fuzzy model is shown in Figure 16; it eliminates the oscillatory output at the different temperature set points, identifies process variations quickly, and provides good control for set-point changes and sudden disturbances. The outputs of the PID, FLC, and proposed GA ANFIS controllers are compared in Figure 17. The results show that the proposed controller's output settles at the reference temperature very quickly and eliminates the overshoot problem. The GA-optimized ANFIS controller shows outstanding performance in achieving the desired value, with very small rise and settling times.
Figure 14
PID control simulated output at different temperature set points.
Figure 15
FLC control simulated output at different temperature set points.
Figure 16
GA ANFIS controller simulated output at different temperature set points.
Figure 17
Comparison results of PID, FLC, and GA ANFIS controllers.
## 7. Results and Discussion
In this paper, the GA ANFIS controller acts as a replacement for previously existing controllers owing to its unique characteristics. Its merits can be observed as follows. The GA ANFIS controller has improved control quality: the aim of controlling the heated barrel is to reach the set points during startup as soon as possible, while avoiding large overshoots, and then to maintain the current temperature set value. Table 6 gives the timing specifications for the three controllers. On delay time, the GA ANFIS controller is 8.5 times better than the PID controller and 1.25 times better than the fuzzy controller (see the check below Table 6). On rise time, the GA ANFIS controller is 5 times ahead of PID and 1.6 times ahead of fuzzy. On peak time, the GA ANFIS controller is 6.66 times more efficient than PID and 1.66 times more efficient than fuzzy. On settling time, the GA ANFIS controller is 1.15 times more efficient than PID and 1.09 times more efficient than fuzzy. Set-point tracking and disturbance rejection are both obtained by the proposed method.
Table 6
Timing specification of PID, FLC, and GA ANFIS controllers.
| Timing specification | PID | FUZZY | GA ANFIS |
|---|---|---|---|
| Delay time (Td) | 170 sec | 25 sec | 20 sec |
| Rise time (Tr) | 250 sec | 80 sec | 50 sec |
| Peak time (Tp) | 400 sec | 100 sec | 60 sec |
| Settling time (Ts) | 1900 sec | 1800 sec | 1650 sec |
| Peak overshoot (%) | 21% | 0 | 0 |
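The improvement factors quoted in this section follow directly from Table 6; assuming the tabulated values, a quick check in Python:

```python
# Timing specifications from Table 6 (seconds); peak overshoot omitted.
specs = {
    "delay":    {"PID": 170,  "FUZZY": 25,   "GA_ANFIS": 20},
    "rise":     {"PID": 250,  "FUZZY": 80,   "GA_ANFIS": 50},
    "peak":     {"PID": 400,  "FUZZY": 100,  "GA_ANFIS": 60},
    "settling": {"PID": 1900, "FUZZY": 1800, "GA_ANFIS": 1650},
}

for name, row in specs.items():
    print(f"{name:>8}: vs PID x{row['PID'] / row['GA_ANFIS']:.2f}, "
          f"vs FUZZY x{row['FUZZY'] / row['GA_ANFIS']:.2f}")
# delay: vs PID x8.50, vs FUZZY x1.25; rise: x5.00, x1.60;
# peak: x6.67, x1.67; settling: x1.15, x1.09
```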
## 8. Conclusion
We have chosen the GA ANFIS controller because of its capability to reject sudden input disturbances and maintain the set-point temperature in the plastic extrusion system. The simulation results clearly show that the GA ANFIS controller improves on the timing specifications of the fuzzy and PID controllers. This paper demonstrates the effectiveness of an intelligent controller on a nonlinear system, particularly for temperature control in a plastic extrusion system. The comparison of the performance of the three controllers reveals that the GA ANFIS controller is superior to the others. From the results obtained, the proposed controller handles set-point changes well and is stable. With the aid of the supervisory technique, the proposed controller identifies process variations quickly and provides good controller performance for set-point changes and sudden disturbances. The GA ANFIS controller should therefore prove efficacious, especially for plastic extrusion temperature control.
---
*Source: 101437-2011-09-08.xml* | 2011 |
# Morphometry of the Orbit in East-European Population Based on Three-Dimensional CT Reconstruction
**Authors:** Stanisław Nitek; Leopold Bakoń; Mansoor Sharifi; Maciej Rysz; Lechosław P. Chmielik; Iwona Sadowska-Krawczenko
**Journal:** Advances in Anatomy
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101438
---
## Abstract
Objectives. To determine safe distances within the orbit, outlining a reliable operative area, on the basis of multislice computed tomography (MSCT) scans. Patients and Methods. MSCT scans of the orbits of 50 Caucasian patients (26 males and 24 females, mean age 56) were analysed. The native scan resolution was 0.625 mm in all cases. Measurements were done on a postprocessing workstation with 2D and 3D reconstructions. The safe distance values were calculated by subtracting three standard deviations from the arithmetical average (X = AVG − 3·STD). This method was chosen because this range covers 99.86% of every population. Results. The results of the measurements in men and women, respectively, are as follows: (1) distance from the optic canal to the supraorbital foramen, mean 46,49 mm and 43,29 mm, (2) distance from the optic canal to the maxillozygomatic suture at the inferior margin of the orbit, mean 45,24 mm and 42,8 mm, (3) distance from the optic canal to the frontozygomatic suture, 46,15 mm and 43,58 mm, (4) distance from the optic canal to the anterior lacrimal crest, 40,40 mm and 38,39 mm, (5) distance from the superior orbital fissure to the frontozygomatic suture, 34,06 mm and 32,62 mm, and (6) distance from the supraorbital foramen to the superior orbital fissure, 42,32 mm and 39,39 mm. Conclusion. The most probable safe distances calculated by the adopted formula were 23,39–30,58 mm for the superior orbital fissure and 31,9–38,0 mm for the orbital opening of the optic canal from the bony structures of the orbital entrance, depending on the orbital quadrant.
---
## Body
## 1. Introduction
While operating within the orbit, the surgeon must cope with a number of important structures located in a small area, in a nontransparent environment. Knowing the position of the soft tissue structures in reference to easily identifiable bony points is helpful and can prevent serious complications [1–6]. Surgeons should also keep in mind the anatomical variants of the osseous structures of the orbit [7]. In our opinion, the published data on orbital dimensions measured in living patients by the multislice computed tomography (MSCT) technique are fragmentary. The majority of published data also concern populations other than the one in this study [8–10], and the available studies defining safe operating distances within the orbit were made in cadavers [8–18]. Our measurement technique relying on MSCT could be used in real life, in preoperative assessment, as a standard and valuable tool for the surgeon. The purpose of this study was to determine minimal safe distances useful for clinical requirements.
## 2. Material and Methods
The study group consisted of MSCT scans of both orbits of 50 adult Caucasian patients (26 males and 24 females, mean age 56). All the patients were diagnosed in the 2nd Department of Clinical Radiology, Warsaw Medical University, Poland, during the period from February 2008 to August 2013. The material of the study was collected retrospectively; therefore ethics committee approval was not needed. The indications for the examinations were various nontraumatic pathologies not involving orbital structures. Patients with osseous wall pathologies of the orbit (e.g., traumatic or neoplastic) were excluded from the study group. CT examinations were made on a GE Lightspeed 16 Pro scanner with a slice thickness of 0.625 mm and sharp kernel reconstruction. The measurements were made on a GE Advantage Windows 4.3 workstation with three-dimensional options. The points of the measuring line were inserted in axial scans at the appropriate bony structures, and the total length between the line inserts was then read from the 3D image. This method of placing line inserts in the 2D image and noting the 3D distance between them was used because of the imperfection of purely 3D measurements; placing the inserts of the measuring line in high-resolution (0,625 mm) 2D axial images minimized the calculation error of the 3D spatial reconstruction. The following parameters were measured (Figure 1):
Figure 1
The scheme of performed measurements. CLA: anterior lacrimal crest, CO: optic canal, FOS: superior orbital fissure, FOI: inferior orbital fissure, AEF: anterior ethmoidal foramen, PEF: posterior ethmoidal foramen, Sfz: frontozygomatic suture, Szm: zygomaticomaxillary suture, IoFor: infraorbital foramen, SoFor: supraorbital foramen.

(A) On the lateral orbital wall: the distance between the intersection of the frontozygomatic suture (FZS) with the lateral orbital edge and the entrance of the optic canal (OC), the superior orbital fissure (SOF), and the inferior orbital fissure (IOF) (Figure 2).
Figure 2
The measurements on CT scans (2D and 3D) of the lateral orbital wall structures: (A) superior orbital fissure; (B) frontozygomatic suture. On the 2D images the insertion points of the measuring line were positioned, and the final measurement was taken from the 3D scan.

(B) On the superior wall: the distance between the supraorbital notch or foramen and the entrance of the optic canal, the superior orbital fissure, and the meningoorbital foramen (Hyrtl canal) (Figure 3).
Figure 3
The measurements on CT scans of structures of the superior orbital wall: (A) supraorbital foramen; (B) optic canal.

(C) On the medial orbital wall: the distance between the anterior lacrimal crest (ALC) and the entrance of the optic canal and the anterior and posterior ethmoidal foramina (AEF and PEF) (Figure 4).
Figure 4
The measurements on CT scans of the structures of the medial orbital wall: (A) anterior lacrimal crest; (B) optic canal.

(D) On the inferior orbital wall: the distances between the zygomaticomaxillary suture (ZMS) and the optic canal and the anterior edge of the inferior orbital fissure (Figure 5).
Figure 5
The measurements on CT scans: structures of the inferior orbital wall. (A) Zygomaticomaxillary suture; (B) optic canal.

An additional measurement was made on the lateral wall: the distance between the intersection of the frontozygomatic suture with the lateral orbital edge and the meningo-orbital foramen (Hyrtl canal), if this structure was present (Figure 6).
Figure 6
The measurements on CT scans: Hyrtl's canal location with respect to the frontozygomatic suture. (A) Frontozygomatic suture; (B) meningoorbital foramen (Hyrtl's canal).

Statistical analysis was performed with STATISTICA v. 8. A value of p ≤ 0.05 was accepted as statistically significant. Where applicable, a double-sided critical region was chosen. The data analysis was divided into two groups: descriptive variables (divided by sex and side of the skull) and linear (measurement) variables. The following parameters were computed for the linear variables: number, mean, standard deviation, median, minimal value, maximal value, kurtosis, and skewness coefficient. The analysis of mean values by sex was based on the parametric Student's t-test or the Cochran-Cox test, depending on the result of the F test (testing whether equal variances could be assumed in both groups). Student's t-test was applied for the comparison by skull side (two values in the same patient). The safe distance values for the studied orbits were calculated by subtracting three standard deviations from the arithmetical average (X = AVG − 3·STD). A safe distance value calculated by this method corresponds to a complication risk of 0,135%; statistically, this is one case per 740 surgical procedures.
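As a minimal illustration of the adopted rule, the following Python sketch (assuming normally distributed measurements and using SciPy for the normal tail) reproduces the quoted risk figure:

```python
from scipy.stats import norm

def safe_distance(avg_mm: float, std_mm: float) -> float:
    """Safe distance X = AVG - 3*STD, the rule adopted in this study."""
    return avg_mm - 3.0 * std_mm

# Male supraorbital-foramen-to-optic-canal values from Table 1:
# mean 46.49 mm, SD 3.20 mm -> safe distance ~36.9 mm (cf. Results).
print(safe_distance(46.49, 3.20))

# One-sided probability of a measurement falling below AVG - 3*STD
# under a normal distribution: ~0.00135, i.e. ~0.135%, roughly the
# "one case per 740 surgical procedures" quoted above.
print(norm.cdf(-3.0))
```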
## 3. Results
The variation range of the measured parameters according to gender and skull/head side is presented in Table 1.
Table 1
Variation range of the individual orbit parameters on CT (given in mm): arithmetic mean (standard deviation); min–max values.

| Orbital dimension | Male (n = 52), R + L | Female (n = 48), R + L |
|---|---|---|
| **Roof of the orbit: supraorbital foramen to** | | |
| Optic canal | 46,49 (3,20); 39,6–52,8 | 43,29 (2,33); 38,4–48,3 |
| Superior orbital fissure | 42,32 (3,92); 34,6–50,1 | 39,39 (3,21); 32,5–48,0 |
| **Lateral wall: frontozygomatic suture to** | | |
| Optic canal | 46,15 (2,70); 41,0–51,6 | 43,58 (2,05); 39,0–47,0 |
| Superior orbital fissure | 34,06 (3,32); 24,4–41,3 | 32,62 (2,99); 26,4–39,6 |
| Inferior orbital fissure | 24,62 (3,01); 15,0–30,4 | 23,46 (2,69); 17,0–29,7 |
| **Medial wall: anterior lacrimal crest to** | | |
| Optic canal | 40,40 (2,76); 34,3–48,0 | 38,39 (1,85); 34,8–42,8 |
| Anterior ethmoidal foramen | 21,28 (2,76); 16,0–28,9 | 20,55 (2,29); 15,0–25,0 |
| Posterior ethmoidal foramen | 33,06 (3,52); 24,8–40,0 | 31,8 (3,43); 20,4–36,8 |
| **Floor of the orbit: zygomaticomaxillary suture to** | | |
| Optic canal | 45,24 (2,91); 39,3–52,5 | 42,8 (2,41); 37,2–48,7 |
| Inferior orbital fissure | 23,33 (2,69); 16,0–27,7 | 20,85 (2,89); 15,7–28,0 |

The results of the measurements in the superior quadrant of the orbit were as follows:
(i) from the optic canal to the supraorbital foramen: mean 46,49 mm (±3,20) and 43,29 mm (±2,33) in men and women, respectively;
(ii) from the supraorbital foramen to the superior orbital fissure: 42,32 mm (±3,92) and 39,39 mm (±3,21) in men and women, respectively.

In the inferior quadrant, the distance from the optic canal to the maxillozygomatic suture at the inferior margin of the orbit was on average 45,24 mm (±2,91) and 42,80 mm (±2,41) in men and women, respectively.

The results in the lateral quadrant were as follows:
(i) distance from the optic canal to the frontozygomatic suture: 46,15 mm (±2,70) and 43,58 mm (±2,05) in men and women, respectively;
(ii) distance from the superior orbital fissure to the frontozygomatic suture: 34,06 mm (±3,32) and 32,62 mm (±2,99) in men and women, respectively.

In the medial quadrant, the distance from the optic canal to the anterior lacrimal crest was 40,40 mm (±2,76) and 38,39 mm (±1,85) in men and women, respectively.

The analysis of the data showed statistically significant differences between the sexes in the dimensions measured on the CT scans, with greater values in males than in females (p < 0.05). There were also some statistically significant differences between the right and left sides in the studied cases. The distance between the OC and the ALC was greater on the right side in both females (p = 0,0011) and males (p = 0,0346). The distance between the IOF and the FZS was greater on the right side in females (p = 0,0336). The distance between the IOF and the ZMS was also greater on the right side in both females (p = 0,0008) and males (p = 0,0491) (Table 2).
Table 2
The most probable safe distance values on CT (given in mm).

| Distance | M, R (N = 26) | M, L (N = 26) | F, R (N = 24) | F, L (N = 24) |
|---|---|---|---|---|
| ALC-OC | 32,34 | 31,90 | 33,60 | 32,27 |
| ALC-AEF | 12,74 | 13,15 | 14,41 | 12,95 |
| ALC-PEF | 21,69 | 23,16 | 20,92 | 21,97 |
| FZS-OC | 37,91 | 38,02 | 37,79 | 36,94 |
| FZS-SOF | 23,64 | 24,49 | 23,73 | 23,39 |
| FZS-IOF | 15,85 | 15,22 | 16,60 | 14,35 |
| Zygomaticomaxillary suture-OC | 36,83 | 36,10 | 34,65 | 36,46 |
| Zygomaticomaxillary suture-IOF | 16,15 | 14,76 | 13,96 | 11,74 |
| Supraorbital foramen/notch-OC | 36,32 | 37,27 | 36,24 | 36,20 |
| Supraorbital foramen/notch-SOF | 30,49 | 30,58 | 30,30 | 29,22 |

There were no other statistically significant differences in the measured values between the sides in the study group. The Hyrtl canal was noted in 5 of all 100 orbits (5%). The distance between the meningo-orbital foramen and the FZS was 24,7–32,3 mm, and between the meningo-orbital foramen and the supraorbital foramen it was 25,1–36,4 mm (Table 4). The safe distances were calculated by inserting the average values and standard deviations into the formula X = AVG − 3·STD. The results of these calculations are presented in Tables 2 and 3. The superior orbital fissure and the orbital opening of the optic canal are located no shallower than 23,6 mm and 38,0 mm, respectively, from the bony structures of the orbital entrance, and these values constitute the safe distances for operating purposes.
Table 3
The most probable safe distances of the orbit according to different authors (AVG − 3·STD, mm).
| Study | Rontal | McQueen | Danko | Hwang | Karakas | Nitek (M/F) | Ji (M/F) | Fetouh (M/F) | Nitek, current paper (M/F) |
|---|---|---|---|---|---|---|---|---|---|
| Specimens | Dry skulls | Cadaveric | Cadaveric | Dry skulls | Dry skulls | Dry skulls | CT | Dry skulls | CT |
| Number | 48 | 54∗∗∗∗ | 16 | 82 | 62 | 189 | 64 | 104 | 100 |
| Date | 1979 | 1995 | 1998 | 1999 | 2002 | 2009 | 2010 | 2014 | 2015 |
| Ethnicity | Indian | US | US | Korean | Male Caucasian | Caucasian medieval | Chinese | Egyptian | Caucasian |
| Medial quadrant: anterior lacrimal crest to optic canal | 30,0 | 29,0 | 39,9 | 25,7 | 32,4 | 33,1 / 31,9 | 38,4 / 37,2 | 41,4 / 36,4 | 32,1 / 32,8 |
| Inferior quadrant: infraorbital foramen to optic canal∗ | 37,0∗∗ | 39,0 | 30,7 | NA | 40,7 | 39,8 / 38,9 | 39,9 / 38,5 | 46,8 / 45,2 | 36,5 / 35,6 |
| Superior quadrant: supraorbital foramen to optic canal | 35,0 | 38,0 | 39,3 | NA | 35,7 | 40,0 / 38,3 | 44,3 / 44,0 | 43,1 / 40,2 | 36,9 / 36,3 |
| Supraorbital foramen to superior orbital fissure | 30,0 | NA | NA | 32,5 | 34,9 | 33,7 / 31,7 | NA / NA | 40,1 / 39,8 | 30,5 / 29,8 |
| Lateral quadrant: frontozygomatic suture to optic canal | NA∗∗∗ | 36,0 | 29,3 | NA | 37,4 | 40,0 / 38,4 | 40,9 / 40,2 | 37,5 / 34,2 | 38,0 / 37,4 |
| Frontozygomatic suture to superior orbital fissure | 25,0 | NA | NA | 26,2 | 26,9 | 28,0 / 25,4 | NA / NA | 33,5 / 34,2 | 24,1 / 23,7 |

∗In the current paper we measured the distance between the zygomaticomaxillary suture and the optic canal instead of from the infraorbital foramen.
∗∗Rontal used the distance between the infraorbital foramen and the posterior wall of the maxillary sinus for the inferior quadrant's safe distance. When the mean distance between the posterior wall of the maxillary sinus and the optic canal (12 mm) is added to this dimension, results comparable to those of the other authors are obtained [15].
∗∗∗NA: dimensions were not measured.
∗∗∗∗McQueen studied 54 single-sided orbits; safe distances to the optic nerve were identified by subtracting 5 mm from the shortest measured specimen.
Table 4
Distances of bony structures not permanently present in the orbit [mm].
| Measured distance | R | L | R + L |
|---|---|---|---|
| Supraorbital foramen-Hyrtl canal | N = 3; mean 30,5; 25,1–36,4 | N = 2; mean 30,55; 25,8–35,3 | N = 5; mean 30,52; 25,1–36,4 |
| Frontozygomatic suture-Hyrtl canal | N = 3; mean 27,33; 24,7–32,3 | N = 2; mean 26,1; 26,0–26,2 | N = 5; mean 26,84; 24,7–32,3 |
| Anterior lacrimal crest-accessory posterior ethmoidal foramen | N = 4; mean 34,85; 33,9–36,5 | N = 6; mean 33,08; 28,3–37,6 | N = 10; mean 33,79; 28,3–37,6 |

The safe distances for the superior quadrant of the orbit were as follows: from the OC to the supraorbital foramen, 36,9 mm in men and 36,3 mm in women; from the supraorbital foramen to the SOF, 30,5 mm in men and 29,8 mm in women. The safe distance for the inferior quadrant, from the OC to the ZMS at the inferior margin of the orbit, was 34,65–36,83 mm depending on sex and side. The safe distances for the lateral quadrant were as follows: from the OC to the FZS, 38,0 mm in men and 37,43 mm in women; from the SOF to the FZS, 24,1 mm in men and 23,7 mm in women. The safe distance for the medial quadrant, from the OC to the ALC, was 31,9–33,6 mm depending on sex and side.
## 4. Discussion
Modern technologies used in orbital surgery, such as surgical navigation systems, make surgical procedures safer and increase the accuracy of reconstruction [19]. Besides such convenient surgical support, the operator needs to be familiar with the minimal distances from the structures of the orbit to the optic canal. There are only a few papers in the available literature that can be related to the results obtained in this study. Simonton et al. [20] determined safe distances for lateral orbitotomy, but their measurements were closely related to the cranial cavity. In our study we have described distinctive topographical points that are easy to find in the plane of the orbital entrance. The most probable safe minimal distances to the optic canal were established from these points (Table 3) (Figure 1).
### 4.1. Medial Wall
These distances are especially important in surgical procedures such as ethmoid vessel ligation, exploration of medial wall fractures, anterior skull base reconstruction, tumour resection, ethmoid sinus exenteration, orbital decompression, transethmoidal sphenoidotomies, closure of cerebrospinal fluid leakage, and transethmoidal and sphenoidal hypophysectomy [17, 18, 21–25]. The reference point in the medial quadrant is the anterior lacrimal crest, which is located, according to various authors, 29–53 mm from the entrance of the optic canal [10, 12, 14, 15, 18]. In our material this distance was 40,4 mm for men and 38,39 mm for women (Table 1). There were statistically significant differences in the OC-ALC distance between the right and left sides both in women (p = 0,0011) and in men (p = 0,0346), with greater values measured on the right side. This observation confirms the data published by Kadanoff and Jordanov [26], in which wider orbits were observed on the right side in 63% of cases and on the left side in 15,4%. The mean values of the OC-ALC distance in other populations were greater than in our material (Table 5): in Indians 42 mm [18], in Americans 43,29 mm [15], in Koreans 40,5 mm [9], in male Turks 41,7 mm [14], in British Caucasians 43,77 mm [11], and in Thais 42,2 mm [13], and significantly greater in Egyptians, men 47,25 mm and women 46,21 mm [8] (p < 0.05), and in Chinese, men 46,43 mm and women 44,41 mm (p < 0.05) [10] (Table 5).
Table 5
Comparison of the orbital distances between different nations.
| Study | Rontal | McQueen | Danko | Hwang | Karakas | Huanmanop | Nitek (M/F) | Ji (M/F) | Abed | Abed | Munguti (M/F) | Fetouh (M/F) | Nitek, present study (M/F) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Specimens | Dry skulls | Cadavers | Cadavers | Dry skulls | Dry skulls | Dry skulls | Dry skulls | CT | Cadavers | Cadavers | ? | Dry skulls | CT |
| Number | 48 | 54∗ | 16 | 82 | 62 | 100 | 189 | 64 | 47 | 47 | ? | 104 | 100 |
| Date | 1979 | 1995 | 1998 | 1999 | 2002 | 2007 | 2009 | 2010 | 2011 | 2012 | 2012 | 2014 | 2015 |
| Ethnicity | Indian | US | US | Korean | Male Turkish | Thai | Caucasian medieval | Chinese | Caucasian | British Caucasian | Kenyan | Egyptian | Caucasian |
| Medial quadrant: ALC-OC | 42 | 43,29 | 44,1 | 40,5 | 41,7 | 42,2 | 42,30 / 39,97 | 46,43 / 44,41 | NA | 43,77 | NA / NA | 47,25 / 46,21 | 40,40 / 38,39 |
| ALC-AEF | 24 | 21,96 | NA | 21 | 23,9 | 23,5 | 22,80 / 21,59 | NA / NA | NA | 25,61 | NA / NA | 26,76 / 26,17 | 21,28 / 20,55 |
| ALC-PEF | 36 | 33,36 | NA | 31,7 | 35,6 | 36 | 34,42 / 32,28 | NA / NA | NA | 36,09 | NA / NA | 35,39 / 35,26 | 33,06 / 31,80 |
| Inferior quadrant: infraorbital foramen-OC | 48 | 49,73 | 39,4 | 45,5 | NA | 46,2 | 49,33 / 46,59 | 47,93 / 46,18 | 43,23 | NA | 55,18 / 53,63 | 51,76 / 50,53 | 45,24 / 42,79 |
| Infraorbital foramen-IOF | 24 | 37,43 | NA | 21,6 | NA | 21,7 | 24,62 / 22,55 | NA / NA | 25,4 | NA | 23,56 / 22,28 | 24,62 / 23,6 | 23,33 / 20,85 |
| Superior quadrant: supraorbital foramen-OC | 40 | 44,34 | 44,5 | 44,9 | 45,3 | 44,7 | 48,57 / 45,60 | 52,93 / 50,89 | NA | NA | 53,25 / 51,93 | 49,64 / 48,16 | 46,49 / 43,29 |
| Supraorbital foramen-SOF | 35 | 36,59 | NA | 40 | NA | 40 | 44,27 / 41,16 | NA / NA | NA | NA | NA / NA | 46,23 / 45,26 | 42,32 / 39,39 |
| Lateral quadrant: FZS-OC | 43 | 47,1 | 38,3 | 47,4 | 44,9 | 46,9 | 48,30 / 45,67 | 48,38 / 46,91 | NA | NA | NA / NA | 44,25 / 43,58 | 46,15 / 43,58 |
| FZS-SOF | 35 | 36,59 | NA | 34,3 | NA | 34,5 | 36,88 / 33,82 | NA / NA | NA | NA | NA / NA | 39,94 / 39,12 | 34,06 / 32,62 |
| FZS-IOF | 25 | 40,92 | NA | 24,8 | NA | 24 | 29,42 / 27,83 | NA / NA | NA | NA | NA / NA | 29,08 / 27,32 | 24,62 / 23,46 |
∗McQueen et al. [15] studied the orbit on one side only.

The anterior and posterior ethmoidal foramina are important structures of the medial orbital wall. They have a relatively variable position with respect to the ALC and the frontoethmoidal suture and also vary in number, with accessory foramina [11, 17, 18, 24, 27]. Single accessory ethmoidal foramina were detected in 38% and double foramina in 2,4% by Takahashi et al. [24]. The accessory ethmoidal foramina were defined in that paper as located between the anterior and posterior ethmoidal foramina; when only one was present, it was named the middle ethmoidal foramen, and when two were present, the one located closer to the posterior ethmoidal foramen was named the deep middle ethmoidal foramen [24]. Piagkou classified the ethmoidal foramina pattern in the medial wall as types I–IV according to their number:
(1) type I: a single ethmoidal foramen (usually an isolated AEF), observed in 1,6% [17] and in 0,8% [28];
(2) type II: a double EF (the most common, a single AEF and a single PEF), observed in 61% [17] and in 73,7% [28];
(3) type III: a triple EF, observed in 28,5% [17] and in 24,4% [28];
(4) type IV: multiple EF, observed in 8,8% [17]; a quadruple EF was observed in 1,1% [28].

In our material an accessory posterior ethmoidal foramen was observed in 10% of the cases. According to other authors the incidence of more than one PEF is above 25% [18]. In other papers [28] EFs were identified as single in 0,8%, double in 73,7%, triple in 24,4%, and quadruple in 1,1% of the specimens. The mean distances between the ALC and AEF, the ALC and PEF, and the ALC and MEF were 27.7 mm, 10.6 mm, and 12.95 mm, respectively; the distances for ALC-AEF, AEF-PEF, and PEF-OC were 27.7 ± 2.8 mm, 10.6 ± 3.3 mm, and 5.4 ± 1 mm. The ethmoidal foramina are located in the frontoethmoidal suture line in 68% of cases and above this line in about 20% [24] or, according to other authors, in 32% of cases 1–4 mm above the suture [27]. The key role in surgery of the medial orbital wall belongs to the distance between the PEF and the OC, which according to Rontal is no less than 3 mm [18]; other authors observed this distance to be between 4,3 mm [17] and 7,25 mm [29], while still others gave the arithmetic average of this value (Table 5) [11, 13, 14, 24]. Abed distinguished a first PEF (mean 11,63 mm from the OC) and a last PEF (mean 7,25 mm from the OC) [11]. In our material the mean distance from the PEF to the OC was 7 mm (first PEF), as in Harrison: 7 mm for the first PEF and 5,65 mm for the last PEF [30]. Harrison observed multiple EFs in 30% of cases, and in those cases this distance may be as short as 2 mm [30]. This situation may increase the risk of optic nerve injury during coagulation of the posterior ethmoidal artery.

The most probable safe distances for the medial wall of the orbit according to our measurements are 31,9–33,6 mm for ALC-OC, 12,7–14,4 mm for ALC-AEF, and 20,9–23,16 mm for ALC-PEF, depending on sex and side. The results obtained by other authors are similar, in the range 29–41,4 mm for ALC-OC [8–10, 12, 14, 15].
### 4.2. Superior Wall
The most frequent surgical procedures of the superior orbital wall are frontal ethmoidectomy, frontal sinus trephine, frontal sinus obliteration, orbital decompression, exploration for fractures, excision of lacrimal gland or other tumours, and orbital exenteration [18, 22]. The incision must be made just below the eyebrow if the supraorbital nerve and the elevator muscle of the upper lid are to remain intact [18, 30]. The supraorbital notch or foramen is usually found in the parasagittal line connecting the mental foramen with the infraorbital foramen [18], about 5 mm from the orbital margin [30]. Mean values of the distance from the supraorbital foramen/notch to the optic canal are usually between 40 mm and 52,93 mm according to other authors [8, 10, 12–15, 18]. In our material this distance was 46,49 ± 3,2 mm for men and 43,2 ± 2,33 mm for women. The superior orbital fissure is usually located 35–52 mm from the supraorbital foramen/notch [8, 10, 12–15, 18]. In our material this distance was 42,3 ± 3,92 mm for men and 39,39 ± 3,21 mm for women. The mean supraorbital foramen to optic canal values in our material were comparable to other populations [12–15, 18] but were statistically significantly lower than in the Chinese population (52,9 mm and 50,89 mm for men and women, resp.) [10], the Kenyan population (53,25 mm and 51,93 mm for men and women, resp.) [16], and the Egyptian population (49,64 mm and 48,16 mm for men and women, resp.) [8]. The most probable safe distance for the orbital roof concluded from our study was 30 mm for the superior orbital fissure and 35 mm for the optic canal. Rontal provided a similar safe distance of 30 mm for the orbital roof [18]. The results of other authors are similar, between 35 and 43,7 mm for supraorbital foramen-OC and 31,9–40,1 mm for supraorbital foramen-SOF [8–10, 12, 14, 15].
### 4.3. Lateral Wall
Knowledge of the lateral orbital wall anatomy is crucial for surgical procedures such as exploration of orbital fractures, lateral orbitotomy during tumour excision, orbital decompression, and excision of the lacrimal gland [18, 22, 25]. The mean distance between the frontozygomatic suture and the optic canal has been measured at between 40 and 53 mm [10, 12, 14, 15, 18]. In our material this distance was 46,15 ± 2,70 mm for men and 43,58 ± 2,05 mm for women. The mean distance between the FZS and the SOF has been found to be between 34,5 mm and 39,94 mm [8, 10, 12, 14–16, 18]. In our material this distance was 34,06 ± 3,32 mm for men and 32,62 ± 2,99 mm for women. The mean FZS-OC and FZS-SOF distances for other populations are similar to our results (Table 5) [12, 14, 15, 18], except for the Chinese [10] and Egyptian [8] populations. The safe distance for FZS-OC calculated by our model is 38,0 mm in men and 37,43 mm in women; the most probable safe distance from the SOF to the FZS is 24,1 mm in men and 23,6 mm in women. The safe distances calculated by other authors are similar to our results, in the range 29,3–40,5 mm for FZS-OC and 25–34,2 mm for FZS-SOF [8–10, 12, 14, 15].
### 4.4. Inferior Wall
Knowledge of the inferior wall anatomy is important for several procedures, such as maxillectomy, exploration of fractures, or tumour resection [18, 22]. The posterior wall of the maxilla lies as close as 26 mm from the infraorbital foramen [18], and the optic canal usually lies about 12 mm beyond this point [18]. In our opinion, measuring the distances between the zygomaticomaxillary suture, the optic canal, and the anterior edge of the inferior orbital fissure is better than measuring from the infraorbital foramen, because the zygomaticomaxillary suture is easier for the surgeon to palpate. The mean distance of the optic canal to the infraorbital foramen is reported as between 39,4 and 55,18 mm [8–10, 12, 13, 16, 18, 31]. In our material this distance was 45,24 mm for men and 42,79 mm for women; it was statistically significantly greater in the Kenyan population, 55,18 mm for men and 53,63 mm for women [16]. The mean distance of the inferior orbital fissure from the infraorbital foramen is reported in the range 21,7–37,43 mm [5, 8, 9, 13, 15, 16, 18]. In our material this distance was 23,3 mm for men and 20,8 mm for women. There were clear differences in the distances concerning the SOF on the right and left sides due to the different shapes of the SOF [8]. The variation range determined in our study covers the majority of the average values provided in the literature. The presence of a meningo-orbital foramen (Hyrtl canal) in 5% of cases is worth mentioning; according to other authors this variant is found in 6–55% of the population [7, 15, 18, 32, 33]. It is a potential source of hemorrhage during deep lateral orbital dissection, because it functions as an anastomosis between the lacrimal artery and the middle meningeal artery [7, 18, 31, 32]. The distance between the meningo-orbital foramen and the FZS has been found in the range of 25–39 mm [15, 18, 31, 32]; in our material it was 24,7–32,3 mm (Table 4). The distance between the meningo-orbital foramen and the supraorbital foramen is reported in the range of 12–37 mm [15, 18, 32, 33]; in our material it was 25,1–36,4 mm (Table 4). The most probable safe distances calculated in this paper are also comparable to the data provided by other authors [9, 14, 15, 18]. Only the data on Egyptians published by Fetouh and Mandour [8] and the data of Danko and Haug [12] show a significant difference (Table 3). This can be explained by the relatively small study groups of these papers, consisting of only 8 cases (16 orbits); in addition, Danko and Haug [12] performed their measurements on specimens with preserved soft tissues, unlike other authors, who used macerated skulls. McQueen et al. [15] calculated the safe distances by subtracting 5 mm from the lowest obtained value, whereas in our study the formula X = AVG − 3·STD was applied. Despite the different criteria used to determine the safe distances and the genetic differences of the examined material, the results are comparable. We speculate that population differences do not significantly influence the safe distance values in the orbit, because these values provide a large safety margin and vary between 23,39–30,58 mm for the superior orbital fissure and 31,90–38,02 mm for the orbital opening of the optic canal from the bony structures of the orbital entrance, depending on the orbital quadrant (Table 2).
---
*Source: 101438-2015-10-29.xml* | 101438-2015-10-29_101438-2015-10-29.md | 41,827 | Morphometry of the Orbit in East-European Population Based on Three-Dimensional CT Reconstruction | Stanisław Nitek; Leopold Bakoń; Mansoor Sharifi; Maciej Rysz; Lechosław P. Chmielik; Iwona Sadowska-Krawczenko | Advances in Anatomy
(2015) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2015/101438 | 101438-2015-10-29.xml | ---
## Abstract
Objectives. To determine safe distances within the orbit outlining reliable operative area on the basis of multislice computed tomography (MSCT) scans. Patients and Methods. MSCT of orbits of 50 Caucasian patients (26 males and 24 females, mean age 56) were analysed. Native scans resolutions were in all cases 0.625 mm. Measurements were done in postprocessing workstation with 2D and 3D reconstructions. The safe distances values were calculated by subtracting three standard deviations from the arithmetical average (
X
=
AVG
-
3
STD
). This method was chosen because this range covers 99.86% of every population. Results. The results of the measurements in men and women, respectively, are as follows (1) distance from optic canal to supraorbital foramen, mean 46,49 mm and 43,29 mm, (2) distance from the optic canal to maxillozygomatic suture at the inferior margin of the orbit mean 45,24 mm and 42,8 mm, (3) distance from the optic canal to frontozygomatic suture 46,15 mm and 43,58 mm, (4) distance from the optic canal to anterior lacrimal crest 40,40 mm and 38,39 mm, (5) distance from superior orbital fissure to the frontozygomatic suture 34,06 mm and 32,62 mm, and (6) distance from supraorbital foramen to the superior orbital fissure 42,32 mm and 39,39 mm. Conclusion. The most probable safe distances calculated by adopted formula were for the superior orbital fissure 23,39–30,58 mm and for the orbital opening of the optic canal 31,9–38,0 mm from the bony structures of the orbital entrance depending on the orbital quadrant.
---
## Body
## 1. Introduction
While operating within the orbit surgeon must cope with number of important structures located in a small area, in a nontransparent environment. Position of the soft tissue structures in reference to the easily identifiable bony points is helpful and could prevent serious complications [1–6]. The surgeons should also remember the anatomical variants of the osseous structures of orbit [7]. In our opinion, the published data of the orbital dimensions measured in living patients by multislice computer tomography (MSCT) technique is fragmentary. The majority of published data also concerns other populations than the population of this study [8–10]. The available studies defining safe operating distances within orbit were made in cadavers [8–18]. Our technique of measurements relaying on MSCT could be used in real life, in preoperative assessment as standard and valuable tool for the surgeon.The purpose of this study was to determine the minimal safe distances useful for clinical requirements.
## 2. Material and Methods
The study group was constituted by MSCT scans of both orbits of 50 adult Caucasian patients (26 males and 24 females, mean age 56). All the patients were diagnosed in 2nd Department of Clinical Radiology, Warsaw Medical University, Poland, during period from February 2008 to August 2013. The material of the study was collected retrospectively; therefore permission of ethics committee was not needed. The indications for all the examinations were various, nontraumatic pathologies not involving orbital structures. Patients with osseous wall pathologies of the orbit (i.e., traumatic or neoplastic) were excluded from the study group. CT examinations were made in GE Lightspeed 16 Pro scanner with slice thickness of 0.625 mm and sharp kernel reconstruction. The measurements were made in GE Advantage Windows 4.3 workstation with three-dimensional options. The points of measuring line were inserted in axial scans in appropriate bony structures and then on 3D image the total length between line inserts was noted. This method of placing line inserts in 2D image and noting the 3D distance between them was used because of imperfection of solely 3D measurements. Placing the inserts of the measuring line in high resolution 0,625 mm 2D axial images allowed minimizing calculation error of the 3D spatial reconstruction.The following parameters were measured (Figure1):Figure 1
The following parameters were measured (Figure 1):

Figure 1: The scheme of the performed measurements. CLA: lacrimal anterior crest, CO: optic canal, FOS: fissure orbital superior, FOI: fissure orbital inferior, AEF: ethmoidal foramen anterior, PEF: ethmoidal foramen posterior, Sfz: frontozygomatic suture, Szm: zygomaticomaxillar suture, IoFor: infraorbital foramen, and SoFor: supraorbital foramen.

(A) On the lateral orbital wall: the distance between the intersection of the frontozygomatic suture (FZS) with the lateral orbital edge and the entrance of the optic canal (OC), the superior orbital fissure (SOF), and the inferior orbital fissure (IOF) (Figure 2).

Figure 2: The measurements on CT scans (2D and 3D) of the lateral orbital wall structures: (A) superior orbital fissure; (B) frontozygomatic suture. On the 2D images the endpoints of the measuring line were positioned, and the final measurement was taken from the 3D scan.

(B) On the superior wall: the distance between the supraorbital notch or foramen and the entrance of the optic canal, the superior orbital fissure, and the meningoorbital foramen (Hyrtl canal) (Figure 3).

Figure 3: The measurements on CT scans of structures of the superior orbital wall: (A) supraorbital foramen; (B) optic canal.

(C) On the medial orbital wall: the distance between the anterior lacrimal crest (ALC) and the entrance of the optic canal, and the anterior and posterior ethmoidal foramina (AEF and PEF) (Figure 4).

Figure 4: The measurements on CT scans of the structures of the medial orbital wall: (A) anterior lacrimal crest; (B) optic canal.

(D) On the inferior orbital wall: the distances between the zygomaticomaxillary suture (ZMS) and the optic canal and the anterior edge of the inferior orbital fissure (Figure 5).

Figure 5: The measurements on CT scans: structures of the inferior orbital wall: (A) zygomaticomaxillary suture; (B) optic canal.

An additional measurement was made on the lateral wall: the distance between the intersection of the frontozygomatic suture with the lateral orbital edge and the meningo-orbital foramen (Hyrtl canal), where this structure was present (Figure 6).

Figure 6: The measurements on CT scans: location of Hyrtl's canal relative to the frontozygomatic suture: (A) frontozygomatic suture; (B) meningoorbital foramen (Hyrtl's canal).

Statistical analysis was performed with STATISTICA v. 8. A value of p ≤ 0.05 was accepted as statistically significant; where applicable, a two-sided critical region was used. The data analysis comprised two parts: descriptive statistics (by sex and by side of the skull) and the linear measurements. For the linear measurements the following parameters were determined: number, mean, standard deviation, median, minimal value, maximal value, kurtosis, and skewness coefficient. The comparison of mean values between the sexes was based on the parametric Student's t-test or the Cochran-Cox test, depending on the result of the F test for equality of variances in the two groups. Student's t-test was applied for the comparison between the skull sides (two values from the same patient). The safe distance values for the studied orbits were calculated by subtracting three standard deviations from the arithmetic mean (X = AVG − 3·STD). A safe distance calculated by this method corresponds to a complication risk of 0,135%, that is, statistically one case per 740 surgical procedures.
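A minimal numerical sketch of this rule (our illustration, not the study's code; the helper name is ours) applies X = AVG − 3·STD to the male supraorbital foramen-optic canal values from Table 1 and recovers the quoted 0,135% risk from the one-sided tail of a normal distribution, which is the implicit normality assumption behind the formula:

```python
from statistics import NormalDist

def safe_distance(avg_mm, std_mm, k=3.0):
    """Safe distance rule used in this study: X = AVG - k*STD (here k = 3)."""
    return avg_mm - k * std_mm

# Male supraorbital foramen -> optic canal (Table 1): mean 46.49 mm, SD 3.20 mm.
print(f"safe distance: {safe_distance(46.49, 3.20):.2f} mm")  # 36.89 mm

# Assuming normally distributed measurements, the chance that a true distance
# falls below mean - 3*SD is the one-sided 3-sigma tail:
risk = NormalDist().cdf(-3.0)
print(f"one-sided 3-sigma tail: {risk:.5f}")  # 0.00135 ~ 0.135%, i.e. ~1 case per 740 procedures
```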
## 3. Results
The variation range of the measured parameters according to gender and skull side is presented in Table 1.

Table 1: Variation range of the individual orbit parameters on CT (given in mm): arithmetic mean (standard deviation); min–max values.

| Orbital wall (reference point) | Distance to | Male (n = 52), R + L | Female (n = 48), R + L |
| --- | --- | --- | --- |
| Roof of the orbit (supraorbital foramen) | Optic canal | 46,49 (3,20); 39,6–52,8 | 43,29 (2,33); 38,4–48,3 |
| | Superior orbital fissure | 42,32 (3,92); 34,6–50,1 | 39,39 (3,21); 32,5–48,0 |
| Lateral wall (frontozygomatic suture) | Optic canal | 46,15 (2,70); 41,0–51,6 | 43,58 (2,05); 39,0–47,0 |
| | Superior orbital fissure | 34,06 (3,32); 24,4–41,3 | 32,62 (2,99); 26,4–39,6 |
| | Inferior orbital fissure | 24,62 (3,01); 15,0–30,4 | 23,46 (2,69); 17,0–29,7 |
| Medial wall (anterior lacrimal crest) | Optic canal | 40,40 (2,76); 34,3–48,0 | 38,39 (1,85); 34,8–42,8 |
| | Anterior ethmoidal foramen | 21,28 (2,76); 16,0–28,9 | 20,55 (2,29); 15,0–25,0 |
| | Posterior ethmoidal foramen | 33,06 (3,52); 24,8–40,0 | 31,8 (3,43); 20,4–36,8 |
| Floor of orbit (zygomaticomaxillary suture) | Optic canal | 45,24 (2,91); 39,3–52,5 | 42,8 (2,41); 37,2–48,7 |
| | Inferior orbital fissure | 23,33 (2,69); 16,0–27,7 | 20,85 (2,89); 15,7–28,0 |

The results of the measurements in the superior quadrant of the orbit were as follows:
(i) from the optic canal to the supraorbital foramen: mean 46,49 mm (±3,20) and 43,29 mm (±2,33) in men and women, respectively;
(ii) from the supraorbital foramen to the superior orbital fissure: 42,32 mm (±3,92) and 39,39 mm (±3,21) in men and women, respectively.

In the inferior quadrant of the orbit, the distance from the optic canal to the zygomaticomaxillary suture at the inferior margin of the orbit was on average 45,24 mm (±2,91) and 42,80 mm (±2,41) in men and women, respectively.

The results of the measurements in the lateral quadrant were as follows:
(i) the distance from the optic canal to the frontozygomatic suture: 46,15 mm (±2,70) and 43,58 mm (±2,05) in men and women, respectively;
(ii) the distance from the superior orbital fissure to the frontozygomatic suture: 34,06 mm (±3,32) and 32,62 mm (±2,99) in men and women, respectively.

In the medial quadrant of the orbit, the distance from the optic canal to the anterior lacrimal crest was 40,40 mm (±2,76) and 38,39 mm (±1,85) in men and women, respectively.

The analysis of the tabulated data showed statistically significant differences between the dimensions measured on CT scans depending on gender, with greater values in males than in females (p < 0.05). There were also some statistically significant differences between the right and left sides in the studied cases. The distance between the OC and the ALC was greater on the right side in both females (p = 0,0011) and males (p = 0,0346). The distance between the IOF and the FZS was greater on the right side in females (p = 0,0336). The distance between the IOF and the ZMS was likewise greater on the right side, both in females (p = 0,0008) and in males (p = 0,0491) (Table 2).
Table 2: The most probable safe distance values on CT (given in mm).

| Distance from | To | M, R (N = 26) | M, L (N = 26) | F, R (N = 24) | F, L (N = 24) |
| --- | --- | --- | --- | --- | --- |
| ALC | OC | 32,34 | 31,90 | 33,60 | 32,27 |
| | AEF | 12,74 | 13,15 | 14,41 | 12,95 |
| | PEF | 21,69 | 23,16 | 20,92 | 21,97 |
| FZS | OC | 37,91 | 38,02 | 37,79 | 36,94 |
| | SOF | 23,64 | 24,49 | 23,73 | 23,39 |
| | IOF | 15,85 | 15,22 | 16,60 | 14,35 |
| Zygomaticomaxillary suture | OC | 36,83 | 36,10 | 34,65 | 36,46 |
| | IOF | 16,15 | 14,76 | 13,96 | 11,74 |
| Supraorbital foramen/notch | OC | 36,32 | 37,27 | 36,24 | 36,20 |
| | SOF | 30,49 | 30,58 | 30,30 | 29,22 |

There were no other statistically significant differences in the measured values between the sides in the study group. The Hyrtl canal was noted in 5 of the 100 orbits (5%). The distance between the meningo-orbital foramen and the FZS was 24,7–32,3 mm, and between the meningo-orbital foramen and the supraorbital foramen it was 25,1–36,4 mm (Table 4). The safe distances were calculated by inserting the average values and standard deviations into the formula X = AVG − 3·STD; the results of these calculations are presented in Tables 2 and 3. The superior orbital fissure and the orbital opening of the optic canal are located no shallower than 23,6 mm and 38,0 mm, respectively, from the bony structures of the orbital entrance, and these values constitute the safe distances for operating purposes.
Table 3: The most probable safe distances of the orbit according to different authors (AVG − 3·STD, mm).

| | Rontal | McQueen | Danko | Hwang | Karakas | Nitek 2009 (M / F) | Ji (M / F) | Fetouh (M / F) | Nitek, current paper (M / F) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Specimens | Dry skulls | Cadaveric | Cadaveric | Dry skulls | Dry skulls | Dry skulls | CT | Dry skulls | CT |
| Number | 48 | 54**** | 16 | 82 | 62 | 189 | 64 | 104 | 100 |
| Date | 1979 | 1995 | 1998 | 1999 | 2002 | 2009 | 2010 | 2014 | 2015 |
| Ethnicity | Indian | US | US | Korean | Male Caucasian | Caucasian medieval | Chinese | Egyptian | Caucasian |
| Medial quadrant: anterior lacrimal crest to optic canal | 30,0 | 29,0 | 39,9 | 25,7 | 32,4 | 33,1 / 31,9 | 38,4 / 37,2 | 41,4 / 36,4 | 32,1 / 32,8 |
| Inferior quadrant: infraorbital foramen to optic canal* | 37,0** | 39,0 | 30,7 | NA | 40,7 | 39,8 / 38,9 | 39,9 / 38,5 | 46,8 / 45,2 | 36,5 / 35,6 |
| Superior quadrant: supraorbital foramen to optic canal | 35,0 | 38,0 | 39,3 | NA | 35,7 | 40,0 / 38,3 | 44,3 / 44,0 | 43,1 / 40,2 | 36,9 / 36,3 |
| Superior quadrant: supraorbital foramen to superior orbital fissure | 30,0 | NA | NA | 32,5 | 34,9 | 33,7 / 31,7 | NA / NA | 40,1 / 39,8 | 30,5 / 29,8 |
| Lateral quadrant: frontozygomatic suture to optic canal | NA*** | 36,0 | 29,3 | NA | 37,4 | 40,0 / 38,4 | 40,9 / 40,2 | 37,5 / 34,2 | 38,0 / 37,4 |
| Lateral quadrant: frontozygomatic suture to superior orbital fissure | 25,0 | NA | NA | 26,2 | 26,9 | 28,0 / 25,4 | NA / NA | 33,5 / 34,2 | 24,1 / 23,7 |

*In the current paper we measured the distance between the zygomaticomaxillar suture and the optic canal instead of the infraorbital foramen.
**Rontal used the distance between the infraorbital foramen and the posterior wall of the maxillary sinus for the inferior quadrant's safe distance. When the mean distance between the posterior wall of the maxillary sinus and the optic canal (12 mm) is added to this value, the result is comparable to those of the other authors [15].
***NA: dimensions were not measured.
****McQueen studied 54 single-sided orbits; safe distances to the optic nerve were identified by subtracting 5 mm from the shortest measured specimen.
Table 4: Distances of bony structures not permanently present in the orbit [mm].

| Measured distance | R | L | R + L |
| --- | --- | --- | --- |
| Supraorbital foramen-Hyrtl canal | N = 3; mean 30,5; 25,1–36,4 | N = 2; mean 30,55; 25,8–35,3 | N = 5; mean 30,52; 25,1–36,4 |
| Frontozygomatic suture-Hyrtl canal | N = 3; mean 27,33; 24,7–32,3 | N = 2; mean 26,1; 26,0–26,2 | N = 5; mean 26,84; 24,7–32,3 |
| Anterior lacrimal crest-accessory posterior ethmoidal foramen | N = 4; mean 34,85; 33,9–36,5 | N = 6; mean 33,08; 28,3–37,6 | N = 10; mean 33,79; 28,3–37,6 |

The safe distances for the superior quadrant of the orbit were as follows: from the OC to the supraorbital foramen, 36,9 mm in men and 36,3 mm in women; from the supraorbital foramen to the SOF, 30,5 mm in men and 29,8 mm in women. The safe distance for the inferior quadrant of the orbit, from the OC to the ZMS at the inferior margin of the orbit, was 34,65–36,83 mm depending on sex and side. The safe distances for the lateral quadrant were as follows: from the OC to the FZS, 38,0 mm in men and 37,43 mm in women, and from the SOF to the FZS, 24,1 mm in men and 23,7 mm in women. The safe distance for the medial quadrant of the orbit, from the OC to the ALC, was 31,9–33,6 mm depending on sex and side.
## 4. Discussion
Modern technologies used in orbit surgery, such as surgical navigation systems, make surgical procedures safer and increase the accuracy of reconstruction [19]. Besides this convenient support, the operator needs to be familiar with the minimal distances from the orbital structures to the optic canal. There are only a few papers in the available literature that can be compared with the results obtained in this study. Simonton et al. [20] determined safe distances for lateral orbitotomy, but their measurements were closely related to the cranial cavity. In our study, we have described distinctive topographical points which are easy to find in the orbital entrance plane. The most probable safe minimal distances to the optic canal were established from these points (Table 3, Figure 1).
### 4.1. Medial Wall
These distances are especially important in surgical procedures such as ethmoid vessel ligation, exploration of medial wall fractures, anterior skull base reconstruction, tumour resection, ethmoid sinus exenteration, orbital decompression, transethmoidal sphenoidotomies, closure of cerebrospinal fluid leakage, and transethmoidal and sphenoidal hypophysectomy [17, 18, 21–25]. The reference point in the medial quadrant is the anterior lacrimal crest, which is located, according to various authors, 29–53 mm from the entrance of the optic canal [10, 12, 14, 15, 18]. In our material this distance was 40,4 mm for men and 38,39 mm for women (Table 1). There were statistically significant differences in the OC-ALC distance between the right and left sides both in women (p = 0,0011) and in men (p = 0,0346), with the greater values measured on the right side. This observation confirms the data published by Kadanoff and Jordanov [26], who observed wider orbits on the right side in 63% of cases and on the left side in 15,4%. The mean values of the OC-ALC distance in other populations were greater than in our material (Table 5): 42 mm in Indians [18], 43,29 mm in Americans [15], 40,5 mm in Koreans [9], 41,7 mm in Turkish males [14], 43,77 mm in British Caucasians [11], and 42,2 mm in Thais [13]; the distance was significantly greater in Egyptians, 47,25 mm in men and 46,21 mm in women [8] (p < 0.05), and in Chinese, 46,43 mm in men and 44,41 mm in women (p < 0.05) [10] (Table 5).
Table 5: Comparison of the orbital distances between different nations.

| | Rontal | McQueen | Danko | Hwang | Karakas | Huanmanop | Nitek 2009 (M / F) | Ji (M / F) | Abed 2011 | Abed 2012 | Munguti (M / F) | Fetouh (M / F) | Nitek, present study (M / F) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Specimens | Dry skulls | Cadavers | Cadavers | Dry skulls | Dry skulls | Dry skulls | Dry skulls | CT | Cadavers | Cadavers | ? | Dry skulls | CT |
| Number | 48 | 54* | 16 | 82 | 62 | 100 | 189 | 64 | 47 | 47 | ? | 104 | 100 |
| Date | 1979 | 1995 | 1998 | 1999 | 2002 | 2007 | 2009 | 2010 | 2011 | 2012 | 2012 | 2014 | 2015 |
| Ethnicity | Indian | US | US | Korean | Male Turkish | Thai | Caucasian medieval | Chinese | Caucasian | British Caucasian | Kenyan | Egyptian | Caucasian |
| Medial quadrant: ALC-OC | 42 | 43,29 | 44,1 | 40,5 | 41,7 | 42,2 | 42,30 / 39,97 | 46,43 / 44,41 | NA | 43,77 | NA / NA | 47,25 / 46,21 | 40,40 / 38,39 |
| ALC-AEF | 24 | 21,96 | NA | 21 | 23,9 | 23,5 | 22,80 / 21,59 | NA / NA | NA | 25,61 | NA / NA | 26,76 / 26,17 | 21,28 / 20,55 |
| ALC-PEF | 36 | 33,36 | NA | 31,7 | 35,6 | 36 | 34,42 / 32,28 | NA / NA | NA | 36,09 | NA / NA | 35,39 / 35,26 | 33,06 / 31,80 |
| Inferior quadrant: infraorbital foramen-OC | 48 | 49,73 | 39,4 | 45,5 | NA | 46,2 | 49,33 / 46,59 | 47,93 / 46,18 | 43,23 | NA | 55,18 / 53,63 | 51,76 / 50,53 | 45,24 / 42,79 |
| Infraorbital foramen-IOF | 24 | 37,43 | NA | 21,6 | NA | 21,7 | 24,62 / 22,55 | NA / NA | 25,4 | NA | 23,56 / 22,28 | 24,62 / 23,6 | 23,33 / 20,85 |
| Superior quadrant: supraorbital foramen-OC | 40 | 44,34 | 44,5 | 44,9 | 45,3 | 44,7 | 48,57 / 45,60 | 52,93 / 50,89 | NA | NA | 53,25 / 51,93 | 49,64 / 48,16 | 46,49 / 43,29 |
| Supraorbital foramen-SOF | 35 | 36,59 | NA | 40 | NA | 40 | 44,27 / 41,16 | NA / NA | NA | NA | NA / NA | 46,23 / 45,26 | 42,32 / 39,39 |
| Lateral quadrant: FZS-OC | 43 | 47,1 | 38,3 | 47,4 | 44,9 | 46,9 | 48,30 / 45,67 | 48,38 / 46,91 | NA | NA | NA / NA | 44,25 / 43,58 | 46,15 / 43,58 |
| FZS-SOF | 35 | 36,59 | NA | 34,3 | NA | 34,5 | 36,88 / 33,82 | NA / NA | NA | NA | NA / NA | 39,94 / 39,12 | 34,06 / 32,62 |
| FZS-IOF | 25 | 40,92 | NA | 24,8 | NA | 24 | 29,42 / 27,83 | NA / NA | NA | NA | NA / NA | 29,08 / 27,32 | 24,62 / 23,46 |

*McQueen et al. [15] studied the orbit on one side only.

The anterior and posterior ethmoidal foramina are important structures of the medial orbital wall. Their position relative to the ALC and the frontoethmoidal suture is relatively variable, and they also vary in number, with accessory foramina [11, 17, 18, 24, 27]. Takahashi et al. [24] detected single accessory ethmoidal foramina in 38% and double accessory foramina in 2,4% of cases. In that paper the accessory ethmoidal foramina were defined as located between the anterior and posterior ethmoidal foramina: when only one was present, it was named the middle ethmoidal foramen, and when two were present, the one located closer to the posterior ethmoidal foramen was named the deep middle ethmoidal foramen [24]. Piagkou classified the ethmoidal foramina (EF) pattern in the medial wall as types I–IV according to their number:
(1) type I: a single ethmoidal foramen (usually an isolated AEF), observed in 1,6% [17] and in 0,8% [28];
(2) type II: a double EF (the most common pattern: one AEF and one PEF), observed in 61% [17] and in 73,7% [28];
(3) type III: a triple EF, observed in 28,5% [17] and in 24,4% [28];
(4) type IV: multiple EF, observed in 8,8% [17]; a quadruple EF was found in 1,1% [28].

In our material an accessory posterior ethmoidal foramen was observed in 10% of the cases; according to other authors, the incidence of more than one PEF exceeds 25% [18]. In [28], the mean distances for ALC-AEF, AEF-PEF, and PEF-OC were 27,7 ± 2,8 mm, 10,6 ± 3,3 mm, and 5,4 ± 1 mm, respectively, and the mean distance reported for the middle ethmoidal foramen was 12,95 mm. The ethmoidal foramina are located in the frontoethmoidal suture line in 68% of cases and above this line in about 20% [24] or, according to other authors, 1–4 mm above the suture in 32% of cases [27]. The key role in surgery of the medial orbital wall is played by the distance between the PEF and the OC, which according to Rontal is no less than 3 mm [18]; other authors observed this distance to be between 4,3 mm [17] and 7,25 mm [29], and some gave only its arithmetic average (Table 5) [11, 13, 14, 24]. Abed distinguished the first PEF (mean 11,63 mm from the OC) and the last PEF (mean 7,25 mm from the OC) [11]. In our material the mean distance from the first PEF to the OC was 7 mm, similar to Harrison's 7 mm for the first PEF and 5,65 mm for the last PEF [30]. Harrison observed multiple EFs in 30% of cases, in which this distance may be as short as 2 mm [30]. This situation may increase the risk of optic nerve injury during coagulation of the posterior ethmoidal artery. The most probable safe distance for the medial wall of the orbit according to our measurements is 31,9–33,6 mm for ALC-OC, 12,7–14,4 mm for ALC-AEF, and 20,9–23,16 mm for ALC-PEF, depending on sex and side. The results obtained by other authors are similar, in the range of 29–41,4 mm for ALC-OC [8–10, 12, 14, 15].
### 4.2. Superior Wall
The most frequent surgical procedures involving the superior orbital wall are frontal ethmoidectomy, frontal sinus trephination, frontal sinus obliteration, orbital decompression, exploration of fractures, excision of lacrimal gland or other tumours, and orbital exenteration [18, 22]. The incision must be made just below the eyebrow if the supraorbital nerve and the levator muscle of the upper lid are to remain intact [18, 30]. The supraorbital notch or foramen is usually found in the parasagittal line connecting the mental foramen with the infraorbital foramen [18], about 5 mm from the orbital margin [30]. Mean values of the distance from the supraorbital foramen/notch to the optic canal are usually between 40 mm and 52,93 mm according to other authors [8, 10, 12–15, 18]; in our material this distance was 46,49 ± 3,2 mm for men and 43,29 ± 2,33 mm for women. The superior orbital fissure is usually located 35–52 mm from the supraorbital foramen/notch [8, 10, 12–15, 18]; in our material this distance was 42,32 ± 3,92 mm for men and 39,39 ± 3,21 mm for women. The mean supraorbital foramen to optic canal values in our material were comparable to other populations [12–15, 18] but statistically significantly lower than in the Chinese (52,93 mm and 50,89 mm for men and women, resp.) [10], Kenyan (53,25 mm and 51,93 mm) [16], and Egyptian (49,64 mm and 48,16 mm) [8] populations. The most probable safe distance for the orbital roof concluded from our study was 30 mm for the superior orbital fissure and 35 mm for the optic canal. Rontal provided a similar safe distance of 30 mm for the orbital roof [18]. The results of other authors are similar, between 35 and 43,7 mm for supraorbital foramen-OC and between 31,9 and 40,1 mm for supraorbital foramen-SOF [8–10, 12, 14, 15].
### 4.3. Lateral Wall
Knowledge of the lateral orbital wall anatomy is crucial for surgical procedures such as exploration of orbital fractures, lateral orbitotomy during tumour excision, orbital decompression, and excision of the lacrimal gland [18, 22, 25]. The mean distance between the frontozygomatic suture and the optic canal has been measured at between 40 and 53 mm [10, 12, 14, 15, 18]; in our material this distance was 46,15 ± 2,70 mm for men and 43,58 ± 2,05 mm for women. The mean distance between the FZS and the SOF has been found to be between 34,5 mm and 39,94 mm [8, 10, 12, 14–16, 18]; in our material it was 34,06 ± 3,32 mm for men and 32,62 ± 2,99 mm for women. The mean distances for FZS-OC and FZS-SOF in other populations are similar to our results (Table 5) [12, 14, 15, 18], except for the Chinese [10] and Egyptian [8] populations. The safe distance for FZS-OC calculated by our model is 38,0 mm in men and 37,43 mm in women; the most probable safe distance from the SOF to the FZS is 24,1 mm in men and 23,7 mm in women. The safe distances calculated by other authors are similar to ours, in the range of 29,3–40,5 mm for FZS-OC and 25–34,2 mm for FZS-SOF [8–10, 12, 14, 15].
### 4.4. Inferior Wall
Knowledge of the inferior wall anatomy is important for several procedures, such as maxillectomy, exploration of fractures, or tumour resection [18, 22]. The posterior wall of the maxilla lies as close as 26 mm from the infraorbital foramen [18], and the optic canal usually lies about 12 mm beyond this point [18]. In our opinion, measuring the distances from the zygomaticomaxillary suture to the optic canal and to the anterior edge of the inferior orbital fissure is preferable to measuring from the infraorbital foramen, because the zygomaticomaxillary suture is easier for the surgeon to palpate. The mean distance from the optic canal to the infraorbital foramen is reported to be between 39,4 and 55,18 mm [8–10, 12, 13, 16, 18, 31]; in our material the corresponding distance was 45,24 mm for men and 42,79 mm for women. This distance was statistically significantly greater in the Kenyan population: 55,18 mm for men and 53,63 mm for women [16]. The mean distance of the inferior orbital fissure from the infraorbital foramen is reported in the range of 21,7–37,43 mm [5, 8, 9, 13, 15, 16, 18]; in our material this distance was 23,33 mm for men and 20,85 mm for women. There were clear differences in the distances concerning the SOF between the right and left sides due to the different shapes of the SOF [8]. The variation range determined in our study covers the majority of the average values provided in the literature. The presence of the meningo-orbital foramen (Hyrtl canal) in 5% of cases is worth mentioning; according to other authors, this variant is found in 6–55% of the population [7, 15, 18, 32, 33]. It is a potential source of hemorrhage during deep lateral orbital dissection, because it functions as an anastomosis between the lacrimal artery and the middle meningeal artery [7, 18, 31, 32]. The distance between the meningo-orbital foramen and the FZS has been found in the range of 25–39 mm [15, 18, 31, 32]; in our material it was in the range of 24,7–32,3 mm (Table 4). The distance between the meningo-orbital foramen and the supraorbital foramen has been reported in the range of 12–37 mm [15, 18, 32, 33]; in our material it was 25,1–36,4 mm (Table 4). The most probable safe distances calculated in this paper are also comparable to the data provided by other authors [9, 14, 15, 18]. Only the data on Egyptians published by Fetouh and Mandour [8] and the data of Danko and Haug [12] show a significant difference (Table 3). This can be explained by the relatively small study group of the latter paper, consisting of only 8 cases (16 orbits). Besides, Danko and Haug [12] performed their measurements on specimens with preserved soft tissues, unlike other authors, who used macerated skulls. McQueen et al. [15] calculated the safe distances by subtracting 5 mm from the lowest obtained value; in our study the formula X = AVG − 3·STD was applied. Despite the different criteria used to determine the safe distances and the genetic differences of the examined material, the results are comparable. We speculate that population differences do not significantly influence the safe distance values in the orbit, because these values provide a large safety margin and vary between 23,39–30,58 mm for the superior orbital fissure and 31,90–38,02 mm for the orbital opening of the optic canal from the bony structures of the orbital entrance, depending on the orbital quadrant (Table 2).
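To make the two criteria concrete, the short sketch below (our illustration only) applies both rules to one measurement from Table 1, the male supraorbital foramen-optic canal distance (mean 46,49 mm, SD 3,20 mm, minimum 39,6 mm):

```python
def safe_avg_minus_3std(mean_mm, sd_mm):
    """Criterion used in the present study: X = AVG - 3*STD."""
    return mean_mm - 3.0 * sd_mm

def safe_min_minus_5mm(min_mm):
    """Criterion of McQueen et al. [15]: shortest measured value minus 5 mm."""
    return min_mm - 5.0

mean_mm, sd_mm, min_mm = 46.49, 3.20, 39.6  # Table 1, male supraorbital foramen-OC
print(f"AVG - 3*STD: {safe_avg_minus_3std(mean_mm, sd_mm):.2f} mm")  # 36.89 mm
print(f"min - 5 mm : {safe_min_minus_5mm(min_mm):.2f} mm")           # 34.60 mm
```

Both rules land in the same range here, which is consistent with the observation above that the different criteria nevertheless give comparable safe distances.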
---
*Source: 101438-2015-10-29.xml*
# Existence and Convergence Theorems of Best Proximity Points
**Authors:** Moosa Gabeleh; Naseer Shahzad
**Journal:** Journal of Applied Mathematics
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101439
---
## Abstract
The aim of this paper is to prove some best proximity point theorems for new classes of cyclic mappings, called pointwise cyclic orbital contractions and asymptotic pointwise cyclic orbital contractions. We also prove a convergence theorem of best proximity point for relatively nonexpansive mappings in uniformly convex Banach spaces.
---
## Body
## 1. Introduction and Preliminaries
Let (X,d) be a metric space, and let A, B be subsets of X. A mapping T:A∪B→A∪B is said to be cyclic provided that T(A)⊆B and T(B)⊆A. In 2003, Kirk et al. [1] proved the following generalization of the Banach contraction principle. Theorem 1 (see [1]).
Let A and B be nonempty closed subsets of a complete metric space (X,d). Suppose that T is a cyclic mapping such that
(1)d(Tx,Ty)≤αd(x,y),
for some α∈(0,1) and for all x∈A, y∈B. Then T has a unique fixed point in A∩B. In [2], Eldred and Veeramani introduced the class of cyclic contractions as follows. Definition 2 (see [2]).
Let A and B be nonempty subsets of a metric space X. A mapping T:A∪B→A∪B is said to be a cyclic contraction if T is cyclic and
(2)d(Tx,Ty)≤αd(x,y)+(1-α)dist(A,B),
for some α∈(0,1) and for all x∈A, y∈B. Let T be a cyclic mapping. A point x∈A∪B is said to be a best proximity point for T provided that d(x,Tx)=dist(A,B), where
(3)dist(A,B):=inf{d(x,y):x∈A,y∈B}.
Note that if dist(A,B)=0, then a best proximity point is nothing but a fixed point of T. The next theorem ensures the existence, uniqueness, and convergence of a best proximity point for cyclic contractions in uniformly convex Banach spaces. Theorem 3 (see [2]).
Let A and B be nonempty closed convex subsets of a uniformly convex Banach space X and let T:A∪B→A∪B be a cyclic contraction map. For x0∈A, define xn+1:=Txn for each n≥0. Then there exists a unique x∈A such that x2n→x and ∥x-Tx∥=dist(A,B). Recently, Suzuki et al. [3] introduced the notion of property UC, which is a kind of geometric property for subsets of a metric space X. Definition 4 (see [3]).
Let A and B be nonempty subsets of a metric space (X,d). Then (A,B) is said to satisfy property UC if the following holds.
If {xn} and {zn} are sequences in A and {yn} is a sequence in B such that limn d(xn,yn)=dist(A,B) and limn d(zn,yn)=dist(A,B), then limn d(xn,zn)=0. We mention that if A and B are nonempty subsets of a uniformly convex Banach space X such that A is convex, then (A,B) satisfies the property UC. Other examples of pairs having the property UC can be found in [3]. Here, we state the following two lemmas of [3]. Lemma 5 (see [3]).
Let A and B be nonempty subsets of a metric space (X,d). Assume that (A,B) satisfies the property UC. Let {xn} and {yn} be sequences in A and B, respectively, such that either of the following holds:
(4) limm→∞ supn≥m d(xm,yn)=dist(A,B) or limn→∞ supm≥n d(xm,yn)=dist(A,B).
Then {xn} is a Cauchy sequence. Lemma 6 (see [3]).
Let (X,d) be a metric space and let A and B be nonempty subsets of X such that (A,B) satisfies the property UC. Let T:A∪B→A∪B be a cyclic map such that
(5) d(T2x,Tx)≤d(x,Tx) for all x∈A, and d(T2x,Tx)<d(x,Tx) for all x∈A with dist(A,B)<d(x,Tx).
For a point z∈A, the following are equivalent:
(i) z is a best proximity point of T;
(ii) z is a fixed point of T2.
Throughout this paper, (A,B) stands for a nonempty pair in a metric space (X,d). When we say that a pair (A,B) satisfies a specific property, we mean that both A and B satisfy the mentioned property. Also, we define (A,B)⊆(C,D)⇔A⊆C and B⊆D. Moreover, we use the following notations:
(6)δx(A)=sup{d(x,y):y∈A}∀x∈X,δ(A,B)=sup{d(x,y):x∈A,y∈B},diam(A)=δ(A,A).
For a cyclic mapping T:A∪B→A∪B and x∈A∪B, we define the orbit of T2 at x by
(7)𝒪T2x:={x,T2x,T4x,…,T2nx,…},
where T2nx=T(T2n-1x) for n≥1 and T0x=x. We set
(8)𝒪T2(x,y):=𝒪T2(x)∪𝒪T2(y),
for all x,y∈A∪B. Note that if (x,y)∈A×B, then 𝒪T2x⊆A and 𝒪T2y⊆B. Also, the set of all best proximity points of the mapping T in A will be denoted by B.P.P(T)∩A. We mention that a mapping T:A∪B→A∪B is said to be relatively nonexpansive provided that T is cyclic and satisfies the condition ∥Tx-Ty∥≤∥x-y∥ for each (x,y)∈A×B. Note that a relatively nonexpansive mapping need not be continuous. Also, every nonexpansive self-map can be considered as a relatively nonexpansive mapping. In 2005, Eldred et al. [4] introduced a geometric concept called proximal normal structure. Using this notion, they proved that if (A,B) is a nonempty weakly compact convex pair in a Banach space X and T:A∪B→A∪B is a relatively nonexpansive mapping, then there exists (x,y)∈A×B such that ∥x-Tx∥=∥Ty-y∥=dist(A,B). For more details on this subject, we refer the reader to [5–10].
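Before introducing the new classes of mappings, the convergence behaviour guaranteed by Theorem 3 can be watched numerically. The following sketch is our own toy illustration (not taken from [1–3]): for A=[1,2] and B=[-2,-1] in ℝ we have dist(A,B)=2, and the map Tx=-(x+1)/2 on A, Tx=(1-x)/2 on B satisfies (2) with α=1/2, so the even iterates converge to the best proximity point x=1.

```python
def T(x):
    """A toy cyclic contraction between A = [1, 2] and B = [-2, -1]:
    d(Tx, Ty) = (1/2) d(x, y) + (1/2) dist(A, B) for x in A, y in B."""
    return -(x + 1) / 2 if x >= 1 else (1 - x) / 2

x = 2.0  # x0 in A
for n in range(1, 41):
    x = T(x)
    if n % 10 == 0:  # the even iterates x_{2k} stay in A
        print(f"x_{n} = {x:.12f}")

# The even iterates tend to x = 1, and |1 - T(1)| = |1 - (-1)| = 2 = dist(A, B),
# so x = 1 is the best proximity point predicted by Theorem 3.
```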
## 2. Pointwise Cyclic Orbital Contractions
In [11], the notion of pointwise cyclic contractions was introduced as follows. Definition 7 (see [11]).
Let (A,B) be a pair of subsets of a metric space (X,d). Let T:A∪B→A∪B be a cyclic mapping. T is said to be a pointwise cyclic contraction if for each (x,y)∈A×B there exist 0≤α(x)<1, 0≤α(y)<1 such that
(9) d(Tx,Ty)≤α(x)d(x,y)+(1-α(x))dist(A,B) ∀y∈B, d(Tx,Ty)≤α(y)d(x,y)+(1-α(y))dist(A,B) ∀x∈A. The following result was proved in [11]. Theorem 8 (see [11]).
Let (A,B) be a nonempty weakly compact convex pair in a Banach space X and suppose that T is a pointwise cyclic contraction mapping. Then there exists (x,y)∈A×B such that ∥x-Tx∥=∥y-Ty∥=dist(A,B). In this section, we introduce a new class of cyclic mappings, called pointwise cyclic orbital contractions, which contains the pointwise cyclic contractions as a subclass. For such mappings, we study the existence of best proximity points in Banach spaces. Definition 9.
Let (A,B) be a pair of subsets of a metric space (X,d). A cyclic mapping T:A∪B→A∪B is said to be a pointwise cyclic orbital contraction if there exists α:A∪B→[0,1) such that for each (x,y)∈A×B
(10) d(Tx,Ty)≤α(x)δx(𝒪T2y)+(1-α(x))dist(A,B) ∀y∈B,
(11) d(Tx,Ty)≤α(y)δy(𝒪T2x)+(1-α(y))dist(A,B) ∀x∈A.
It is clear that the class of pointwise cyclic orbital contractions contains the class of pointwise cyclic contractions as a subclass. The following example shows that the converse need not be true. Moreover, it is interesting to note that a pointwise cyclic orbital contraction may not be relatively nonexpansive. Example 10.
Let X:=ℝ with the usual metric. For A=B=[0,1/2], define T:A∪B→A∪B by
(12) Tx = x/8 if 0≤x≤1/4, and Tx = 0 if 1/4<x≤1/2.
Then T is a pointwise cyclic orbital contraction with α(x)=7/8 for all x∈A. Proof.
If either 0≤x,y≤1/4 or 1/4<x,y≤1/2, then it is easy to see that relations (10) and (11) hold. Suppose that 0≤x≤1/4 and 1/4<y≤1/2. Thus,
(13) d(Tx,Ty)=(1/8)x, δx(𝒪T2y)=supn≥0|x-T2ny|=max{x,y-x}.
Hence,
(14) d(Tx,Ty)=(1/8)x≤(7/8)max{x,y-x}=α(x)δx(𝒪T2y),
that is, (10) holds. Also, by the fact that δy(𝒪T2x)=supn≥0|y-T2nx|=y, we have
(15) d(Tx,Ty)=(1/8)x≤(7/8)y=α(y)δy(𝒪T2x),
which implies that (11) holds as well. Thus, T is a pointwise cyclic orbital contraction. Now, we show that T is not a pointwise cyclic contraction. Indeed, if there exists a function α:A∪B→[0,1) such that d(Tx,Ty)≤α(x)d(x,y) for all (x,y)∈A×B, then for x=1/4 and y=26/100 we must have
(16) (1/8)×(25/100)=d(Tx,Ty)≤α(x)d(x,y)=α(25/100)×(1/100),
and hence 25/8≤α(1/4), which is a contradiction. Therefore, T is not a pointwise cyclic contraction. Moreover, we note that since T is not continuous, T is not relatively nonexpansive.
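The verification in Example 10 can also be checked numerically. The script below is our own sanity check (not part of the original proof): it samples pairs (x,y) from [0,1/2], approximates the orbits 𝒪T2 by finitely many terms, and confirms d(Tx,Ty) ≤ (7/8)·min{δx(𝒪T2y), δy(𝒪T2x)}, which implies both (10) and (11) since dist(A,B)=0 here.

```python
import itertools

def T(x):
    """The map of Example 10 on A = B = [0, 1/2]."""
    return x / 8 if x <= 0.25 else 0.0

def orbit(y, n_terms=60):
    """Finite approximation of O_{T^2}(y) = {y, T^2 y, T^4 y, ...}."""
    pts = []
    for _ in range(n_terms):
        pts.append(y)
        y = T(T(y))
    return pts

def delta(x, pts):
    """delta_x(S) = sup{|x - s| : s in S}, over the sampled orbit points."""
    return max(abs(x - s) for s in pts)

alpha = 7 / 8
grid = [i / 200 for i in range(101)]  # grid on [0, 1/2]
worst = max(
    abs(T(x) - T(y)) - alpha * min(delta(x, orbit(y)), delta(y, orbit(x)))
    for x, y in itertools.product(grid, repeat=2)
)
print(f"max violation over the grid: {worst:.6f}")  # <= 0, so (10) and (11) hold
```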
Let us now state the main result of this section. Theorem 11.
Let (A,B) be a nonempty weakly compact convex pair in a Banach space X. If T:A∪B→A∪B is a pointwise cyclic orbital contraction, then the set of best proximity points of T is nonempty. Proof.
Let Σ denote the collection of all nonempty weakly compact convex pairs (E,F) which are subsets of (A,B) and such that T is cyclic on E∪F. Then Σ is nonempty, since (A,B)∈Σ. Σ is partially ordered by reverse inclusion; that is, (A,B)≤(C,D)⇔(C,D)⊆(A,B). It is easy to check that every increasing chain in Σ is bounded above. Hence, by Zorn's lemma, we obtain a minimal element, say (K1,K2)∈Σ. We have
(17)(co¯(T(K2)),co¯(T(K1)))⊆(K1,K2).
Moreover,
(18) T(co¯(T(K2)))⊆T(K1)⊆co¯(T(K1)),
and also
(19)T(co¯(T(K1)))⊆co¯(T(K2)).
Now, by the minimality of (K1,K2), we have co¯(T(K2))=K1, co¯(T(K1))=K2. Suppose that a∈K1. Then for each y∈K2 we have
(20)∥Ta-Ty∥≤α(a)δa(𝒪T2y)+(1-α(a))dist(A,B)≤α(a)δa(K2)+(1-α(a))dist(A,B),
which implies that T(K2)⊆ℬ(Ta;α(a)δa(K2)+(1-α(a))dist(A,B)). Hence,
(21)K1=co¯(T(K2))⊆ℬ(Ta;α(a)δa(K2)+(1-α(a))dist(A,B)).
Thus, for each x∈K1 we must have
(22)∥x-Ta∥≤α(a)δa(K2)+(1-α(a))dist(A,B),
which ensures that
(23)δTa(K1)≤α(a)δa(K2)+(1-α(a))dist(A,B).
Similarly, we can see that if b∈K2, then
(24)δTb(K2)≤α(b)δb(K1)+(1-α(b))dist(A,B).
Assume that (p,q) is a fixed element in K1×K2. Let δp(K2)≤δq(K1). Set r:=δp(K2) and
(25)E:={y∈K2:δy(K1)≤r},F:={x∈K1:δx(K2)≤r}.
Obviously, p∈F. Also, from (23), Tp∈E, and so (E,F) is a nonempty pair. Besides, it is easy to see that
(26)E:=⋂a∈K1ℬ(a;r)∩K2,F:=⋂b∈K2ℬ(b;r)∩K1.
Now, let y∈E. Then y∈K2 and, by (24), δTy(K2)≤δy(K1)≤r, which implies that Ty∈F. Hence, T(E)⊆F. Similarly, by relation (23) we conclude that T(F)⊆E. That is, T is cyclic on E∪F. By the minimality of (K1,K2) we must have F=K1 and E=K2. Therefore,
(27)δx(K2)≤r,δy(K1)≤r,
for each (x,y)∈K1×K2. Then for all (x,y)∈K1×K2 we have
(28)δx(K2)≤δq(K1),δy(K1)≤δp(K2).
In particular, δp(K2)≤δq(K1)≤δp(K2). Thus,
(29)δp(K2)=δq(K1).
A similar argument shows that if δq(K1)≤δp(K2), then relation (29) again holds. Therefore, (29) holds for all (p,q)∈K1×K2. To complete the proof of the theorem, we consider the following cases.
Case 1. If δp(K2)=dist(A,B), then we have
(30)∥p-Tp∥≤δp(K2)=dist(A,B),
that is, p is a best proximity point of T.
Case 2. If δp(K2)>dist(A,B), then it follows from (23) and (29) that
(31)δp(K2)=δTp(K1)≤α(p)δp(K2)+(1-α(p))dist(A,B)<δp(K2),
which is a contradiction. Hence, each point of K1 is a best proximity point of T and so K1⊆B.P.P(T)∩A. Similarly, we can see that K2⊆B.P.P(T)∩B. Thus, for each (x,y)∈K1×K2 we must have
(32)∥x-Tx∥=∥Ty-y∥=dist(A,B).
## 3. Asymptotic Pointwise Cyclic Orbital Contractions
Definition 12.
Let (A,B) be a pair of subsets of a metric space (X,d). A cyclic mapping T:A∪B→A∪B is said to be an asymptotic pointwise cyclic orbital contraction if for each (x,y)∈A×B,
(33)d(T2nx,T2ny)≤αn(x)diam𝒪T2(x,y)+(1-αn(x))dist(A,B)∀y∈B,d(T2nx,T2ny)≤αn(y)diam𝒪T2(x,y)+(1-αn(y))dist(A,B)∀x∈A,
where, for each n∈ℕ, αn:A∪B→ℝ+ and limsupn→∞ αn(x)≤η for some 0<η<1 and for all x∈A∪B. The following theorem establishes the existence and convergence of a best proximity point for asymptotic pointwise cyclic orbital contractions in metric spaces with the property UC. Theorem 13.
Let (A,B) be a nonempty closed pair in a complete metric space (X,d) such that (A,B) satisfies the property UC. Assume that T:A∪B→A∪B is an asymptotic pointwise cyclic orbital contraction such that T is continuous on A. If there exists x∈A such that the orbit of T at x is bounded, then T has a best proximity point in A. Moreover, if x0∈A and xn+1=Txn, then {x2n} converges to the best proximity point of T. Proof.
Let x∈A. We note that the sequence {diam[𝒪T2(T2nx,T2n+1x)]} is decreasing and bounded below by dist(A,B). Let diam[𝒪T2(T2nx,T2n+1x)]→rx≥dist(A,B). We claim that rx=dist(A,B). For all k1,k2∈ℕ with k1≤k2 we have
(34)d(T2(n+k1)x,T2(n+k2)(Tx))≤αn+k1(x)diam[𝒪T2(x,Tx)]+(1-αn+k1(x))dist(A,B).
Taking the supremum with respect to k1 and k2, and then letting n→∞, we obtain
(35)rx≤ηdiam[𝒪T2(x,Tx)]+(1-η)dist(A,B).
Besides, for each m∈ℕ we have
(36)rx=limn→∞diam[𝒪T2(T2n(T2mx),T2n(T2m(Tx)))]≤ηdiam𝒪T2(T2mx,T2m(Tx))+(1-η)dist(A,B).
Now, letting m→∞, we obtain
(37)rx≤ηrx+(1-η)dist(A,B),
and hence rx=dist(A,B). We now conclude that
(38)limn→∞supm≥nd(T2nx,T2m+1x)=dist(A,B).
Since (A,B) has the property UC, by Lemma 5, {T2nx} is a Cauchy sequence; say x2n→p. The continuity of T on A implies that x2n+1→Tp. Thus, d(p,Tp)=dist(A,B); that is, p is a best proximity point of the mapping T in A. The next corollary is a direct consequence of Theorem 13. Corollary 14 (compare to Theorem 3).
Let (A,B) be a nonempty closed pair in a uniformly convex Banach space X such that A is convex. Assume that T:A∪B→A∪B is an asymptotic pointwise cyclic orbital contraction such that T is continuous on A. If there exists x∈A such that the orbit of T at x is bounded, then T has a best proximity point in A. Moreover, if x0∈A and xn+1=Txn, then {x2n} converges to the best proximity point of T.
## 4. A Convergence Theorem
In this section, we give a convergence theorem of best proximity points for cyclic mappings, which is derived from Ishikawa's convergence theorem ([12]). We begin with the following proposition, which is an inequality characterization of uniformly convex Banach spaces. Proposition 15 (see [13]).
Let X be a uniformly convex Banach space. Then for each r>0 there exists a strictly increasing, continuous, and convex function φ:[0,∞)→[0,∞) such that φ(0)=0 and
(39)∥λx+(1-λ)y∥2≤λ∥x∥2+(1-λ)∥y∥2-λ(1-λ)φ(∥x-y∥),
for all λ∈[0,1] and all x,y∈X such that ∥x∥≤r and ∥y∥≤r. Definition 16.
Let (A,B) be a nonempty pair of subsets of a normed linear space X. Suppose that T:A∪B→A∪B is a cyclic mapping on A∪B. We say that T is hemicompact on A provided that each sequence {xn} in A with ∥xn-T2xn∥→0 has a convergent subsequence. It is clear that if A is a compact set, then each cyclic mapping defined on A∪B is hemicompact on A, where B is a nonempty subset of X. Theorem 17.
Let (A,B) be a nonempty, bounded, closed, and convex pair in a uniformly convex Banach space X. Assume that T:A∪B→A∪B is a cyclic relatively nonexpansive mapping such that T is hemicompact on A and T2 is continuous and satisfies the condition
(40)∥T2x-Tx∥<∥Tx-x∥,
for all x∈A∪B with ∥x-Tx∥>dist(A,B). Define a sequence {xn} in A by x1∈A and
(41)xn+1=αxn+(1-α)T2xn,
for n∈ℕ, where α is a real number belonging to (0,1). Then {xn} converges strongly to a best proximity point of T in A. Proof.
Since (A,B) is a bounded, closed, and convex pair in a uniformly convex Banach space X, the relatively nonexpansive mapping T has a best proximity point in B ([4]). Also, we note that both (A,B) and (B,A) have the property UC. So, by Lemma 6, a point p∈B is a best proximity point of the mapping T if and only if p is a fixed point of the mapping T2∣B. We now have
(42)∥xn+1-p∥=∥αxn+(1-α)T2xn-T2p∥=∥αxn+(1-α)T2xn-αT2p-(1-α)T2p∥≤α∥xn-p∥+(1-α)∥T2xn-T2p∥≤α∥xn-p∥+(1-α)∥xn-p∥=∥xn-p∥.
Therefore, {∥xn-p∥} is a decreasing sequence and hence convergent, so {xn} is bounded. From the uniform convexity of the Banach space X and by Proposition 15, there exists a strictly increasing, continuous, and convex function φ:[0,∞)→[0,∞) such that φ(0)=0 and
(43)∥xn+1-p∥2=∥α(xn-p)+(1-α)(T2xn-p)∥2≤α∥xn-p∥2+(1-α)∥T2xn-T2p∥2-α(1-α)φ(∥xn-T2xn∥)≤∥xn-p∥2-α(1-α)φ(∥xn-T2xn∥).
Thus,
(44) α(1-α)φ(∥xn-T2xn∥)≤∥xn-p∥2-∥xn+1-p∥2,
which implies that φ(∥xn-T2xn∥)→0. Since φ is strictly increasing and continuous at 0, it follows that
(45) ∥xn-T2xn∥⟶0.
On the other hand, since T is hemicompact on A, there exists a subsequence {xnj} of the sequence {xn} such that xnj→q∈A. By the continuity of the mapping T2 on A, we have T2xnj→T2q. Since ∥xnj-T2xnj∥→0, we obtain q=T2q. Hence, q∈A is a fixed point of the mapping T2 in A and, again by Lemma 6, q is a best proximity point of T in A. Finally, repeating the computation above with p replaced by q shows that {∥xn-q∥} is decreasing; since a subsequence of it tends to 0, the whole sequence does, and so xn→q∈A strongly.
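To close, the iteration of Theorem 17 can be watched on a concrete pair. The sketch below is our own illustration (not from the paper): on A=[1,2], B=[-2,-1] with Tx=-(x+1)/2 on A and Tx=(1-x)/2 on B (a cyclic contraction, hence relatively nonexpansive here, and condition (40) can be checked to hold for x with ∥x-Tx∥>2), the scheme xn+1=αxn+(1-α)T2xn with α=1/2 converges to the best proximity point x=1.

```python
def T(x):
    """Cyclic map between A = [1, 2] and B = [-2, -1]; relatively nonexpansive."""
    return -(x + 1) / 2 if x >= 1 else (1 - x) / 2

alpha = 0.5
x = 2.0  # x1 in A
for n in range(1, 31):
    x = alpha * x + (1 - alpha) * T(T(x))  # x_{n+1} = a x_n + (1 - a) T^2 x_n
    if n % 10 == 0:
        print(f"x_{n + 1} = {x:.12f}")

print(f"|x - Tx| = {abs(x - T(x)):.12f}  (dist(A, B) = 2)")
```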
---
*Source: 101439-2013-06-02.xml* | 101439-2013-06-02_101439-2013-06-02.md | 16,498 | Existence and Convergence Theorems of Best Proximity Points | Moosa Gabeleh; Naseer Shahzad | Journal of Applied Mathematics
(2013) | Mathematical Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101439 | 101439-2013-06-02.xml | ---
## Abstract
The aim of this paper is to prove some best proximity point theorems for new classes of cyclic mappings, called pointwise cyclic orbital contractions and asymptotic pointwise cyclic orbital contractions. We also prove a convergence theorem of best proximity point for relatively nonexpansive mappings in uniformly convex Banach spaces.
---
## Body
## 1. Introduction and Preliminaries
Let(X,d) be a metric space, and let A,B be subsets of X. A mapping T:A∪B→A∪B is said to be cyclic provided that T(A)⊆B and T(B)⊆A. In 2003, Kirk et al. [1] proved the following generalization of Banach contraction principle.Theorem 1 (see [1]).
LetA and B be nonempty closed subsets of a complete metric space (X,d). Suppose that T is a cyclic mapping such that
(1)d(Tx,Ty)≤αd(x,y),
for some α∈(0,1) and for all x∈A,y∈B. Then T has a unique fixed point in A∩B.In [2] Eldred and Veeramani introduced the class of cyclic contractions as follows.Definition 2 (see [2]).
LetA and B be nonempty subsets of a metric space X. A mapping T:A∪B→A∪B is said to be a cyclic contraction if T is cyclic and
(2)d(Tx,Ty)≤αd(x,y)+(1-α)dist(A,B),
for some α∈(0,1) and for all x∈A,y∈B.LetT be a cyclic mapping. A point x∈A∪B is said to be a best proximity point for T provided that d(x,Tx)=dist(A,B), where
(3)dist(A,B):=inf{d(x,y):x∈A,y∈B}.
Note that if dist(A,B)=0, then the best proximity point is nothing but a fixed point of T.The next theorem ensures existence, uniqueness, and convergence of best proximity point for cyclic contractions in uniformly convex Banach spaces.Theorem 3 (see [2]).
LetA and B be nonempty closed convex subsets of a uniformly convex Banach space X and let T:A∪B→A∪B be a cyclic contraction map. For x0∈A, define xn+1:=Txn for each n≥0. Then there exists a unique x∈A such that x2n→x and ∥x-Tx∥=dist(A,B).Recently, Suzuki et al. in [3] introduced the notion of property UC which is a kind of geometric property for subsets of a metric space X.Definition 4 (see [3]).
LetA and B be nonempty subsets of a metric space (X,d). Then (A,B) is said to satisfy property UC if the following holds.
If{xn} and {zn} are sequences in A and {yn} is a sequence in B such that limnd(xn,yn)=dist(A,B) and limnd(zn,yn)=dist(A,B), then we have limnd(xn,zn)=0.We mention that ifA and B are nonempty subsets of a uniformly convex Banach space X such that A is convex, then (A,B) satisfies the property UC. Other examples of pairs having the property UC can be found in [3]. Here, we state the following two lemmas of [3].Lemma 5 (see [3]).
LetA and B be nonempty subsets of a metric space (X,d). Assume that (A,B) satisfies the property UC. Let {xn} and {yn} be sequences in A and B, respectively, such that either of the following holds:
(4)limm→∞supn≥md(xm,yn)=d(A,B)orlimn→∞supm≥nd(xm,yn)=d(A,B).
Then {xn} is a Cauchy sequence.Lemma 6 (see [3]).
Let(X,d) be a metric space and let A and B be nonempty subsets of X such that (A,B) satisfies the property UC. Let T:A∪B→A∪B be a cyclic map such that
(5)d(T2x,Tx)≤d(x,Tx)∀x∈A,d(T2x,Tx)<d(x,Tx)∀x∈Awithdist(A,B)<d(x,Tx).
For a point z∈A, the following are equivalent: (i)
zis a best proximity point of T;(ii)
z is a fixed point of T2.Throughout this paper,(A,B) stands for a nonempty pair in a metric space (X,d). When we say that a pair (A,B) satisfies a specific property, we mean that both A and B satisfy the mentioned property. Also, we define (A,B)⊆(C,D)⇔A⊆C and B⊆D. Moreover, we use the following notations:
(6)δx(A)=sup{d(x,y):y∈A}∀x∈X,δ(A,B)=sup{d(x,y):x∈A,y∈B},diam(A)=δ(A,A).
For a cyclic mapping T:A∪B→A∪B and x∈A∪B, we define the orbit setting at x by
(7)𝒪T2x:={x,T2x,T4x,…,T2nx,…},
where T2nx=T(T2n-1x) for n≥1 and T0x=x. We set
(8)𝒪T2(x,y):=𝒪T2(x)∪𝒪T2(y),
for all x,y∈A∪B. Note that if (x,y)∈A×B, then 𝒪T2x⊆A and 𝒪T2y⊆B. Also, the set of all best proximity points of the mapping T in A will be denoted by B.P.P(T)∩A.We mention that a mappingT:A∪B→A∪B is said to be relatively nonexpansive provided that T is cyclic and satisfies the condition ∥Tx-Ty∥≤∥x-y∥ for each (x,y)∈A×B. Note that a relatively nonexpansive mapping need not be a continuous mapping. Also every nonexpansive self-map can be considered as a relatively nonexpansive mapping.In 2005 Eldred et al. in [4] introduced a geometric concept called proximal normal structure. Using this notion they proved that if (A,B) is a nonempty weakly compact convex pair in a Banach space X and T:A∪B→A∪B is a relatively nonexpansive mapping, then there exists (x,y)∈A×B such that ∥x-Tx∥=∥Ty-y∥=dist(A,B). For more details on this subject, we refer the reader to [5–10].
## 2. Pointwise Cyclic Orbital Contractions
In [11], the notion of pointwise cyclic contractions was introduced as follows.Definition 7 (see [11]).
Let(A,B) be a pair of subsets of a metric space (X,d). Let T:A∪B→A∪B be a cyclic mapping. T is said to be a pointwise cyclic contraction if for each (x,y)∈A×B there exist 0≤α(x)<1,0≤α(y)<1 such that
(9)d(Tx,Ty)≤α(x)d(x,y)+(1-α(x))dist(A,B)∀y∈B,d(Tx,Ty)≤α(y)d(x,y)+(1-α(y))dist(A,B)∀x∈A.The following result was proved in [11].Theorem 8 (see [11]).
Let(A,B) be a nonempty weakly compact convex pair in a Banach space X and suppose that T is a pointwise cyclic contraction mapping. Then there exists (x,y)∈A×B such that ∥x-Tx∥=∥y-Ty∥=dist(A,B).In this section, we introduce a new class of cyclic mappings, calledpointwise cyclic orbital contractions, which contains the pointwise cyclic contractions as a subclass. For such mappings, we study the existence of best proximity points in Banach spaces.Definition 9.
Let(A,B) be a pair of subsets of a metric space (X,d). A cyclic mapping T:A∪B→A∪B is said to be a pointwise cyclic orbital contraction if there exists α:A∪B→[0,1) such that for each (x,y)∈A×B(10)d(Tx,Ty)≤α(x)δx(𝒪T2y)+(1-α(x))dist(A,B)∀y∈B,(11)d(Tx,Ty)≤α(y)δy(𝒪T2x)+(1-α(y))dist(A,B)∀x∈A.It is clear that the class of pointwise cyclic orbital contractions contains the class of pointwise cyclic contractions as a subclass. The following example shows that the converse need not be true. Moreover, it is interesting to note that a pointwise cyclic orbital contraction may not be relatively nonexpansive.Example 10.
Let X:=ℝ with the usual metric. For A=B=[0,1/2], define T:A∪B→A∪B by
$$Tx=\begin{cases}\tfrac{1}{8}x & \text{if } 0\leq x\leq \tfrac{1}{4},\\[2pt] 0 & \text{if } \tfrac{1}{4}<x\leq \tfrac{1}{2}.\end{cases}\tag{12}$$
Then T is a pointwise cyclic orbital contraction with α(x)=7/8 for all x∈A.

Proof.
If either 0≤x,y≤1/4 or 1/4<x,y≤1/2, then it is easy to see that relations (10) and (11) hold. Suppose that 0≤x≤1/4 and 1/4<y≤1/2. Thus,
$$d(Tx,Ty)=\tfrac{1}{8}x,\qquad \delta_x(\mathcal{O}_{T^2}(y))=\sup_{n\geq 0}|x-T^{2n}y|=\max\{x,\,y-x\}.\tag{13}$$
Hence,
$$d(Tx,Ty)=\tfrac{1}{8}x\leq \tfrac{7}{8}\max\{x,\,y-x\}=\alpha(x)\,\delta_x(\mathcal{O}_{T^2}(y)),\tag{14}$$
that is, (10) holds. Also, by the fact that δ_y(𝒪_{T²}(x)) = sup_{n≥0}|y−T²ⁿx| = y, we get
$$d(Tx,Ty)=\tfrac{1}{8}x\leq \tfrac{7}{8}y=\alpha(y)\,\delta_y(\mathcal{O}_{T^2}(x)),\tag{15}$$
which implies that (10) and (11) hold. Thus, T is a pointwise cyclic orbital contraction. Now, we show that T is not a pointwise cyclic contraction. Indeed, if there exists a function α:A∪B→[0,1) such that d(Tx,Ty)≤α(x)d(x,y) for all (x,y)∈A×B, then for x=1/4 and y=26/100 we must have
$$\tfrac{1}{8}\cdot\tfrac{25}{100}=d(Tx,Ty)\leq \alpha(x)\,d(x,y)=\alpha\!\left(\tfrac{25}{100}\right)\cdot\tfrac{1}{100},\tag{16}$$
and hence 25/8 ≤ α(1/4), which is a contradiction. Therefore, T is not a pointwise cyclic contraction. Moreover, we note that since T is not continuous, T is not relatively nonexpansive.

Let us state the main result of this section.

Theorem 11.
Let (A,B) be a nonempty weakly compact convex pair in a Banach space X. If T:A∪B→A∪B is a pointwise cyclic orbital contraction, then the set of best proximity points of T is nonempty.

Proof.
Let Σ denote the collection of all nonempty weakly compact convex pairs (E,F) which are subsets of (A,B) and such that T is cyclic on E∪F. Then Σ is nonempty, since (A,B)∈Σ. Σ is partially ordered by reverse inclusion; that is, (A,B)≤(C,D) ⇔ (C,D)⊆(A,B). It is easy to check that every increasing chain in Σ is bounded above. Hence, by Zorn's lemma, we can get a minimal element, say (K1,K2)∈Σ. We have
$$\bigl(\overline{\operatorname{co}}(T(K_2)),\ \overline{\operatorname{co}}(T(K_1))\bigr)\subseteq (K_1,K_2).\tag{17}$$
Moreover,
$$T\bigl(\overline{\operatorname{co}}(T(K_2))\bigr)\subseteq T(K_1)\subseteq \overline{\operatorname{co}}(T(K_1)),\tag{18}$$
and also
$$T\bigl(\overline{\operatorname{co}}(T(K_1))\bigr)\subseteq \overline{\operatorname{co}}(T(K_2)).\tag{19}$$
Now, by the minimality of (K1,K2), we have $\overline{\operatorname{co}}(T(K_2))=K_1$ and $\overline{\operatorname{co}}(T(K_1))=K_2$. Suppose that a∈K1. Then for each y∈K2 we have
$$\|Ta-Ty\|\leq \alpha(a)\,\delta_a(\mathcal{O}_{T^2}(y))+(1-\alpha(a))\operatorname{dist}(A,B)\leq \alpha(a)\,\delta_a(K_2)+(1-\alpha(a))\operatorname{dist}(A,B),\tag{20}$$
which implies that T(K2) ⊆ ℬ(Ta; α(a)δ_a(K2)+(1−α(a))dist(A,B)). Hence,
$$K_1=\overline{\operatorname{co}}(T(K_2))\subseteq \mathcal{B}\bigl(Ta;\ \alpha(a)\,\delta_a(K_2)+(1-\alpha(a))\operatorname{dist}(A,B)\bigr).\tag{21}$$
Thus, for each x∈K1 we must have
$$\|x-Ta\|\leq \alpha(a)\,\delta_a(K_2)+(1-\alpha(a))\operatorname{dist}(A,B),\tag{22}$$
which ensures that
$$\delta_{Ta}(K_1)\leq \alpha(a)\,\delta_a(K_2)+(1-\alpha(a))\operatorname{dist}(A,B).\tag{23}$$
Similarly, we can see that if b∈K2, then
$$\delta_{Tb}(K_2)\leq \alpha(b)\,\delta_b(K_1)+(1-\alpha(b))\operatorname{dist}(A,B).\tag{24}$$
Assume that (p,q) is a fixed element in K1×K2. Let δ_p(K2) ≤ δ_q(K1). Set r:=δ_p(K2) and
$$E:=\{y\in K_2:\ \delta_y(K_1)\leq r\},\qquad F:=\{x\in K_1:\ \delta_x(K_2)\leq r\}.\tag{25}$$
Obviously, p∈F. Also, from (23), Tp∈E, and then (E,F) is a nonempty pair. Besides, it is easy to see that
$$E=\Bigl(\bigcap_{a\in K_1}\mathcal{B}(a;r)\Bigr)\cap K_2,\qquad F=\Bigl(\bigcap_{b\in K_2}\mathcal{B}(b;r)\Bigr)\cap K_1.\tag{26}$$
Now, let y∈E. Then y∈K2 and, by (24), δ_{Ty}(K2) ≤ δ_y(K1) ≤ r, which implies that Ty∈F. Hence, T(E)⊆F. Similarly, by relation (23) we conclude that T(F)⊆E. That is, T is cyclic on E∪F. By the minimality of (K1,K2) we must have F=K1 and E=K2. Therefore,
$$\delta_x(K_2)\leq r,\qquad \delta_y(K_1)\leq r,\tag{27}$$
for each (x,y)∈K1×K2. Then for all (x,y)∈K1×K2 we have
$$\delta_x(K_2)\leq \delta_q(K_1),\qquad \delta_y(K_1)\leq \delta_p(K_2).\tag{28}$$
In particular, δ_p(K2) ≤ δ_q(K1) ≤ δ_p(K2). Thus,
$$\delta_p(K_2)=\delta_q(K_1).\tag{29}$$
A similar argument shows that if δ_q(K1) ≤ δ_p(K2), then relation (29) again holds. Therefore, (29) holds for all (p,q)∈K1×K2. To complete the proof of the theorem, we consider the following cases.
Case 1. If δ_p(K2) = dist(A,B), then we have
$$\|p-Tp\|\leq \delta_p(K_2)=\operatorname{dist}(A,B),\tag{30}$$
that is, p is a best proximity point of T.

Case 2. If δ_p(K2) > dist(A,B), it now follows from (23) and (29) that
$$\delta_p(K_2)=\delta_{Tp}(K_1)\leq \alpha(p)\,\delta_p(K_2)+(1-\alpha(p))\operatorname{dist}(A,B)<\delta_p(K_2),\tag{31}$$
which is a contradiction. Hence, each point of K1 is a best proximity point of T, and so K1 ⊆ B.P.P(T)∩A. Similarly, we can see that K2 ⊆ B.P.P(T)∩B. Thus, for each (x,y)∈K1×K2 we must have
$$\|x-Tx\|=\|Ty-y\|=\operatorname{dist}(A,B).\tag{32}$$
## 3. Asymptotic Pointwise Cyclic Orbital Contractions
Definition 12.
Let (A,B) be a pair of subsets of a metric space (X,d). A cyclic mapping T:A∪B→A∪B is said to be an asymptotic pointwise cyclic orbital contraction if for each (x,y)∈A×B,
$$d(T^{2n}x,T^{2n}y)\leq \alpha_n(x)\operatorname{diam}\mathcal{O}_{T^2}(x,y)+(1-\alpha_n(x))\operatorname{dist}(A,B)\;\;\forall y\in B,$$
$$d(T^{2n}x,T^{2n}y)\leq \alpha_n(y)\operatorname{diam}\mathcal{O}_{T^2}(x,y)+(1-\alpha_n(y))\operatorname{dist}(A,B)\;\;\forall x\in A,\tag{33}$$
where for each n∈ℕ, α_n:A∪B→ℝ⁺ and lim sup_{n→∞} α_n(x) ≤ η for some 0<η<1 and for all x∈A∪B.

The following theorem establishes existence and convergence of a best proximity point for asymptotic pointwise cyclic orbital contractions in metric spaces with the property UC.

Theorem 13.
Let (A,B) be a nonempty closed pair in a complete metric space (X,d) such that (A,B) satisfies the property UC. Assume that T:A∪B→A∪B is an asymptotic pointwise cyclic orbital contraction such that T is continuous on A. If there exists x∈A such that the orbit of T at x is bounded, then T has a best proximity point in A. Moreover, if x0∈A and x_{n+1}=Tx_n, then {x_{2n}} converges to the best proximity point of T.

Proof.
Let x∈A. We note that the sequence {diam[𝒪_{T²}(T²ⁿx, T²ⁿ⁺¹x)]} is decreasing and bounded below by dist(A,B). Let diam[𝒪_{T²}(T²ⁿx, T²ⁿ⁺¹x)] → r_x ≥ dist(A,B). We claim that r_x = dist(A,B). For all k1,k2∈ℕ with k1≤k2 we have
$$d\bigl(T^{2(n+k_1)}x,\ T^{2(n+k_2)}(Tx)\bigr)\leq \alpha_{n+k_1}(x)\operatorname{diam}\bigl[\mathcal{O}_{T^2}(x,Tx)\bigr]+(1-\alpha_{n+k_1}(x))\operatorname{dist}(A,B).\tag{34}$$
Taking the supremum with respect to k1 and k2 and then letting n→∞, we obtain
$$r_x\leq \eta\operatorname{diam}\bigl[\mathcal{O}_{T^2}(x,Tx)\bigr]+(1-\eta)\operatorname{dist}(A,B).\tag{35}$$
Besides, for each m∈ℕ we have
$$r_x=\lim_{n\to\infty}\operatorname{diam}\bigl[\mathcal{O}_{T^2}\bigl(T^{2n}(T^{2m}x),\ T^{2n}(T^{2m}(Tx))\bigr)\bigr]\leq \eta\operatorname{diam}\mathcal{O}_{T^2}\bigl(T^{2m}x,\ T^{2m}(Tx)\bigr)+(1-\eta)\operatorname{dist}(A,B).\tag{36}$$
Now, if m→∞, we obtain
$$r_x\leq \eta r_x+(1-\eta)\operatorname{dist}(A,B),\tag{37}$$
and hence r_x = dist(A,B). We now conclude that
$$\lim_{n\to\infty}\sup_{m\geq n} d\bigl(T^{2n}x,\ T^{2m+1}x\bigr)=\operatorname{dist}(A,B).\tag{38}$$
Since (A,B) has the property UC, by Lemma 5 {T²ⁿx} is a Cauchy sequence. Suppose that x_{2n}→p. Continuity of T on A implies that x_{2n+1}→Tp. Thus, d(p,Tp)=dist(A,B). That is, p is a best proximity point of the mapping T in A.

The next corollary is a direct result of Theorem 13.

Corollary 14 (compare to Theorem 3).
Let (A,B) be a nonempty closed pair in a uniformly convex Banach space X such that A is convex. Assume that T:A∪B→A∪B is an asymptotic pointwise cyclic orbital contraction such that T is continuous on A. If there exists x∈A such that the orbit of T at x is bounded, then T has a best proximity point in A. Moreover, if x0∈A and x_{n+1}=Tx_n, then {x_{2n}} converges to the best proximity point of T.
## 4. A Convergence Theorem
In this section, we give a convergence theorem of best proximity points for cyclic mappings, which is derived from Ishikawa's convergence theorem ([12]). We begin with the following proposition, which is an inequality characterization of uniformly convex Banach spaces.

Proposition 15 (see [13]).
Let X be a uniformly convex Banach space. Then for each r>0, there exists a strictly increasing, continuous, and convex function φ:[0,∞)→[0,∞) such that φ(0)=0 and
$$\|\lambda x+(1-\lambda)y\|^2\leq \lambda\|x\|^2+(1-\lambda)\|y\|^2-\lambda(1-\lambda)\,\varphi(\|x-y\|),\tag{39}$$
for all λ∈[0,1] and all x,y∈X such that ∥x∥≤r and ∥y∥≤r.

Definition 16.
Let (A,B) be a nonempty pair of subsets of a normed linear space X. Suppose that T:A∪B→A∪B is a cyclic mapping on A∪B. We say that T is hemicompact on A provided that each sequence {x_n} in A with ∥x_n−T²x_n∥→0 has a convergent subsequence.

It is clear that if A is a compact set, then each cyclic mapping defined on A∪B is hemicompact on A, where B is a nonempty subset of X.

Theorem 17.
Let (A,B) be a nonempty, bounded, closed, and convex pair in a uniformly convex Banach space X. Assume that T:A∪B→A∪B is a cyclic relatively nonexpansive mapping such that T is hemicompact on A and T² is continuous and satisfies the condition
$$\|T^2x-Tx\|<\|Tx-x\|,\tag{40}$$
for all x∈A∪B with ∥x−Tx∥ > dist(A,B). Define a sequence {x_n} in A by x1∈A and
$$x_{n+1}=\alpha x_n+(1-\alpha)T^2x_n,\tag{41}$$
for n∈ℕ, where α is a real number belonging to (0,1). Then {x_n} converges strongly to a best proximity point of T in A.

Proof.
Since (A,B) is a bounded, closed, and convex pair in a uniformly convex Banach space X, the relatively nonexpansive mapping T has a best proximity point in B ([4]). Also, we note that both (A,B) and (B,A) have the property UC. So, by Lemma 6, a point p∈B is a best proximity point of the mapping T if and only if p is a fixed point of the mapping T²|_B. We now have
$$\|x_{n+1}-p\|=\|\alpha x_n+(1-\alpha)T^2x_n-T^2p\|=\|\alpha(x_n-T^2p)+(1-\alpha)(T^2x_n-T^2p)\|\leq \alpha\|x_n-p\|+(1-\alpha)\|T^2x_n-T^2p\|\leq \alpha\|x_n-p\|+(1-\alpha)\|x_n-p\|=\|x_n-p\|.\tag{42}$$
Therefore, {∥x_n−p∥} is a decreasing sequence and hence convergent, so {x_n} is bounded. From the uniform convexity of the Banach space X and by Proposition 15, there exists a strictly increasing, continuous, and convex function φ:[0,∞)→[0,∞) such that φ(0)=0 and
$$\|x_{n+1}-p\|^2=\|\alpha(x_n-p)+(1-\alpha)(T^2x_n-p)\|^2\leq \alpha\|x_n-p\|^2+(1-\alpha)\|T^2x_n-T^2p\|^2-\alpha(1-\alpha)\,\varphi(\|x_n-T^2x_n\|)\leq \|x_n-p\|^2-\alpha(1-\alpha)\,\varphi(\|x_n-T^2x_n\|).\tag{43}$$
Thus,
$$\alpha(1-\alpha)\,\varphi(\|x_n-T^2x_n\|)\leq \|x_n-p\|^2-\|x_{n+1}-p\|^2,\tag{44}$$
which implies that φ(∥x_n−T²x_n∥)→0. Since φ is strictly increasing and continuous at 0, it follows that
$$\|x_n-T^2x_n\|\longrightarrow 0.\tag{45}$$
On the other hand, since T is hemicompact on A, there exists a subsequence {x_{n_j}} of the sequence {x_n} such that x_{n_j}→q∈A. By the continuity of the mapping T² on A, we have T²x_{n_j}→T²q. Since ∥x_{n_j}−T²x_{n_j}∥→0, we obtain q=T²q. Hence q∈A is a fixed point of the mapping T² in A and, again by Lemma 6, q is a best proximity point of T in A, and x_n→q∈A strongly.
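As a purely illustrative numeric check of the scheme (41) (a toy example of ours, not from the paper), take A = [0,1]×{0} and B = [0,1]×{1} in the Euclidean plane with the cyclic map T(x,0) = (x/2, 1) and T(x,1) = (x/2, 0), so that dist(A,B) = 1, T²(x,0) = (x/4, 0), and the best proximity point in A is (0,0). Iterating (41) with α = 1/2 drives the first coordinate to 0 while the gap ∥x−Tx∥ tends to dist(A,B):

```python
import math

alpha = 0.5
x = (0.9, 0.0)                     # starting point x1 in A = [0,1] x {0}

def T(p):
    # Cyclic map between A = [0,1]x{0} and B = [0,1]x{1}: halve, swap levels.
    return (p[0] / 2.0, 1.0 - p[1])

def T2(p):
    return T(T(p))                 # T^2 maps A back into A: (x,0) -> (x/4, 0)

for n in range(60):
    t2x = T2(x)
    x = (alpha * x[0] + (1 - alpha) * t2x[0],
         alpha * x[1] + (1 - alpha) * t2x[1])   # iteration (41)

gap = math.dist(x, T(x))           # distance from x_n to Tx_n
print(x, gap)                      # x ~ (0, 0); gap ~ 1 = dist(A,B)
```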
---
*Source: 101439-2013-06-02.xml*
# Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm
**Authors:** Wei Zhang; Qian Xu
**Journal:** Computational Intelligence and Neuroscience
(2022)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2022/1014501
---
## Abstract
In order to improve the teaching efficiency of English teachers in classroom teaching, the target detection algorithm in deep learning and classroom monitoring information are used: the deep learning target detection algorithm Single Shot MultiBox Detector (SSD) is optimized, and the optimized Mobilenet-Single Shot MultiBox Detector (Mobilenet-SSD) is designed. After analyzing the Mobilenet-SSD algorithm, it is recognized that the algorithm has the shortcomings of a large number of basic network parameters and poor small-target detection; these deficiencies are addressed in what follows. Through related experiments on student behaviour analysis, the average detection accuracy of the optimized algorithm reached 82.13%, and the detection speed reached 23.5 fps (frames per second). In the experiments, the algorithm achieved 81.11% accuracy in detecting students' writing behaviour. This shows that the proposed algorithm improves the accuracy of small-target recognition without changing the operation speed of the traditional algorithm, and it has an advantage in detection accuracy over previous detection algorithms. The optimized algorithm improves detection efficiency, which helps provide modern technical support for English teachers to understand the learning status of students and has strong practical significance for improving the efficiency of English classroom teaching.
---
## Body
## 1. Introduction
At present, internationalization is developing rapidly, and enterprises place higher requirements on the English level of talent. College English teaching not only has the characteristics of the subject itself but also needs to meet the overall requirements of current quality education: it strives for the comprehensive development of students, which makes the structure of college English teaching very complicated and teaching efficiency difficult to guarantee. With changing educational concepts, class sizes in universities are also increasing rapidly. In the actual teaching process, a teacher needs to teach many students at the same time and can hardly pay attention to all of them. The development of big data, using the many monitoring resources in the classroom combined with target detection in deep learning, provides research ideas for detecting student learning status and improving teaching efficiency [1].

At present, many video target detection methods are derived from static image target detection. Zhao et al. found that if a target detection model for static images is used directly for video target detection, the effect is very poor; scholars therefore combine the temporal and context information of the video to perform target detection [2]. Initially, detection was completed on single frames and refined in a postprocessing stage. However, this approach is mostly multistage: the results of each stage are affected by the previous stage, errors from earlier stages are troublesome to correct, and blur caused by defocus and object motion in the video is not handled well in postprocessing [3]. Dou et al. [4] used optical flow, Long Short-Term Memory (LSTM), and Artificial Neural Networks (ANN) to aggregate video temporal and context information and optimize the features of blurred frames, improving detection accuracy. In addition, the concept of key frames was introduced to optimize detection time, and optical-flow techniques were used for feature propagation. Recurrent Neural Networks (RNN) interleaved with lightweight and heavyweight feature extractors further improve the accuracy and speed of video target detection [5]. Current research still has many shortcomings in detection speed and accuracy compared with previous studies [6–9]; performance also varies with the detection target, and usability in complex environments, dense target detection, and lightweight model design still need great improvement [10].

Here, the deep learning Single Shot MultiBox Detector (SSD) algorithm is optimized. Through analysis of the algorithm, a series of improvements address its deficiencies of a large number of basic network parameters and poor detection of small targets. The SSD base network is suitably replaced: the characteristics of the depthwise separable convolutional network are used to reduce the network parameters and enhance computational efficiency, and the data in the deep feature maps is merged upward into the shallow layers, improving the accuracy of small-target calibration. Finally, experiments related to student behavioural state are analysed, confirming that the accuracy of small-target recognition is improved without changing the calculation speed of the traditional algorithm. These results help teachers understand students' learning status and are of great significance for improving English classroom teaching efficiency.

The structure is arranged as follows: Section 1 is the introduction, which introduces related research results in the detection field; Section 2 is the research method, which introduces the design process of the algorithm in detail; Section 3 presents the experimental results, testing and analysing the performance of the designed algorithm; Section 4 is the conclusion, summarizing the research algorithm and explaining future research directions.
## 2. Materials and Methods
### 2.1. Target Detection in Deep Learning
As a frequently used deep learning model, the neural network is composed of many neurons. Each neuron applies two functions in sequence: a linear function followed by a nonlinear one. The output of a purely linear network is unrelated to the number of layers and always remains linear, so its scope of application is limited [11]. However, reality is often very complicated, and the neural network needs to analyse and process many nonlinear problems, so a function is used to activate the result; such a neural network can handle nonlinear problems [12]. The calculation process of the activation function is shown in Figure 1.

Figure 1
Activation function calculation process.

In the activation function calculation process in Figure 1, the input is fed into the neuron, the neuron applies a linear computation to it and then passes the result to the activation function, yielding a nonlinear output [13]. The application of the activation function in the neural network enhances its representation ability.

The Sigmoid function originally comes from the biological field and is also called the Logistic function. Its graph looks like the letter S, increasing overall. Its output lies between 0 and 1, so it is used on the output of the activation network layer [14]. The Sigmoid function is shown in equation (1):
$$f(z)=\frac{1}{1+e^{-z}}.\tag{1}$$
In equation (1), f(z) is the function output and z is the input value. This function is generally used for binary classification. Although it works well in some projects, computing its derivative is comparatively troublesome, and sometimes the vanishing gradient problem appears [15].

The Rectified Linear Unit (ReLU) function is a linear rectification function, widely used in image recognition and computer vision. Its equation is
$$f(x)=\max(x,0).\tag{2}$$
In this equation, f(x) is the function output and x is the input. The calculation of this function is very simple, so its computation speed is excellent. Since some neurons are set to 0, the network becomes sparse, which mitigates the problem of overfitting [16].

The Softmax function is also called the normalized exponential function. Each of its outputs lies in (0, 1), and the output probabilities sum to 1, as shown in equation (3):
$$f(x_j)=\frac{e^{x_j}}{\sum_{i=1}^{k} e^{x_i}},\qquad j=1,\dots,k.\tag{3}$$
In equation (3), f(x_j) is the output for the j-th input x_j, and k is the number of input values. The function works well on multiclassification problems; however, the separation between different categories is slightly insufficient [17].
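As a quick illustration of equations (1)–(3), the following is a minimal NumPy sketch of the three activation functions (our own illustrative code, not from the original paper); the max-subtraction in the softmax is a standard numerical-stability trick:

```python
import numpy as np

def sigmoid(z):
    # Equation (1): squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(x):
    # Equation (2): keeps positive inputs, zeroes out the rest.
    return np.maximum(x, 0.0)

def softmax(x):
    # Equation (3): exponentiate and normalize so the outputs sum to 1.
    # Subtracting the max first avoids overflow without changing the result.
    e = np.exp(x - np.max(x))
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5])
print(sigmoid(scores))                         # elementwise, each value in (0, 1)
print(relu(scores))                            # [2.0, 0.0, 0.5]
print(softmax(scores), softmax(scores).sum())  # probabilities summing to 1.0
```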
### 2.2. Convolutional Neural Network (CNN) Structure
As the basis for exploring deep learning, the Convolutional Neural Network (CNN) has a network structure divided into three parts: the input layer, the output layer, and the intermediate layers [18]. The specific CNN structure is shown in Figure 2.

Figure 2
CNN structure.

In the CNN structure, the input layer can analyse multidimensional data. When inputting relevant data into the network, it is necessary to unify the time and frequency of the relevant data. The output layer outputs the result corresponding to the specific problem: in classification, the output is related to the object category, while in positioning problems, the output is the coordinate data of the object. The middle part is divided into three layer types, the convolutional layer, the pooling layer, and the fully connected layer, which are introduced one by one in the following [19].

The most important part of the convolutional layer is the convolution kernel. The convolution kernel can be regarded as a matrix of elements, and different elements have corresponding weights and bias coefficients. When performing a convolution operation, the input data is scanned according to a certain rule. The function of the pooling layer is to delete invalid information in the data obtained from the upper layer and reduce its size. Common variants are average pooling, maximum pooling, and overlapping pooling; the first two are widely used. The fully connected layer classifies the information from the previous layer. In special cases, this operation can be replaced with the average over the entire feature map, which reduces redundant data [20].

CNN generally has the following two characteristics:

(1)
Local area connection. Normally, neurons are fully connected between layers. In CNN, connections are only local: for connections between the neurons of layer N−1 and layer N, the connection form of CNN is shown in Figure 3.

(2)
Weight sharing. The convolution kernel of the convolutional layer can be regarded as an element matrix, and the convolution operation uses this kernel to scan the input. For example, a 3∗3∗1 convolution kernel has 9 parameters; when an image is passed through this convolution, the entire image shares these 9 parameters during scanning (a small sketch of this follows Figure 3).

Figure 3
Connection form of CNN.
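To make the weight-sharing idea concrete, here is a minimal NumPy sketch of ours (not from the paper) that slides one 3×3 kernel over an image, so the same 9 weights are reused at every output position:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a single kernel over a 2-D image ('valid' padding, stride 1).
    The same kernel weights are shared by every output position."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(28, 28)      # a toy 28x28 single-channel image
kernel = np.random.randn(3, 3)      # 9 shared parameters in total
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)            # (26, 26): 26*26 outputs from just 9 weights
```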
### 2.3. Methods of Face Recognition and Image Preprocessing
In face recognition, the multitask convolutional neural network (MTCNN) face detection algorithm, affine-transformation face alignment, and the Insightface face comparison algorithm are analysed. When performing student face recognition, the process shown in Figure 4 is used for recognition and detection.

Figure 4
Face recognition process.

In the face recognition process in Figure 4, the relevant face data set is prepared first; then the MTCNN algorithm detects the face, and affine transformation aligns it. The processed data is compared in Insightface, and finally the recognition result is obtained, which ends the recognition process. In the algorithm selected in this paper, the MTCNN face detection model is based on the image-pyramid multiscale face detection method and uses its subnetworks to obtain the relevant features of the face, laying the foundation for correcting the face orientation. When correcting the face, affine transformation is used for alignment. Since face images do not always show a regular frontal face and changes of angle have a great influence on recognition, face correction is all the more important. Using the facial key points obtained by the MTCNN detector, appropriate transformations such as translation, rotation, and scaling achieve face alignment. The geometric transformation of the image is realized by the affine transformation method: a combination of translation-type image transformations that uses a linear change of two-dimensional coordinates to map one image onto another, with the flatness and parallelism of the image unchanged in this process. The Insightface face comparison method reduces the within-class distance so that each class is more compact, and many angular-margin features are obtained, enhancing the performance of the face recognition model.

Under normal circumstances, an image contains many interferences such as noise, which affect the information to varying degrees. To ensure that the quality of the image to be operated on meets the standard, preprocessing is necessary. As shown in Figure 5, several common image processing methods are used.

Figure 5
Image preprocessing method.

In Figure 5, normalization transforms the image into a standard mode: image invariant moments are used to find a set of parameters, which reduce the interference of other factors on the image. Its essence is to find the quantities of the image that do not change, so that after a shape or brightness operation, the changed image and the original image can be classified into the same category.
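As a small illustration of a normalization step (our own sketch, not the paper's moment-based method), simple min-max scaling maps pixel intensities to a standard [0, 1] range before further processing:

```python
import numpy as np

def min_max_normalize(image, eps=1e-8):
    """Rescale pixel intensities to the standard range [0, 1].
    eps guards against division by zero on constant images."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + eps)

raw = np.array([[ 12, 200],
                [ 64, 255]], dtype=np.float64)  # toy 2x2 grayscale patch
print(min_max_normalize(raw))                   # values now lie in [0, 1]
```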
### 2.4. Classroom Behaviour Recognition Model Design Process
For classroom teachers, mastering the relevant behaviours of students in the classroom reveals the students' current state in class, allowing corresponding adjustments that improve teaching efficiency. Combining the relevant characteristics of students' behaviour in the classroom, an optimized SSD algorithm is designed [21]. The specific recognition process of the constructed classroom behaviour recognition model is shown in Figure 6.

Figure 6
Behaviour recognition process.

Specifically, the behaviour recognition process is applied in the classroom mainly through the following steps:

(1)
Collect student behaviour images. Find enough images of behaviours such as raising a hand, sitting upright, writing, sleeping, and playing with a mobile phone in class, with an equal number of images per action.

(2)
Build an identification database. The collected images are preprocessed and labelled, and the images are divided into training, test, and validation sets according to fixed proportions (a small data-split sketch follows this list).

(3)
Train and test the model. Train on the training set in the behaviour recognition network to obtain an initial model, test the model on the validation set, and adjust the network parameters according to the results. Then use the test data to observe whether the outputs meet expectations, decide whether to continue training, retain the behaviour recognition model with the best recognition effect, and use it in subsequent classroom behaviour recognition [22].
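The split in step (2) can be done as below; this is a hedged sketch of ours (the paper does not state its exact proportions, so the 70/15/15 ratio here is only an assumed example):

```python
import random

def split_dataset(paths, train=0.7, val=0.15, seed=42):
    # Shuffle once, then slice into train / validation / test portions.
    # The 70/15/15 ratio is an assumption for illustration only.
    rng = random.Random(seed)
    paths = paths[:]
    rng.shuffle(paths)
    n = len(paths)
    n_train, n_val = int(n * train), int(n * val)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

images = [f"img_{i:04d}.jpg" for i in range(2000)]  # hypothetical file names
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 1400 300 300
```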
### 2.5. Optimization Design of SSD Target Detection Algorithm
The target detection algorithm here is an improvement and optimization of SSD, so it is necessary to understand the structure of the original model and its working principle [23]. According to the input image size, SSD can be divided into SSD300 and SSD512; SSD300 is used here. Its network structure has two parts. One is the main part of the network, also known as the basic network, which comes from a conventional classification network. The second is the convolutional network added afterwards, whose function is to help the preceding network acquire deeper image features [24]. The fully connected layers behind Visual Geometry Group Network 16 (VGG16) are deleted, and the preceding part of the convolutional network is kept. Two newly created convolutional layers, named Convolution 6 (Conv6) and Conv7, are used in the deleted places, eight convolutional layers of slowly decreasing size are added to the end, and then the classification layer and the nonmaximum suppression layer are added. The SSD network structure is shown in Figure 7.

Figure 7
SSD network structure.

SSD is a one-stage target detection algorithm [25]. In the process of feature extraction, the SSD algorithm uses multiscale feature maps for detection: it adds gradually shrinking convolution layers to the modified VGG16 network and then selects 6 layers from all levels for prediction, namely Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2, and Conv11_2, whose sizes decrease gradually from front to back. Among the feature maps, the relatively large ones are used to identify small objects, and the smaller ones are used to identify large objects [26]. In this way, image features can be obtained at different levels: not only shallow-level information but also deeper-level information.

The goal of the basic network improvement is to replace the original backbone network VGG16 with a lightweight network. The Mobilenet network is more suitable for the requirements here because it uses depthwise separable convolution to replace ordinary convolution and thereby reduces the number of parameters: compared with the hundreds of millions of parameters in VGG16, the Mobilenet network contains only 4.2 million parameters. Therefore, Mobilenet is used as the foundation and, after certain improvements, serves as the basic network of SSD [27].

The following introduces the specific Mobilenet improvements. The basic improvement is shown in Figure 8.

(1)
Mobilenet improvements. Mobilenet is more efficient than VGG16, mainly in that (a) depthwise separable convolution is used to construct the network, and (b) a width coefficient and a resolution coefficient are used. It completes a convolution operation in two parts: a depthwise convolution followed by a pointwise convolution [28]. If these are regarded as two layers, the Mobilenet network structure has a total of 28 layers; if they are regarded as one layer, there are 14. The essence of depthwise separable convolution is to perform the convolution operation in two steps: when the image enters the network, a depthwise convolution extracts the relevant feature information, Batch Normalization (BN) and Rectified Linear Unit (ReLU) operations are applied to the resulting feature maps, and then a pointwise convolution extracts the remaining feature-map information, again followed by BN and ReLU. The ratio of the depthwise separable convolution parameters to the standard convolution parameters is given by equation (4):
$$\frac{F_k\cdot F_k\cdot F_f\cdot F_f\cdot R + 1\cdot 1\cdot F_f\cdot F_f\cdot R\cdot P}{F_k\cdot F_k\cdot R\cdot P\cdot F_f\cdot F_f}=\frac{P+F_k^2}{P\,F_k^2}.\tag{4}$$
In equation (4), Fk is the size of the convolution kernel, Ff is the size of the feature map, R is the number of input channels, and P is the number of output channels.

To reduce the network parameters, it is necessary to use not only depthwise separable convolution but also the width coefficient α and the resolution coefficient ρ. Common values for α are 1, 0.75, 0.5, and 0.25. The function of α is to reduce the number of channels: for example, an input channel count R becomes αR, and the amount of calculation is reduced by a factor of α². The amount of calculation is also affected by the resolution, so the function of ρ is to reduce the input resolution; after it is applied, the calculation on pixel values is reduced by a factor of ρ². These are the improvement measures taken for Mobilenet.

When performing model training, it is necessary to continuously observe the change of the loss function: when its value keeps decreasing, the training result is approaching the optimum. During gradient descent, the magnitude of the updates may swing widely or stall, making descent slower, so an optimization algorithm is clearly important. The Root Mean Square Prop (RMSProp) optimization algorithm is used. This algorithm squares the historical gradients of all dimensions and accumulates them with a decay rate to obtain the relevant historical gradient sum. In the parameter update, the learning rate is divided by the square root of the value calculated in equation (5). With this optimizer, the gradient direction is kept within a small range, and the network convergence speed is well optimized. The specific calculation is given by equations (5) and (6) (a numeric sketch of the parameter ratio in (4) and of this update follows Figure 8):
$$S_{dR}=\beta S_{dR}+(1-\beta)(dR)^2,\tag{5}$$
$$R=R-\rho\,\frac{dR}{\sqrt{S_{dR}}+a}.\tag{6}$$
In equations (5) and (6), β is the decay rate, S_{dR} is the cumulative gradient variable, ρ is the learning rate, a is a constant whose function is to avoid a zero denominator, and R is the parameter.

(2)
Replacement of the SSD basic network: inspired by the traditional SSD model structure, the first 14 improved depthwise separable convolutional layers of the previously improved Mobilenet network replace VGG16 as the backbone of the algorithm [29]. This carries over the feature extraction capability of the model; after the replacement of the basic network, convolutional layers of decreasing size are added to obtain deeper feature information of the image [30]. At the end of the network, the classification layer used to analyse the category and the nonmaximum suppression layer that filters the regression boxes are connected [31]. After implementing the abovementioned improvement strategy on the traditional SSD, the improved SSD model is trained on the relevant training set, and the specific model is obtained.

Figure 8
The replacement process of the basic network.
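As announced above, here is a small NumPy sketch of ours (illustrative, not the authors' code) that checks the parameter ratio of equation (4) and performs one RMSProp update per equations (5) and (6):

```python
import numpy as np

# --- Equation (4): depthwise separable vs. standard convolution cost ---
Fk, Ff, R, P = 3, 32, 64, 128          # kernel size, feature-map size, in/out channels
standard  = Fk * Fk * R * P * Ff * Ff  # standard convolution multiplications
separable = Fk * Fk * Ff * Ff * R + Ff * Ff * R * P  # depthwise + pointwise
print(separable / standard)            # ~0.119
print(1 / P + 1 / Fk**2)               # same value: (P + Fk^2) / (P * Fk^2)

# --- Equations (5) and (6): one RMSProp parameter update ---
beta, lr, eps = 0.9, 0.001, 1e-8       # decay rate, learning rate, constant 'a'
params = np.array([0.5, -0.3])         # parameters R
grads  = np.array([0.2,  0.1])         # gradients dR
s = np.zeros_like(params)              # cumulative squared gradient S_dR

s = beta * s + (1 - beta) * grads**2                # equation (5)
params = params - lr * grads / (np.sqrt(s) + eps)   # equation (6)
print(params)
```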
### 2.6. Case Analysis
In order to evaluate the improved algorithm of this paper, the following compares the average accuracy and detection speed of the traditional SSD algorithm, the general Mobilenet-SSD algorithm, and the improved Mobilenet-SSD algorithm. Precision is closely related to the accuracy rate and is calculated as in equation (7):
$$\text{precision}=\frac{T_p}{T_p+F_p}.\tag{7}$$
Here, Tp is the number of positive samples correctly predicted as positive, and Fp is the number of negative samples incorrectly predicted as positive. The surveillance video from the teaching process of a university is sampled frame by frame with the Open Source Computer Vision Library (OpenCV), and actions such as raising hands and writing are selected for preservation and processing. Finally, 800 images were obtained. Using the data enhancement method mentioned above, 1600 images were obtained after enhancement as the data set of this experiment. For the training set, 400 images were randomly selected for each of the actions of raising hands, listening to lectures, playing with mobile phones, writing, and sleeping, and a total of 2,000 images were used as the training set for this experiment.
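A minimal sketch of equation (7) on made-up counts (ours, for illustration only):

```python
def precision(tp, fp):
    # Equation (7): fraction of predicted positives that are truly positive.
    return tp / (tp + fp)

# Hypothetical counts for one behaviour class, e.g. "writing":
tp, fp = 81, 19           # 81 correct detections, 19 false alarms
print(precision(tp, fp))  # 0.81
```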
## 3. Results
### 3.1. The Recognition Performance of Different Algorithms
For the traditional SSD algorithm, the unoptimized Mobilenet-SSD algorithm, and the optimized Mobilenet-SSD algorithm, trained on the above training set, the average accuracy and detection speed of each model on the data set are shown in Figure 9.

Figure 9
Different model recognition performance.

In Figure 9, comparing the target recognition performance of the different models, the optimized Mobilenet-SSD model has a higher average accuracy rate than the traditional SSD algorithm and the unoptimized Mobilenet-SSD algorithm, reaching 82.13%, and its detection speed reaches 23.5 fps (frames per second). Compared with the SSD model (higher accuracy but slow detection) and the unoptimized Mobilenet-SSD model (fast detection but lower accuracy), its overall performance is better.
### 3.2. Accuracy Test of Specific Behaviours of Different Models
Table 1 shows the detection results for five behaviours with the SSD and the optimized Mobilenet-SSD models. The specific behaviours are attending class, raising hands, playing with mobile phones, writing, and sleeping. The specific values of the test results are shown in Table 1.

Figure 10
Comparison of different behaviour detection accuracy (A: listening to lectures, B: playing with mobile phones, C: raising hands, D: writing, E: sleeping).

Table 1
Different algorithm behaviour detection accuracy rate.

| Algorithm \ Behaviour | Attend class | Play cell phone | Raise hands | Writing | Sleep |
| --- | --- | --- | --- | --- | --- |
| SSD | 88.53% | 78.74% | 85.27% | 76.09% | 86.66% |
| Optimized Mobilenet-SSD | 88.31% | 79.15% | 86.76% | 81.12% | 85.04% |

In Table 1, the optimized Mobilenet-SSD algorithm has an attend-class recognition accuracy of 88.31%, slightly lower than the 88.53% of the SSD algorithm. The accuracy for mobile phone playing behaviour is 79.15%, an improvement over the 78.74% of the SSD algorithm. The detection accuracy of the hand-raising and writing behaviours has also improved to varying degrees, while the accuracy for sleeping behaviour shows a downward trend. The change trend of the five behaviour detection accuracies is shown in Figure 10.

Figure 10 shows that the optimized Mobilenet-SSD model has different behaviour detection accuracies in the classroom. Except for listening to lectures and sleeping, which are easily affected by occlusion, the other three actions are detected more accurately than with the traditional SSD model. In writing behaviour detection, the optimized Mobilenet-SSD model reaches a detection accuracy of 81.11%, the biggest gap with traditional SSD. Combining the two experiments, the optimized Mobilenet-SSD model compares favourably with the traditional detection model in behaviour detection accuracy and detection speed. It can give English teachers better feedback on students' listening status during teaching, thereby improving English classroom teaching efficiency.
## 4. Conclusion
With the expanding teaching scale, the efficiency of English teachers in classroom teaching has been greatly affected. Based on this, the use of relevant monitoring resources in the classroom combined with target detection in deep learning provides a research idea for detecting students' learning status and improving teaching efficiency. Therefore, this paper optimizes the SSD target detection algorithm. Through analysis of the algorithm, it is optimized and improved to address the defects of a large number of basic network parameters and poor small-target detection, and the RMSProp optimization algorithm improves its convergence speed. Related experiments on student behaviour analysis confirm that the accuracy of small-target recognition has been improved without changing the operation speed of the traditional algorithm, and the accuracy figures objectively reflect the better overall performance of the designed algorithm. A limitation is that, restricted by conditions, the sample data selected for the experiments was not particularly sufficient, which may have a certain impact on the final results. In follow-up work, experiments will be carried out with more sufficient sample data to understand the performance of the algorithm more deeply, provide modern technical support for teachers to understand the learning status of students, and improve the efficiency of English classroom teaching. The research content has far-reaching significance.
---
*Source: 1014501-2022-01-21.xml* | 1014501-2022-01-21_1014501-2022-01-21.md | 49,755 | Optimization of College English Classroom Teaching Efficiency by Deep Learning SDD Algorithm | Wei Zhang; Qian Xu | Computational Intelligence and Neuroscience
(2022) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2022/1014501 | 1014501-2022-01-21.xml | ---
## Abstract
In order to improve the teaching efficiency of English teachers in classroom teaching, classroom monitoring information is combined with the target detection algorithms of deep learning: the Single Shot MultiBox Detector (SSD) is optimized, and an optimized Mobilenet-Single Shot MultiBox Detector (Mobilenet-SSD) is designed. Analysis of the Mobilenet-SSD algorithm shows that it suffers from a large number of base-network parameters and poor small-target detection, and these deficiencies are addressed in the optimization that follows. In experiments on student behaviour analysis, the average detection accuracy of the optimized algorithm reached 82.13%, and the detection speed reached 23.5 fps (frames per second); in detecting students' writing behaviour, the algorithm achieved 81.11%. This shows that the proposed algorithm improves the accuracy of small-target recognition without changing the running speed of the traditional algorithm and that it has an advantage in detection accuracy over previous detection algorithms. The optimized algorithm improves detection efficiency, which helps provide modern technical support for English teachers to understand the learning status of students and has strong practical significance for improving the efficiency of English classroom teaching.
---
## Body
## 1. Introduction
At present, internationalization is developing rapidly, and enterprises place ever higher requirements on the English level of talent. College English teaching must not only reflect the characteristics of the subject itself but also meet the overall requirements of current quality education and strive for the comprehensive development of students, which makes the structure of college English teaching very complicated and teaching efficiency difficult to guarantee. With changing educational concepts, university class sizes are also increasing rapidly; in the actual teaching process, a teacher must teach many students at the same time and can hardly pay attention to all of them. With the development of big data, using the many monitoring resources in the classroom together with target detection in deep learning provides a research idea for detecting students' learning status and improving teaching efficiency [1]. At present, much video target detection derives from static-image target detection. Zhao et al. showed that directly applying a static-image detection model to video gives poor results, so scholars combine the temporal and context information of the video to perform target detection [2]. Initially, detection was completed on single frames with a postprocessing stage; however, such methods are mostly multistage, the result of each stage is affected by the previous one, and errors made in an earlier stage are troublesome to correct. Videos also contain blur caused by defocus and object motion, which the postprocessing stage does not handle well [3]. Dou et al. [4] used optical flow, Long Short-Term Memory (LSTM), and Artificial Neural Networks (ANN) to aggregate the temporal and context information of the video and optimize the features of blurred frames, improving detection accuracy. In addition, the concept of key frames has been introduced to reduce detection time, and optical-flow techniques are used for feature propagation. Recurrent Neural Networks (RNN) interleaved with lightweight and heavyweight feature extractors further improve the accuracy and speed of video target detection [5]. Compared with earlier studies, current research still has many shortcomings in detection speed and accuracy [6–9]; performance also varies with the detection target, and usability in complex environments, dense-target detection, and lightweight model design still need great improvement [10]. In this paper, the deep learning Single Shot MultiBox Detector (SSD) algorithm is optimized. Based on an analysis of the algorithm, a series of improvements address its large number of base-network parameters and its poor detection of small targets: the SSD base network is replaced, the properties of depthwise separable convolution are used to reduce the network parameters and enhance computational efficiency, and data from the deep feature maps are merged upward into the shallow layers to improve the localization accuracy of small targets. Finally, experiments on students' behavioural states are analysed, confirming that the accuracy of small-target recognition is improved without changing the computation speed of the traditional algorithm.
These results help teachers understand students' learning status and are of great significance for improving the efficiency of English classroom teaching. The structure is arranged as follows: Section 1 is the introduction, which reviews related research results in the detection field; Section 2 is the research method, which describes the design process of the algorithm in detail; Section 3 presents the experimental results, testing and analysing the performance of the designed algorithm; Section 4 is the conclusion, which summarizes the research algorithm and explains future research directions.
## 2. Materials and Methods
### 2.1. Target Detection in Deep Learning
As a frequently used deep learning model, a neural network is composed of many neurons, each of which applies a linear function followed by a nonlinear one. The output of a purely linear function is unrelated to the number of layers and is always linear, so its scope of application is limited [11]. Reality, however, is often very complicated, and the neural network needs to analyse and process many nonlinear problems, so a function is used to activate the result; such a neural network can then handle nonlinear problems [12]. The calculation process of the activation function is shown in Figure 1.Figure 1
Activation function calculation process.In the activation function calculation process in Figure 1, the input is fed to the neuron, which first applies a linear computation and then passes the result to the activation function, so that the neuron produces a nonlinear output [13]. The use of activation functions in a neural network enhances its representational ability. The Sigmoid function originated in the biological field and is also called the Logistic function. Its graph is shaped like the letter S and is increasing overall; its output lies in the range (0, 1), so it is used at the output of the activation network layer [14]. The Sigmoid function is shown in equation (1):

(1) $f(z) = \dfrac{1}{1 + e^{-z}}$.

In equation (1), $f(z)$ is the output of the function and $z$ is the input value. This function is generally used for binary classification. Although it has its own advantages and works well in some projects, computing its derivative is more troublesome, and it sometimes suffers from the vanishing-gradient problem [15]. The Rectified Linear Unit (ReLU) function is a linear rectification function that is widely used in image recognition and computer vision. Its equation is

(2) $f(x) = \max(x, 0)$.

Here $f(x)$ is the output of the function and $x$ is the input. The function is relatively simple to compute, so its calculation speed is excellent; because it sets some neuron outputs to 0, the network becomes sparse, which mitigates overfitting [16]. The Softmax function is also called the normalized exponential function. Each of its outputs lies in (0, 1), and the output probabilities sum to 1, as shown in equation (3):

(3) $f(x_j) = \dfrac{e^{x_j}}{\sum_{k=1}^{K} e^{x_k}}, \quad j = 1, \ldots, K$.

In equation (3), $f(x_j)$ is the output for the $j$-th input $x_j$, and $K$ is the number of input values. The function works well on multiclass problems; however, the separation between different categories is slightly insufficient [17].
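As an illustration of equations (1)–(3), the following is a minimal NumPy sketch of the three activation functions; the function names and the max-subtraction used for numerical stability in softmax are our own choices, not part of the original text.

```python
import numpy as np

def sigmoid(z):
    # Equation (1): squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(x):
    # Equation (2): passes positive values, zeroes out the rest.
    return np.maximum(x, 0.0)

def softmax(x):
    # Equation (3): exponentiate and normalize so outputs sum to 1.
    # Subtracting the max is a standard numerical-stability trick.
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(sigmoid(z))   # elementwise values in (0, 1)
print(relu(z))      # [0. 0. 3.]
print(softmax(z))   # probabilities summing to 1
```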
### 2.2. Convolutional Neural Network (CNN) Structure
As the basis for exploring deep learning, the Convolutional Neural Network (CNN) has a structure divided into three parts: the input layer, the output layer, and the intermediate layers [18]. The specific CNN structure is shown in Figure 2.Figure 2
CNN structure.In the CNN structure, the input layer can analyse multidimensional data; when inputting data into the network, it is necessary to unify the time and frequency of the relevant data. The output layer outputs the results for the specific problem: in classification, the output is the object category, while in localization problems it is the coordinate data of the object. The intermediate part is divided into three kinds of layers, the convolutional layer, the pooling layer, and the fully connected layer, which are introduced one by one below [19]. The most important part of the convolutional layer is the convolution kernel. The convolution kernel can be regarded as a matrix of elements, each with a corresponding weight and bias coefficient; during a convolution operation, the input data are scanned according to a certain rule. The function of the pooling layer is to delete invalid information in the data from the layer above and reduce its size; common variants include average pooling, maximum pooling, and overlapping pooling, of which the first two are the most widely used. The fully connected layer classifies the information from the previous layers; in special cases, this operation can be replaced by taking the average over the entire feature map, which reduces redundant parameters [20]. CNN generally has the following two characteristics:
(1) Local area connection. Normally, neurons are fully connected to each other when the network is connected; in a CNN, they are only partially connected. If there are connections between the neurons of layer N−1 and the neurons of layer N, the connection form of the CNN is as shown in Figure 3.
(2) Weight sharing. The convolution kernel of the convolutional layer can be regarded as an element matrix, and the convolution operation uses this kernel to scan the information. For example, a 3×3×1 convolution kernel has 9 parameters; when an image is passed through this convolution kernel, the entire image shares these same 9 parameters during scanning (a minimal sketch of this idea follows Figure 3 below).Figure 3
Connection form of CNN.
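To make the weight-sharing idea in point (2) concrete, here is a minimal sketch assuming a single-channel image and a 3×3 kernel scanned with stride 1 and no padding; all names and sizes are illustrative.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image (no padding, stride 1)."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same 9 weights are applied at every spatial location:
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(28, 28)   # a 28x28 single-channel image
kernel = np.random.rand(3, 3)    # 3x3x1 kernel -> only 9 parameters
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)         # (26, 26)
print(kernel.size)               # 9 shared parameters
# A fully connected layer mapping 28*28 inputs to 26*26 outputs would
# instead need 28*28*26*26 = 529,984 weights.
```

The comparison in the last comment is the point of weight sharing: the convolutional layer reuses the same 9 weights at every position, whereas a fully connected mapping between the same input and output sizes would need hundreds of thousands of weights.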
### 2.3. Methods of Face Recognition and Image Preprocessing
In face recognition, the multitask cascaded convolutional network (MTCNN) face detection algorithm, affine-transformation face alignment, and the Insightface face comparison algorithm are analysed. When performing student face recognition, the process shown in Figure 4 is used for recognition and detection.Figure 4
Face recognition process.In the face recognition process in Figure 4, the relevant face data set is prepared first, the MTCNN algorithm is then used to detect the face, and the face is aligned by an affine transformation. The processed data are compared in Insightface, and finally the recognition result is obtained, completing the recognition process. In the algorithm selected in this paper, the MTCNN face detection model is based on multiscale face detection over an image pyramid and uses its subnetworks to obtain the relevant facial features, laying the foundation for correcting the orientation of the face. Face correction uses an affine transformation to align the face: since face images are not always frontal and regular, changes in angle strongly influence recognition, so correcting the face is all the more important. Using the facial key points obtained by the MTCNN detector as a basis, appropriate transformations such as translation, rotation, and scaling achieve face alignment (a small numerical sketch follows at the end of this subsection). The geometric transformation of the image is realized by the affine transformation method: an affine transformation combines a linear transformation of two-dimensional coordinates with a translation to map one image onto another, and the straightness and parallelism of lines in the image do not change during this process. The Insightface face comparison method used in face recognition reduces the within-class distance so that each class becomes more compact, and many features with angular characteristics are obtained, so that the performance of the face recognition model is enhanced.
Image preprocessing method.In Figure 5, normalization transforms the image into a standard form: invariant moments of the image are used to find a set of parameters that reduce the influence of other transformations on the image. Its essence is to find quantities of the image that do not change, so that after shape or brightness operations the changed image and the original image can still be classified into one category.
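As a small numerical sketch of the affine alignment step described above, the following rotates a set of facial keypoints so that the eyes become horizontal; the keypoint coordinates and the restriction to a pure rotation are illustrative assumptions, since the actual transform would be estimated from the MTCNN key points and a set of reference positions.

```python
import numpy as np

def affine_matrix(angle_deg, scale, tx, ty):
    """2x3 affine matrix: rotation + uniform scale + translation."""
    a = np.deg2rad(angle_deg)
    c, s = scale * np.cos(a), scale * np.sin(a)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def apply_affine(points, M):
    """Map Nx2 points through the affine transform (preserves parallelism)."""
    ones = np.ones((points.shape[0], 1))
    return np.hstack([points, ones]) @ M.T

# Hypothetical keypoints from a face detector (eyes, nose, mouth corners):
keypoints = np.array([[30.0, 40.0], [70.0, 38.0], [50.0, 60.0],
                      [35.0, 80.0], [65.0, 79.0]])

# Rotate by the angle between the eyes so they become horizontal:
dx, dy = keypoints[1] - keypoints[0]
M = affine_matrix(-np.degrees(np.arctan2(dy, dx)), 1.0, 0.0, 0.0)
aligned = apply_affine(keypoints, M)
print(aligned[0, 1], aligned[1, 1])  # eye y-coordinates now (nearly) equal
```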
### 2.4. Classroom Behaviour Recognition Model Design Process
As far as classroom teaching is concerned, mastering the relevant behaviours of students in the classroom allows the teacher to grasp the students' current state in class and then make corresponding adjustments to improve teaching efficiency. Combining the relevant characteristics of students' behaviour in the classroom, an optimized SSD algorithm is designed [21]. The specific recognition process of the constructed classroom behaviour recognition model is shown in Figure 6.Figure 6
Behaviour recognition process.Specifically, the behaviour recognition process is applied in the classroom mainly through the following steps:
(1) Collect student behaviour images. Find enough images of behaviours such as raising a hand, sitting upright, writing, sleeping, and playing with a mobile phone in class, with an equal number of images for each action.
(2) Build an identification database. The collected images are preprocessed and labelled, and the images are divided into a training set, a test set, and a validation set according to set proportions (a minimal sketch of such a split follows this list).
(3) Train and test the model. The training set is used to train the behaviour recognition network to obtain an initial model; the model is then tested on the validation set, and the network parameters are adjusted according to the results. The test data are used to check whether the output meets expectations and to decide whether to continue training; the behaviour recognition model with the best recognition performance is retained and used in subsequent classroom behaviour recognition [22].
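As a minimal sketch of the split in step (2), the following divides a list of labelled image paths into training, validation, and test sets; the 8:1:1 proportions, the seed, and the variable names are illustrative assumptions, since the text does not specify the ratios.

```python
import random

def split_dataset(samples, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle labelled samples and split them into train/val/test sets."""
    rng = random.Random(seed)
    samples = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(samples)
    n = len(samples)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

# Hypothetical labelled data: (image path, behaviour label 0-4).
data = [(f"img_{i:04d}.jpg", i % 5) for i in range(1600)]
train, val, test = split_dataset(data)
print(len(train), len(val), len(test))   # 1280 160 160
```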
### 2.5. Optimization Design of SSD Target Detection Algorithm
The target detection algorithm here is an improvement and optimization of SSD, so it is necessary first to understand the structure and principles of the original model [23]. According to the input image size, SSD can be divided into SSD300 and SSD512; SSD300 is used here. Its network structure has two parts. The first is the main part of the network, also known as the base network, which is derived from an image classification network. The second is the convolutional network added afterwards, whose function is to help the preceding network acquire deeper image features [24]. The fully connected layers behind the Visual Geometry Group Network 16 (VGG16) are deleted while the preceding convolutional part is kept; two newly created convolutional layers, named Convolution 6 (Conv6) and Conv7, are used in their place, eight convolutional layers of slowly decreasing size are appended to the end, and a classification layer and a nonmaximum suppression layer are then added. The SSD network structure is shown in Figure 7.Figure 7
SSD network structure.SSD is a one-stage target detection algorithm [25]. In feature extraction, the SSD algorithm uses multiscale feature maps for detection: gradually smaller convolutional layers are added to the modified VGG16 network, and 6 layers are selected from all the levels for prediction, namely Conv4_3, Conv7, Conv8_2, Conv9_2, Conv10_2, and Conv11_2, whose sizes decrease slowly from front to back. Among the feature maps, the relatively large ones are used to identify small objects and the smaller ones to identify large objects [26]. In this way, image features are obtained at different levels, yielding not only shallow-level information but also deeper-level information. The goal of the base-network improvement is to replace the original VGG16 backbone with a lightweight network. After consulting the relevant literature, the Mobilenet network best fits the requirements here, because it replaces ordinary convolution with depthwise separable convolution to reduce the number of parameters: compared with the roughly 138 million parameters of VGG16, the Mobilenet network contains only 4.2 million. Therefore, Mobilenet, after certain improvements, is used as the base network of SSD [27]. The specific Mobilenet improvements are introduced below; the basic improvement is shown in Figure 8.
(1) Improvement of Mobilenet. Mobilenet is more efficient than VGG16, mainly because (i) depthwise separable convolution is used to construct the network and (ii) a width coefficient and a resolution coefficient are used. A depthwise separable convolution completes one convolution operation in two parts, a depthwise convolution followed by a pointwise convolution [28]. If these are counted as two layers, the Mobilenet network structure has 28 layers in total; if they are counted as one layer, it has 14. The essence of depthwise separable convolution is to perform the convolution in two steps: when an image enters the network, a depthwise convolution first extracts the relevant feature information, Batch Normalization (BN) and the Rectified Linear Unit (ReLU) are applied to the resulting feature maps, a pointwise convolution then produces the remaining feature-map information, and BN and ReLU are applied again to obtain the final result. The ratio of the depthwise separable convolution cost to the standard convolution cost is given by equation (4):

(4) $\dfrac{F_k \cdot F_k \cdot F_f \cdot F_f \cdot R + 1 \cdot 1 \cdot F_f \cdot F_f \cdot R \cdot P}{F_k \cdot F_k \cdot R \cdot P \cdot F_f \cdot F_f} = \dfrac{1}{P} + \dfrac{1}{F_k^2}$.

In equation (4), $F_k$ is the side length of the convolution kernel, $F_f$ is the side length of the feature map, $R$ is the number of input channels, and $P$ is the number of output channels. To reduce the network parameters, it is necessary to use not only the depthwise separable convolution but also the width coefficient $\alpha$ and the resolution coefficient $\rho$. Typical values of $\alpha$ are 1, 0.75, 0.5, and 0.25; its function is to reduce the number of channels, so that an input channel count of $R$ becomes $\alpha R$ and the amount of computation is reduced by a factor of $\alpha^2$. The amount of computation is also affected by the resolution, so the function of $\rho$ is to reduce the input resolution; after it is applied, the computation over pixel values is reduced by a factor of $\rho^2$. These are the improvement measures taken for Mobilenet. When training the model, it is necessary to observe the change of the loss function continuously: when its value keeps decreasing, the training result is approaching the optimum. During gradient descent, the values may swing widely or barely change, slowing the descent, so adding an optimization algorithm is clearly important. The Root Mean Square Prop (RMSProp) optimization algorithm is used here. This algorithm squares the historical gradients of all dimensions, superposes them with a decay rate to obtain an accumulated squared-gradient sum, and, in the parameter update, divides the gradient by the square root of the value calculated in equation (5). With this optimization algorithm, the gradient direction is kept within a small range and the network convergence speed is well optimized. The specific calculation is given in equations (5) and (6):

(5) $S_{dR} = \beta S_{dR} + (1 - \beta)(dR)^2$,
(6) $R = R - \rho \dfrac{dR}{\sqrt{S_{dR}} + a}$.

In equations (5) and (6), $\beta$ is the decay rate, $S_{dR}$ is the accumulated squared-gradient variable, $\rho$ is the learning rate, $a$ is a small constant whose function is to avoid a zero denominator, and $R$ is the parameter. (Numerical sketches of equations (4)–(6) follow Figure 8 below.)
(2) Replacement of the SSD base network. Inspired by the traditional SSD design, the first 14 improved depthwise separable convolutional layers of the improved Mobilenet network replace VGG16 as the backbone of the algorithm [29], adding the feature-extraction capability of that model. After the base network is replaced, convolutional layers of decreasing size are added to obtain deeper feature information from the image [30]. At the end of the network, the classification layer used to analyse the category and the nonmaximum suppression layer used to filter the regression boxes are connected [31]. After this improvement strategy is applied to the traditional SSD, the improved SSD model is trained on the relevant training set, and the specific model is obtained.Figure 8
The replacement process of the basic network.
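To make equations (4)–(6) concrete, here is a small sketch that first compares the computational cost of a depthwise separable convolution with that of a standard convolution (equation (4)) and then performs a single RMSProp update step (equations (5) and (6)); the kernel size, channel counts, learning rate, and decay rate are illustrative values, not ones given in the text.

```python
import numpy as np

# (i) Depthwise separable vs. standard convolution, equation (4).
Fk, Ff, R, P = 3, 32, 64, 128   # kernel size, feature-map size, in/out channels

standard = Fk * Fk * R * P * Ff * Ff                          # standard conv cost
separable = Fk * Fk * Ff * Ff * R + 1 * 1 * Ff * Ff * R * P   # depthwise + pointwise
print(separable / standard, 1 / P + 1 / Fk**2)  # both ~0.119: cost shrinks ~8.4x

# (ii) One RMSProp step, equations (5) and (6).
def rmsprop_step(R, dR, S, beta=0.9, lr=0.001, eps=1e-8):
    """Accumulate squared gradients (eq. 5), take a scaled step (eq. 6).

    lr plays the role of rho and eps the role of a in the equations.
    """
    S = beta * S + (1 - beta) * dR ** 2        # equation (5)
    R = R - lr * dR / (np.sqrt(S) + eps)       # equation (6)
    return R, S

params = np.array([0.5, -1.2])
grads = np.array([0.3, -0.7])
state = np.zeros_like(params)
params, state = rmsprop_step(params, grads, state)
print(params)   # parameters nudged against the gradient direction
```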
### 2.6. Case Analysis
In order to evaluate the improved algorithm of this paper, the average accuracy and detection speed of the traditional SSD algorithm, the general Mobilenet-SSD algorithm, and the improved Mobilenet-SSD algorithm are compared below. The precision is closely related to the accuracy rate and is calculated as in equation (7):

(7) $\text{precision} = \dfrac{T_p}{T_p + F_p}$.

Here $T_p$ is the number of positive samples correctly predicted as positive, and $F_p$ is the number of negative samples incorrectly predicted as positive (a minimal sketch of this computation follows below). The surveillance video from the teaching process of a university is sampled frame by frame with the Open Source Computer Vision Library (OpenCV), and actions such as raising hands and writing are selected, saved, and processed, finally yielding 800 images. Using the data enhancement method mentioned above, 1600 images were obtained after enhancement as the data set of this experiment. For the training set, 400 images were randomly selected for each of the actions of raising hands, listening to lectures, playing with mobile phones, writing, and sleeping, 2,000 images in total.
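A minimal sketch of the precision computation in equation (7), on hypothetical labels; the label encoding and example values are our own.

```python
def precision(y_true, y_pred, positive=1):
    """Precision = Tp / (Tp + Fp), equation (7)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical labels: 1 = behaviour present, 0 = absent.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
print(precision(y_true, y_pred))   # 3 / (3 + 1) = 0.75
```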
## 3. Results
### 3.1. The Recognition Performance of Different Algorithms
For the traditional SSD algorithm, the unoptimized Mobilenet-SSD algorithm, and the optimized Mobilenet-SSD algorithm, after training on the training set described above, the average accuracy and detection speed of each model on the data set are shown in Figure 9.Figure 9
Different model recognition performance.In Figure 9, comparing the target recognition performance of the different models, the optimized Mobilenet-SSD model has a higher average accuracy than the traditional SSD algorithm and the unoptimized Mobilenet-SSD algorithm, reaching 82.13%, and its detection speed reaches 23.5 fps (frames per second). Compared with the SSD model, which is accurate but slow to detect, and the plain Mobilenet-SSD model, which is fast but less accurate, the optimized model offers the better overall performance.
### 3.2. Accuracy Test of Specific Behaviours of Different Models
Table 1 shows the detection results for the five behaviours with the SSD and optimized Mobilenet-SSD models. The specific behaviours are attending class, raising hands, playing with mobile phones, writing, and sleeping. The specific values of the test results are shown in Table 1.Figure 10
Comparison of the detection accuracies of the different behaviours (A: listening to lectures, B: playing with mobile phones, C: raising hands, D: writing, E: sleeping).Table 1
Different algorithm behaviour detection accuracy rates.

| Algorithm \ Behaviour | Attend class | Play cell phone | Raise hands | Writing | Sleep |
| --- | --- | --- | --- | --- | --- |
| SSD | 88.53% | 78.74% | 85.27% | 76.09% | 86.66% |
| Optimized Mobilenet-SSD | 88.31% | 79.15% | 86.76% | 81.12% | 85.04% |

As Table 1 shows, the optimized Mobilenet-SSD algorithm recognizes attending class with an accuracy of 88.31%, slightly lower than that of the traditional SSD algorithm. Its accuracy for playing with a mobile phone is 79.15%, an improvement over the 78.74% of the SSD algorithm, and the detection accuracies for raising hands and writing are likewise improved to varying degrees, while the accuracy for sleeping declines. The trends of the five behaviour detection accuracies are shown in Figure 10. Figure 10 shows that the optimized Mobilenet-SSD model detects the various classroom behaviours with different accuracies: apart from listening to lectures and sleeping, which are easily disturbed by occlusion, the other three actions are detected more accurately than with the traditional SSD model. In writing behaviour detection, the optimized Mobilenet-SSD model reaches an accuracy of 81.11%, the largest margin over the traditional SSD. Taking the two experiments together, the optimized Mobilenet-SSD model stands comparison with the traditional detection model in both behaviour detection accuracy and detection speed. It can therefore give English teachers better feedback on students' listening status during teaching and thereby improve the efficiency of English classroom teaching.
## 4. Conclusion
With the expansion of the scale of teaching, the efficiency of English teachers in classroom teaching has been greatly affected, and teachers' classroom behaviour has an ever greater impact on teaching efficiency. Using the monitoring resources available in the classroom together with target detection from deep learning therefore offers a research idea for detecting students' learning status and improving teaching efficiency. To this end, the paper optimizes the SSD target detection algorithm: based on an analysis of the algorithm, improvements are made to address its large number of base-network parameters and its poor detection of small targets, and the RMSProp optimization algorithm is used to speed up convergence. Experiments on student behaviour analysis confirm that the accuracy of small-target recognition is improved without changing the running speed of the traditional algorithm, and the measured accuracies objectively reflect the better overall performance of the designed algorithm. A limitation is that, constrained by experimental conditions, the sample data selected for the experiments were not especially plentiful, which may have had a certain impact on the final results. In follow-up work, experiments will be carried out with more ample sample data so that the performance of the algorithm can be understood more deeply, modern technical support can be provided for teachers to understand the learning status of students, and the efficiency of English classroom teaching can be improved. The research content has far-reaching significance.
---
*Source: 1014501-2022-01-21.xml* | 2022 |
# The Desmosomal Plaque Proteins of the Plakophilin Family
**Authors:** Steffen Neuber; Mario Mühmer; Denise Wratten; Peter J. Koch; Roland Moll; Ansgar Schmidt
**Journal:** Dermatology Research and Practice
(2010)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2010/101452
---
## Abstract
Three related proteins of the plakophilin family (PKP 1–3) have been identified as junctional proteins that are essential for the formation and stabilization of desmosomal cell contacts. Failure of PKP expression can have fatal effects on desmosomal adhesion, leading to abnormal tissue and organ development. Thus, loss of functional PKP 1 in humans leads to ectodermal dysplasia/skin fragility (EDSF) syndrome, a genodermatosis with severe blistering of the epidermis as well as abnormal keratinocyte differentiation. Mutations in the human PKP 2 gene have been linked to severe heart abnormalities that lead to arrhythmogenic right ventricular cardiomyopathy (ARVC). In the past few years, it has been shown that junctional adhesion is not the only function of PKPs: these proteins have been implicated in cell signaling, organization of the cytoskeleton, and control of protein biosynthesis under specific cellular circumstances. Clearly, PKPs are more than just cell adhesion proteins. In this paper, we give an overview of our current knowledge on the very distinct roles of plakophilins in the cell.
---
## Body
## 1. Introduction
Cellular adhesion is mediated by distinct protein complexes at the cytoplasmic membrane, termed junctions, which have been characterized by their morphology on the ultrastructural level [1]. Desmosomes reveal a characteristic appearance and anchor different types of intermediate filaments (IF) to the cell membrane. The fundamental functional importance of desmosomal cell contacts for cellular and tissue architecture, differentiation, development, and tissue stability is generally accepted and has previously been described [2–4]. Experimental evidence for the importance of desmosomal adhesion for specific tissues and organs has been established by knockout experiments of desmosomal genes in mice (see, e.g., [5]). Moreover, examination of a variety of human diseases characterized by a loss or impairment of desmosomal adhesion, regardless of genetic, autoimmune, or infectious etiology, has advanced our understanding of desmosomal function [6]. Desmosomes are formed by all epithelial tissues and tumors derived therefrom as well as by specific nonepithelial tissues such as heart muscle cells. Desmosomal cadherins (i.e., desmogleins DSGs and desmocollins DSCs) located on adjacent cells mediate intercellular connection via interactions of their extracellular domains (for review see [7]). On the cytoplasmic side of the plasma membrane, IF are linked to the desmosomal cadherins via desmosomal plaque proteins. Besides the constitutive desmosomal plaque proteins desmoplakin (DSP) and plakoglobin (JUP), at least one of the three classical members of the plakophilin family (PKP 1 to PKP 3) is required for the formation of functional desmosomes [8–10]. The role of PKPs in cellular adhesion has been analyzed in detail during the past decade [8–10]. However, additional functions of the plakophilins that are not directly linked to desmosomal adhesion have recently been described. In this review, we want to provide insights not only into the known properties and functions of plakophilins in desmosomes, but also into cellular functions not related to adhesion.
## 2. Common Features of the Plakophilins
Plakophilins are probably the most basic proteins identified in cellular adhesion complexes so far with an isoelectric point (pI) of about pH 9.3. Based on their primary sequences, PKPs have been classified as a distinct subfamily of the armadillo repeat proteins (for review see [11]). The carboxyl-terminal part of the proteins includes nine armadillo repeats which contain a spacer sequence between the fifth and sixth repeat that leads to a characteristic kink in the domain structure as determined by crystallography of the armadillo domain of PKP 1 [12]. The amino-terminal parts (head domain) of the three plakophilins are rather diverse and exhibit no obvious homology to themselves or other proteins. Only a small sequence near the amino-terminus, designated homology region (HR) 2, shows some degree of homology between the plakophilins. An analysis of amino acid sequence homology reveals that the PKPs are related to the catenin proteins of the p120ctn-group, which are associated with classical cadherins, such as E-cadherin, in adherens junctions. The PKPs are more distantly related to the classical catenins, β-catenin and plakoglobin [8, 13]. PKPs show complex but overlapping expression patterns in mammalian tissues. Certain cells and tissues express only one type of PKP. Mutations affecting the corresponding PKPs thus can lead to severe diseases in these tissues since compensatory PKP isoforms are not expressed or may not substitute for all functional aspects. This probably explains the severe skin diseases caused by PKP 1 mutations and the heart diseases caused by PKP 2 mutations.
### 2.1. Plakophilin 1
PKP 1 is the smallest of the plakophilins, with a calculated molecular weight of 80.497 kDa and an apparent molecular weight of approximately 75 kDa as judged by SDS-PAGE [14]. This protein is localized in the desmosomes of stratified, complex, and transitional epithelia but is absent in simple epithelia [14–16]. In stratified epithelia, PKP 1 is synthesized in all cell layers, with an increase in expression from the basal to the granular compartment as determined by quantification of PKP 1-specific immunofluorescence signal intensity in human epidermis [17]. This indicates that PKP 1 is a marker for keratinocyte differentiation. However, PKP 1 appears to be absent in the stratum corneum of stratified squamous epithelia (Figure 1).Immunohistochemical staining of sections of human skin with antibodies against PKP 1. Sections of formaldehyde-fixed tissue samples of human skin were stained with a monoclonal antibody (clone PP1 5C2; Progen, Heidelberg; for methods see [18]) against PKP 1 (a to d). (a) Overview of the epidermis showing a strong reaction of the antibodies at the desmosomes of all layers. (b) At higher magnification, the basal layers exhibit a somewhat weaker desmosomal staining that can occasionally be resolved into individual spot-like desmosomes containing PKP 1. During keratinocyte differentiation, desmosomal labeling becomes more pronounced. (c) Cross-section of a hair follicle (Hf) with desmosomal staining of the outer root sheath, while the hair shaft is not stained (Sg, sebaceous gland). The arrow marks the duct of a sebaceous gland. (d) Eccrine sweat ducts are marked intensively by the antibodies, while the secretory portions of eccrine glands show a distinct but weaker staining (arrow). Apocrine sweat glands (lower left corner) are negative. Scale bars: 100 μm (b); 200 μm (a, c, and d).
The human PKP 1 gene is expressed as two different splice variants which differ with respect to cell-biological behavior, molecular weight, and abundance. PKP 1a is the smaller isoform, while the larger PKP 1b isoform (predicted molecular weight: 82.860 kDa) is less abundant in stratified epithelia. The additional amino acid sequence contained in PKP 1b is encoded by exon 7, which is spliced out of the PKP 1a mRNA [19]. The PKP 1b-specific amino acid sequence is located at the end of the fourth armadillo repeat and has a distinct effect on the cell-biological activities of the protein. In addition to its desmosomal localization, PKP 1 has been detected in the nucleus of a broad range of cell types, even in those that do not incorporate PKP 1 into desmosomes, such as simple epithelial cells [14, 19]. This distinct subcellular distribution has been observed for both variants of PKP 1. While the smaller PKP 1a may also be present in desmosomes, PKP 1b localization is restricted to the nucleus and is not detectable in desmosomes. This conclusion is supported by transfection of cDNAs into cultured cells, where PKP 1a accumulates in desmosomes and is also rapidly transferred into the nucleus, while PKP 1b is only nuclear (own observations). Nevertheless, neither the way PKP 1 enters the nucleus nor the functions of this protein therein are yet known. Both the nuclear and the desmosomal PKP 1 pools are rapidly degraded by caspases during apoptosis of keratinocytes, suggesting that this protein is involved in the remodeling of the cytoskeleton under these conditions [20]. Signaling functions, as shown for some of the related catenins such as β-catenin, plakoglobin, and p120ctn, have been postulated for PKP 1, but proof is still lacking [21, 22]. A typical nuclear localization signal has not been identified in the protein so far, but transfection studies of cDNAs encoding the complete protein or individual parts of it have shown that the head domain on its own, and to some extent the armadillo domain, are able to enter the nucleus [23]. The mechanism of the nuclear migration of PKP 1 is currently unknown but may utilize a piggyback mechanism. Various in vitro approaches revealed that the binding of desmosomal PKP 1 to other desmosomal proteins such as DSP, DSG 1, DSC 1, and different keratins is mediated by its head domain sequence [24–28]. The armadillo repeat domain of PKP 1 alone is sufficient to localize the protein to the plasma membrane [23]. The PKP 1 binding partner at the plasma membrane has not been determined but might be one of the desmosomal proteins or even cortical actin. In particular, it has been observed that the armadillo domain coaligns with actin microfilaments under certain circumstances and may be involved in the reorganization of this cytoskeletal component [26]. Nevertheless, the carboxyl-terminal part, in particular the last 40 amino acids, seems to be essential for the recruitment of the entire PKP 1 to the plasma membrane, as shown by transfection studies of mutant cDNA constructs into A431 keratinocytes [28].Important clues for the understanding of PKP 1 function came from a report of an autosomal-recessive genodermatosis that is caused by mutations in the PKP 1 gene [29]. The ectodermal dysplasia/skin fragility (EDSF) syndrome (OMIM 604536; the collection of known mutations in the PKP 1 gene is shown in Figure 2, and published cases of EDSF syndrome are listed in Table 1) clinically manifests in the skin and its appendages.
Patients suffer from blistering with erosions of their skin upon mechanical stress. Nails are dystrophic, and the epidermis of soles and palms displays hyperkeratosis. The hair density on the scalp, eyebrows, and eyelashes is reduced. In severe cases, hair might be completely absent from these body regions. Impaired sweating has occasionally been observed. All other epithelial tissues that express PKP 1, including mucous membranes, seem to be normal in these patients, suggesting functional compensation by the other PKPs. Histological examination of affected skin reveals that the intercellular space is widened and epidermal keratinocytes are acantholytic from the suprabasal layers upwards, suggesting loss of cell-cell adhesion. Cell rupture, as observed in epidermolytic bullous dermatoses, has not been noted. Immunofluorescence microscopy analyses of patients' skin biopsies showed that certain desmosomal components such as desmogleins, desmocollins, and plakoglobin are still localized at the plasma membrane. In contrast, PKP 1 is completely absent or drastically reduced [30]. As a consequence, desmoplakin is no longer localized in the desmosomes but instead is dispersed throughout the cytoplasm. On the ultrastructural level, desmosomes appear smaller and are numerically reduced in the affected epidermal layers. Additionally, keratin filaments have lost contact with the desmosomal junctions and have collapsed around the nucleus. Biochemical analysis of patients' skin revealed that the other PKPs are upregulated to some extent and may compensate in part for the loss of PKP 1 in nonaffected epidermal layers [17]. Interestingly, the extent to which the protein is truncated by the PKP 1 mutations does not seem to matter for the development of the clinicopathological findings of EDSF syndrome. In a case reported by McGrath and colleagues, the mutations occurred close to the amino-terminus of the protein, which could result either in a severely truncated protein or, more likely, in complete loss of the protein (i.e., a functional null mutation), as judged by immunofluorescence microscopy [29]. In contrast, the mutations in the PKP 1 gene reported by Hamada et al. occurred near the carboxyl-terminus, resulting in the expression of a truncated protein. Based on the mild phenotype of the EDSF syndrome in these patients, it can be assumed that this truncated protein is at least partially functional, although the clinicopathology of EDSF still manifests [30]. Surprisingly, most of the EDSF-related mutations in the human PKP 1 gene are splice-site mutations (8 out of 13 known mutated alleles), leading to impaired splicing products and subsequent mRNA degradation or the generation of truncated proteins. The reason for the prevalence of splice-site mutations in EDSF is not known.

Table 1
Published cases of EDSF syndrome with clinical features and the observed mutations in the PKP 1 gene.
| Case¹ | Epidermal fragility | Hyperkeratosis on palms/soles | Alopecia | Nail dysplasia | Hypohidrosis | Observed mutations² | Reference |
|---|---|---|---|---|---|---|---|
| 1 | yes | yes | yes | yes | yes | (a) p.Q304X; (b) c.1132ins28 | [29] |
| 2 | yes | yes | yes | yes | yes | (a) p.Y71X; (b) IVS1-1G>A | [31] |
| 3 | yes | yes | yes | yes | no | IVS6-2A>T | [32] |
| 4 | yes | yes | yes | yes | yes | IVS11+1G>A | [30] |
| 5 | yes | no | yes | yes | no | IVS4-2A>G | [33] |
| 6 | yes | yes | yes | yes | not observed | IVS1-1G>A | [33] |
| 7 | yes | yes | no | yes | no | IVS9+1G>A | [34] |
| 8 | yes | yes | yes | yes | no | (a) c.1053T>A + IVS5+1G>A; (b) IVS10-2G>T | [35] |
| 9 | yes | yes | yes | yes | no | c.888delC | [36] |
| 10 | yes | yes | yes | yes | not observed | p.R672X | [37] |

¹ The numbering of the cases corresponds to the positions of the mutations shown in Figure 2.
² For compound heterozygosity, the mutations of both alleles are given as (a) and (b).
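As a quick check of the allele count cited above, the mutations listed in Table 1 can be tallied by their naming conventions. A minimal Python sketch (classification purely by nomenclature: "IVS" marks an intronic splice-site change, a terminal "X" a nonsense codon, "ins"/"del" a small insertion or deletion; the case 8 allele carrying two changes is counted as a single splice-site allele, matching the allele count in the text):

```python
# Toy tally of the mutated PKP 1 alleles from Table 1, reproducing the
# statement that 8 of the 13 known mutated alleles are splice-site mutations.
from collections import Counter

alleles = [
    "p.Q304X", "c.1132ins28",               # case 1 (compound heterozygous)
    "p.Y71X", "IVS1-1G>A",                  # case 2 (compound heterozygous)
    "IVS6-2A>T",                            # case 3
    "IVS11+1G>A",                           # case 4
    "IVS4-2A>G",                            # case 5
    "IVS1-1G>A",                            # case 6
    "IVS9+1G>A",                            # case 7
    "c.1053T>A + IVS5+1G>A", "IVS10-2G>T",  # case 8 (compound heterozygous)
    "c.888delC",                            # case 9
    "p.R672X",                              # case 10
]

def classify(allele: str) -> str:
    """Rough classification by mutation nomenclature alone."""
    if "IVS" in allele:
        return "splice-site"
    if allele.endswith("X"):
        return "nonsense"
    return "insertion/deletion"

counts = Counter(classify(a) for a in alleles)
print(dict(counts))
# {'nonsense': 3, 'insertion/deletion': 2, 'splice-site': 8}
```

Running this reproduces the figure quoted in the text: 8 of the 13 known mutated alleles are splice-site mutations.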
Figure 2
Position of the mutations in the human PKP 1 gene. Schematic representation of the protein structure of PKP 1 with the head domain ("Head", in blue) containing the homologous region 2 ("HR2") near the amino-terminus, followed by nine armadillo repeats (yellow boxes; numbered in circles from 1 to 9). Finally, a short domain (blue) at the carboxyl-terminus is shown. Positions of homozygous mutations are marked by double arrows, positions of compound heterozygous mutations by connected arrows. Green arrows designate mutations affecting the coding region; red arrows denote splice-site mutations. For the numbering and references of the mutations see Table 1.

These findings, in conjunction with cell-biological data obtained in transfection studies, convincingly illustrate that PKP 1 is essential for the recruitment of desmoplakin to the desmosomal plaque and is probably involved in the lateral enlargement of the plaque structure in skin, explaining the structural and functional defects in epidermal desmosomes lacking PKP 1. Evidently, integration of PKP 1 into the desmosomes provides the epidermal keratinocytes with stability against mechanical stress. A sequence stretch in the HR2 domain of PKP 1 is thought to be essential for the recruitment of DSP and represents a motif conserved in all the PKPs, suggesting that DSP recruitment is a common function of all PKPs [28].

Although a direct interaction of PKP 1 with keratins has been demonstrated frequently in vitro, it is not clear whether this protein alone is sufficient to connect the intermediate filament cytoskeleton to the desmosome. Specific inactivation of DSP in the skin of mice demonstrates the necessity of both proteins, DSP and PKP 1 (in cooperation with plakoglobin), for the anchorage of keratins [38], suggesting that all three components are required. This is further demonstrated by the fact that failure of either PKP 1 or DSP can lead to loss of cell-cell adhesion and acantholysis in the epidermis. The mechanism underlying the failure of epidermal desmosomes lacking PKP 1 to maintain adhesion is not known. It is tempting to speculate that, besides structural defects, cell-signaling defects could contribute to this phenomenon, similar to the disease mechanisms postulated for the autoimmune blistering diseases of the pemphigus group, in which autoantibodies target desmosomal cadherins. Binding of autoantibodies to the desmosomal cadherins seems to trigger intracellular signaling pathways that lead to a reorganization of the cytoskeleton involving the disconnection of the desmosomal cadherins of adjacent cells (for the mechanisms of this outside-in signaling see [39]). The same pathways may be involved in the dissolution of desmosomal adhesion when PKP 1 is lost. Given that patients with PKP 1 null mutations show defects in differentiation pathways affecting skin appendage formation and homeostasis, it is unlikely that adhesion defects can account for the entire spectrum of disease phenotypes.

Keratinocytes derived from patients suffering from EDSF syndrome exhibit some interesting properties. Quantitative analyses of desmosome size in cultured cells revealed that reintroduction of PKP 1 increases the lateral extent of desmosomes. As proposed by others [25, 40], desmosomal cohesiveness might be increased by lateral interactions of PKP 1 with DSP, making additional linkages between desmosomal proteins and the keratin network accessible [41].
It is noteworthy that PKP 1-null keratinocytes show increased cell migration, which has implications for tumor biology.
### 2.2. Plakophilin 2
PKP 2 is, with a predicted mass of 92,756 Da and an apparent molecular weight of 100 kDa (estimated from Western blot analysis), the largest of the three plakophilins, and it is also the prevailing isoform, since it is expressed in all cell types with desmosomal junctions [42]. PKP 2 is found in the basal cells of certain stratified epithelia, while more differentiated keratinocytes are negative for desmosomal PKP 2 (Figure 3). Moreover, PKP 2 has recently also been found in new types of cell junctions that differ in their biochemical composition from both classical desmosomes and conventional adherens junctions (reviewed in [43]). Similar to PKP 1, PKP 2 also occurs as two different splice variants. An additional exon coding for 44 amino acids is integrated into PKP 2b close to the border between the second and third armadillo repeats of the protein [42]. The two PKP 2 splice variants appear to be coexpressed in all cell types analyzed thus far, and it is not known whether these two proteins have different functions.

Figure 3
Immunohistochemical staining of sections of human skin (a, b) and liver (c, d) with antibodies against PKP 2. (a) The staining of samples of human skin with a monoclonal antibody against PKP 2 (clone PP2-150; Progen, Heidelberg) demonstrates a weak and delicate desmosomal staining as well as cytoplasmic staining in the basal layer of the interfollicular epidermis (arrow). Suprabasal keratinocytes remain unstained. (b) Eccrine sweat glands and ducts show a strong reaction with PKP 2-specific antibodies, while apocrine sweat glands exhibit an apical, distinct but weak desmosomal reaction (arrow). (c) Hepatocytes as well as bile ductules are marked at the cell-cell contacts by PKP 2-specific antibodies (arrow). (d) Bile ducts also show a sharp, apical staining of desmosomal structures by the PKP 2 antibodies. The samples shown in (c) and (d) are derived from liver tissue in the vicinity of a metastasis of a gastrointestinal stromal tumor with portal and periportal fibrosis and ductal and ductular proliferation. Scale bars: 100 μm (d), 200 μm (a, b, c).
Like PKP 1, PKP 2 has been detected in the nucleus of many cell types [42]. Its presence in the nucleus is independent of its presence in desmosomes. Some nonepithelial cell types that do not assemble desmosomes show only nuclear localization of PKP 2 (e.g., fibroblasts [42]). In stratified epithelia, the nuclear and desmosomal localization of PKP 2 is regulated independently. In the differentiated layers of stratified epithelia, PKP 2 is excluded from desmosomes and accumulates in the nuclei of keratinocytes. Recently, Müller and colleagues identified a molecular pathway that appears to regulate the nuclear accumulation of PKP 2 [44]. The Cdc25C-associated kinase 1 (C-TAK 1) appears to be involved in cell-cycle regulation and Ras signaling. It has been shown that C-TAK 1 phosphorylates Cdc25C and KSR1, a scaffold protein for mitogen-activated protein kinase (MAPK) and Raf-1 kinase. Müller et al. demonstrated that PKP 2 is also a substrate of C-TAK 1 [44]. This phosphorylation of PKP 2 enforces an interaction of PKP 2 with 14-3-3 proteins, which prevents the nuclear accumulation of PKP 2. Consequently, mutation of the C-TAK 1 phosphorylation site or of the 14-3-3 binding domain in PKP 2 increases the nuclear accumulation of PKP 2. The pathways that trigger the C-TAK 1-mediated phosphorylation of PKP 2 and its retention in the cytoplasm have not been analyzed so far.
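The regulatory logic just described amounts to a simple switch: C-TAK 1-phosphorylated PKP 2 is captured by 14-3-3 proteins and held in the cytoplasm, and disabling either the phosphorylation site or the 14-3-3 interaction releases PKP 2 toward the nucleus. A toy sketch of this rule (a qualitative illustration of the reported behavior only, not a quantitative model; the function and variable names are our own):

```python
# Toy rule-based sketch of the C-TAK 1/14-3-3 switch described above:
# phosphorylated PKP 2 that is bound by 14-3-3 stays cytoplasmic.
def pkp2_localization(phospho_site_intact: bool,
                      binding_14_3_3_intact: bool,
                      ctak1_active: bool) -> str:
    phosphorylated = ctak1_active and phospho_site_intact
    retained = phosphorylated and binding_14_3_3_intact
    return "cytoplasmic (14-3-3 bound)" if retained else "nuclear accumulation"

# Wild-type PKP 2 with active C-TAK 1 is retained in the cytoplasm:
print(pkp2_localization(True, True, True))    # cytoplasmic (14-3-3 bound)
# Mutating either the phosphorylation site or the 14-3-3 binding domain
# increases nuclear accumulation, as reported by Mueller et al. [44]:
print(pkp2_localization(False, True, True))   # nuclear accumulation
print(pkp2_localization(True, False, True))   # nuclear accumulation
```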
What does PKP 2 do in the nucleus? Recent experiments by Mertens and colleagues provided some insights [45]. Immunoprecipitation experiments revealed an association of PKP 2 with the largest subunit of the RNA polymerase III holoenzyme, protein RPC155, as well as with other components such as RPC82 and RPC39. The PKP 2-positive complexes also contain the RNA polymerase III-associated transcription factor TFIIIB but not TFIIIC. The colocalization of PKP 2 and RPC155 in particles in the interchromatin space has been shown by immunofluorescence microscopy. Mertens and colleagues [45] postulated that these particles do not represent active forms of polymerase III, because the PKP 2-positive particles do not contain the transcription factor TFIIIC, a factor required for the formation of an active RNA polymerase III complex. Thus, the actual function of these complexes remains unclear. Nevertheless, the almost general appearance of PKP 2, as well as of PKP 1, in the nucleus seems to differ fundamentally from the nuclear localization of other related catenins such as β-catenin or p120ctn, which are translocated into the nucleus upon specific signals and have been shown to be involved in gene regulation [21, 22].

Besides these nuclear functions, PKP 2 may be involved in cytoplasmic signaling, based on the observation that it can bind β-catenin [46], a key downstream effector protein of the canonical Wnt signaling pathway [21]. Yeast two-hybrid and immunoprecipitation assays showed that PKP 2 can bind to β-catenin. However, when bound to PKP 2, β-catenin cannot associate with E-cadherin, which may reduce the pool of β-catenin available to function in cell adhesion. Overexpression of PKP 2 in colon carcinoma cells leads to an increase in β-catenin/TCF signaling, suggesting a regulatory role of PKP 2 in Wnt signaling and providing a potential functional link between desmosomal adhesion and signaling [46].

PKP 2 also seems to be involved in the assembly of desmosomal components into desmosomes. siRNA-mediated depletion of PKP 2 in keratinocytes leads to changes in the subcellular localization of DSP that mimic the behavior of a DSP mutant lacking a PKCα (protein kinase C α) phosphorylation site. Different isoforms of PKC have been implicated in the regulation of cellular processes such as migration, cellular adhesion, and cytoskeletal reorganization (for a review see [47]). Bass-Zubek et al. investigated the connection between PKP 2, DSP, and PKC [48]. The authors found that PKP 2 binds PKCα and DSP via its head domain. A detailed analysis revealed that PKP 2 binds DSP and PKCα simultaneously, which facilitates the subsequent phosphorylation of DSP at its IF-binding domain by PKC [48]. This promotes the integration of DSP into desmosomes and the subsequent attachment of IFs to desmoplakin.

Insights into the function of PKP 2 also came from gene knockout experiments in mice, as well as from the analysis of an autosomal-dominant human hereditary disease linked to PKP 2 mutations [49, 50]. Ablation of the PKP 2 gene in mice leads to a lethal phenotype around mid-gestation (E10.5) [49]. Homozygous PKP 2-null embryos die because of severe alterations of the heart structure, resulting in the outflow of blood into the pericardium and the subsequent collapse of the embryonic blood circulation. On the microscopic level, PKP 2-deficient hearts display reduced trabeculation as well as abnormally thin cardiac walls. The reason for the instability of the cell contacts between cardiomyocytes is apparent on the ultrastructural level. The junctional complexes of the areae compositae (formerly designated as intercalated disks; see [43]) that connect cardiomyocytes include at least two types of junctions in an amalgamated fashion: desmosomes and adherens junctions. The areae compositae are altered significantly in PKP 2-mutant mice. Associated with the deficiency of PKP 2, DSP is depleted from the desmosomal junctions and accumulates in the cytoplasm. Additionally, DSG 2 expression seems to be reduced in PKP 2-null cardiomyocytes, and desmosomal components were less resistant to detergent extraction, suggesting impaired function of the cell junctions. Therefore, PKP 2 seems to be essential for the regular subcellular distribution of desmoplakin and its accumulation in the areae compositae of cardiomyocytes. Interestingly, Grossmann et al. found no alterations in other PKP 2-expressing epithelia of the mutant animals [49]. This is likely due to the expression of multiple PKP isoforms in many cell types (except for the heart, which expresses only PKP 2), providing functional compensation if one isoform is not functional.

The essential function of PKP 2 in the heart was also demonstrated by the identification of haploinsufficiency of PKP 2 in a hereditary human disease, autosomal-dominant arrhythmogenic right ventricular cardiomyopathy (ARVC; [50]). In ARVC, cardiomyocytes are progressively replaced by fibro-fatty tissue, especially in the right ventricle (for a recent review see [51]). This replacement leads to abnormal electrical conductance with syncopes and tachycardia and an often lethal failure of the mechanical capability of the heart (e.g., "sudden cardiac death" of young athletes). The mechanism leading to ARVC may include apoptosis of cardiomyocytes due to the weak and disrupted intercellular adhesion of cardiomyocytes caused by haploinsufficiency of PKP 2 and the consequent insufficient anchorage of DSP [52].
The loss of cardiomyocytes may therefore lead to the development of scar tissue in the right ventricle. Moreover, transdifferentiation of cardiomyocytes into fibrocytes or adipocytes may take place, probably caused by disturbed Wnt/β-catenin signaling [53, 54]. This is supported by further observations. Knockdown of DSP in cultured atrial myocytes by siRNA results in the redistribution of plakoglobin to the nucleus and the suppression of the canonical Wnt/β-catenin signaling pathway [54]. Genes inducing adipogenesis and fibrogenesis were upregulated in these DSP-deficient cells. A decrease of DSP was also noticed in the cardiomyocytes of PKP 2-deficient mice [49], suggesting that such a cellular transdifferentiation may also occur in ARVC. At least 12 different genes or chromosomal loci have been associated with the autosomal-dominant or recessive types of ARVC so far, including all five known desmosomal genes expressed in cardiomyocytes (i.e., DSG 2, DSC 2, DSP, JUP, and PKP 2).

The loss of PKP 2 may also contribute to the abnormal electrical conductance of the heart [55]. Gap junctions play an essential role in the electrical coupling of cardiomyocytes and in coordinated heart contraction (reviewed in [56]). Downregulation of PKP 2 in primary cardiomyocytes of the rat heart leads to reduced expression of the gap junction protein connexin 43. In addition, a decrease in cellular coupling via gap junctions is also detectable, which may result in disturbed transmission of electrical impulses in the ventricle. Therefore, it appears that PKP 2 can influence the organization of different types of cellular junctions, such as gap junctions and areae compositae, in heart muscle cells.
### 2.3. Plakophilin 3
PKP 3 has a calculated mass of 87,081 Da and is detected with an apparent molecular weight of approximately 87 kDa on Western blot analysis [57, 58]. Strikingly, in contrast to the other PKP genes, the PKP 3 gene does not seem to encode different splice variants. PKP 3 is present in the desmosomes of all cell layers of stratified epithelia and in almost all simple epithelia, with the exception of hepatocytes (Figure 4). In epidermal cells, PKP 3 is expressed in a homogeneous pattern. Furthermore, it is detectable in the desmosomes of some nonepithelial cells, with the notable exception of cardiomyocytes. This fact may explain the severe heart phenotype of PKP 2 loss, since PKP 2 is the only PKP expressed in cardiomyocytes and its loss of function cannot be compensated by the other PKPs. Although PKP 3 is mainly located in desmosomes, a significant proportion of the protein remains soluble in the cytoplasm. In contrast to the other PKPs, PKP 3 has not been detected in the nucleus.

Figure 4
Immunohistochemical staining of sections of human skin (a, b) and liver (c, d) with antibodies against PKP 3. (a) An intensive reaction of desmosomes and cytoplasm is visible upon staining sections of human skin with a monoclonal antibody against PKP 3 (clone PKP3 310.9.1; Progen, Heidelberg). Basal and lower suprabasal keratinocytes exhibit a strong cytoplasmic staining, while desmosomal staining is less prominent. With ongoing differentiation, the desmosomal labeling increases. (b) Eccrine and apocrine (arrows) sweat glands show strong desmosomal labeling with PKP 3-specific antibodies. (c) The reaction of PKP 3-specific antibodies on liver is restricted to bile ductules (arrow; see the description of the liver tissue in the legend to Figure 3), while hepatocytes are completely negative for PKP 3. The inset presents a magnification of a bile ductule of human liver stained with antibodies against PKP 3, exhibiting labeling of the desmosomal junctions. (d) Bile ducts (here in a large portal field) show a clear desmosomal reaction at the apical pole of the cells (arrow). Scale bars: 100 μm (d), 200 μm (a, b, c).
A better understanding of the functions of PKP 3 came from the analyses of PKP 3 knockout mice [59]. In contrast to those of the other two PKPs, the PKP 3 knockout phenotype is fairly mild. PKP 3-null animals are viable and exhibit defects in the morphogenesis and morphology of specific hair follicles. Moreover, alterations in the density and spacing of desmosomes and adherens junctions in the PKP 3-null epidermis and oral cavity were observed (own unpublished observations). Consequently, PKP 3 is involved in the development or maintenance of skin appendages. Other PKP 3-positive epithelia appear normal in PKP 3-null animals. In addition, an upregulation of the expression of specific junctional proteins, such as the other PKPs, was noticed. In comparison with the other two PKPs, the PKP 3 knockout phenotype is modest, which may in part be due to the fact that an additional PKP is coexpressed in most epithelia and may compensate for at least some of the PKP 3 functions. Diseases associated with the loss or heterozygosity of PKP 3 have not been reported so far.

Surprisingly, among the three plakophilins, PKP 3 exhibits the most extensive binding repertoire toward other desmosomal components [60], and in silico it shows the most extensive interaction rate among desmosomal proteins, as predicted for keratinocytes by Cirillo and Prime [61]. It is capable of binding most of the desmosomal proteins, such as all DSG and DSC isoforms, JUP, and DSP; furthermore, it is the only PKP that interacts with the smaller DSC-b isoforms, which lack the binding site for plakoglobin [60]. This implies an apparent binding site for PKP 3 at the juxtamembrane domain of the desmosomal cadherins. Both the PKP 3 head domain and the arm repeats seem to be crucial for these interactions, since most of the interactions with other desmosomal proteins occur in yeast two-hybrid assays only with the entire PKP 3 but not with its individual domains [60].

Further PKP 3 interaction partners that are not linked to cell adhesion are emerging, suggesting a broader biological role of PKP 3. PKP 3 has been shown, for example, to interact with RNA-binding proteins such as poly-A binding protein C1 (PABPC1), FXR1 (fragile X mental retardation 1), and G3BP (GAP SH3 domain-binding protein) in stress granules [62]. Stress granules develop when cells respond to diverse environmental stress conditions, and these particles represent stalled translational complexes (for a recent review of stress granules see [63]). The function of PKP 3 in stress granules and the basis for its integration into stress granules remain unclear, but it seems likely that this is not a general function of all PKPs, since in addition to PKP 3, only PKP 1 but not PKP 2 has the ability to integrate into stress granules.

Another identified PKP 3-binding protein is the dynamin-like protein DNM-1L [64]. DNM-1L is involved in peroxisomal and mitochondrial fission and fusion as well as in the mitochondria-dependent apoptosis of cells [65, 66]. Although the biological significance of this interaction is not clear, it is tempting to speculate that PKP 3 could affect the apoptotic response of cells.
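Since the loss-of-function evidence for the three plakophilins is scattered across Sections 2.1–2.3, a compact summary may help before turning to tumors. The sketch below condenses only what is stated in the text above (a quick-reference data structure, not a complete phenotype catalog):

```python
# Compact summary of the loss-of-function phenotypes discussed in
# Sections 2.1-2.3, condensed from the text above.
loss_of_function = {
    "PKP1": {
        "model": "human, autosomal-recessive mutations",
        "phenotype": "EDSF syndrome: skin fragility, hyperkeratosis, "
                     "alopecia, nail dysplasia [29]",
    },
    "PKP2": {
        "model": "mouse knockout; human haploinsufficiency",
        "phenotype": "embryonic lethal around E10.5 (cardiac defects) [49]; "
                     "ARVC in humans [50]",
    },
    "PKP3": {
        "model": "mouse knockout",
        "phenotype": "viable; defects in hair follicle morphogenesis [59]",
    },
}

for pkp, info in loss_of_function.items():
    print(f"{pkp} ({info['model']}): {info['phenotype']}")
```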
### 2.4. Plakophilins in Tumors
Cellular adhesion molecules, especially components of the adherens junctions such as E-cadherin and β-catenin, have been shown to be important in the development, progression, and metastasis of tumors [67]. Likewise, several desmosomal proteins have also been linked to malignant processes (reviewed in [68]). Reliable data demonstrating a causal link between plakophilins and tumor development are still forthcoming. Thus far, most published studies have focused on the expression of PKPs in tumors and on correlations between PKP expression and tumor prognosis. Well- and moderately differentiated squamous cell carcinomas (SqCCs) of the skin express PKP 1, whereas in poorly differentiated tumors PKP 1 is downregulated [69]. Tumor cells of basal cell carcinomas (BCCs) exhibit a more heterogeneous expression of PKP 1, being confined to small patchy areas [69]. In solid nodular BCCs, PKP 1 expression has been found to be reduced in comparison with the normal overlying epidermis and was hardly detectable in nodules growing close to the basal epidermis. Immunohistochemical analysis of the expression of PKP 1 in oral SqCCs revealed results similar to those obtained with skin tumors [18, 70]. This, however, conflicts with observations made by others [71], who found that PKP 1 is strongly expressed only in a small proportion of well-differentiated SqCCs. Furthermore, these authors found that most of the well-differentiated tumors are negative for PKP 1. Interestingly, using cells derived from oral SqCCs, Sobolik-Delmaire et al. [70] could demonstrate that cell lines expressing low levels of PKP 1 exhibit increased cell motility, which is reduced by ectopic expression of PKP 1. In contrast, another oral SqCC cell line that expresses comparably high levels of PKP 1 becomes more motile and invasive in vitro when PKP 1 is diminished by an shRNA knockdown approach.

Interestingly, in a subset of the oral and pharyngeal SqCCs analyzed by Schwarz et al. [18], nuclear localization of PKP 1 in tumor cells was noticed. This is remarkable, since the adjacent non-neoplastic squamous epithelium did not show nuclear PKP 1. In contrast to PKP 1, immunostaining for PKP 2 in histological sections of SqCCs is low and often restricted to peripherally located tumor cells, or it is even completely absent [18], whereas PKP 3 expression patterns in SqCCs are similar to those of PKP 1. The expression of PKP 3 seems to correlate inversely with the degree of malignancy of the tumors.

An analysis of adenocarcinomas from different organs such as the colon and pancreas revealed that PKP 1 is not detected, whereas PKP 2 and PKP 3 are frequently expressed [18, 72], sometimes associated with a change from an apical desmosomal staining to a staining of almost the complete lateral surface. The only exceptions were prostate adenocarcinomas, which displayed a low level of PKP 1 immunoreactivity. Interestingly, in non-small cell lung carcinomas (NSCLC; adenocarcinomas and SqCCs) and cultured cells derived therefrom, Furukawa et al. observed an elevated expression of PKP 3 [64]. Inhibition of PKP 3 expression by an siRNA approach in cultured NSCLC cells led to reduced colony formation and reduced cell viability. Moreover, overexpression of PKP 3 in COS cells caused an enhanced proliferation rate and elevated activity in an in vitro invasion assay. The authors postulated that PKP 3 may have an oncogenic function when localized in the cytoplasm under certain conditions. It thus appears that PKP 3 can potentially both advance tumorigenesis (as seen in some NSCLCs) and suppress it (as noticed for some SqCCs).
Recent observations suggest that PKP 3 may be involved in the epithelial-mesenchymal transition (EMT), which is especially relevant for the metastasis of tumor cells [73]. Analysis of PKP 3 expression in invasive cancer cells revealed that PKP 3 expression seems to be repressed, at least in breast cancer cells, by the transcription factor ZEB 1 (zinc finger E-box-binding homeobox 1), a potent repressor of E-cadherin expression that is also involved in EMT. Nuclear accumulation of ZEB 1 correlated with a loss of membrane staining for PKP 3. Similar observations have been reported for the repression of PKP 2 by ZEB 2 in colon cancer cells [74]. In conclusion, the precise role of PKPs in tumor development and tumor progression is not clear. It is possible that some of these proteins can function both as oncogenes and as tumor suppressors, depending on the cell type studied. Further research is needed to establish a causal link between PKP expression (or its loss) and cancer.

In summary, in the past few years PKPs have been recognized as essential for desmosomal adhesion and tissue integrity. Nevertheless, recent data suggest that PKPs exert cellular functions unrelated to cell adhesion. Open questions, such as the ability of individual PKPs to compensate for the loss of one isoform and the role of PKPs in cell signaling and in tumor development, need to be investigated further.
---
# The Desmosomal Plaque Proteins of the Plakophilin Family

**Authors:** Steffen Neuber; Mario Mühmer; Denise Wratten; Peter J. Koch; Roland Moll; Ansgar Schmidt

**Journal:** Dermatology Research and Practice

(2010)

**Publisher:** Hindawi Publishing Corporation

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2010/101452

---
## Abstract
Three related proteins of the plakophilin family (PKP 1–3) have been identified as junctional proteins that are essential for the formation and stabilization of desmosomal cell contacts. Failure of PKP expression can have fatal effects on desmosomal adhesion, leading to abnormal tissue and organ development. Thus, loss of functional PKP 1 in humans leads to ectodermal dysplasia/skin fragility (EDSF) syndrome, a genodermatosis with severe blistering of the epidermis as well as abnormal keratinocyte differentiation. Mutations in the human PKP 2 gene have been linked to severe heart abnormalities that lead to arrhythmogenic right ventricular cardiomyopathy (ARVC). In the past few years it has been shown that junctional adhesion is not the only function of PKPs. These proteins have been implicated in cell signaling, organization of the cytoskeleton, and control of protein biosynthesis under specific cellular circumstances. Clearly, PKPs are more than just cell adhesion proteins. In this paper we give an overview of our current knowledge of the very distinct roles of plakophilins in the cell.
---
## Body
## 1. Introduction
Cellular adhesion is mediated by distinct protein complexes at the cytoplasmic membrane, termed junctions, which have been characterized by their morphology on the ultrastructural level [1]. Desmosomes reveal a characteristic appearance and anchor different types of intermediate filaments (IF) to the cell membrane. The fundamental functional importance of desmosomal cell contacts for cellular and tissue architecture, differentiation, development, and tissue stability is generally accepted and has previously been described [2–4]. Experimental evidence for the importance of desmosomal adhesion in specific tissues and organs has been established by knockout experiments of desmosomal genes in mice (see, e.g., [5]). Moreover, the examination of a variety of human diseases characterized by a loss or impairment of desmosomal adhesion, regardless of genetic, autoimmune, or infectious etiology, has advanced our understanding of desmosomal function [6]. Desmosomes are formed by all epithelial tissues and tumors derived therefrom, as well as by specific nonepithelial tissues such as heart muscle cells. Desmosomal cadherins (i.e., the desmogleins, DSGs, and desmocollins, DSCs) located on adjacent cells mediate intercellular connection via interactions of their extracellular domains (for review see [7]). On the cytoplasmic side of the plasma membrane, IF are linked to the desmosomal cadherins via desmosomal plaque proteins. Besides the constitutive desmosomal plaque proteins desmoplakin (DSP) and plakoglobin (JUP), at least one of the three classical members of the plakophilin family (PKP 1 to PKP 3) is required for the formation of functional desmosomes [8–10]. The role of PKPs in cellular adhesion has been analyzed in detail during the past decade [8–10]. However, additional functions of the plakophilins that are not directly linked to desmosomal adhesion have recently been described. In this review we provide insights not only into the known properties and functions of plakophilins in desmosomes, but also into cellular functions not related to adhesion.
## 2. Common Features of the Plakophilins
Plakophilins are probably the most basic proteins identified in cellular adhesion complexes so far, with an isoelectric point (pI) of about 9.3. Based on their primary sequences, PKPs have been classified as a distinct subfamily of the armadillo repeat proteins (for review see [11]). The carboxyl-terminal part of the proteins comprises nine armadillo repeats with a spacer sequence between the fifth and sixth repeat that leads to a characteristic kink in the domain structure, as determined by crystallography of the armadillo domain of PKP 1 [12]. The amino-terminal parts (head domains) of the three plakophilins are rather diverse and exhibit no obvious homology to each other or to other proteins. Only a small sequence near the amino-terminus, designated homology region (HR) 2, shows some degree of homology among the plakophilins. An analysis of amino acid sequence homology reveals that the PKPs are related to the catenin proteins of the p120ctn group, which are associated with classical cadherins, such as E-cadherin, in adherens junctions. The PKPs are more distantly related to the classical catenins β-catenin and plakoglobin [8, 13]. PKPs show complex but overlapping expression patterns in mammalian tissues. Certain cells and tissues express only one type of PKP. Mutations affecting the corresponding PKP can thus lead to severe diseases in these tissues, since compensatory PKP isoforms are either not expressed or may not substitute for all functional aspects. This probably explains the severe skin diseases caused by PKP 1 mutations and the heart diseases caused by PKP 2 mutations.
### 2.1. Plakophilin 1
PKP 1 is the smallest of the plakophilins, with a calculated molecular weight of 80,497 Da and an apparent molecular weight of approximately 75 kDa as judged by SDS-PAGE [14]. This protein is localized in the desmosomes of stratified, complex, and transitional epithelia but is absent in simple epithelia [14–16]. In stratified epithelia, PKP 1 is synthesized in all cell layers, with an increase in expression from the basal to the granular compartment, as determined by quantification of the PKP 1-specific immunofluorescence signal intensity in human epidermis [17]. This indicates that PKP 1 is a marker for keratinocyte differentiation. PKP 1 appears, however, to be absent in the stratum corneum of stratified squamous epithelia (Figure 1).

Figure 1

Immunohistochemical staining of sections of human skin with antibodies against PKP 1. Sections of formaldehyde-fixed tissue samples of human skin were stained with a monoclonal antibody (clone PP1 5C2; Progen, Heidelberg; for methods see [18]) against PKP 1 (a to d). (a) Overview of the epidermis showing a strong reaction of the antibodies at the desmosomes of all layers. (b) At higher magnification, the basal layers exhibit a somewhat weaker desmosomal staining that can occasionally be resolved into individual spot-like desmosomes containing PKP 1. During keratinocyte differentiation, the desmosomal labeling becomes more pronounced. (c) Cross-section of a hair follicle (Hf) with desmosomal staining of the outer root sheath, while the hair shaft is not stained (Sg, sebaceous gland). The arrow marks the duct of a sebaceous gland. (d) Eccrine sweat ducts are marked intensively by the antibodies, while the secretory portions of eccrine glands show a distinct but weaker staining (arrow). Apocrine sweat glands (lower left corner) are negative. Scale bars: 100 μm (b); 200 μm (a, c, and d).
The human PKP 1 gene is expressed as two different splice variants which differ with respect to cell-biological behavior, molecular weight, and abundance. PKP 1a is the smaller isoform, while the larger PKP 1b isoform (predicted molecular weight: 82,860 Da) is less abundant in stratified epithelia. The additional amino acid sequence contained in PKP 1b is encoded by exon 7, which is spliced out of the PKP 1a mRNA [19]. The PKP 1b-specific amino acid sequence is located at the end of the fourth armadillo repeat and has a distinct effect on the cell-biological activities of the protein. In addition to its desmosomal localization, PKP 1 has been detected in the nucleus of a broad range of cell types, even in those that do not incorporate PKP 1 in desmosomes, such as simple epithelial cells [14, 19]. This distinct subcellular distribution has been observed for both variants of PKP 1. While the smaller PKP 1a may also be present in desmosomes, PKP 1b localization is restricted to the nucleus and is not detectable in desmosomes. This conclusion is supported by transfection of cDNAs into cultured cells, where PKP 1a accumulates in desmosomes and is also rapidly transferred into the nucleus, while PKP 1b is exclusively nuclear (own observations). Nevertheless, neither the way PKP 1 enters the nucleus nor its functions there are yet known.

Both the nuclear and the desmosomal PKP 1 pools are rapidly degraded by caspases during apoptosis of keratinocytes, suggesting that this protein is involved in the remodeling of the cytoskeleton under these conditions [20]. Signaling functions, as shown for some of the related catenins such as β-catenin, plakoglobin, and p120ctn, have been postulated for PKP 1, but proof is still lacking [21, 22]. A typical nuclear localization signal has not been identified in the protein so far, but cDNA transfection studies of the complete protein or of individual parts of the protein have shown that the head domain on its own, and to some extent the armadillo domain, are able to enter the nucleus [23]. The mechanism of PKP 1 nuclear migration is currently unknown, but may utilize a piggyback mechanism.

Various in vitro approaches revealed that the binding of desmosomal PKP 1 to other desmosomal proteins such as DSP, DSG 1, DSC 1, and different keratins is mediated by its head domain sequence [24–28]. The armadillo repeat domain of PKP 1 alone is sufficient to localize the protein to the plasma membrane [23]. The PKP 1 binding partner at the plasma membrane has not been determined but might be one of the desmosomal proteins or even cortical actin. In particular, it has been observed that the armadillo domain coaligns with actin microfilaments under certain circumstances and may be involved in the reorganization of this cytoskeletal component [26]. Nevertheless, the carboxyl-terminal part, in particular the last 40 amino acids, seems to be essential for the recruitment of the entire PKP 1 to the plasma membrane, as shown by transfection studies of mutant cDNA constructs into A431 keratinocytes [28].

Important clues for the understanding of PKP 1 function came from a report of an autosomal-recessive genodermatosis that is caused by mutations in the PKP 1 gene [29]. The ectodermal dysplasia/skin fragility (EDSF) syndrome (OMIM 604536; the known mutations in the PKP 1 gene are shown in Figure 2, and published cases of EDSF syndrome are listed in Table 1) clinically manifests in the skin and its appendages.
Patients suffer from blistering with erosions of their skin upon mechanical stress. Nails are dystrophic, and the epidermis of the soles and palms displays hyperkeratosis. The hair density on the scalp, eyebrows, and eyelashes is reduced. In severe cases, hair might be completely absent from these body regions. Impaired sweating has occasionally been observed. All other epithelial tissues that express PKP 1, including mucous membranes, seem to be normal in these patients, suggesting functional compensation by the other PKPs. Histological examination of affected skin reveals that the intercellular space is widened and epidermal keratinocytes are acantholytic from the suprabasal layers upwards, suggesting loss of cell-cell adhesion. Cell rupture, as noticed in epidermolytic bullous dermatoses, has not been observed. Immunofluorescence microscopy analyses of patients' skin biopsies showed that certain desmosomal components such as desmogleins, desmocollins, and plakoglobin are still localized at the plasma membrane. In contrast, PKP 1 is completely absent or drastically reduced [30]. As a consequence, desmoplakin is no longer localized in the desmosomes but instead is dispersed throughout the cytoplasm. On the ultrastructural level, desmosomes appear smaller and are numerically reduced in the affected epidermal layers. Additionally, keratin filaments have lost contact with the desmosomal junctions and are collapsed around the nucleus. Biochemical analysis of patients' skin revealed that the other PKPs are upregulated to some extent and may partly compensate for the loss of PKP 1 in nonaffected epidermal layers [17]. Interestingly, the extent to which the protein is truncated by the mutations in the PKP 1 gene does not seem to matter for the development of the clinicopathological findings of EDSF syndrome. In a case reported by McGrath and colleagues, the mutations occurred close to the amino-terminus of the protein, which could result either in a severely truncated protein or, more likely, in complete loss of the protein (i.e., a functional null mutation), as judged by immunofluorescence microscopy [29]. In contrast, the mutations in the PKP 1 gene reported by Hamada et al. occurred near the carboxyl-terminus, resulting in the expression of a truncated protein. Based on the mild phenotype of the EDSF syndrome in these patients, it can be assumed that this truncated protein is at least partially functional, although the clinicopathology of EDSF still manifests [30]. Surprisingly, most of the EDSF-related mutations in the human PKP 1 gene are splice-site mutations (8 out of 13 known mutated alleles), leading to impaired splicing products and subsequent mRNA degradation or the generation of truncated proteins. The reason for the prevalence of splice-site mutations in EDSF is not known.

Table 1
Published cases of EDSF syndrome with clinical features and observed mutations in the PKP 1 gene.

| Case¹ | Epidermal fragility | Hyperkeratosis on palms/soles | Alopecia | Nail dysplasia | Hypohidrosis | Observed mutations² | Reference |
|---|---|---|---|---|---|---|---|
| 1 | yes | yes | yes | yes | yes | (a) p.Q304X (b) c.1132ins28 | [29] |
| 2 | yes | yes | yes | yes | yes | (a) p.Y71X (b) IVS1-1G>A | [31] |
| 3 | yes | yes | yes | yes | no | IVS6-2A>T | [32] |
| 4 | yes | yes | yes | yes | yes | IVS11+1G>A | [30] |
| 5 | yes | no | yes | yes | no | IVS4-2A>G | [33] |
| 6 | yes | yes | yes | yes | not observed | IVS1-1G>A | [33] |
| 7 | yes | yes | no | yes | no | IVS9+1G>A | [34] |
| 8 | yes | yes | yes | yes | no | (a) c.1053T>A + IVS5+1G>A (b) IVS10-2G>T | [35] |
| 9 | yes | yes | yes | yes | no | c.888delC | [36] |
| 10 | yes | yes | yes | yes | not observed | p.R672X | [37] |

¹ Numbering of the cases corresponds to the positions of the mutations shown in Figure 2.
² For compound heterozygosity, the mutations of both alleles are given as (a) and (b).

Figure 2
Position of mutations in the human PKP 1 gene. Schematic representation of the protein structure of PKP 1 with the head domain ("Head", in blue) containing the homology region 2 ("HR2") near the amino-terminus, followed by nine armadillo repeats (yellow boxes; numbered in circles from 1 to 9) and a short domain (blue) at the carboxyl-terminus. Positions of homozygous mutations are marked by double arrows, positions of compound heterozygous mutations by connected arrows. Green arrows designate mutations affecting the coding region; red arrows denote splice-site mutations. For the numbering and references of the mutations see Table 1.

These findings, in conjunction with cell-biological data obtained in transfection studies, convincingly illustrate that PKP 1 is essential for the recruitment of desmoplakin to the desmosomal plaque and is probably involved in the lateral enlargement of the plaque structure in skin, explaining the structural and functional defects in epidermal desmosomes lacking PKP 1. Evidently, integration of PKP 1 into the desmosomes provides the epidermal keratinocytes with stability against mechanical stress. A sequence stretch in the HR2 domain of PKP 1 is thought to be essential for the recruitment of DSP and represents a conserved motif of all the PKPs, suggesting that DSP recruitment is a common function of all PKPs [28].

Although a direct interaction of PKP 1 with keratins has frequently been demonstrated in vitro, it is not clear whether this protein alone is sufficient to connect the intermediate filament cytoskeleton to the desmosome. Specific inactivation of DSP in the skin of mice demonstrates the necessity of both proteins, DSP and PKP 1 (in cooperation with plakoglobin), for the anchorage of keratins [38], suggesting that all three components are required. This is further demonstrated by the fact that failure of either PKP 1 or DSP can lead to loss of cell-cell adhesion and acantholysis in the epidermis. The mechanism underlying the failure of epidermal desmosomes lacking PKP 1 to maintain adhesion is not known. It is tempting to speculate that, besides structural defects, cell signaling defects could contribute to this phenomenon, similar to the disease mechanisms postulated for the autoimmune blistering diseases of the pemphigus group, in which autoantibodies target desmosomal cadherins. Binding of autoantibodies to the desmosomal cadherins seems to trigger intracellular signaling pathways that lead to the reorganization of the cytoskeleton, involving the disconnection of the desmosomal cadherins of adjacent cells (for the mechanisms of this outside-in signaling see [39]). The same pathways may be involved in the dissolution of desmosomal adhesion when PKP 1 is lost. Given that patients with PKP 1 null mutations show defects in differentiation pathways affecting skin appendage formation and homeostasis, it is unlikely that adhesion defects can account for the entire spectrum of disease phenotypes.

Keratinocytes derived from patients suffering from EDSF syndrome exhibit some interesting properties. Quantitative analyses of desmosome size in cultured cells revealed that reintroduction of PKP 1 increases the lateral extent of desmosomes. As proposed by others [25, 40], desmosomal cohesiveness might be increased by lateral interactions of PKP 1 with DSP, making additional linkages between desmosomal proteins and the keratin network accessible [41].
It is noteworthy that PKP 1-null keratinocytes show increased cell migration, which has implications for tumor biology.
### 2.2. Plakophilin 2
PKP 2 is, with a predicted mass of 92,756 Da and an apparent molecular weight of 100 kDa (estimated from Western blot analysis), the largest of the three plakophilins, and it is also the prevailing isoform, since it is expressed in all cell types with desmosomal junctions [42]. PKP 2 is found in the basal cells of certain stratified epithelia, while more differentiated keratinocytes are negative for desmosomal PKP 2 (Figure 3). Moreover, PKP 2 has recently also been found in new types of cell junctions which differ in terms of their biochemical composition from both classical desmosomes and conventional adherens junctions (reviewed in [43]). Similar to PKP 1, PKP 2 also occurs as two different splice variants. An additional exon coding for 44 amino acids is integrated into PKP 2b close to the border between the second and third armadillo repeats of the protein [42]. The two PKP 2 splice variants appear to be coexpressed in all cell types analyzed thus far, and it is not known whether these two proteins have different functions.

Figure 3

Immunohistochemical staining of sections of human skin (a and b) and liver (c and d) with antibodies against PKP 2. (a) The staining of samples of human skin with a monoclonal antibody against PKP 2 (clone PP2-150; Progen, Heidelberg) demonstrates a weak and delicate desmosomal staining as well as cytoplasmic staining in the basal layer of the interfollicular epidermis (arrow). Suprabasal keratinocytes remain unstained. (b) Eccrine sweat glands and ducts show a strong reaction with PKP 2-specific antibodies, while apocrine sweat glands exhibit an apical, distinct but weak desmosomal reaction (arrow). (c) Hepatocytes as well as bile ductules are marked at the cell-cell contacts by PKP 2-specific antibodies (arrow). (d) Bile ducts also show a sharp and apical staining of desmosomal structures by the PKP 2 antibodies. The samples shown in (c) and (d) are derived from liver tissue in the vicinity of a metastasis of a gastrointestinal stromal tumor with portal and periportal fibrosis and ductal and ductular proliferation. Scale bars: 100 μm (d); 200 μm (a, b, and c).
Like PKP 1, PKP 2 has been detected in the nucleus of many cell types [42]. Its presence in the nucleus is independent of its presence in desmosomes. Some nonepithelial cell types which do not assemble desmosomes show only nuclear localization of PKP 2 (e.g., fibroblasts [42]). In stratified epithelia, nuclear and desmosomal localization of PKP 2 is regulated independently. In the differentiated layers of stratified epithelia, PKP 2 is excluded from desmosomes and accumulates in the nuclei of keratinocytes. Recently, Müller and colleagues identified a molecular pathway that appears to regulate the nuclear accumulation of PKP 2 [44]. The Cdc25C-associated kinase 1 (C-TAK 1) appears to be involved in cell-cycle regulation and Ras signaling. It was shown that C-TAK 1 phosphorylates Cdc25C and KSR1, a scaffold protein for mitogen-activated protein kinase (MAPK) and Raf-1 kinase. Müller et al. demonstrated that PKP 2 is also a substrate for C-TAK 1 [44]. This phosphorylation of PKP 2 enforces an interaction of PKP 2 with 14-3-3 proteins, which prevents the nuclear accumulation of PKP 2. Consequently, mutation of the C-TAK 1 phosphorylation site or of the 14-3-3 binding domain in PKP 2 increases the nuclear accumulation of PKP 2. The pathways that trigger the C-TAK 1-mediated phosphorylation of PKP 2 and its retention in the cytoplasm have not been analyzed so far.

What does PKP 2 do in the nucleus? Recent experiments by Mertens and colleagues provided some insights [45]. Immunoprecipitation experiments revealed an association of PKP 2 with the largest subunit of the RNA polymerase III holoenzyme, protein RPC155, as well as with other components such as RPC82 and RPC39. The PKP 2-positive complexes also contain the RNA polymerase III-associated transcription factor TFIIIB but not TFIIIC. The colocalization of PKP 2 and RPC155 in particles in the interchromatin space has been shown by immunofluorescence microscopy. Mertens and colleagues [45] postulated that these particles do not represent active forms of polymerase III, because the PKP 2-positive particles do not contain the transcription factor TFIIIC, a factor required for the formation of an active RNA polymerase III complex. Thus, the actual function of these complexes remains unclear. Nevertheless, the almost general appearance of PKP 2, as well as of PKP 1, in the nucleus seems to differ fundamentally from the nuclear localization of other related catenins such as β-catenin or p120ctn, which are translocated into the nucleus upon specific signals and have been shown to be involved in gene regulation [21, 22].

Besides these nuclear functions, PKP 2 may be involved in cytoplasmic signaling, based on the observation that it can bind β-catenin [46], a key downstream effector protein of the canonical Wnt signaling pathway [21]. Using two-hybrid and immunoprecipitation assays, it was shown that PKP 2 can bind to β-catenin. However, when bound to PKP 2, β-catenin cannot associate with E-cadherin, which may reduce the pool of β-catenin available to function in cell adhesion. Overexpression of PKP 2 in colon carcinoma cells leads to an increase in β-catenin/TCF signaling, suggesting a regulatory role of PKP 2 in Wnt signaling and providing a potential functional link between desmosomal adhesion and signaling [46].

PKP 2 also seems to be involved in the assembly of the desmosomal components into desmosomes.
siRNA-mediated depletion of PKP 2 in keratinocytes leads to changes in the subcellular localization of DSP which mimic the behavior of a DSP mutant lacking a PKCα (protein kinase C α) phosphorylation site. Different isoforms of PKC have been implicated in the regulation of cellular processes such as migration, cellular adhesion, and cytoskeletal reorganization (for review see [47]). Bass-Zubek et al. investigated the connection between PKP 2, DSP, and PKC [48]. The authors found that PKP 2 binds to PKCα and DSP via its head domain. A detailed analysis revealed that PKP 2 binds DSP and PKCα simultaneously, which facilitates the subsequent phosphorylation of DSP at its IF-binding domain by PKC [48]. This increases DSP integration into the desmosomes and the subsequent attachment of IFs to desmoplakin.

Insights into the function of PKP 2 also came from gene knockout experiments in mice, as well as from the analysis of an autosomal-dominant human hereditary disease linked to PKP 2 mutations [49, 50]. Ablation of the PKP 2 gene in mice leads to a lethal phenotype around mid-gestation (E10.5) [49]. Homozygous PKP 2-null embryos died because of severe alterations of the heart structure, resulting in the outflow of blood into the pericardium and subsequent collapse of the embryonic blood circulation. On the microscopic level, PKP 2-deficient hearts display reduced trabeculation as well as abnormally thin cardiac walls. The reason for the instability of the cell contacts between cardiomyocytes is apparent on the ultrastructural level. The junctional complexes of the areae compositae (formerly designated intercalated disks; see [43]) that connect cardiomyocytes include at least two types of junctions in an amalgamated fashion, desmosomes and adherens junctions. The areae compositae are altered significantly in PKP 2-mutant mice. Associated with the deficiency of PKP 2, DSP is depleted from the desmosomal junctions and accumulates in the cytoplasm. Additionally, DSG 2 expression seems to be reduced in PKP 2-null cardiomyocytes, and desmosomal components were less resistant to detergent extraction, suggesting impaired function of the cell junctions. Therefore, PKP 2 seems to be essential for the regular subcellular distribution of desmoplakin and its accumulation in the areae compositae of cardiomyocytes. Interestingly, Grossmann et al. found no alterations in other PKP 2-expressing epithelia in the mutant animals [49]. This is likely due to the expression of multiple PKP isoforms in many cell types (except for the heart, which expresses only PKP 2), providing functional compensation in case one isoform is not functional.

The essential function of PKP 2 in the heart was also demonstrated by the identification of haplo-insufficiency of PKP 2 in a hereditary human disease, autosomal-dominant arrhythmogenic right ventricular cardiomyopathy (ARVC; [50]). In ARVC, cardiomyocytes are progressively replaced by fibro-fatty tissue, especially in the right ventricle (for a recent review see [51]). This replacement leads to abnormal electrical conductance, with syncopes and tachycardia, and an often lethal failure of the mechanical capability of the heart (e.g., "sudden cardiac death" of young athletes). The mechanism leading to ARVC may include apoptosis of cardiomyocytes due to the weak and disrupted intercellular adhesion of cardiomyocytes caused by haplo-insufficiency of PKP 2 and the subsequent insufficient anchorage of DSP [52].
The decline of cardiomyocytes may therefore lead to the development of scar tissue in the right ventricle. Moreover, transdifferentiation of cardiomyocytes into fibro- or adipocytes may take place, probably caused by disturbed Wnt/β-catenin signaling [53, 54]. This is supported by further observations. The siRNA-mediated decrease of DSP in cultured atrial myocytes results in the redistribution of plakoglobin to the nucleus and the suppression of the canonical Wnt/β-catenin signaling pathway [54]. Genes inducing adipogenesis and fibrogenesis were upregulated in these DSP-deficient cells. A decrease of DSP was also noticed in the cardiomyocytes of PKP 2-deficient mice [49], suggesting that such a cellular transdifferentiation may also occur in ARVC. At least 12 different genes or chromosomal loci have been associated with the autosomal-dominant or recessive types of ARVC so far, including all five known desmosomal genes expressed in cardiomyocytes (i.e., DSG 2, DSC 2, DSP, JUP, and PKP 2).

The loss of PKP 2 may also contribute to the abnormal electrical conductance of the heart [55]. Gap junctions play an essential role in the electrical coupling of cardiomyocytes and coordinated heart contraction (reviewed in [56]). Downregulation of PKP 2 in primary cardiomyocytes of the rat heart leads to reduced expression of the gap junction protein connexin 43. In addition, a decrease in cellular coupling via gap junctions is also detectable, which may result in disturbed transmission of electrical impulses in the ventricle. Therefore, it appears that PKP 2 can influence the organization of different types of cellular junctions, such as gap junctions and areae compositae, in heart muscle cells.
### 2.3. Plakophilin 3
PKP 3 has a calculated mass of 87,081 Da and is detected with an apparent molecular weight of approximately 87 kDa on Western blot analysis [57, 58]. Strikingly, in contrast to the other PKP genes, the PKP 3 gene does not seem to encode different splice variants. PKP 3 is present in the desmosomes of all cell layers of stratified epithelia and in almost all simple epithelia, with the exception of hepatocytes (Figure 4). In epidermal cells, PKP 3 is expressed in a homogeneous pattern. Furthermore, it is detectable in the desmosomes of some nonepithelial cells, with the notable exception of cardiomyocytes. This fact may explain the severe heart phenotype of PKP 2 loss, since PKP 2 is the only PKP expressed in cardiomyocytes and its loss of function cannot be compensated by the other PKPs. Although PKP 3 is mainly located in desmosomes, a significant proportion of the protein remains soluble in the cytoplasm. In contrast to the other PKPs, PKP 3 has not been detected in the nucleus.

Figure 4

Immunohistochemical staining of sections of human skin (a and b) and liver (c and d) with antibodies against PKP 3. (a) An intensive reaction of desmosomes and cytoplasm is visible upon staining sections of human skin with a monoclonal antibody against PKP 3 (clone PKP3 310.9.1; Progen, Heidelberg). Basal and lower suprabasal keratinocytes exhibit a strong cytoplasmic staining, while desmosomal staining is less prominent. With ongoing differentiation, the desmosomal labeling increases. (b) Eccrine and apocrine (arrows) sweat glands show strong desmosomal labeling with PKP 3-specific antibodies. (c) The reaction of PKP 3-specific antibodies on liver is restricted to bile ductules (arrow; see the description of the liver tissue in the legend to Figure 3), while hepatocytes are completely negative for PKP 3. The insert presents a magnification of a bile ductule of human liver stained with antibodies against PKP 3, exhibiting labeling of the desmosomal junctions. (d) Bile ducts (here in a large portal field) show a clear desmosomal reaction at the apical pole of the cells (arrow). Scale bars: 100 μm (d); 200 μm (a, b, and c).
A better understanding of the functions of PKP 3 came from the analysis of PKP 3 knockout mice [59]. In contrast to the other two PKPs, the PKP 3 knockout phenotype is fairly mild. PKP 3-null animals are viable and exhibit defects in the morphogenesis and morphology of specific hair follicles. Moreover, alterations in the density and spacing of desmosomes and adherens junctions in the epidermis and oral cavity of PKP 3-null animals were observed (own unpublished observations). Consequently, PKP 3 is involved in the development or maintenance of skin appendages. Other PKP 3-positive epithelia appear normal in PKP 3-null animals. In addition, an upregulation of the expression of specific junctional proteins, such as the other PKPs, was noticed. In comparison to the other two PKPs, the PKP 3 knockout phenotype is modest, which may in part be due to the fact that an additional PKP is coexpressed in most epithelia and may compensate for at least some of the PKP 3 functions. Diseases associated with the loss or heterozygosity of PKP 3 have not been reported so far.

Surprisingly, among the three plakophilins, PKP 3 exhibits the most extensive binding repertoire to other desmosomal components [60], and in silico it demonstrates the most extensive interaction rate of the desmosomal proteins, as predicted for keratinocytes by Cirillo and Prime [61]. It is capable of binding to most of the desmosomal proteins, such as all DSG and DSC isoforms, JUP, and DSP, and furthermore it is the only PKP that interacts with the smaller DSC-b isoforms, which lack the binding site for plakoglobin [60]. This implies an apparent binding site for PKP 3 at the juxtamembrane domain of the desmosomal cadherins. Both the PKP 3 head domain and the arm repeats seem to be crucial for these interactions, since most of the interactions with other desmosomal proteins occur in the yeast two-hybrid assay only with the entire PKP 3 but not with the individual domains [60].

Further PKP 3 interaction partners that are not linked to cell adhesion are emerging, suggesting a broader biological role of PKP 3. PKP 3 has been shown, for example, to interact with RNA-binding proteins such as poly(A)-binding protein C1 (PABPC1), FXR1 (fragile X mental retardation-1), and G3BP (GAP SH3 domain-binding protein) in stress granules [62]. Stress granules develop when cells respond to diverse environmental stress conditions, and these particles represent stalled translational complexes (for a recent review of stress granules see [63]). The function of PKP 3 in stress granules and the basis for its integration into stress granules remain unclear, but it seems likely that this is not a general function of all PKPs, since, in addition to PKP 3, only PKP 1 but not PKP 2 has the ability to integrate into stress granules.

Another PKP 3-binding protein identified is the dynamin-like protein DNM-1L [64]. DNM-1L is involved in peroxisomal and mitochondrial fission and fusion as well as in mitochondria-dependent apoptosis of cells [65, 66]. Although the biological significance of this interaction is not clear, it is tempting to speculate that PKP 3 could affect the apoptotic response of cells.
### 2.4. Plakophilins in Tumors
Cellular adhesion molecules, especially components of the adherens junctions such as E-cadherin and β-catenin, have been shown to be important in the development, progression, and metastasis of tumors [67]. Likewise, several desmosomal proteins have also been linked to malignant processes (reviewed in [68]). Reliable data demonstrating a causal link between plakophilins and tumor development are still forthcoming. Thus far, most published studies have focused on the expression of PKPs in tumors and on correlations between PKP expression and tumor prognosis. Well- and moderately differentiated squamous cell carcinomas (SqCC) of the skin express PKP 1, whereas in poorly differentiated tumors PKP 1 is downregulated [69]. Tumor cells of basal cell carcinomas (BCC) exhibit a more heterogeneous expression of PKP 1, being confined to small patchy areas [69]. In solid nodular BCCs, PKP 1 expression has been found to be reduced in comparison to the normal overlying epidermis and was hardly detectable in nodules growing close to the basal epidermis. Immunohistochemical analysis of the expression of PKP 1 in oral SqCCs revealed results similar to those obtained with skin tumors [18, 70]. This, however, conflicts with observations made by others [71], who found that PKP 1 is strongly expressed only in a small proportion of well-differentiated SqCCs. Furthermore, these authors found that most of the well-differentiated tumors are negative for PKP 1. Interestingly, using cells derived from oral SqCCs, Sobolik-Delmaire et al. [70] could demonstrate that cell lines expressing low levels of PKP 1 exhibit increased cell motility, which is reduced by ectopic expression of PKP 1. In contrast, another oral SqCC cell line that expresses comparably high levels of PKP 1 becomes more motile and invasive in vitro when PKP 1 is diminished by an shRNA knock-down approach.

Interestingly, in a subset of the oral and pharyngeal SqCCs analyzed by Schwarz et al. [18], nuclear localization of PKP 1 in tumor cells was noticed. This is remarkable since the adjacent non-neoplastic squamous epithelium did not show nuclear PKP 1. In contrast to PKP 1, immunostaining for PKP 2 in histological sections of SqCC is low and often restricted to peripherally located tumor cells, or is even completely absent [18], whereas PKP 3 expression patterns in SqCC are similar to those of PKP 1. The expression of PKP 3 seems to correlate inversely with the degree of malignancy of the tumors.

An analysis of adenocarcinomas from different organs such as colon and pancreas revealed that PKP 1 is not detected, whereas PKP 2 and PKP 3 are frequently expressed [18, 72], sometimes associated with a change from an apical desmosomal staining to a staining of almost the complete lateral surface. The only exceptions were prostate adenocarcinomas, which displayed a low level of PKP 1 immunoreactivity. Interestingly, in non-small cell lung carcinomas (NSCLC; adenocarcinomas and SqCC) and in cultured cells derived thereof, Furukawa et al. observed an elevated expression of PKP 3 [64]. Inhibition of PKP 3 expression by an siRNA approach in cultured NSCLC cells led to reduced colony formation and reduced cell viability. Moreover, overexpression of PKP 3 in COS cells caused an enhanced proliferation rate and elevated activity in an in vitro invasion assay. The authors postulated that PKP 3 may have an oncogenic function when localized in the cytoplasm under certain conditions. It thus appears that PKP 3 can potentially both advance tumorigenesis (as seen in some NSCLC) and suppress it (as noticed for some SqCCs).
Recent observations suggest that PKP 3 may be involved in the epithelial-mesenchymal transition (EMT), which is especially relevant for the metastasis of tumor cells [73]. Analysis of PKP 3 expression in invasive cancer cells revealed that PKP 3 expression seems to be repressed by the transcription factor ZEB 1 (zinc finger E-box-binding homeobox 1), a potent repressor of E-cadherin expression that is also involved in EMT, at least in breast cancer cells. Nuclear accumulation of ZEB 1 correlated with a loss of membrane staining for PKP 3. Similar observations have been reported for the repression of PKP 2 by ZEB 2 in colon cancer cells [74]. In conclusion, the precise role of PKPs in tumor development and tumor progression is not clear. It is possible that some of these proteins can function both as oncogenes and as tumor suppressors, depending on the cell type studied. Further research is needed to establish a causal link between PKP expression (or loss of expression) and cancer.

In summary, in the past few years PKPs have been recognized as essential for desmosomal adhesion and tissue integrity. Nevertheless, recent data suggest that PKPs also exert cellular functions unrelated to cell adhesion. Open questions, such as the ability of individual PKPs to compensate for the loss of one isoform and the role of PKPs in cell signaling and in tumor development, remain to be investigated.
## 2.1. Plakophilin 1
PKP 1 is the smallest of the plakophilins, with a calculated molecular weight of 80.497 Da and an apparent molecular weight of approximately 75 kDa as judged by SDS-PAGE [14]. This protein is localized in the desmosomes of stratified, complex, and transitional epithelia but is absent in simple epithelia [14–16]. In stratified epithelia, PKP 1 is synthesized in all cell layers, with an increase in expression from the basal to the granular compartment as determined by quantifications of PKP 1-specific immunofluorescence signal intensity in human epidermis [17]. This indicates that PKP1 is a marker for keratinocyte differentiation. PKP 1 appears to be absent in the stratum corneum of stratified squamous epithelia though (Figure 1).Immunohistochemical staining of sections of human skin with antibodies against PKP 1. Sections of formaldehyde-fixed tissue samples of human skin were stained with a monoclonal antibody (clone PP1 5C2; Progen, Heidelberg; for methods see [18]) against PKP 1 a to d. (a) Overview of epidermis showing a strong reaction of the antibodies at the desmosomes of all layers. (b) At a higher magnification, the basal layers exhibit a somewhat weaker desmosomal staining that can be resolved occasionally into individual spot-like desmosomes containing PKP 1. During keratinocyte differentiation, desmosomal labeling is getting more pronounced. (c) Cross-section of a hair follicle (Hf) with desmosomal staining of the outer root sheath while the hair-shaft is not stained (Sg, sebaceous gland). Arrow marks the duct of a sebaceous gland. (d) Eccrine sweat ducts are marked intensively by antibodies while the secretory portions of eccrine glands show a distinct but weaker staining (arrow). Apocrine sweat glands (lower left corner) are negative. Scale bars: 100 μm (b); 200 μm a, c, and d.
(a)(b)(c)(d)The human PKP 1 gene is expressed as two different splice variants which differ with respect to cell-biological behavior, molecular weight, and abundance. PKP1a is the smaller isoform while the larger PKP1b isoform (predicted molecular weight: 82.860 kDa) is less abundant in stratified epithelia. The additional amino acid sequence contained in PKP1b is encoded by exon 7 which is spliced out of the PKP1a mRNA [19]. The PKP1b-specific amino acid sequence is located at the end of the fourth armadillo repeat and has a distinct effect on the cell biological activities of the protein. In addition to its desmosomal localization, PKP 1 has been detected in the nucleus of a broad range of cell types, even in those that do not incorporate PKP 1 in desmosomes such as simple epithelial cells [14, 19]. This distinct subcellular distribution has been observed for both variants of PKP 1. While the smaller PKP 1a may also be present in desmosomes, PKP 1b localization is restricted to the nucleus and not detectable in desmosomes. This conclusion is supported by transfection of cDNAs into cultured cells, where PKP 1a accumulates in desmosomes and is also rapidly transferred into the nucleus, while PKP 1b is only nuclear (own observations). Nevertheless, neither the way PKP 1 enters the nucleus nor the functions of this protein therein are yet known.Both the nuclear and desmosomal PKP 1 pool are degraded by caspases rapidly during apoptosis of keratinocytes suggesting that this protein is involved in the remodeling of the cytoskeleton under these conditions [20]. Signaling functions, as shown for some of the related catenins such as β-catenin, plakoglobin, and p120ctn, have been postulated for PKP 1, but proof is still lacking [21, 22]. A typical nuclear localization signal has not been identified in the protein so far, but cDNA transfection studies of the complete protein or individual parts of the protein into cells have shown that the head domain on its own, and to some extent the armadillo domain, are able to enter the nucleus [23]. The mechanism of the PKP 1 nuclear migration is currently unknown, but may utilize a piggyback mechanism.Various in vitro approaches revealed that the binding of desmosomal PKP 1 to other desmosomal proteins such as DSP, DSG 1, DSC 1, and different keratins is mediated by its head domain sequence [24–28]. The armadillo repeat domain of PKP 1 alone is sufficient to localize the protein to the plasma membrane [23]. The PKP 1 binding partner at the plasma membrane has not been determined but might be one of the desmosomal proteins or even cortical actin. In particular, it has been observed that the armadillo domain coaligns with actin microfilaments under certain circumstances and may be involved in the reorganization of this cytoskeletal component [26]. Nevertheless, the carboxyl-terminal part, in particular the last 40 amino acids, seems to be essential for the recruitment of the entire PKP 1 to the plasma membrane as shown by transfection studies of mutant cDNA constructs into A431 keratinocytes [28].Important clues for the understanding of PKP 1 function came from a report of an autosomal-recessive genodermatosis that is caused by mutations in the PKP 1 gene [29]. The ectodermal dysplasia/skin fragility (EDSF) syndrome (OMIM 604536; the collection of known mutations in the PKP 1 gene is shown in Figure 2 and published cases of EDSF syndrome are listed in Table 1) clinically manifests in the skin and its appendages. 
Patients suffer from blistering with erosions of their skin upon mechanical stress. Nails are dystrophic and the epidermis of soles and palms displays hyperkeratosis. The hair density on the scalp, eyebrows, and eyelashes is reduced. In severe cases, hair might be completely absent from these body regions. Impaired sweating has occasionally been observed. All other epithelial tissues that express PKP 1, including mucous membranes, seem to be normal in these patients, suggesting functional compensation by the other PKPs. Histological examination of affected skin reveals that the intercellular space is widened and epidermal keratinocytes are acantholytic from the suprabasal layers upwards, suggesting loss of cell-cell adhesion. Cell rupture, as noticed for epidermolytical bullous dermatosis, has not been observed. Immunofluorescence microscopy analyses of patients’ skin biopsies showed that certain desmosomal components such as desmogleins, desmocollins, and plakoglobin are still localized at the plasma membrane. In contrast, PKP 1 is completely absent or drastically reduced [30]. As a consequence, desmoplakin is no longer localized in the desmosomes but instead is dispersed throughout the cytoplasm. On the ultrastructural level, desmosomes appear smaller and are numerically reduced in the affected epidermal layers. Additionally, keratin filaments have lost contact to desmosomal junctions and are collapsed around the nucleus. Biochemical analysis of patients’ skin revealed that the other PKPs are upregulated to some extent and may compensate in part for the loss of PKP 1 in nonaffected epidermal layers [17]. Interestingly, it does not seem to matter for the development of the clinicopathological findings of EDSF syndrome to what extent the protein is truncated due to the mutations in PKP 1 gene. In a case reported by McGrath and colleagues the mutations occurred close to the amino-terminus of the protein, which could result either in a severely truncated protein or—more likely—in complete loss of the protein (i.e., a functional null mutation) as judged by immunofluorescence microscopy [29]. In contrast, the mutations in the PKP 1 gene reported by Hamada et al. occurred near the carboxyl-terminus resulting in the expression of a truncated protein. Based on the mild phenotype of the ESDF syndrome in these patients, it can be assumed that this truncated protein is at least partially functional but clinicopathology of ESDF still manifests [30]. Surprisingly, most of the EDSF-related mutations in human PKP 1 gene involve splice-site mutations (8 out of 13 known mutated alleles) leading to impaired splicing products and subsequent mRNA degradation or the generation of truncated proteins. The reason for the prevalence of splice-site mutations in EDSF is not known.Table 1
Published cases of EDSF syndrome with clinical features and observed mutations in PKP 1 gene.
Case1Clinicopathological findingsObserved mutations2ReferenceEpidermal fragilityHyperkeratosis on palms/solesAlopeciaNail dysplasiaHypohidrosis1yesyesyesyesyes(a) p.Q304X (b) c.1132ins28[29]2yesyesyesyesyes(a) p.Y71X (b) IVS1-1G>A[31]3yesyesyesyesnoIVS6-2A>T[32]4yesyesyesyesyesIVS11+1G>A[30]5yesnoyesyesnoIVS4-2A>G[33]6yesyesyesyesnot observedIVS1-1G>A[33]7yesyesnoyesnoIVS9+1G>A[34]8yesyesyesyesno(a) c.1053T>A +IVS5+1G>A (b) IVS10-2G>T[35]9yesyesyesyesnoc.888delC[36]10yesyesyesyesnot observedp.R672X[37]1 Numbering of the case correlates to the positions of mutations shown in Figure 2.2 For compound heterozygosity, mutations of both alleles are given as (a) and (b).Figure 2
Position of mutations in human PKP 1 gene. Schematic representation of the protein structure of PKP 1 with head domain “Head” in blue color containing the homologous region 2 “HR2” near the amino-terminus which is followed by nine armadillo repeats (yellow boxes; numbered in circles from 1 to 9). Finally, a short domain (blue) at the carboxyl-terminus is shown. Positions of homozygous mutations are marked by double arrows, positions of compound heterozygous mutations by connected arrows. Green arrows designate mutations affecting the coding region, red arrows denote splice-site mutations. For numbering and references of the mutations see Table1.These findings in conjunction with cell biological data obtained in transfection studies convincingly illustrate that PKP 1 is essential for the recruitment of desmoplakin to the desmosomal plaque and probably is involved into lateral enlargement of the plaque structure in skin, explaining the structural and functional defects in epidermal desmosomes lacking PKP 1. Evidently, integration of PKP 1 in the desmosomes provides the epidermal keratinocytes with stability against mechanical stress. A sequence stretch in the HR2 domain of PKP 1 is thought to be essential for the recruitment of DSP and represents a conserved motif of all the PKPs, suggesting that DSP recruitment is a common function of all PKPs [28].Although a direct interaction of PKP 1 with keratins has been demonstrated frequently in vitro, it is not clear whether this protein alone is sufficient to connect the intermediate filament cytoskeleton to the desmosome. Specific inactivation of DSP in the skin of mice demonstrates the necessity of both proteins, DSP and PKP 1 (in cooperation with plakoglobin), for anchorage of keratins [38] suggesting that all three components are required. This is further demonstrated by the fact that failure of either PKP 1 or DSP can lead to loss of cell-cell adhesion and acantholysis in the epidermis. The mechanism underlying the failure of epidermal desmosomes without PKP 1 to maintain adhesion is not known. It is tempting to speculate that besides structural defects cell signaling defects could contribute to this phenomenon, similar to the disease mechanisms postulated for the autoimmune blistering diseases of the pemphigus group in which autoantibodies target desmosomal cadherins. Binding of autoantibodies to the desmosomal cadherins seems to trigger intracellular signaling pathways that lead to the reorganization of the cytoskeleton involving the disconnection of desmosomal cadherins of adjacent cells (for the mechanisms of this outside-in signaling see [39]). The same pathways may be involved in the dissolution of desmosomal adhesion when PKP 1 is lost. Given that patients with PKP 1 null mutations show defects in differentiation pathways affecting skin appendage formation and homeostasis, it is unlikely that adhesion defects can account for the entire spectrum of disease phenotypes.Analysis of keratinocytes derived from patients suffering from EDSF syndrome exhibits some interesting properties. Quantitative analyses of the desmosome size in cultured cells revealed that reintroduction of PKP 1 increases the lateral extent of desmosomes. As proposed by others [25, 40], desmosomal cohesiveness might be increased by lateral interactions of PKP 1 with DSP, making additional linkage between desmosomal proteins and keratin network accessible [41]. 
It is noteworthy that PKP 1 null keratinocytes show increased cell migration, which has implications for tumor biology.
## 2.2. Plakophilin 2
PKP 2 is, with a predicted mass of 92.756 Da and an apparent molecular weight of 100 kDa (estimated from Western blot analysis), the largest of the three plakophilins and it is also the prevailing isoform since it is expressed in all cell types with desmosomal junctions [42]. PKP 2 is found in the basal cells of certain stratified epithelia while more differentiated keratinocytes are negative for desmosomal PKP 2 (Figure 3). Moreover, PKP 2 has recently also been found in new types of cell junction which differ in terms of their biochemical composition from both classical desmosomes and conventional adherens junctions (reviewed in [43]). Similar to PKP 1, PKP 2 also occurs as two different splice variants. An additional exon coding for 44 amino acids is integrated into PKP 2 b close to the border of the second to third armadillo repeat of the protein [42]. The two PKP 2 splice variants appear to be coexpressed in all cell types analyzed thus far, and it is not known whether these two proteins have different functions.Immunohistochemical staining of sections of human skin a, and b and liver c, and d with antibodies against PKP 2. (a) The staining of samples of human skin with a monoclonal antibody against PKP 2 (clone PP2-150; Progen, Heidelberg) demonstrates a weak and delicate desmosomal staining as well as cytoplasmic staining in the basal layer of the interfollicular epidermis (arrow). Suprabasal keratinocytes remain unstained. (b) Eccrine sweat glands and ducts show a strong reaction with PKP 2-specific antibodies while apocrine sweat glands exhibit an apical, distinct but weak desmosomal reaction (arrow). (c) Hepatocytes as well as bile ductules are marked at the cell-cell contacts by PKP 2-specific antibodies (arrow). (d) Bile ducts also show a sharp and apical staining of desmosomal structure by the PKP 2-antibodies. The samples shown in (c) and (d) are derived from liver tissue in the vicinity of a metastasis of a gastrointestinal stromal tumor with portal and periportal fibrosis and ductal and ductular proliferation. Scale bars: 100μm (d), 200 μm (a, b, c).
(a)(b)(c)(d)Like PKP 1, PKP 2 has been detected in the nucleus of many cell types [42]. Its presence in the nucleus is independent of its presence in desmosomes. Some nonepithelial cell types, which do not assemble desmosomes, show only nuclear localization of PKP 2 (e.g., fibroblasts [42]). In stratified epithelia, nuclear and desmosomal localization of PKP2 is regulated independently. In the differentiated layers of stratified epithelia, PKP 2 is excluded from desmosomes and accumulates in the nuclei of keratinocytes. Recently, Müller and colleagues identified a molecular pathway that appears to regulate nuclear accumulation of PKP 2 [44]. The Cdc25C-associated kinase 1 (C-TAK 1) emerges to be involved in cell-cycle regulation and Ras-signaling. It was shown that C-TAK 1 phosphorylates Cdc25C and KSR1, a scaffold protein for mitogen-activated protein kinase (MAPK) and Raf-1 kinase. Müller et al. demonstrated that PKP 2 is also a substrate for C-TAK 1 [44]. This phosphorylation of PKP 2 enforces an interaction of PKP 2 with 14-3-3 proteins, which prevents the nuclear accumulation of PKP 2. Consequently, mutation of the C-TAK 1 phosphorylation site or the 14-3-3 binding domain in PKP 2 increases nuclear accumulation of PKP 2. The pathways that trigger C-TAK 1-mediated phosphorylation of PKP 2 and its retention in the cytoplasm have not been analyzed so far.What does PKP 2 do in the nucleus? Recent experiments by Mertens and colleagues provided some insights [45]. Immunoprecipitation experiments revealed an association of PKP 2 with the largest subunit of RNA-polymerase-III holoenzyme, protein RPC155, as well as other components such as RPC82 and RPC39. The PKP 2-positive complexes also contain RNA-polymerase-III-associated transcription factor TFIIIB but not TFIIIC. The colocalization of PKP 2 and RPC155 in particles in the interchromatin space has been shown by immunofluorescence microscopy. Mertens and colleagues [45] postulated that these particles do not represent active forms of polymerase-III, because the PKP 2-positive particles do not contain transcription factor TFIIIC, a factor required for the formation of an active RNA polymerase III complex. Thus, the actual function of these complexes remains unclear. Nevertheless, the almost general appearance of PKP 2, as well as PKP 1, in the nucleus seems to differ fundamentally from the nuclear localization of other related catenins such as β-catenin or p120ctn, which are translocated into the nucleus upon specific signals and have been shown to be involved in gene regulation [21, 22].Besides these nuclear functions, PKP 2 may be involved in cytoplasmic signaling, which is based on the observation that it can bindβ-catenin [46], a key downstream effector protein of the canonical Wnt-signaling pathway [21]. Using two-hybrid and immunoprecipitation assay, it was shown that PKP 2 can bind to β-catenin. However, when bound to PKP 2, β-catenin cannot associate to E-cadherin, which may reduce the pool of β-catenin available to function in cell adhesion. Overexpression of PKP 2 in colon carcinoma cells leads to an increase in β-catenin/TCF signaling suggesting a regulatory role of PKP 2 in Wnt signaling and providing a potential functional link between desmosomal adhesion and signaling [46].PKP 2 also seems to be involved in the assembly of the desmosomal components into desmosomes. 
siRNA-mediated depletion of PKP 2 in keratinocytes leads to changes in the subcellular localization of DSP that mimic the behavior of a DSP mutant lacking a PKCα (protein kinase C α) phosphorylation site. Different PKC isoforms have been implicated in the regulation of cellular processes such as migration, cellular adhesion, and cytoskeletal reorganization (for review see [47]). Bass-Zubek et al. investigated the connection between PKP 2, DSP, and PKC [48]. The authors found that PKP 2 binds PKCα and DSP via its head domain. A detailed analysis revealed that PKP 2 binds DSP and PKCα simultaneously, which facilitates the subsequent phosphorylation of DSP at its IF-binding domain by PKC [48]. This promotes the integration of DSP into desmosomes and the subsequent attachment of IFs to desmoplakin.

Insights into the function of PKP 2 have also come from gene knockout experiments in mice, as well as from the analysis of an autosomal-dominant human hereditary disease linked to PKP 2 mutations [49, 50]. Ablation of the PKP 2 gene in mice is lethal around mid-gestation (E10.5) [49]. Homozygous PKP 2-null embryos die from severe alterations of the heart structure that result in the outflow of blood into the pericardium and subsequent collapse of the embryonic blood circulation. Microscopically, PKP 2-deficient hearts display reduced trabeculation as well as abnormally thin cardiac walls. The reason for the instability of cell contacts between cardiomyocytes is apparent at the ultrastructural level. The junctional complexes of the areae compositae (formerly designated intercalated disks; see [43]) that connect cardiomyocytes combine at least two types of junctions, desmosomes and adherens junctions, in an amalgamated fashion. The areae compositae are altered significantly in PKP 2-mutant mice. Associated with the deficiency of PKP 2, DSP is depleted from the desmosomal junctions and accumulates in the cytoplasm. Additionally, DSG 2 expression appears reduced in PKP 2-null cardiomyocytes, and desmosomal components are less resistant to detergent extraction, suggesting impaired junction function. PKP 2 therefore seems to be essential for the regular subcellular distribution of desmoplakin and its accumulation in the areae compositae of cardiomyocytes. Interestingly, Grossmann et al. found no alterations in other PKP 2-expressing epithelia of the mutant animals [49]. This is likely due to the expression of multiple PKP isoforms in many cell types (except the heart, which expresses only PKP 2), providing functional compensation when one isoform is not functional.

The essential function of PKP 2 in the heart was also demonstrated by the identification of haplo-insufficiency of PKP 2 in a hereditary human disease, autosomal-dominant arrhythmogenic right ventricular cardiomyopathy (ARVC; [50]). In ARVC, cardiomyocytes are progressively replaced by fibro-fatty tissue, especially in the right ventricle (for a recent review see [51]). This replacement leads to abnormal electrical conductance with syncopes and tachycardia and an often lethal failure of the mechanical capability of the heart (e.g., "sudden cardiac death" of young athletes). The mechanism leading to ARVC may involve apoptosis of cardiomyocytes due to the weak and disrupted intercellular adhesion caused by haplo-insufficiency of PKP 2 and the consequent insufficient anchorage of DSP [52].
The decline of cardiomyocytes may therefore lead to the development of scar tissue in the right ventricle. Moreover, transdifferentiation of cardiomyocytes into fibro- or adipocytes may take place, probably caused by disturbed Wnt/β-catenin signaling [53, 54]. This is supported by further observations. siRNA-mediated depletion of DSP in cultured atrial myocytes results in the redistribution of plakoglobin to the nucleus and the suppression of the canonical Wnt/β-catenin signaling pathway [54], and genes inducing adipogenesis and fibrogenesis were upregulated in these DSP-deficient cells. A decrease of DSP was also noticed in cardiomyocytes of PKP 2-deficient mice [49], suggesting that cellular transdifferentiation may also occur in ARVC. At least 12 different genes or chromosomal loci have been associated with the autosomal-dominant or recessive types of ARVC so far, including all five known desmosomal genes expressed in cardiomyocytes (i.e., DSG 2, DSC 2, DSP, JUP, and PKP 2).

The loss of PKP 2 may also contribute to the abnormal electrical conductance of the heart [55]. Gap junctions play an essential role in the electrical coupling of cardiomyocytes and coordinated heart contraction (reviewed in [56]). Downregulation of PKP 2 in primary rat cardiomyocytes leads to reduced expression of the gap junction protein connexin 43. In addition, a decrease in cellular coupling via gap junctions is detectable, which may result in disturbed transmission of electrical impulses in the ventricle. It therefore appears that PKP 2 can influence the organization of different types of cellular junctions, such as gap junctions and areae compositae, in heart muscle cells.
## 2.3. Plakophilin 3
PKP 3 has a calculated mass of 87,081 Da and is detected at an apparent molecular weight of approximately 87 kDa on Western blots [57, 58]. Strikingly, in contrast to the other PKP genes, the PKP3 gene does not appear to encode different splice variants. PKP 3 is present in the desmosomes of all cell layers of stratified epithelia and of almost all simple epithelia, with the exception of hepatocytes (Figure 4). In epidermal cells, PKP 3 is expressed in a homogeneous pattern. Furthermore, it is detectable in the desmosomes of some nonepithelial cells, with the notable exception of cardiomyocytes. This may explain the severe heart phenotype caused by loss of PKP 2: since PKP 2 is the only PKP expressed in cardiomyocytes, its loss of function cannot be compensated by the other PKPs. Although PKP 3 is mainly located in desmosomes, a significant proportion of the protein remains soluble in the cytoplasm. In contrast to the other PKPs, PKP 3 has not been detected in the nucleus.

Figure 4
Immunohistochemical staining of sections of human skin (a, b) and liver (c, d) with antibodies against PKP 3. (a) Staining of human skin with a monoclonal antibody against PKP 3 (clone PKP3 310.9.1; Progen, Heidelberg) shows an intensive reaction of desmosomes and cytoplasm. Basal and lower suprabasal keratinocytes exhibit strong cytoplasmic staining while desmosomal staining is less prominent; with ongoing differentiation, the desmosomal labeling increases. (b) Eccrine and apocrine (arrows) sweat glands show strong desmosomal labeling with PKP 3-specific antibodies. (c) In liver, the reaction of PKP 3-specific antibodies is restricted to bile ductules (arrow; see the description of the liver tissue in the legend to Figure 3), while hepatocytes are completely negative for PKP 3. The inset shows a magnified bile ductule stained with antibodies against PKP 3, exhibiting labeling of the desmosomal junctions. (d) Bile ducts (here in a large portal field) show a clear desmosomal reaction at the apical pole of the cells (arrow). Scale bars: 100 μm (d), 200 μm (a, b, c).
A better understanding of the functions of PKP 3 came from analyses of PKP 3 knockout mice [59]. In contrast to the other two PKPs, the PKP 3 knockout phenotype is fairly mild. PKP 3-null animals are viable and exhibit defects in the morphogenesis and morphology of specific hair follicles. Moreover, alterations in the density and spacing of desmosomes and adherens junctions in PKP 3-null epidermis and oral cavity were observed (own unpublished observations). PKP 3 is thus involved in the development or maintenance of skin appendages. Other PKP 3-positive epithelia appear normal in PKP 3-null animals. In addition, an upregulation of the expression of specific junctional proteins, such as the other PKPs, was noticed. The modest PKP 3 knockout phenotype may partly reflect the fact that an additional PKP is coexpressed in most epithelia and may compensate for at least some PKP 3 functions. Diseases associated with the loss or heterozygosity of PKP 3 have not been reported so far.

Surprisingly, among the three plakophilins, PKP 3 exhibits the most extensive binding repertoire toward other desmosomal components [60], and in silico it shows the most extensive interaction network among desmosomal proteins, as predicted for keratinocytes by Cirillo and Prime [61]. It is capable of binding most desmosomal proteins, including all DSG and DSC isoforms, JUP, and DSP; furthermore, it is the only PKP that interacts with the smaller DSC-b isoforms, which lack the binding site for plakoglobin [60]. This implies a binding site for PKP 3 at the juxtamembrane domain of desmosomal cadherins. Both the PKP 3 head domain and the arm repeats seem to be crucial for these interactions, since in yeast two-hybrid assays most interactions with other desmosomal proteins occur only with full-length PKP 3 and not with its individual domains [60].

Further PKP 3 interaction partners that are not linked to cell adhesion are emerging, suggesting a broader biological role for PKP 3. For example, PKP 3 has been shown to interact with RNA-binding proteins such as poly-A binding protein C1 (PABPC1), FXR1 (fragile X mental retardation-1), and G3BP (GAP SH3 domain-binding protein) in stress granules [62]. Stress granules develop when cells respond to diverse environmental stress conditions, and these particles represent stalled translational complexes (for a recent review of stress granules see [63]). The function of PKP 3 in stress granules and the basis for its integration into them remain unclear, but this is probably not a general function of all PKPs, since besides PKP 3 only PKP 1, but not PKP 2, can integrate into stress granules.

Another identified PKP 3-binding protein is the dynamin-like protein DNM-1L [64]. DNM-1L is involved in peroxisomal and mitochondrial fission and fusion as well as mitochondria-dependent apoptosis [65, 66]. Although the biological significance of this interaction is not clear, it is tempting to speculate that PKP 3 could affect the apoptotic response of cells.
## 2.4. Plakophilins in Tumors
Cellular adhesion molecules, especially components of the adherens junctions such as E-cadherin and β-catenin, have been shown to be important in the development, progression, and metastasis of tumors [67]. Likewise, several desmosomal proteins have been linked to malignant processes (reviewed in [68]). Reliable data demonstrating a causal link between plakophilins and tumor development are still forthcoming. Thus far, most published studies have focused on the expression of PKPs in tumors and on correlations between PKP expression and tumor prognosis. Well and moderately differentiated squamous cell carcinomas (SqCCs) of the skin express PKP 1, whereas in poorly differentiated tumors PKP 1 is downregulated [69]. Tumor cells of basal cell carcinomas (BCCs) exhibit a more heterogeneous expression of PKP 1, confined to small patchy areas [69]. In solid nodular BCCs, PKP 1 expression is reduced in comparison to the normal overlying epidermis and is hardly detectable in nodules growing close to the basal epidermis. Immunohistochemical analysis of PKP 1 expression in oral SqCCs yielded results similar to those obtained with skin tumors [18, 70]. This conflicts, however, with observations made by others [71], who found that PKP 1 is strongly expressed in only a small proportion of well-differentiated SqCCs and that most well-differentiated tumors are negative for PKP 1. Interestingly, using cells derived from oral SqCCs, Sobolik-Delmaire et al. [70] demonstrated that cell lines expressing low levels of PKP 1 exhibit increased cell motility, which is reduced by ectopic expression of PKP 1. In contrast, another oral SqCC cell line that expresses comparably high levels of PKP 1 becomes more motile and invasive in vitro when PKP 1 is diminished by an shRNA knockdown approach.

Interestingly, in a subset of the oral and pharyngeal SqCCs analyzed by Schwarz et al. [18], nuclear localization of PKP 1 in tumor cells was noticed. This is remarkable since the adjacent non-neoplastic squamous epithelium did not show nuclear PKP 1. In contrast to PKP 1, immunostaining for PKP 2 in histological sections of SqCCs is low and often restricted to peripherally located tumor cells, or is even completely absent [18], whereas PKP 3 expression patterns in SqCCs are similar to those of PKP 1. The expression of PKP 3 seems to correlate inversely with the degree of malignancy of the tumors.

An analysis of adenocarcinomas from different organs such as colon and pancreas revealed that PKP 1 is not detected whereas PKP 2 and PKP 3 are frequently expressed [18, 72], sometimes associated with a change from apical desmosomal staining to staining of almost the entire lateral surface. The only exceptions were prostate adenocarcinomas, which displayed a low level of PKP 1 immunoreactivity. Interestingly, in non-small cell lung carcinomas (NSCLCs; adenocarcinomas and SqCCs) and cultured cells derived from them, Furukawa et al. observed elevated expression of PKP 3 [64]. Inhibition of PKP 3 expression by an siRNA approach in cultured NSCLC cells led to reduced colony formation and reduced cell viability. Moreover, overexpression of PKP 3 in COS cells enhanced the proliferation rate and increased activity in an in vitro invasion assay. The authors postulated that PKP 3 may have an oncogenic function when localized in the cytoplasm under certain conditions. It thus appears that PKP 3 can potentially both advance tumorigenesis (as seen in some NSCLCs) and suppress it (as noticed for some SqCCs).
Recent observations suggest that PKP 3 may be involved in the epithelial-mesenchymal transition (EMT), which is especially relevant for the metastasis of tumor cells [73]. Analysis of PKP 3 expression in invasive cancer cells revealed that PKP 3 expression appears to be repressed by the transcription factor ZEB 1 (zinc finger E-box-binding homeobox-1), a potent repressor of E-cadherin expression that is also involved in EMT, at least in breast cancer cells. Nuclear accumulation of ZEB 1 correlated with a loss of membrane staining for PKP 3. Similar observations have been reported for PKP 2 repression by ZEB 2 in colon cancer cells [74]. In conclusion, the precise role of PKPs in tumor development and progression is not clear. It is possible that some of these proteins can function both as oncogenes and as tumor suppressors, depending on the cell type studied. Further research is needed to establish a causal link between PKP expression (or its loss) and cancer.

In summary, in the past few years PKPs have been recognized as essential for desmosomal adhesion and tissue integrity. Nevertheless, recent data suggest that PKPs also exert cellular functions unrelated to cell adhesion. Open questions, such as the ability of individual PKPs to compensate for the loss of one isoform and the roles of PKPs in cell signaling and tumor development, remain to be investigated.
---
*Source: 101452-2010-04-21.xml* | 2010 |
# Research Progress on Durability of Cellulose Fiber-Reinforced Cement-Based Composites
**Authors:** Jie Liu; Chun Lv
**Journal:** International Journal of Polymer Science
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1014531
---
## Abstract
The performance of cellulose fiber-reinforced cement-based composites (CFCCs) depends not only on the characteristics of the cement matrix and fibers but also on the bonding property of the matrix and fibers. The durability of cement-based composites including various properties such as impermeability, frost resistance, and carbonization resistance has an important impact on the long-term service life of the matrix structure. The presence of a large number of hydroxyl groups on the molecular chain of cellulose can promote the formation of intra- and intermolecular hydrogen bonds of cellulose. This special structure imparts the cellulose high hydrophilicity, which leads the cement hydration C-S-H gel to adhere to the surface of cellulosic fibers (CFs) and induce its growth. The cavity of CFs has good water absorption and can be used as an internal curing fiber for the continuous hydration of cement-based composites. But CFs in the Portland cement matrix tend to deteriorate under strong alkali conditions. This paper presents a review of the research on the durability of CFCCs. The methods and paths to improve the durability of CFCCs are summarized and analyzed from the perspectives of the internal curing of CFs, the deterioration of the performance of CFs in the matrix, and the use of many types of supplementary cementitious materials. Finally, the development and engineering application of CFCCs have been prospected.
---
## Body
## 1. Introduction
In the last few years, more and more types of fibers have been used to reinforce cement-based composites. These include steel, organic synthetic, carbon, and glass fibers. The commonly used reinforcing fibers offer a set of advantages, such as a long history of application, relatively mature technology, and good physical and mechanical properties. However, compared to CFs, they have their own limitations [1]. Steel fibers have a relatively high density, which conflicts with the goal of reducing the weight of composites; their size is large, an interfacial transition zone forms in the cement matrix, and they do not induce growth of the hydration gel in that transition zone. Organic synthetic fibers have poor compatibility with the cement matrix and are not easy to disperse in it, and they pollute the environment considerably, which is contrary to the concept of green cement-based composites [2–4]. Carbon fibers agglomerate easily in the cement matrix and are relatively costly [5–8]. Glass fibers are brittle and have poor wear resistance [9, 10].

Plant fibers such as crop straws are composed of cellulose, hemicellulose, lignin, and other substances, with cellulose as the main component. CFs are widely found in nature. They are inexpensive, low in density, renewable, biodegradable, abundant in source, and effective reinforcements for cement-based composites [11–13]. The geometric characteristics, mechanical properties, and volume mixing ratio of CFs, together with the bonding property of the fiber-matrix interface, are the main factors affecting the strength and toughness of the matrix. Compared with traditional fibers, CFs have a larger specific surface area and aspect ratio and excellent toughness and bonding ability. They disperse evenly in the cement matrix, have good compatibility, and exert filling and bridging effects on it [14]. At the same time, the addition of CFs greatly reduces the density of cement-based composites, improves their flexural strength [15–17], inhibits the initiation and development of microcracks within the matrix [18, 19], and enhances the impact resistance of the composites [20].

Over the last few years, studies on the toughening modification of cement-based composites with CFs have focused on the macromechanical properties of the composites, the microstructure of the interface between the cement matrix and the fibers, the crack-arresting and toughening effects of the fibers, and durability [21–24]. The durability of cement-based composites includes impermeability, frost resistance, carbonization resistance, and resistance to sulfate attack, which have an important impact on the long-term service life of the structure [25]. Durability is the main factor limiting the engineering application of CFCCs [26].

This paper reviews research on the durability of CFCCs published between January 2005 and May 2021, retrieved from Chinese journal databases (including China Journal Network and CNKI Scholar) and English journal databases. Search keywords included cellulose fiber, concrete durability, internal curing, impermeability, frost resistance, and carbonization.
No comparable review was found in the searched literature, and publications unrelated to the scope of this paper were excluded. Only a few book chapters or reviews provide a general overview of the durability of CFCCs [27, 28], summarizing the main improvements and findings from a few research papers. There are also overviews of nanocellulose fibers, which differ from the contents discussed here [29, 30]. This paper reviews the durability of CFCCs, including impermeability, frost resistance, and carbonization resistance, and discusses the impact of CFs on composite durability and the internal curing provided by CFs. Methods to improve the durability of CFCCs are summarized and analyzed from the perspectives of fiber performance, the degradation of CFs in the matrix, and the use of multiple types of supplementary cementitious materials, and solutions are proposed.
## 2. Performance of CFs
As one of the most abundant renewable resources, CFs have the advantage of being low cost and environmentally friendly [31]. The chemical structure of cellulose was studied as early as 1932 [32]. Cellulose is a straight-chain polymer of D-glucopyranose (anhydroglucose) units linked by β(1→4) glycosidic bonds. The structure of cellulose is highly regular and unbranched, and its chemical formula is (C6H10O5)n. There are a large number of hydroxyl groups on the cellulose molecular chain; these polar groups promote the formation of intra- and intermolecular hydrogen bonds [33, 34]. CFs can be made from renewable resources such as straw, they have a porous cell structure, and their specific surface area exceeds that of other fibers [35]. Accordingly, there are many pores in the fiber cross-section, as shown in Figures 1(a) and 1(b). The properties of CFs depend mainly on the type of cellulose. The cellulose content of different plants varies greatly; cotton has the highest content. Table 1 lists performance data for commonly used CFs alongside other commonly used fibers [36]. As Table 1 shows, the mechanical properties of CFs, such as tensile strength and Young's modulus, are weaker than those of commonly used fibers such as carbon and aramid fibers. Flax, jute, ramie, sisal, and hemp fibers, however, approach glass fiber in tensile strength and Young's modulus and can therefore be used directly in CFCCs.

Figure 1
SEM images of CFs. (a) Sisal fibers [39]. (b) Jute fibers [13]. (c, d) Combination of the macroscale fiber and cement matrix [37].
Table 1
Performance of CFs and commonly used fibers.
| Fibers | Density (g/cm3) | Elongation (%) | Tensile strength (MPa) | Young's modulus (GPa) |
|---|---|---|---|---|
| Cotton fiber | 1.5-1.6 | 7.0-8.0 | 287-597 | 5.5-12.6 |
| Jute fiber | 1.3 | 1.5-1.8 | 393-773 | 26.5 |
| Flax fiber | 1.5 | 2.7-3.2 | 345-1035 | 27.6 |
| Ramie fiber | 1.5 | 3.6-3.8 | 400-938 | 61.4-128.0 |
| Sisal fiber | 1.5 | 2.0-2.5 | 511-635 | 9.4-22.0 |
| Coir fiber | 1.2 | 30.0 | 175 | 4.0-6.0 |
| Cork fiber | 1.5 | — | 1000 | 40.0 |
| E-glass fiber | 2.5 | 2.5 | 2000–3500 | 70.0 |
| S-glass fiber | 2.5 | 2.8 | 4570 | 86.0 |
| Aramid fiber | 1.4 | 3.3-3.7 | 3000-3150 | 63.0-67.0 |
| Carbon fiber | 1.4 | 1.4-1.8 | 4000 | 230.0-240.0 |

The performance of CFCCs comprises working performance, mechanical properties, and durability. The effect of CFs on the working performance and mechanical properties of composites is similar to that of traditional fibers, but their impact on durability is very different. As mentioned above, the large number of hydroxyl groups on the cellulose molecular chain promotes the formation of intra- and intermolecular hydrogen bonds. This special structure renders cellulose highly hydrophilic, makes CFs compatible with the cement matrix, and provides good cohesion. C-S-H gel, the main hydration product of cement, grows on the surface of CFs, as shown in Figures 1(c) and 1(d). Wu et al. [37] analyzed the SEM images (Figures 1(c) and 1(d)) and found that cement hydration around the CFs was more complete, because CFs can induce the orderly, directional growth of hydration products in the initial stage of cement hydration. After the cement hardens, when the load exceeds the cracking load, the fibers share the load, which greatly increases the load-bearing capacity of the cement matrix and avoids or delays the growth of microcracks. In Figure 1(d), the CF surface has peeled off, releasing countless microfibers with diameters below 1 μm, some of whose ends are embedded in the cement hydration product. The formation of the microstructure of the cement paste is accompanied by complex chemical reactions and physical changes; when shrinkage of the matrix is restricted, the cement-based material cracks, and the degree of cracking depends on the tensile strength and shrinkage stress of the cement paste. These microfibers play a bridging and filling role in the composites. The moisture absorbed in the CFs compensates for the water deficit during cement hydration and can induce hydration at the cement surface, and the hydration products can fill microcracks, thereby enhancing the impermeability of the composites.

However, the lignin and hemicellulose in CFs dissolve easily in the alkaline solution of the cement matrix, and strong alkali enters the fiber cavity, causing mineralization and degradation of the fiber structure and thereby affecting durability. The CFs applied in cement-based composites can be classified by size: macroscale, microcrystalline, and nanocrystalline CFs. Macroscale CFs include strands (long fibers around 20 to 100 cm), staple fibers (short fibers between 1 and 20 cm), and pulp (very short fibers between 1 and 10 mm), which can be processed chemically into micron-scale microcrystalline CFs and nanosized nanocrystalline CFs [37, 38]. In general terms, micron-sized microcrystalline CFs and nanosized nanocrystalline CFs can be produced by chemical or enzymatic means.
In these processes, the loosely arranged amorphous regions of the cellulose are destroyed while the crystalline regions are retained, yielding micron-sized microcrystalline CFs and nanosized nanocrystalline CFs of higher crystallinity. The preparation methods for nanocrystalline CFs mainly include mechanical, acid hydrolysis, and biological methods. Nanocrystalline CFs have a rigid rod-like structure, and their properties are shown in Table 2.

Table 2
Properties of macroscale, microcrystalline, and nanocrystalline CFs.
| CFs | Length (nm) | Diameter (nm) | Tensile strength (MPa) | Young's modulus (GPa) | Ref. |
|---|---|---|---|---|---|
| Macroscale CFs (cotton) | 2,000,000-3,000,000 | 11,540 | 287-597 | 5.5-12.6 | [36] |
| Microcrystalline CFs | 100-3,000 | 1,440 | 7,500 | 100.0-140.0 | [37, 40, 41] |
| Nanocrystalline CFs | 50-500 | 1-100 | — | — | — |
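Since Table 1 reports both density and tensile strength, the fibers can also be compared on specific strength (strength divided by density), which is the more relevant figure when, as noted in the introduction, reducing composite weight is a goal. The snippet below is a minimal sketch of that arithmetic; the input values are mid-range numbers read from Table 1, while the helper function and the ranking itself are ours.

```python
# Specific strength (tensile strength / density) from mid-range Table 1 values.
# The numbers are taken from Table 1 of this review; the code is illustrative only.

fibers = {
    # name: (density in g/cm^3, tensile strength in MPa), mid-range where a range is given
    "Cotton":  (1.55, 442),   # 287-597 MPa
    "Jute":    (1.3,  583),   # 393-773 MPa
    "Flax":    (1.5,  690),   # 345-1035 MPa
    "Sisal":   (1.5,  573),   # 511-635 MPa
    "E-glass": (2.5,  2750),  # 2000-3500 MPa
    "Carbon":  (1.4,  4000),
}

def specific_strength(density_g_cm3: float, strength_mpa: float) -> float:
    """Return specific strength in MPa/(g/cm^3), numerically equal to kN*m/kg."""
    return strength_mpa / density_g_cm3

# Rank fibers from highest to lowest specific strength.
for name, (rho, sigma) in sorted(fibers.items(),
                                 key=lambda kv: -specific_strength(*kv[1])):
    print(f"{name:8s} {specific_strength(rho, sigma):7.0f} MPa/(g/cm^3)")
```

On this measure the gap between the stronger CFs (flax, jute) and E-glass narrows considerably, which is consistent with the observation above that several bast fibers can be used directly in CFCCs.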
## 3. Research Status of Durability of CFCCs
Anselme Payen, a French scientist, extracted a compound from wood in 1838 and named it cellulose. The majority of fabrication methods for cement composites reinforced with CFs in pulp form are based on the Hatschek process, patented by L. Hatschek in 1900 [27]. More than a century later, the research and application of CFCCs have become increasingly widespread. Research on CFs has progressed from mechanical properties to durability, from the bonding phenomenon between CFs and the matrix to the bonding mechanism, and from macroscale to microscale applications. Only a few studies have focused on the bond adhesion of CFs to cement matrices. For instance, some studies analyzed the effect of CF shape and curing age on the bond strength of CFCCs using pull-out tests [28, 42], while others analyzed fiber-matrix bond adhesion indirectly [14–16].

At this stage, research on the durability of CFCCs covers frost resistance, carbonization resistance, water penetration resistance, chloride ion penetration resistance, gas penetration resistance, sulfate erosion resistance, early anticracking performance, compressive creep performance, and compressive fatigue deformation performance. In general terms, the durability of the composites depends on the resistance to chloride ion permeability, frost resistance, and carbonization resistance. Table 3 summarizes related studies on the durability of CFCCs.

Table 3
Summary of studies on durability of CFCCs.
| Durability | CFs | CFs (wt.%) | Fiber form | Cementitious | Test methods | Ref. |
|---|---|---|---|---|---|---|
| Antichloride ion penetration | UF500 CF | 0.45~1.5 | Randomly dispersed | P·O42.5 | RCM method; water seepage height | [43–45] |
| | Bokai super fiber | 0.6~1.2 | | P·O52.5 | Water seepage pressure | [46] |
| Antifreezing | Flake fiber | 0.9 | Randomly dispersed | P·O42.5 | Rapid freezing | [43, 47] |
| | UF500 CF | 0.1~0.2 | | | | [48] |
| Carbonization | Cork kraft pulp | 8.0 | Aligned | P·II42.5R | Accelerated carbonization | [49] |
| | Rape straw fiber | 0.5~2.00 | Randomly dispersed | SAC | | [50, 51] |
| | Coir fiber | 2.0 | | Slag cement | 12 years | [52] |
| Antidry and wet cycle | Sisal fiber | 1.5~8.0 | Randomly dispersed | P·O42.5R | CBI method | [53, 54] |
| | Cork CF | 4.0 | | P·O42.5 | 25 dry/wet cycles | [55] |
| | Sisal fiber | 0.6 | Aligned | Calcined clay | 100 dry/wet cycles | [56] |
| Antisulfate attack | UF500 CF | 0.9 | Randomly dispersed | P·II42.5R | Sulfate-wet and dry cycle coupling | [51] |
| Anticrack | Sisal fiber | 2.0 | Aligned | MK/PC | Instron 5948 test system | [57] |

Table 3 shows that CFCCs come in many varieties and that the durability properties studied cover a wide range, all closely related to the service life of the composites. Test methods for the freeze-thaw resistance of CFCCs usually include slow freezing and thawing, rapid freezing and thawing, and single-side freezing and thawing. Test methods for resistance to chloride penetration employ the rapid chloride migration (RCM) coefficient test and the coulomb electric flux test. The procedures for preparing CFCCs reported in the literature fall into two main groups depending on the fiber form: fibers randomly dispersed in the cement matrix [58–60] and aligned fibers or fibrous structures [43, 46, 50, 53]. Because randomly dispersed fibers impose certain limits on the mechanical properties of the reinforced composites, aligned fibers or fibrous structures are also used to strengthen the cement matrix [49, 56, 57, 61].

Cement-based composites are porous materials, which provides channels for harmful external impurities to penetrate the matrix. Adding CFs to the cement matrix can reduce the generation of early microcracks, inhibit the development of microcracks during the service period of the matrix, and improve the durability of CFCCs. After absorbing water, the CFs disperse evenly in the cement matrix, forming numerous microscale water channels inside it. These pores can continue to supply water for later cement hydration, ensuring full hydration, maintaining the mechanical properties of the cement matrix, preventing it from cracking, and also improving its antipermeability, antifreezing, and anticarbonization capabilities. The uniform distribution of the fiber network enhances the adhesion between the matrix components, gives the matrix structure good integrity, and significantly improves impact resistance [62].
### 3.1. Impermeability of CFCCs
Impermeability is an important factor affecting the durability of CFCCs. CFs are evenly distributed in the cement matrix, which can reduce segregation in the initial stage of cement hydration, inhibit the formation of shrinkage cracks, reduce the porosity of the matrix, improve its compactness, and effectively prevent harmful substances from penetrating. Impermeability is usually determined by the rapid chloride ion migration coefficient test or the water penetration height test [44]. Both the chloride ion diffusion coefficient and the water penetration height are reduced after CFs are mixed in, and CFs improve impermeability more than polypropylene fibers at the same dosage. Test results showed that a CF volume fraction of 0.9% gave the best improvement in the impermeability of the matrix [45]. CFs have little effect on the compressive strength of the matrix but significantly improve the splitting tensile strength, axial tensile strength, and ultimate tensile value, and they effectively block the penetration of chloride ions. At the same time, cracking of the matrix is delayed and crack widths are reduced [63, 64]. Experiments have shown that the air void content in the CF cement matrix decreases by 40% and the void spacing becomes smaller [65].

CFs bond effectively to the cement matrix and are distributed in random directions, forming a uniform support system, optimizing the pore structure of the cement matrix, and blocking its internal communicating channels. Owing to the toughening and crack-arresting effect of CFs, the number of initial cracks is significantly reduced, long-term cracking of the matrix is effectively inhibited, and the possibility of through cracks forming in the matrix is lowered. In this way, the microcrack pattern of the cement-based composite is further refined, and its impermeability is significantly improved.
### 3.2. Effect of Freeze-Thaw Cycles on the Durability of CFCCs
Under negative temperatures, the water in the pores of water-saturated CFCCs freezes, and the resulting volume change generates tensile stress. Under repeated freeze-thaw cycles, damage to the cement matrix gradually accumulates and expands, eventually destroying it. CFs can significantly improve the frost resistance of CFCCs. On the one hand, the addition of CFs reduces water penetration into the cement matrix; on the other, the CFs can absorb part of the unfrozen free water and reduce the hydrostatic pressure in the matrix. Numerous comparative tests have shown that both polypropylene fibers and CFs improve the frost resistance of cement-based composites [47, 66, 67], with CFs performing better than polypropylene fibers. A CF volume fraction of 0.9% gives the best antifreezing effect in CFCCs. Under freeze-thaw cycling, the CFs bind the surface slurry of the matrix, so surface damage is reduced to a certain extent.

The addition of CFs slows the decrease of the relative dynamic elastic modulus, increases the number of freeze-thaw cycles a test piece can withstand, and improves the frost resistance of CFCCs, as can be seen in Figure 2 [48]. As the number of freeze-thaw cycles increases, the relative dynamic elastic modulus of the specimens decreases. Early in the freeze-thaw test (within 100 cycles), for both basalt fibers and CFs at fiber volume fractions of 1.0, 1.5, or 2.0%, the relative dynamic elastic modulus decreases slowly, and the effect of fiber content is small. In the late freeze-thaw period (more than 100 cycles), the rate of decline increases, indicating that internal damage to the concrete gradually accumulates. The relative dynamic elastic modulus of plain concrete decreases faster than that of fiber concrete. Even at 200 freeze-thaw cycles, the freeze-thaw damage remains far below 60%. The relative dynamic elastic modulus of fiber-reinforced concrete with a fiber volume fraction of 2.0% decreases more gently than that of concrete with fiber volume fractions of 1.0% or 1.5%.

Figure 2
Relative dynamic elastic modulus versus number of freeze-thaw cycles for (a) basalt fiber concrete and (b) CF concrete with different fiber volume fractions [48].
Since the elastic modulus and tensile properties of basalt fibers are higher than those of CFs, the basalt fibers can more effectively improve the tensile strength of concrete, inhibit the expansion of internal cracks in concrete, reduce the entry of water into the matrix, and delay the frost heave damage of the internal structure. Therefore, when the number of freeze-thaw cycles is high, basalt fibers improve the freeze resistance of concrete more significantly than CFs. In general, the effect of improving the frost resistance is not significant when the fiber volume fraction is increased from 1.5% to 2.0%. Therefore, it is more economical and reasonable to choose the fiber volume fraction of 1.5%.
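The quantity plotted in Figure 2, the relative dynamic elastic modulus, is obtained in standard freeze-thaw testing from the specimen's fundamental transverse frequency before and after cycling, since the dynamic modulus scales with the square of that frequency. The sketch below shows the calculation under that usual definition; the 60% threshold used as a failure criterion follows common freeze-thaw test practice, and the sample frequencies are hypothetical, not data from [48].

```python
# Relative dynamic elastic modulus in freeze-thaw testing:
#   P_n = (f_n / f_0)^2 * 100%
# where f_0 is the fundamental transverse frequency before cycling and
# f_n the frequency after n cycles. A specimen is commonly deemed failed
# once P_n drops below 60% of the initial modulus.

def relative_dynamic_modulus(f0_hz: float, fn_hz: float) -> float:
    """Percent of the initial dynamic elastic modulus retained."""
    return (fn_hz / f0_hz) ** 2 * 100.0

# Hypothetical frequency measurements for one specimen (Hz):
f0 = 2450.0
for cycles, fn in [(50, 2400.0), (100, 2330.0), (150, 2150.0), (200, 1980.0)]:
    p = relative_dynamic_modulus(f0, fn)
    status = "OK" if p >= 60.0 else "failed"
    print(f"{cycles:3d} cycles: P = {p:5.1f}%  ({status})")
```

With these assumed frequencies the specimen retains about 65% of its modulus after 200 cycles, mirroring the report above that damage at 200 cycles remains far below the 60% failure limit.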
### 3.3. Effect of Carbonization on the Durability of CFCCs
When CFCCs are used as the protective layer over steel bars, attention must be paid to the impact of carbonization on the steel. The high alkalinity inside the matrix passivates the surface of the steel bar, and the passive film protects the steel from corrosion by the external environment. The hydration products of cement-based composites are stable in an alkaline environment and maintain good cementing ability. Carbonization neutralizes the cement matrix, reducing the alkalinity of CFCCs, and can thereby induce corrosion of the stressed steel bars and destruction of the structure [68, 69]. In essence, carbonization is the diffusion of carbon dioxide gas from the surface into the matrix; both the compactness of the matrix structure and its internal defects affect the diffusion rate. The hydrophilicity and unique hollow structure of CFs optimize the pore structure of cement-based composites, which reduces internal defects and enhances carbonization resistance. Under load, however, the microcracks generated become channels for carbon dioxide diffusion, which reduces anticarbonization performance. After CFs are added to the matrix, carbonization of the cement-based composite remains a diffusion-dominated process [70–72].

Analysis of pore size distribution data obtained by mercury intrusion porosimetry shows that the fibers significantly influence the microstructure of cement-based composites [73]. Once the fiber volume fraction reaches 0.6%, the width of the single peak on the pore distribution curve increases and the number of large pores (radius ≥ 200 nm) grows, confirming that fiber addition coarsens the micromatrix structure. The fibers alter the working performance of the fresh cement paste, producing an air-entraining effect [74]. At a 0.6% fiber volume fraction, this effect outweighs the internal curing effect and degrades the overall microstructure of the composite. Compared with the 0.6% fraction, at a 0.3% fiber volume fraction the pore size distributions before and after carbonization differ little, indicating that 0.3% of fibers has only a limited effect on the microstructure of cement-based composites.
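Because carbonization is described above as diffusion-dominated, its progress is commonly modeled with the square-root-of-time law x = k·√t that follows from Fickian diffusion. The sketch below fits the coefficient k from a single accelerated-test measurement and extrapolates to other ages; the measured depth and times are hypothetical, not data from [70–72].

```python
import math

# Diffusion-dominated carbonation is commonly modeled as x = k * sqrt(t),
# where x is the carbonation depth and k lumps together CO2 concentration,
# matrix compactness, and internal defects.

def carbonation_coefficient(depth_mm: float, t_days: float) -> float:
    """Fit k (mm/day^0.5) from a single depth measurement."""
    return depth_mm / math.sqrt(t_days)

def carbonation_depth(k: float, t_days: float) -> float:
    """Predicted depth (mm) after t_days under the sqrt-of-time law."""
    return k * math.sqrt(t_days)

# Hypothetical accelerated test: 6 mm carbonation depth measured at 28 days.
k = carbonation_coefficient(6.0, 28.0)
for t in (7, 28, 56, 90):
    print(f"t = {t:3d} d: predicted depth = {carbonation_depth(k, t):4.1f} mm")
```

The same functional form explains why a denser, less defective matrix (smaller k) slows carbonation, and why load-induced microcracks, which raise the effective k, accelerate it.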
## 4. Constitutive Relationship between CFs and Composite Durability
According to fiber spacing theory, which underlies the concept of fiber crack resistance, closely spaced fibers act as crack arresters that reduce the stress intensity factor at microcrack tips inside the matrix and inhibit microcrack propagation, thereby raising the initial cracking strength of the composite. Previous studies show that there is a limit on the CF dosage and that CFs significantly affect the durability of composites through internal curing of the cement matrix and long-term exposure to its alkaline environment.

In fact, CFs increase the air content of the concrete and relieve the hydrostatic and osmotic pressures during low-temperature cycles. In addition, dense microfibers improve the internal quality of the concrete, reduce internal defects, and improve tensile properties such as ultimate tensile strain and fracture energy. Furthermore, because CFs have a small diameter and a large number of fibers per unit weight, the fiber spacing is small, which increases the energy dissipated during concrete damage and effectively inhibits cracking, as illustrated by the sketch below.
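As a worked illustration of the spacing argument, the sketch below uses the classical Romualdi–Mandel estimate of average fiber spacing, s = 13.8·d·√(1/p), with d the fiber diameter, p the fiber volume fraction in percent, and s in the same units as d. Applying this formula, which was derived for fiber-reinforced concrete generally, to fine CFs is our assumption, and the diameters and dosages below are hypothetical.

```python
import math

# Romualdi-Mandel average fiber spacing: s = 13.8 * d * sqrt(1 / p).
# Smaller spacing -> lower stress intensity factor at microcrack tips
# -> higher first-crack strength, as argued in the text above.

def fiber_spacing(diameter_um: float, volume_percent: float) -> float:
    """Average fiber spacing (um) for diameter in um and dosage in vol%."""
    return 13.8 * diameter_um * math.sqrt(1.0 / volume_percent)

# Hypothetical comparison: a fine CF versus a coarse steel fiber.
print(f"CF,    d = 20 um,  Vf = 0.9%: s = {fiber_spacing(20, 0.9):6.0f} um")
print(f"steel, d = 500 um, Vf = 1.0%: s = {fiber_spacing(500, 1.0):6.0f} um")
```

At comparable dosages the fine fiber yields a spacing more than an order of magnitude smaller, which is the quantitative content of the claim that the small diameter and large fiber count of CFs inhibit cracking.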
### 4.1. Internal Curing Fiber of Cement-Based Composites
Unlike other types of fibers, CFs have a unique hollow lumen structure and good water absorption, so they can serve as internal curing fibers for cement-based composites, as shown in Figures 1(a) and 1(b). In the absence of an external curing water supply, they cure the cement matrix from within, offsetting water loss under natural conditions and promoting continued hydration over a long period; consequently, later-age strength increases greatly [75, 76]. CFs can also improve the workability of composites and their construction performance. In addition, the curing capability of CFs can improve the interlayer deposition and stacking process of 3D-printed cement-based composites, reduce interlayer voids and longitudinal defects, and enhance durability [77].

CFs can effectively reduce the shrinkage of the cement matrix and significantly improve the flexural strength and fracture toughness of composites. The 28-day and 100-day fracture toughness of the composite vary with fiber content, as shown in Figure 3(a). The physical parameters of CFCCs also vary with fiber content: as the amount of CFs increases, the density of the composites decreases, as shown by the densities of CFCCs in Table 4 [78]. At a fiber content of 16% by mass, the 28-day fracture toughness is increased 37-fold. Figure 3(b) shows the relationship between deflection and fiber content for two CFCCs based on rice straw (RFRCC) and bamboo (BFRCC) [78]. As the CF content increased, the deflection of the test piece also increased, further showing that the fibers improved the deformability and toughness of the composite, thereby enhancing its durability. As shown in Figure 3, however, composites reinforced with CFs experience a significant reduction in fracture toughness over time; that is, CF-reinforced composites become stiffer and more brittle with age. Melo Filho and coworkers [39] suggested that this weakening of the energy-absorbing capacity of the CFs is probably due to the deposition of calcium hydroxide crystals on the CF surface.

Figure 3
(a) Influence of fiber content on fracture toughness and (b) influence of fiber content on composite deformation [78].
Table 4
Bulk density of CFCCs at 28 days (g/cm3).
| CFs | Fiber content 0 wt.% | 4 wt.% | 8 wt.% | 12 wt.% | 16 wt.% |
|---|---|---|---|---|---|
| RFRCC | 2.01±0.04 | 1.70±0.01 | 1.49±0.03 | 1.38±0.04 | 1.26±0.01 |
| BFRCC | 2.01±0.04 | 1.71±0.01 | 1.50±0.04 | 1.36±0.04 | 1.30±0.05 |
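The internal-curing argument of this section can be made quantitative with the water balance commonly used for internal-curing agents (after Bentz and coworkers): the absorbed water carried by the agent should match the chemical shrinkage demand of the hydrating cement. Adapting that balance to water-saturated CFs is our assumption, as are all the numerical inputs below.

```python
# Internal-curing water balance (after Bentz et al.), adapted here to
# water-absorbing CFs as an assumption:
#   M_cf = (C_f * CS * alpha_max) / (S * phi)
# C_f:       cement content (kg/m^3)
# CS:        chemical shrinkage (~0.07 kg water per kg cement for Portland cement)
# alpha_max: expected maximum degree of hydration (0-1)
# S:         degree of water saturation of the fibers (0-1)
# phi:       fiber water absorption (kg water per kg dry fiber)

def internal_curing_fiber_dose(cement_kg_m3: float,
                               chem_shrinkage: float = 0.07,
                               alpha_max: float = 1.0,
                               saturation: float = 1.0,
                               absorption: float = 1.0) -> float:
    """Dry fiber mass (kg/m^3) needed to supply the curing-water demand."""
    return (cement_kg_m3 * chem_shrinkage * alpha_max) / (saturation * absorption)

# Hypothetical mix: 450 kg/m^3 cement, fibers absorbing their own weight in water.
print(f"required dry CF dose: {internal_curing_fiber_dose(450.0):.1f} kg/m^3")
```

A mix along these assumed lines would call for roughly 30 kg/m^3 of saturated fiber, which gives a feel for why even modest CF dosages can sustain the continued hydration described above.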
### 4.2. Deterioration of CFs in the Cement Matrix
The structural characteristics of CFs are the root cause of their deterioration in the highly alkaline environment of the cement matrix. Studies have shown that when fibers remain in an alkaline environment for a long time, the lignin and hemicellulose they contain dissolve readily in the alkaline pore solution of the cement-based composite, resulting in partial fiber breakage and weakened tensile strength. In addition, the strongly alkaline substances of the cement matrix enter the fiber cavity and mineralize the fiber structure, which reduces the mechanical properties of the fiber. At the same time, the extreme hydrophilicity of CFs causes volume changes that affect the durability of the overall structure [49, 57]. When sisal and coconut husk fibers were immersed in a saturated calcium hydroxide solution for 28 days, their tensile strength was found to decrease by about 50% [52]. When CFs are immersed in water, saturated lime water, or sodium hydroxide solution, their lignin, cellulose, and hemicellulose contents are all reduced [37, 58]. Using such deteriorated fibers in cement-based composites inevitably lowers the mechanical properties of the composites.
### 4.3. Ways to Improve the Durability of CFCCs
At present, there are two general approaches to improving the durability of CFCCs, based on the internal-curing capability and the deterioration-prone nature of CFs. One is to modify the cement matrix so as to consume the calcium hydroxide produced during cement hydration. The other is to modify the fibers, by physical or chemical methods, to improve their stability in the cement matrix.
#### 4.3.1. Modification of the Cement Matrix
Whereas carbonization must be prevented in steel-reinforced concrete structures, in cement-based composites reinforced only with CFs carbonization can be deliberately accelerated to enhance durability. The purpose of carbonization is to make the cement hydration product calcium hydroxide react with carbon dioxide to form calcium carbonate. Pizzol et al. [79] carried out accelerated carbonization tests on composites reinforced with sisal and kraft pulp; carbonization increased the load-bearing capacity of the composite by 25% and its toughness by 80% and reduced fiber degradation in the cement medium. Carbonization reduced the porosity, water absorption, and nitrogen permeability of the composite, increased the density of the matrix interface, and bonded the fiber and cement matrix more tightly, as shown in Figure 4(a). Carbonization improved the compressive strength, durability, and weather resistance of the composites and extended their service life [70, 71, 80]. Owing to the chemical stability of the carbonized product and its reduced capillary porosity, CFCCs gain better flexural strength and improved adhesion between the cement-based matrix and the CFs.

Figure 4
(a) Microstructure [72]. (b) Voids around the CFs. (c) Micro-nano-level microcrystalline cellulose [35].
Studies have shown that the optimal water content of the carbonized matrix is 40% to 60% [72], and carbonization significantly improves the durability of the matrix against wet-dry and freeze-thaw cycling. Both carbonization and the addition of mineral admixtures reduce the calcium hydroxide content in the cement matrix. Mineral admixtures such as silica fume, metakaolin, blast furnace slag, and fly ash undergo a secondary hydration reaction with the calcium hydroxide in the cement to form hydrated calcium silicate or hydrated calcium aluminate [81]. Replacing part of the cement with mineral admixtures significantly reduces the calcium hydroxide content, avoids the deterioration of fiber performance, and preserves the strength and toughness of the cement-based composites [82]. Further studies showed that cementitious materials free of calcium hydroxide can be produced by using calcined metakaolin and calcined waste crushed clay bricks instead of ordinary Portland cement [53].

Many types of supplementary cementitious materials are used, and the extent of improvement varies, as shown in Table 5. The abbreviations used there are: silica fume (SF), blast furnace slag (SL), fly ash (FA), metakaolin (MK), rice husk ash (RHA), natural rubber latex (NRL), nanoclay (NC), gypsum (GY), and lime (LI).

Table 5
Use of supplementary cementitious materials of CFCCs.
| Cementitious materials | Weight of cement (%) | CFs | Extent of improvement | Ref. |
|---|---|---|---|---|
| MK | 50 MK | Sisal | Significant reduction of calcium hydroxide formation; no signs of fiber degradation | [38, 64, 83] |
| MK and SF | 15 SF or 15 MK | Sisal | Improved the mechanical properties and the durability | [81] |
| SL, SF, and MK | 70 SL/10 MK or 70 SL/10 SF | Kraft pulp | Effective in preventing degradation | [82] |
| RHA, MK, and NC | 30 RHA, MK, and NC | Sisal | Durability improved owing to the mitigation of fiber degradation | [53] |
| SF and SL | 10 SF and 40 SL | Cannabinus | Slowing down the strength loss and embrittlement | [84] |
| SF, SL, FA, MK | 10 SF/70 SL, 10 MK/70 SL, and 10 MK/10 SF/70 FA | Softwood kraft pulp | Prevented composite degradation due to a reduction in the calcium hydroxide content and the stabilization of the alkali content | [85] |
| SL, GY, LI | 88 SL/10 GY/2 LI (0 cement) | Coir and sisal | No significant effect on the prevention of ductility loss | [86] |
| SF, NRL | 13.55 SF/14.55 CF/1.40 NRL | Cellulose | Improved material durability | [20] |
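The replacement levels in Table 5 translate directly into batch masses. The sketch below performs that arithmetic for one row (10% SF / 70% SL with softwood kraft pulp [85]); only the percentages come from the table, while the total binder mass of 500 kg/m^3 is an assumed example value.

```python
# Convert Table 5 replacement percentages (percent by weight of binder)
# into batch masses for a chosen total binder mass. Example row from
# Table 5: 10% SF / 70% SL, remainder ordinary Portland cement [85].

def binder_masses(total_kg: float, replacements: dict) -> dict:
    """Split a total binder mass by replacement percentages; the remainder is OPC."""
    masses = {name: total_kg * pct / 100.0 for name, pct in replacements.items()}
    masses["OPC"] = total_kg - sum(masses.values())
    return masses

# Assumed total binder content: 500 kg per m^3 of composite.
mix = binder_masses(500.0, {"SF": 10.0, "SL": 70.0})
for component, mass in mix.items():
    print(f"{component}: {mass:5.1f} kg/m^3")
```

With these assumptions only 100 kg/m^3 of Portland cement remains in the binder, which illustrates how high-replacement mixes limit the calcium hydroxide available to attack the fibers.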
#### 4.3.2. Modification of CFs
Improving the water resistance of CFs and the adhesion at the fiber-matrix interface is necessary for developing composites with good mechanical and environmental properties. However, fiber type, geographical and climatic conditions, and growth cycles all cause the performance of CFs to vary, and some CFs have poor chemical resistance and low strength. Such fibers can be modified to improve their internal and external structures and their mechanical performance. The modification of CFs mainly comprises physical, chemical, and biological modification, among which chemical modification is the most common [87–90], as shown in Table 6.
Table 6
Modification of CFs and the extent of improvement.
| Type | Method | CFs | Extent of improvement | Ref. |
| --- | --- | --- | --- | --- |
| Physical modification | Polyelectrolyte adsorption | Blue eucalyptus paper | Antibacterial effect | [91] |
| Physical modification | Nonelectrolyte adsorption | Cellulose nanocrystals | Changed the surface structure and properties of cellulose | [92] |
| Chemical modification | Small-molecule modification | Pulp | Promoted the preparation of nanocellulose | [93] |
| Chemical modification | Graft modification | Softwood cellulose fiber | Formed a strong nanocomposite | [94] |
| Chemical modification | Cross-linking modification | Cellulose nanocrystals | Improved thermal stability and water resistance; decreased swelling degree | [95] |
| Biological modification | In situ modification | 6-Carboxyfluorescein-modified glucose | Nonnatural fluorescent functionality | [96] |
| Biological modification | Ex situ modification | Proanthocyanidins as cross-linking agent | Novel bacterial cellulose/gelatin composite | [97, 98] |

Among these methods, physical modification is simple, convenient, and easy to perform, but the modified product is unstable: the modifier detaches easily from the cellulose, reducing product performance. Chemical modification is therefore generally preferable [91, 92]. Compared with small-molecule modification, graft polymerization has clear advantages: it imparts new properties without altering the intrinsic properties of the cellulose, and the modification is very stable; its drawbacks are that the procedure is demanding and the reaction is difficult to control [93–95]. Biological modification can be carried out in situ or ex situ, depending on the application [96–98].

Chemical modification removes hemicellulose, lignin, pectin, and other substances from the fiber surface, fibrillating the CFs and giving them a relatively rough appearance, so that the fiber-cement interface forms a mechanically interlocking morphology [99–101]. When eucalyptus fibers are modified with 3-mercaptopropyltrimethoxysilane, the fibers retain less water while the dimensional stability of the composite improves [102]. Dry-wet cycle treatment of abaca, agave, and sisal fibers reduces the fiber cross-section, increases Young's modulus, and lowers the tensile strength and tensile strain, while the fiber cavity becomes thinner [103]. The modified fibers increase the interfacial shear strength of the cement-based composite and also improve its durability. Treatment with 5% styrene-acrylic copolymer reduced the water absorption of the test piece by 50%, the elastic modulus by 40%, and the shrinkage by 15% after 200 dry-wet cycles, improving the stiffness and dimensional stability of the specimen [104]. After modification, the interface between the fibers and the cement matrix forms a dense and cohesive transition zone that bonds the fibers to the cement surface and helps prevent them from mineralizing.
#### 4.3.3. Multitype CFs
Macroscale CFs have a large diameter and cavity, so they absorb water and swell in the initial stage of mixing with the cement matrix. In the later stage of cement hydration, the fibers gradually lose this moisture, shrink, and collapse at the matrix interface, leaving voids at the fiber-matrix interface that impair the performance of the composite, as shown in Figure 4(b). To remedy this, acid hydrolysis can be used to prepare micron-sized microcrystalline CFs and nanosized nanocrystalline CFs [105, 106].

Microcrystalline CFs are highly absorbent and can supply the moisture the cement matrix lacks in the later stage of hydration, allowing the matrix to hydrate fully. A large amount of cement hydration gel can be induced around the microcrystalline CFs, filling the microcracks and voids of the matrix and reducing drying shrinkage cracks at the initial stage of hydration. Nanocrystalline CFs are of the same size as the cement hydration gel and can induce the C-S-H gel to adhere to their surfaces, so fibers and gel connect and fuse into a uniform, continuous C-S-H gel phase in the cement matrix. The cement hydration products can completely embed the nanocrystalline CFs, avoiding the volume instability caused by macroscale and microcrystalline CFs, and can further improve the durability of the composite, as shown in Figure 4(c). Toughening concrete with micro/nano-scale microcrystalline cellulose has improved both the shrinkage rate and the mechanical properties of the material [107, 108]. Compared with ordinary samples, the network structure formed by the multitype fibers can transmit and share the stress generated by the plastic shrinkage of the cement matrix; the combination of fiber and matrix improves the crack resistance of the material and also enhances its durability.
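The size-matching argument above can be made concrete with a rough calculation. The sketch below is a minimal illustration, not from the cited studies: the representative diameters follow the fiber property ranges reported earlier in this review, and modeling a fiber as a plain cylinder is our simplifying assumption.

```python
# Rough comparison of cellulose fiber scales, modeling each fiber as a
# plain cylinder (end faces neglected). Representative diameters follow
# the ranges reported in this review; exact values vary with plant
# source and preparation method.

FIBER_DIAMETERS_NM = {
    "macroscale CF (cotton)": 11_540,   # ~11.5 micrometers
    "microcrystalline CF":     1_440,   # ~1.4 micrometers
    "nanocrystalline CF":         50,   # within the reported 1-100 nm range
}

def surface_to_volume(diameter_nm: float) -> float:
    """Lateral area / volume of a cylinder: (pi*d*L) / (pi*d^2*L/4) = 4/d."""
    return 4.0 / diameter_nm

for name, d in FIBER_DIAMETERS_NM.items():
    print(f"{name:<24} d = {d:>6} nm   S/V = {surface_to_volume(d):.4f} per nm")

# The surface-to-volume ratio grows by more than two orders of magnitude
# from macroscale to nanocrystalline fibers, which is why nanocrystalline
# CFs (comparable in size to the C-S-H gel) can be fully embedded in the
# hydration products, while shrinking macroscale fibers leave interface voids.
```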
## 5. Conclusions
When microcracks appear during the service period of cement-based composites, the fibers share the load through bridging action, which slows the continued development of the microcracks and increases the durability of the composites. The main conclusions are as follows.

(1) The hydrophilicity and unique hollow structure of CFs optimize the pore structure of cement-based composites, so CFs can significantly improve the impermeability, frost resistance, and carbonization resistance of CFCCs.

(2) CFs disperse uniformly in the cement matrix, where they can induce the orderly growth of cement hydration products at the initial stage of hydration and enhance the compactness of the cement matrix.

(3) The internal curing effect of CFs on the cement matrix enhances the durability of CFCCs, and the use of microcrystalline and nanocrystalline CFs improves it further.

(4) Cementitious materials with low alkali corrosion, such as magnesium silicate cement, magnesium phosphate cement, and geopolymer cement, have been used to reduce the long-term performance degradation of CFs.

(5) Fiber modification is an important measure to improve the durability of CFCCs; chemical modification in particular is the most commonly used.
---
*Source: 1014531-2021-08-17.xml* | 1014531-2021-08-17_1014531-2021-08-17.md | 72,330 | Research Progress on Durability of Cellulose Fiber-Reinforced Cement-Based Composites | Jie Liu; Chun Lv | International Journal of Polymer Science
(2021) | Chemistry and Chemical Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1014531 | 1014531-2021-08-17.xml | ---
## Abstract
The performance of cellulose fiber-reinforced cement-based composites (CFCCs) depends not only on the characteristics of the cement matrix and fibers but also on the bonding property of the matrix and fibers. The durability of cement-based composites including various properties such as impermeability, frost resistance, and carbonization resistance has an important impact on the long-term service life of the matrix structure. The presence of a large number of hydroxyl groups on the molecular chain of cellulose can promote the formation of intra- and intermolecular hydrogen bonds of cellulose. This special structure imparts the cellulose high hydrophilicity, which leads the cement hydration C-S-H gel to adhere to the surface of cellulosic fibers (CFs) and induce its growth. The cavity of CFs has good water absorption and can be used as an internal curing fiber for the continuous hydration of cement-based composites. But CFs in the Portland cement matrix tend to deteriorate under strong alkali conditions. This paper presents a review of the research on the durability of CFCCs. The methods and paths to improve the durability of CFCCs are summarized and analyzed from the perspectives of the internal curing of CFs, the deterioration of the performance of CFs in the matrix, and the use of many types of supplementary cementitious materials. Finally, the development and engineering application of CFCCs have been prospected.
---
## Body
## 1. Introduction
In the last few years, more and more different types of fibers have been used to reinforce cement-based composites. As is known, these fibers include steel, organic synthetic, carbon, and glass fibers. The commonly used fibers for reinforcements exhibit a set of advantages, such as long application time, relatively mature technology, and interesting physical and mechanical properties. However, compared to CFs, the commonly used fibers have their own limitations [1]. As far as we know, the density of steel fibers is relatively large, which cannot meet the requirements of reducing the weight of composites. At the same time, the size of the steel fiber is too large, there is an interface transition zone in the cement matrix, and it also lacks inducing the growth of the hydrate gel in the cement matrix interface transition zone. The organic synthetic fiber has poor compatibility with the cement matrix and is not easy to disperse in the cement matrix. It pollutes the environment greatly, which is contrary to the concept of green cement-based composites [2–4]. The carbon fiber is easy to agglomerate in the cement matrix, and the cost is relatively high [5–8]. The glass fiber is brittle and has poor wear resistance [9, 10].Plant fibers such as crop straws are composed of cellulose, hemicellulose, lignin, and other substances, and cellulose is the main component. CFs are widely found in nature. They are inexpensive, have low density, are renewable, are biodegradable, are rich in sources, and have good reinforcement effects on cement-based composites [11–13]. The geometric characteristics, mechanical properties, volume mixing ratio of CFs, and bonding property of the fiber and matrix interface are important factors that affect the strength and toughness of the matrix. Compared with traditional fibers, CFs have a larger specific surface area, aspect ratio, excellent toughness, and bonding ability. It disperses evenly in the cement matrix, has good compatibility, and has a filling and bridging effect on the cement matrix [14]. At the same time, the addition of CFs also greatly reduces the density of cement-based composites, improves their flexural strength [15–17], inhibits the occurrence and development of microcracks within the matrix [18, 19], and enhances the impact resistance of cement-based composites [20].Over the last few years, studies on the toughening modification of cement-based composites using CFs have focused on the macromechanical properties of composites, the microstructure of the interface between the cement matrix and the fibers, the effect of fiber cracking and toughening, and the durability [21–24]. The durability of cement-based composites includes impermeability, frost resistance, carbonization resistance, and resistance to sulfate attack, which have an important impact on the long-term service life of the structure [25]. The durability of CFCCs is the main reason which limits their engineering application [26].This paper presents a review of the research done in the area of the durability of CFCCs during the last years (2005.01-2021.05), which used Chinese journal literature retrieval databases, including China Journal Network and CNKI Scholar, English journal literature retrieval databases, and so on. Search keywords include cellulose fiber, concrete durability, internal curing, impermeability, frost resistance, and carbonization. 
In the searched literature, no relevant similar reviews have been found, and works of literature that are not related to the content of the paper have also been excluded. Only a few book chapters or reviews provide a general overview of the durability of CFCCs [27, 28], summarizing the main improvements and findings from a few research papers. There are also overviews of nanocellulose fibers, which are different from the contents discussed in this paper [29, 30]. The durability of CFCCs such as impermeability, frost resistance, and carbonization resistance is reviewed, and the impact of CFs on the durability of composites and the internal curing of CFs is discussed in this paper. From the perspectives of the performance, the degradation of CFs in the matrix, and the application of multiple types of auxiliary gel materials, a summary analysis is made to improve the durability of CFCCs, and the solutions are proposed.
## 2. Performance of CFs
As one of the most abundant renewable resources, CFs have the advantage of being low cost and environmentally friendly [31]. In 1932, scientists studied the chemical structure of cellulose [32]. Cellulose is a straight-chain polymer formed by linking countless D-glucopyranose anhydrides(1-5) with β(1-4) glycosides. The structure of cellulose is highly regular and unbranched, and the chemical formula is (C6H10O5)n. There are a large number of hydroxyl groups on the molecular chain of cellulose. The presence of this polar group promotes the formation of intra- and intermolecular hydrogen bonds of cellulose [33, 34]. CFs can be made from renewable resources such as straw, they have a porous cell structure, and their specific surface area is better than other fibers [35]. As described above, there are many pores in the cross-section of the fibers, as shown in Figures 1(a) and 1(b). The properties of CFs mainly depend on the type of cellulose. The cellulose content of different types of plants varied greatly, among which the cellulose content of cotton is the highest. Table 1 shows the performance index data of commonly used CFs and other commonly used fibers [36]. The physical and mechanical properties of CFs can be seen in Table 1. The mechanical properties of CFs such as tensile strength and Young’s modulus are weaker than those of the commonly used fibers such as carbon fiber and aramid fiber. Flax, jute, ramie, sisal, and hemp fibers have better mechanical properties as similar to glass fiber in tensile strength and Young’s modulus, so they can be directly used as CFCCs.Figure 1
SEM images of CFs. (a) Sisal fibers [39]. (b) Jute fibers [13]. (c, d) Combination of the macroscale fiber and cement matrix [37].
(a)(b)(c)(d)Table 1
Performance of CFs and commonly used fibers.
FibersDensity (g/cm3)Elongation (%)Tensile strength (MPa)Young’s modulus (GPa)Cotton fiber1.5-1.67.0-8.0287-5975.5-12.6Jute fiber1.31.5-1.8393-77326.5Flax fiber1.52.7-3.2345-103527.6Ramie fiber1.53.6-3.8400-93861.4-128.0Sisal fiber1.52.0-2.5511-6359.4-22.0Coir fiber1.230.01754.0-6.0Cork fiber1.5—100040.0E-glass fiber2.52.52000–350070.0S-glass fiber2.52.8457086.0Aramid fiber1.43.3-3.73000-315063.0-67.0Carbon fiber1.41.4-1.84000230.0-240.0The performance of CFCCs includes working performance, mechanical properties, and durability. The effect of CFs on the working performance and mechanical properties of composites is similar to that of traditional fibers, but the impact on durability is very different. As mentioned above, the existence of a large number of hydroxyl groups on the molecular chain of cellulose promotes the formation of intra- and intermolecular hydrogen bonds of cellulose. This special structure renders the cellulose extremely hydrophilic, makes the CFs compatible with the cement matrix, and has a good cohesive force. C-S-H gel, the main hydration product of the cement, grows on the surface of CFs, as shown in Figures1(c) and 1(d). Wu et al. [37] analyzed the SEM (Figures 1(c) and 1(d)) and found that the cement hydration around CFs was more complete. The reason is that the CFs can induce the orderly and directional growth of hydration products in the initial stage of cement hydration. After the cement hardens, when the load exceeds its cracking load, the fibers can share the load, which greatly increases the load of the cement matrix, avoiding or delaying the growth of microcracks. In Figure 1(d), the CF surface has peeled off and separated countless microfibers with a diameter less than 1 μm. Some of the microfiber ends are embedded in the cement hydration product. The formation of the microstructure of the cement paste will be accompanied by complex chemical reactions and physical changes. When the shrinkage of the matrix is restricted, the cement-based material will crack. The degree of cracking depends on the tensile strength and shrinkage stress of the cement paste. These microfibers play a bridging and filling role in the composites. The moisture absorbed in the CFs supplements the lack of moisture in the cement hydration process and can induce the hydration of the cement surface, and the hydration product can fill the microcracks so as to achieve the composites’ effect of enhancing impermeability.However, the lignin and hemicellulose in CFs are easily dissolved in the alkaline solution of the cement matrix, and the strong alkali material enters the fiber cavity to cause the mineralization and degradation of the fiber structure, thereby affecting the durability. The CFs applied to cement-based composites can be classified by the function of their size. CFs can be found as macroscale, microcrystalline, and nanocrystalline CFs. Macroscale CFs include strands (long fibers of lengths around 20 to 100 cm), staple fibers (short fibers with lengths between 1 and 20 cm), and pulp (very short fibers with lengths between 1 and 10 mm), which can be processed by chemical methods to form micron-scale microcrystalline CFs and nanosized nanocrystalline CFs [37, 38].In general terms, micron-sized microcrystalline CFs and nanosized nanocrystalline CFs can be produced in chemical or enzymatic ways. 
The loosely arranged amorphous regions in the cellulose are destroyed, and the crystalline regions are retained to obtain micron-sized microcrystalline CFs and nanosized nanocrystalline CFs with higher crystallinity. The preparation methods of nanocrystalline CFs mainly include the physical mechanical method, acid hydrolysis method, and biological method. Nanocrystalline CFs have a rigid rod-like structure, and their properties are shown in Table2.Table 2
Performance of CFs and commonly used fibers.
CFsLength (nm)Diameter (nm)Tensile strength (MPa)Young’s modulus (GPa)Ref.Macroscale CFs (cotton)2,000,000-3,000,00011,540287-5975.5-12.6[36]Microcrystalline CFs100-3,0001,4407,500100.0-140.0[37, 40, 41]Nanocrystalline CFs50-5001-100
## 3. Research Status of Durability of CFCCs
Anselme Payen, a French scientist, extracted a compound from wood in 1838 and named it cellulose. The majority of the fabrication methods for cement composites reinforced with CFs in the pulp form are based on the Hatschek process, patented by L. Hatschek in 1900 [27]. After a century, the research and application of CFCCs have become increasingly widespread. The research on CFs is from mechanical property to durability, from the bonding phenomenon between the CFs and the matrix to the bonding mechanism, and from macroscale application to microscale application. Only a few studies have been focused on the bond adhesion of CFs with cement matrices. For instance, some studies analyzed the effect of CF shape and curing age on the bond strength of CFCCs using pull-out tests [28, 42], and the other studies analyzed the fiber matrix bond adhesion indirectly [14–16].At this stage, research studies on the durability of CFCCs include frost resistance, carbonization resistance, water penetration resistance, chloride ion penetration resistance, gas penetration resistance, sulfate erosion resistance, early anticracking performance, compressive creep performance, and compressive fatigue deformation performance. In general terms, the durability of the composites depends on the resistance to chloride ion permeability, frost resistance, and carbonization resistance. As can be seen in Table3, it is a summary of related studies on the durability of CFCCs.Table 3
Summary of studies on durability of CFCCs.
DurabilityCFsCFs (wt.%)Fiber formCementitiousTest methodsRef.Antichloride ion penetrationUF500 CF0.45~1.5Randomly dispersedP·O42.5RCM method; water seepage height[43–45]Bokai super fiber0.6~1.2P·O52.5Water seepage pressure[46]AntifreezingFlake fiber0.9Randomly dispersedP·O42.5Rapid freezing[43, 47]UF500 CF0.1~0.2[48]CarbonizationCork kraft pulp8.0AlignedP·II42.5RAccelerate carbonization[49]Rape straw fiber0.5~2.00Randomly dispersedSAC[50, 51]Coir fiber2.0Slag cement12 years[52]Antidry and wet cycleSisal fiber1.5~8.0Randomly dispersedP·O42.5RCBI method[53, 54]Cork CF4.0P·O42.525 dry/wet cycles[55]Sisal fiber0.6AlignedCalcined clay100 dry/wet cycles[56]Antisulfate attackUF500 CF0.9Randomly dispersedP·II42.5RSulfate-wet and dry cycle coupling[51]AnticrackSisal fiber2.0AlignedMK/PCInstron 5948 test system[57]Table3 shows that CFCCs are rich in varieties, and their durability involved a wide range, which are closely related to the service life of the composites. The test methods for resistance of CFCCs to freezing and thawing usually include slow freezing and thawing, rapid freezing and thawing, and single-side freezing and thawing. The test methods for resistance of CFCCs to chloride penetration adopt the rapid chloride migration (RCM) coefficient test and coulomb electric flux test. The procedures for preparing CFCCs reported in the literature can be divided into two main groups depending on the fiber form: fibers randomly dispersed in the cement matrix [58–60] and aligned fibers or fibrous structures [43, 46, 50, 53]. In view of the fibers randomly dispersed in the matrix, it has certain limitations in the mechanical properties of reinforced composites, and aligned fibers or fibrous structures are also used to strengthen the cement matrix [49, 56, 57, 61].Cement-based composite is a kind of porous material, which provides a channel for harmful external impurities to penetrate into the matrix. Adding CFs to the cement matrix can reduce the generation of early microcracks, inhibit the development of microcracks during the service period of the matrix, and improve the durability of CFCCs. After absorbing water, the CFs are evenly dispersed into the cement matrix, forming a plurality of microwater flow channels inside the matrix. These pores can continue to provide water for the later cement hydration, make it fully hydrated, ensure the mechanical properties of the cement matrix, prevent the cement matrix from cracking, and also improve the cement matrix’s antipermeability, antifreezing, and anticarbonization capabilities. The uniform distribution of the fiber network enhances the adhesion between the matrix components, the matrix structure has good integrity, and the impact resistance is also significantly improved [62].
### 3.1. Impermeability of CFCCs
Impermeability is an important factor that affects the durability of CFCCs. CFs are evenly distributed in the cement matrix, which can reduce the segregation effect in the initial stage of the cement hydration, inhibit the formation of shrinkage cracks in the cement matrix, reduce the porosity of the matrix, improve the compactness of the matrix, and effectively prevent harmful substances from penetrating into the matrix. It is usually determined by the rapid chloride ion migration coefficient test or the water penetration height test [44]. The chloride ion diffusion coefficient and the water penetration height are reduced after the CFs have been mixed, and the impermeability of the CFs is better than that of the polypropylene fibers under the same dosage. The test results showed that when the CF volume fraction was 0.9%, the effect of improving the impermeability of the matrix was the best [45]. CFs have little effect on the compressive strength of the matrix but significantly improve the splitting tensile strength, axial tensile strength, and ultimate tensile value and effectively block the penetration of chloride ions. At the same time, the cracking time and the width of the matrix crack have been improved [63, 64]. It has been found through experiments that the bubble content in the CF cement matrix decreases by 40%, and the bubble spacing becomes smaller [65].CFs are effectively bonded to the cement matrix and distributed in random directions, forming a uniform support system, optimizing the pore structure of the cement matrix, and blocking the internal communication channels of the matrix. Due to the toughening and cracking resistance of CFs, the number of initial cracks can be significantly reduced, the long-term cracks of the matrix can be effectively inhibited, and the possibility of forming through cracks in the matrix can be reduced. In this way, the microcrack pattern of the cement-based composite is further refined, and its impermeability is significantly improved.
### 3.2. Effect of Freeze-Thaw Cycles on the Durability of CFCCs
Due to the effect of negative temperature, the water in the pores of CFCCs in a water-saturated state freezes and produces a volume change to form tensile stress. Under the action of the freeze-thaw cycle, the cement matrix damage gradually accumulates and expands, which will eventually lead to the destruction of the cement matrix. CFs can significantly improve the frost resistance of CFCCs. On the one hand, the addition of CFs reduces the water penetration in the cement matrix. On the other hand, the CFs can absorb parts of the unfrozen free water and reduce the hydrostatic pressure in the cement matrix. A large number of comparative tests have shown that polypropylene fibers and CFs have improved frost resistance to cement-based composites [47, 66, 67], and CFs are better than polypropylene fibers. When the volume fraction of CFs is 0.9%, the antifreezing effect of CFCCs is the best. Under the action of the freeze-thaw cycle, the CFs have the effect of binding the surface slurry of the matrix, and its appearance damage is improved to a certain extent.The addition of CFs slows down the rate of decrease of relative dynamic elastic modulus, increases the number of freeze-thaw cycles that the test piece can withstand, and improves the frost resistance of the CFCCs; it can be seen from Figure2 [48]. As the number of freeze-thaw cycles increases, the relative dynamic elastic modulus of the specimens decreases. At the beginning of the freeze-thaw cycle (within 100 freeze-thaw cycles), for basalt fibers and CFs, when the fiber volume fraction is 1.0, 1.5, or 2.0%, the relative dynamic elastic modulus decreases slowly, and the effect of fiber content is not much different. In the late freeze-thaw period (the number of freeze-thaw cycles is greater than 100), the decline rate increases, indicating that the internal damage of the concrete gradually increases after the freeze-thaw cycle. The relative dynamic elastic modulus of plain concrete decreases faster than that of fiber concrete. When the freeze-thaw cycle reaches 200 times, the freeze-thaw damage rate is far less than 60%. However, the relative dynamic elastic modulus of fiber-reinforced concrete with a fiber volume fraction of 2% decreases more gently than that of concrete with fiber volume fractions of 1.0% and 1.5%.Figure 2
(a) The relative dynamic elastic modulus of basalt fibers and (b) CF with different fiber volume fraction changes with the number of freeze-thaw cycles [48].
(a)(b)Since the elastic modulus and tensile properties of basalt fibers are higher than those of CFs, the basalt fibers can more effectively improve the tensile strength of concrete, inhibit the expansion of internal cracks in concrete, reduce the entry of water into the matrix, and delay the frost heave damage of the internal structure. Therefore, when the number of freeze-thaw cycles is high, basalt fibers improve the freeze resistance of concrete more significantly than CFs. In general, the effect of improving the frost resistance is not significant when the fiber volume fraction is increased from 1.5% to 2.0%. Therefore, it is more economical and reasonable to choose the fiber volume fraction of 1.5%.
### 3.3. Effect of Carbonization on the Durability of CFCCs
When CFCCs are applied as the protective layer of steel bars, the impact of carbonization on the steel bars must be paid attention to. The high alkalinity inside the matrix passivates the surface of the steel bar, and the passivated film can prevent the steel bar from corroding by the external environment. The hydration product of cement-based composites has stable performance in an alkaline environment and can maintain good cementing ability. Carbonization is the process of neutralizing the cement matrix, which can reduce the alkalinity of CFCCs and can induce the corrosion of the stressed steel bars and the destruction of the structure [68, 69]. The essence of carbonization is the diffusion process of carbon dioxide gas from the surface to the inside of the matrix. Both compactness of the matrix structure and internal defects affect the diffusion rate. The hydrophilicity and unique hollow structure of CFs optimize the pore structure of cement-based composites, which reduce internal defects and enhance carbonization resistance. In the case of loading, the generated microcracks become channels for the diffusion of carbon dioxide gas, which reduce its anticarbonization performance. After adding CFs to the matrix, the carbonization of cement-based composites is still a diffusion-dominated process [70–72].Analysis of the pore size distribution data obtained based on the mercury intrusion experiment method shows that the fiber has a significant influence on the microstructure of cement-based composites [73]. After the fiber volume fraction reaches 0.6%, it can be observed that the unimodal width on the pore distribution curve of cement-based materials increases, and the number of large-scale pores (radius greater than or equal to 200 nanometers) increases, which confirms that the micromatrix structure is coarsened due to the introduction of fibers. The addition of fibers can change the working performance of the fresh cement paste, thereby producing an air-entraining effect [74]. At the 0.6% fiber volume fraction, the effect of this phenomenon exceeds the internal curing effect, which causes the degradation of the overall microstructure of cement-based composites. Compared with the 0.6% fiber volume fraction, at the 0.3% fiber volume fraction, the pore size distribution before and after carbonization is less different, indicating that the 0.3% fiber volume fraction has a limited effect on the microstructure of cement-based composites.
## 3.1. Impermeability of CFCCs
Impermeability is an important factor that affects the durability of CFCCs. CFs are evenly distributed in the cement matrix, which can reduce the segregation effect in the initial stage of the cement hydration, inhibit the formation of shrinkage cracks in the cement matrix, reduce the porosity of the matrix, improve the compactness of the matrix, and effectively prevent harmful substances from penetrating into the matrix. It is usually determined by the rapid chloride ion migration coefficient test or the water penetration height test [44]. The chloride ion diffusion coefficient and the water penetration height are reduced after the CFs have been mixed, and the impermeability of the CFs is better than that of the polypropylene fibers under the same dosage. The test results showed that when the CF volume fraction was 0.9%, the effect of improving the impermeability of the matrix was the best [45]. CFs have little effect on the compressive strength of the matrix but significantly improve the splitting tensile strength, axial tensile strength, and ultimate tensile value and effectively block the penetration of chloride ions. At the same time, the cracking time and the width of the matrix crack have been improved [63, 64]. It has been found through experiments that the bubble content in the CF cement matrix decreases by 40%, and the bubble spacing becomes smaller [65].CFs are effectively bonded to the cement matrix and distributed in random directions, forming a uniform support system, optimizing the pore structure of the cement matrix, and blocking the internal communication channels of the matrix. Due to the toughening and cracking resistance of CFs, the number of initial cracks can be significantly reduced, the long-term cracks of the matrix can be effectively inhibited, and the possibility of forming through cracks in the matrix can be reduced. In this way, the microcrack pattern of the cement-based composite is further refined, and its impermeability is significantly improved.
## 3.2. Effect of Freeze-Thaw Cycles on the Durability of CFCCs
Under negative temperatures, the water in the pores of water-saturated CFCCs freezes, and the accompanying volume change generates tensile stress. Under repeated freeze-thaw cycles, damage to the cement matrix gradually accumulates and expands, eventually destroying it. CFs can significantly improve the frost resistance of CFCCs: on the one hand, adding CFs reduces water penetration into the cement matrix; on the other hand, the CFs absorb part of the unfrozen free water and reduce the hydrostatic pressure in the matrix. A large number of comparative tests have shown that both polypropylene fibers and CFs improve the frost resistance of cement-based composites [47, 66, 67], with CFs performing better. The antifreezing effect of CFCCs is best at a CF volume fraction of 0.9%. Under freeze-thaw cycling, the CFs also bind the surface slurry of the matrix, so its surface damage is reduced to a certain extent.

As can be seen from Figure 2 [48], the addition of CFs slows the decrease of the relative dynamic elastic modulus, increases the number of freeze-thaw cycles that a test piece can withstand, and improves the frost resistance of CFCCs. As the number of freeze-thaw cycles increases, the relative dynamic elastic modulus of the specimens decreases. Early in the cycling (within 100 freeze-thaw cycles), for both basalt fibers and CFs at fiber volume fractions of 1.0, 1.5, and 2.0%, the relative dynamic elastic modulus decreases slowly and the effect of fiber content is small. In the late freeze-thaw period (beyond 100 cycles), the decline rate increases, indicating that internal damage to the concrete gradually accumulates; the relative dynamic elastic modulus of plain concrete decreases faster than that of fiber concrete. At 200 freeze-thaw cycles the freeze-thaw damage remains far below the 60% criterion, and fiber-reinforced concrete with a fiber volume fraction of 2% declines more gently than concrete with fiber volume fractions of 1.0% and 1.5%.

Figure 2
Relative dynamic elastic modulus of (a) basalt fiber concrete and (b) CF concrete with different fiber volume fractions versus the number of freeze-thaw cycles [48].
Since the elastic modulus and tensile properties of basalt fibers are higher than those of CFs, basalt fibers can more effectively improve the tensile strength of concrete, inhibit the expansion of internal cracks, reduce the entry of water into the matrix, and delay frost heave damage to the internal structure. Therefore, at high numbers of freeze-thaw cycles, basalt fibers improve the frost resistance of concrete more significantly than CFs. In general, increasing the fiber volume fraction from 1.5% to 2.0% does not significantly improve frost resistance, so a fiber volume fraction of 1.5% is more economical and reasonable.
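The relative dynamic elastic modulus plotted in Figure 2 is conventionally obtained from the fundamental transverse frequency of a specimen measured before and after cycling, with failure commonly taken as a drop below 60%. A minimal sketch with hypothetical frequency readings (not measured data):

```python
def relative_dynamic_modulus(f_n_hz, f_0_hz):
    """P_n = (f_n / f_0)^2 * 100%, from fundamental frequencies measured
    before (f_0) and after (f_n) n freeze-thaw cycles."""
    return (f_n_hz / f_0_hz) ** 2 * 100.0

# Hypothetical readings after 200 cycles; f_0 is the initial frequency.
f_0 = 2400.0
for label, f_n in [("plain concrete", 1750.0), ("1.5% CF concrete", 2100.0)]:
    p = relative_dynamic_modulus(f_n, f_0)
    status = "failed (< 60%)" if p < 60.0 else "serviceable"
    print(f"{label}: P = {p:.1f}% -> {status}")
```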
## 3.3. Effect of Carbonization on the Durability of CFCCs
When CFCCs are applied as the protective layer of steel bars, the effect of carbonization on the bars must be considered. The high alkalinity inside the matrix passivates the surface of the steel bar, and the passivation film protects the bar from corrosion by the external environment. The hydration products of cement-based composites are stable in an alkaline environment and maintain good cementing ability. Carbonization neutralizes the cement matrix, reducing the alkalinity of CFCCs, and can induce corrosion of the stressed steel bars and destruction of the structure [68, 69]. In essence, carbonization is the diffusion of carbon dioxide gas from the surface into the matrix, and both the compactness of the matrix structure and its internal defects affect the diffusion rate. The hydrophilicity and unique hollow structure of CFs optimize the pore structure of cement-based composites, which reduces internal defects and enhances carbonization resistance. Under load, however, the microcracks that form become channels for carbon dioxide diffusion, which degrades the anticarbonization performance. After CFs are added to the matrix, the carbonization of cement-based composites remains a diffusion-dominated process [70–72].

Analysis of pore size distribution data from mercury intrusion porosimetry shows that the fiber has a significant influence on the microstructure of cement-based composites [73]. Once the fiber volume fraction reaches 0.6%, the single peak on the pore distribution curve of the cement-based material widens and the number of large pores (radius greater than or equal to 200 nm) increases, confirming that the micromatrix structure is coarsened by the introduction of fibers. The added fibers change the working performance of the fresh cement paste and thereby produce an air-entraining effect [74]. At a 0.6% fiber volume fraction this effect exceeds the internal curing effect, degrading the overall microstructure of the composite. Compared with 0.6%, at a 0.3% fiber volume fraction the pore size distributions before and after carbonization differ less, indicating that 0.3% fiber has a limited effect on the microstructure of cement-based composites.
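Because carbonization remains diffusion-dominated, its depth is commonly modeled as growing with the square root of exposure time, x = k√t. A minimal sketch follows; the carbonation coefficients are assumed values chosen only to contrast a coarsened and a dense CF matrix, not measured results:

```python
import math

def carbonation_depth_mm(t_years, k_mm_per_sqrt_year):
    """Carbonation front depth under the classical diffusion model x = k*sqrt(t)."""
    return k_mm_per_sqrt_year * math.sqrt(t_years)

# Assumed coefficients (mm/year^0.5) for the two dosages discussed above.
for label, k in [("0.6% CF (coarsened pores)", 4.0), ("0.3% CF (dense matrix)", 2.5)]:
    print(f"{label}: x(50 a) = {carbonation_depth_mm(50.0, k):.1f} mm")
```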
## 4. Constitutive Relationship between CFs and Composite Durability
According to the fiber spacing theory underlying the concept of fiber crack resistance, densely spaced fibers act as crack barriers that reduce the stress intensity factor at microcrack tips inside the matrix and inhibit microcrack propagation, thereby improving the initial cracking strength of the composites. Previous studies show that there is a limit on the amount of CF, which strongly affects the durability of composites through the internal curing of the cement matrix and long-term exposure to its alkaline environment. In fact, CFs first increase the air content of the concrete and relieve the hydrostatic and osmotic pressures during low-temperature cycles. Second, the dense microfibers improve the internal quality of the concrete, reduce internal defects, and improve tensile properties such as ultimate tensile strain and fracture energy. In addition, because CFs have a small diameter and a large number of fibers per unit weight, the fiber spacing is small, which increases the energy dissipated during the damage process and effectively inhibits cracking of the concrete; a rough geometric estimate of this spacing is sketched below.
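The spacing estimate below is a simple geometric sketch, not a published spacing formula: each fiber is assumed to occupy a cubic cell whose volume is the single-fiber volume divided by the volume fraction, and the fiber geometry is hypothetical.

```python
import math

def fiber_spacing_mm(volume_fraction, diameter_mm, length_mm):
    """Rough center-to-center spacing from a cubic-cell assumption:
    cell volume = (single fiber volume) / (fiber volume fraction)."""
    fiber_volume = math.pi * (diameter_mm / 2.0) ** 2 * length_mm
    return (fiber_volume / volume_fraction) ** (1.0 / 3.0)

# Hypothetical CF geometry: 20 um diameter, 2 mm length.
for vf in (0.003, 0.006):
    print(f"Vf = {vf:.1%}: spacing ~ {fiber_spacing_mm(vf, 0.02, 2.0):.2f} mm")
```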
### 4.1. Internal Curing Fiber of Cement-Based Composites
Unlike other types of fibers, CFs have a unique hollow lumen structure and good water absorption, so they can serve as internal curing fibers for cement-based composites, as shown in Figures 1(a) and 1(b). In the absence of an external water supply for curing, they provide internal curing of the cement matrix, mitigate its water loss under natural conditions, and promote continued hydration over a long period, so the later-age strength increases greatly [75, 76]. CFs can also harmonize the workability of composites and improve construction performance. In addition, the curing properties of CFs can improve the interlayer deposition and stacking process of 3D-printed cement-based composites, reducing interlayer voids and longitudinal defects and enhancing durability [77].

CFs can effectively reduce the shrinkage of the cement matrix and significantly improve the flexural strength and fracture toughness of composites. The 28-day and 100-day fracture toughness of the composite vary with fiber content, as shown in Figure 3(a). The physical parameters of CFCCs also vary with fiber content: as the amount of CFs increases, the density of the composites decreases. Table 4 shows the density of CFCCs [78], and a simple interpolation over these values is sketched after the table. At a fiber content of 16% by mass, the 28-day fracture toughness is increased by 37 times. Figure 3(b) shows the relationship between deflection and fiber content for two CFCCs reinforced with rice straw (RFRCC) and bamboo (BFRCC) fibers [78]. As the CF content increased, the deflection of the test piece also increased, which further shows that the fibers improved the deformability and toughness of the composite, thereby enhancing durability. As also shown in Figure 3, composites reinforced with CFs lose a significant amount of fracture toughness over time, meaning that they become stiffer and more brittle with age. Melo Filho and coworkers [39] suggested that this weakening of the energy-absorbing ability of the CF was probably due to the deposition of calcium hydroxide crystals on the CF surface.

Figure 3
(a) Influence of fiber content on fracture toughness and (b) influence of fiber content on composite deformation [78].
Table 4
Bulk density of CFCCs at 28 days (g/cm³), by fiber content (wt.%).

| CFs | 0 wt.% | 4 wt.% | 8 wt.% | 12 wt.% | 16 wt.% |
|---|---|---|---|---|---|
| RFRCC | 2.01 ± 0.04 | 1.70 ± 0.01 | 1.49 ± 0.03 | 1.38 ± 0.04 | 1.26 ± 0.01 |
| BFRCC | 2.01 ± 0.04 | 1.71 ± 0.01 | 1.50 ± 0.04 | 1.36 ± 0.04 | 1.30 ± 0.05 |
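As referenced above, a short sketch that interpolates the Table 4 means to estimate density at an intermediate fiber content; the 10 wt.% query point is arbitrary.

```python
contents = [0, 4, 8, 12, 16]                  # fiber content, wt.% (Table 4)
rho_rfrcc = [2.01, 1.70, 1.49, 1.38, 1.26]    # RFRCC mean densities, g/cm^3

def interp_density(w, xs, ys):
    """Piecewise-linear interpolation between tabulated points."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= w <= x1:
            return y0 + (y1 - y0) * (w - x0) / (x1 - x0)
    raise ValueError("fiber content outside tabulated range")

print(f"RFRCC at 10 wt.%: ~{interp_density(10, contents, rho_rfrcc):.2f} g/cm^3")
```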
### 4.2. Deterioration of CFs in the Cement Matrix
The structural characteristics of CFs are the root cause of their deterioration in the highly alkaline environment of the cement matrix. Studies have shown that when fibers remain in an alkaline environment for a long time, the lignin and hemicellulose in them readily dissolve in the alkaline pore solution of the cement-based composite, resulting in partial fiber breakage and weakened tensile strength. In addition, alkaline species from the cement matrix enter the fiber cavity and mineralize the fiber structure, which reduces the mechanical properties of the fiber. At the same time, the extremely strong hydrophilicity of CFs causes volume changes that affect the durability of the overall structure [49, 57]. When sisal and coconut husk fibers were immersed in a saturated calcium hydroxide solution and tested after 28 days, their tensile strength was found to have decreased by about 50% [52]. When CFs are immersed in water, saturated lime water, or sodium hydroxide solution, the lignin, cellulose, and hemicellulose contents of the fibers are all reduced [37, 58]. Using such deteriorated fibers in cement-based composites will inevitably degrade the composites' mechanical properties.
### 4.3. Ways to Improve the Durability of CFCCs
At present, there are two general approaches to improving the durability of CFCCs, motivated by the fact that CFs provide internal curing but are prone to deterioration. One is to modify the cement matrix so as to consume the calcium hydroxide produced as an alkaline component during cement hydration. The other is to modify the fibers by physical or chemical methods to improve their stability in the cement matrix.
#### 4.3.1. Modification of the Cement Matrix
Whereas carbonization must be prevented in steel-reinforced concrete structures, for cement-based composites reinforced only with CFs it should instead be accelerated in order to enhance durability. The purpose of carbonization is to make the cement hydration product calcium hydroxide react with carbon dioxide to form calcium carbonate. Pizzol et al. [79] performed accelerated carbonization tests on composites reinforced with sisal and kraft pulp, which increased the load-bearing capacity of the composite by 25% and the toughness by 80% and reduced fiber degradation in the cement medium. Carbonization reduced the porosity, water absorption, and nitrogen permeability of the composite, increased the density of the matrix interface, and bonded the fiber and cement matrix more tightly, as shown in Figure 4(a). Carbonization also improved the compressive strength, durability, and weather resistance of the composites, extending their service life [70, 71, 80]. Owing to the chemical stability of the carbonized product and the reduced capillary porosity, the CFCCs have better flexural strength and improved adhesion between the cement-based matrix and the CFs.

Figure 4
(a) Microstructure [72]. (b) Voids around the CFs. (c) Micro-nano-level microcrystalline cellulose [35].
Studies have shown that the optimal water content of the carbonized matrix is 40% to 60% [72] and that carbonization significantly improves the durability of the matrix under drying-wetting and freezing-thawing. Both carbonization and the addition of mineral admixtures can reduce the calcium hydroxide content in the cement matrix. Mineral admixtures such as silica fume, metakaolin, blast furnace slag, and fly ash mixed into the cement-based composite undergo a secondary hydration reaction with the calcium hydroxide in the cement to form hydrated calcium silicate or hydrated calcium aluminate [81]. Replacing part of the cement with mineral admixtures significantly reduces the calcium hydroxide content, avoids deterioration of fiber performance, and preserves the strength and toughness of cement-based composites [82]. Further studies have shown that cementitious materials free of calcium hydroxide can be produced by using calcined metakaolin and calcined waste crushed clay bricks instead of ordinary Portland cement [53].

Many types of supplementary cementitious materials have been used, and the extent of improvement varies, as shown in Table 5. The abbreviations are as follows: silica fume (SF), blast furnace slag (SL), fly ash (FA), metakaolin (MK), rice husk ash (RHA), natural rubber latex (NRL), nanoclay (NC), gypsum (GY), and lime (LI).

Table 5
Use of supplementary cementitious materials of CFCCs.
| Cementitious materials | Weight of cement (%) | CFs | Extent of improvement | Ref. |
|---|---|---|---|---|
| MK | 50 MK | Sisal | Significant reduction of the calcium hydroxide formation; no signs of fiber degradation | [38, 64, 83] |
| MK and SF | 15 SF or 15 MK | Sisal | Improved the mechanical properties and the durability | [81] |
| SL, SF, and MK | 70 SL/10 MK or 70 SL/10 SF | Kraft pulp | Effective in preventing degradation | [82] |
| RHA, MK, and NC | 30 RHA, MK, and NC | Sisal | The durability of composites was improved owing to the mitigation of fiber degradation | [53] |
| SF and SL | 10 SF and 40 SL | Cannabinus | Slowing down the strength loss and embrittlement | [84] |
| SF, SL, FA, MK | 10 SF/70 SL, 10 MK/70 SL, and 10 MK/10 SF/70 FA | Softwood kraft pulp | Prevented composite degradation due to a reduction in the calcium hydroxide content and the stabilization of the alkali content | [85] |
| SL, GY, LI | 88 SL/10 GY/2 LI (0 cement) | Coir and sisal | Do not appear to have a significant effect on the prevention of ductility dropping | [86] |
| SF, NRL | 13.55 SF/14.55 CF/1.40 NRL | Cellulose | Improved material durability | [20] |
#### 4.3.2. Modification of CFs
Improving the water resistance of CFs and the adhesion at the fiber-matrix interface is necessary for developing composites with good mechanical and environmental performance. However, CF performance varies with fiber type, geographical and climatic conditions, and growth cycle, and some CFs have poor chemical resistance and low strength. Such fibers can be modified to improve their internal and external structures and their mechanical performance. CF modification mainly comprises physical, chemical, and biological methods, of which chemical modification is the most common [87–90], as shown in Table 6.

Table 6
Modification of CFs and the extent of improvement.
| Type | Method | CFs | Extent of improvement | Ref. |
|---|---|---|---|---|
| Physical modification | Polyelectrolyte adsorption | Blue eucalyptus paper | Antibacterial effect | [91] |
| Physical modification | Nonelectrolyte adsorption | Cellulose nanocrystals | Changed the surface structure and properties of cellulose | [92] |
| Chemical modification | Small molecule modification | Pulp | Promoted the preparation of nanocellulose | [93] |
| Chemical modification | Graft modification | Softwood cellulose fiber | Formed a strong nanocomposite | [94] |
| Chemical modification | Cross-linking modification | Cellulose nanocrystals | Improved thermal stability and water resistance, decreased swelling degree | [95] |
| Biological modification | In situ modification | 6-Carboxyfluorescein-modified glucose | Nonnatural characteristic fluorescent function | [96] |
| Biological modification | Ex situ modification | Proanthocyanidins as cross-linking agent | Novel bacterial cellulose/gelatin composite | [97, 98] |

Among these cellulose modification methods, the physical method is simple, convenient, and easy to operate, but the performance of the modified product is unstable: the modifier easily falls off the cellulose, degrading product performance. Chemical methods are better [91, 92]. Compared with small-molecule modification, graft polymerization has obvious advantages: it imparts new properties without changing the intrinsic properties of the cellulose, and the modification is very stable, although the reaction is difficult to carry out and to control [93–95]. Biological modification of cellulose can be performed in situ or ex situ according to the actual situation [96–98].

Chemical modification removes hemicellulose, lignin, pectin, and other substances from the fiber surface so that the CF structure becomes fibrillated and relatively rough in appearance, forming a mechanically interlocking morphology at the cement matrix interface [99–101]. When eucalyptus fibers are modified with 3-mercaptopropyltrimethoxysilane [102], the fibers' water retention is reduced while the dimensional stability of the composite improves. Dry-wet cycle treatment of abaca, agave, and sisal fibers [103] reduces the fiber cross-section, increases Young's modulus, reduces the tensile strength and tensile strain, and thins the cavity; the modified fibers increase the interfacial shear strength of the cement-based composite and also improve its durability. Treatment with 5% styrene-acrylic copolymer reduced the water absorption of the test piece by 50%, the elastic modulus by 40%, and the shrinkage by 15% after 200 dry-wet cycles, improving the stiffness and dimensional stability of the specimen [104]. After modification, the interface between the fibers and the cement matrix forms a dense, cohesive transition zone that makes the fiber adhere to the cement surface and prevents the fiber from mineralizing.
#### 4.3.3. Multitype CFs
Macroscale CFs have a large diameter and cavity, so they absorb water and swell in the initial stage of mixing with the cement matrix. In the later stage of cement hydration, the fiber gradually loses moisture, shrinks, and collapses at the matrix interface, leaving voids at the fiber-cement interface that degrade the performance of the composite, as shown in Figure 4(b). To remedy this, acid hydrolysis can be used to prepare micron-sized microcrystalline CFs and nanosized nanocrystalline CFs [105, 106].

Microcrystalline CF is superabsorbent and can supply the moisture lacking in the cement matrix at the later stage of hydration, allowing the matrix to hydrate fully. A large amount of cement hydration gel can be induced around the microcrystalline CFs, filling the microcracks and voids of the matrix and reducing drying shrinkage cracks at the initial stage of hydration. Nanocrystalline CF is of the same size scale as the cement hydration gel and can induce the C-S-H gel of the hydrating cement matrix to adhere to its surface, so the two connect and fuse into a uniform, continuous C-S-H gel phase. The cement hydrate can completely embed the nanocrystalline CFs, avoiding the volume instability caused by macroscale and microcrystalline CFs and further improving the durability of the composite, as shown in Figure 4(c). Micro/nanoscale microcrystalline cellulose has been used to toughen concrete, improving its shrinkage and mechanical properties [107, 108]. Compared with ordinary samples, the network structure formed by multitype fibers can transmit and share the stress generated by plastic shrinkage of the cement matrix; the combination of fiber and matrix improves the crack resistance of the material and also enhances its durability.
## 5. Conclusions
When microcracks appear during the service period of cement-based composites, the fibers share the load through bridging action, which slows the continuous development of the microcracks and increases the durability of the composites. The main conclusions are as follows.

(1) The hydrophilicity and unique hollow structure of CFs optimize the pore structure of cement-based composites, so CFs can significantly improve the impermeability, frost resistance, and carbonization resistance of CFCCs.

(2) CFs disperse uniformly in the cement matrix, where they can induce the orderly growth of cement hydration products at the initial stage of hydration and enhance the compactness of the cement matrix.

(3) The internal curing effect of CFs on the cement matrix enhances the durability of CFCCs, and the use of microcrystalline and nanocrystalline CFs improves it further.

(4) Cementitious materials with low alkali corrosion, such as magnesium silicate cement, magnesium phosphate cement, and geopolymer cement, have been used to reduce the long-term performance degradation of CFs.

(5) Fiber modification is an important measure for improving the durability of CFCCs; chemical modification in particular is commonly used.
---
*Source: 1014531-2021-08-17.xml* | 2021 |
# Acoustic Velocity Log Numerical Simulation and Saturation Estimation of Gas Hydrate Reservoir in Shenhu Area, South China Sea
**Authors:** Kun Xiao; Changchun Zou; Biao Xiang; Jieqiong Liu
**Journal:** The Scientific World Journal
(2013)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2013/101459
---
## Abstract
A gas hydrate model and a free gas model are established, and the two-phase theory (TPT) is adopted for numerical simulation of elastic wave velocity in the unconsolidated deep-water sedimentary strata of the Shenhu area, South China Sea. The relationships between compression wave (P wave) velocity and gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 are studied, and the gas hydrate saturation of the research area is estimated with the gas hydrate model. In the depth range of 50 to 245 m below seafloor (mbsf), P wave velocity increases gradually as sediment porosity decreases, increases gradually as gas hydrate saturation increases, and decreases as free gas saturation increases, in line with previous research results. At depths of 195 to 220 mbsf, the measured P wave velocity increases significantly relative to the modeled water-saturated velocity, and this layer is determined to be rich in gas hydrate. The average gas hydrate saturation estimated from the TPT model is 23.2%, and the maximum saturation is 31.5%, in general agreement with the simplified three-phase equation (STPE), effective medium theory (EMT), resistivity log (Rt), and chloride anomaly methods.
---
## Body
## 1. Introduction
Gas hydrate occurs mainly in seafloor sediments and polar permafrost [1]. It is a solid crystalline compound with a cage structure formed by water molecules and natural gas (usually dominated by methane). Its formation requires a low-temperature, high-pressure environment, and the methane concentration must exceed its solubility in the pore water, so gas hydrate is commonly found at water depths greater than 300 m along continental slopes [2, 3]. Submarine gas hydrate reserves depend mainly on the areal distribution of gas hydrate, the thickness of the gas hydrate stability zone, the porosity of the sedimentary layer, the gas hydrate saturation, and so on. However, accurate estimation of gas hydrate reserves remains difficult owing to limited means of determining gas hydrate distribution and saturation. As gas hydrate is rich in methane, it is associated with a series of scientific issues [4], including the global carbon cycle [5], global temperature change [6], sea-level rise [7], and future energy supply [8]. Therefore, determining gas hydrate distribution and estimating gas hydrate saturation have become research focuses worldwide. In May 2007, gas hydrate samples and various log data from the gas hydrate zone were obtained for the first time in the Shenhu area, South China Sea, a significant breakthrough in gas hydrate exploration in China that has greatly facilitated investigation of the properties of the gas hydrate reservoir [9].

Geophysical logging is an important tool for evaluating gas hydrate saturation, and valuable information can be obtained from resistivity and acoustic velocity log data. Using the log data from the Shenhu area, South China Sea, Chinese scholars have applied several theoretical models and empirical formulas to estimate gas hydrate saturation [10–14], and these studies have greatly advanced the use of log data for this purpose in China. However, previous studies focused mainly on estimating gas hydrate saturation, and the estimates differed considerably among methods. Moreover, they did not systematically examine the relationships among gas hydrate saturation, elastic wave velocity, and sediment porosity, which left the log data incompletely understood.

Foreign scholars carried out research on evaluating marine gas hydrate saturation earlier. A variety of theoretical or experimental models have been proposed to estimate gas hydrate saturation, such as the Wyllie et al. [15] time-average equation with seismic velocity [16–18], the effective medium theory [19–23], the Biot-Gassmann theory model [24–27], the thermal-elastic theory for compression wave (P wave) velocity [28], the three-phase equation (TPE) [29–34], and the velocity model based on the two-phase theory (TPT) [35]. Moreover, Tinivella et al. [36] compared the TPT model with the TPE model for evaluating gas hydrate saturations in marine sediments, and the comparison showed that the two theoretical approaches agree very well.
Based on this, the TPT model has been applied to verify the TPE model and to estimate gas hydrate and free gas saturations in several different areas [37–39].

In this work, based on the log data at site SH2 in the Shenhu area, we first establish a gas hydrate model and a free gas model using the TPT elastic wave velocity numerical model, then study the dependence of P wave velocity on gas hydrate saturation, free gas saturation, and sediment porosity, and finally apply the gas hydrate model to estimate gas hydrate saturation at site SH2.
## 2. Numerical Simulation of Elastic Wave Velocity by the TPT Model
The TPT model [41, 42] supposes that the solid part of the rock is composed of rock matrix and gas hydrate and that the pore fluid is composed of free gas and water; it can be used to model the elastic characteristics of marine sand-shale reservoirs. Based on this theory and using the relations reported in Tinivella [35], the P wave velocity (Vp) and shear wave (S wave) velocity (Vs) are given as follows:
(1) Vp = {[((ϕeff/k)(ρm/ρf) + (1 − β − 2ϕeff/k)(1 − β)) / ((1 − ϕeff − β)Cb + ϕeffCf) + 1/Cm + (4/3)μ] · 1/(ρm(1 − (ϕeff/k)(ρf/ρm)))}^(1/2),
Vs = {μ / (ρm[1 − (ϕeffρf)/(kρm)])}^(1/2).
where ϕeff is the effective porosity, μ is the average rigidity of the skeleton, ρm is the average density, ρf is the density of the fluid phase, β is the proportionality coefficient, k is the coupling factor, and Cb, Cf, and Cm are the compressibilities of the solid phase, the fluid phase, and the matrix, respectively.
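A direct transcription of (1) as reconstructed above, offered as a sketch rather than the authors' own code; all inputs are user-supplied elastic parameters in SI units:

```python
import math

def tpt_velocities(phi_eff, k, rho_m, rho_f, beta, c_b, c_f, c_m, mu):
    """P and S wave velocities from relation (1). Compressibilities in 1/Pa,
    shear modulus in Pa, densities in kg/m^3; returns (Vp, Vs) in m/s."""
    fluid_term = ((phi_eff / k) * (rho_m / rho_f)
                  + (1 - beta - 2 * phi_eff / k) * (1 - beta)) / (
        (1 - phi_eff - beta) * c_b + phi_eff * c_f)
    frame_term = 1.0 / c_m + (4.0 / 3.0) * mu
    mass_factor = rho_m * (1 - (phi_eff / k) * (rho_f / rho_m))
    return (math.sqrt((fluid_term + frame_term) / mass_factor),
            math.sqrt(mu / mass_factor))
```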
## 3. Geological Setting
The Shenhu area, considered one of the occurrences of gas hydrate, is located in the middle of the northern slope of the South China Sea, between the Xisha Trough and the Dongsha Islands, within the Baiyun Sag of the Zhu II depression of the Pearl River Mouth Basin (Figure 1). The Shenhu area experienced a geological evolution similar to that of the northern margin of the South China Sea and eventually formed regional sedimentary sequences dominated by marine sediments [43]. The Cenozoic sedimentary strata in the Shenhu area, for example, are about 1000–7000 m thick with an organic matter content of 0.46%–1.9% [44–46], providing the material base for gas hydrate. In recent years, studies have confirmed that mud volcanoes, seafloor slides, mud diapirs, and other structural units favorable to the formation of gas hydrate are widely developed in the Shenhu area [47], and bottom simulating reflections (BSRs) have also been identified by various geophysical methods in the northern South China Sea by the Guangzhou Marine Geological Survey, China Geological Survey [48].

Figure 1
Areas of gas hydrate exploration and drilling area with drilling sites in the northern part of the South China Sea [40]. (Red dots, gas hydrate samples obtained; dark purple dots, no gas hydrate samples obtained.)

During April–June 2007, eight sites were drilled in the Shenhu area (see Figure 2); among them, sites SH2, SH3, and SH7, in water depths of 1105 to 1423 m, were determined from recovered core samples to contain gas hydrate. The thickness of the gas hydrate stability zone was about 10 to 25 m [9, 49], and according to the core data the sediment lithology in and above the zone was silt and silty clay, respectively.

Figure 2
The conventional logs at site SH2 [10]. (The area delineated by the pink line is the gas hydrate reservoir, at a depth range of 195 to 220 mbsf.)
## 4. Data Set Used in the Study
Since previous researchers have done much work at site SH2 [10–14, 40, 50], many valuable references are available to support and verify our research, so we select site SH2 as the research object. The water depth at site SH2 is 1232 m, and the maximum drilling depth is 245 mbsf. Figure 2 shows the conventional logs of site SH2. The measured depth range is 50 to 245 mbsf, and the measurements include caliper, density, natural gamma ray, acoustic, and resistivity logs. The gas hydrate reservoir in this site shows a characteristic "two high, two low" log response, namely high resistivity and high natural gamma ray together with low density and low acoustic transit time, most evident on the resistivity and acoustic logs. In addition, since log anomalies caused by gas hydrate are unrelated to the borehole condition, the caliper curve can be used to rule out borehole-diameter effects in intervals where the hole size changes, making it an effective auxiliary parameter for identifying the gas hydrate reservoir.

Based on previous research methods for the thickness of the gas hydrate stability zone [51–53], combined with analysis of the conventional log data, the gas hydrate stability zone at site SH2 is determined to be at a depth of 195 to 220 mbsf [40].
## 5. Methodology
Seafloor sediments containing gas hydrate are generally composed of rock grains, gas hydrate, water, and natural gas. To investigate the characteristics of the gas hydrate reservoir, a gas hydrate model and a free gas model are established in this section, and based on these two models, numerical simulation with the TPT is used to study the dependence of elastic wave velocity on sediment porosity, gas hydrate saturation, and free gas saturation.
### 5.1. Gas Hydrate Model
#### 5.1.1. Establishment of Gas Hydrate Model
The gas hydrate model assumes that the sediments are composed of rock grains, gas hydrate, and water and that the gas hydrate in the pore space is regarded as part of the rock matrix. Supposing that ϕs, ϕh, ϕw, and ϕg represent the volume percentages of rock grains, gas hydrate, water, and free gas in the sediments, respectively, the gas hydrate model can be expressed as
(2) ϕs + ϕh + ϕw = 1,
(3) ϕ = ϕh + ϕw,
where ϕ is the sediment porosity. Gas hydrate saturation (Sh) and water saturation (Sw) can be written as
(4) Sh = ϕh/ϕ, Sw = ϕw/ϕ.

The volume percentages of rock grains (Ss′) and gas hydrate (Sh′) in the solid phase can be written, respectively, as
(5) $S_s' = \dfrac{\phi_s}{\phi_s + \phi_h}$,
(6) $S_h' = \dfrac{\phi_h}{\phi_s + \phi_h}$.
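As a quick illustration, the sketch below converts a given porosity and hydrate saturation into the volume fractions of (2)–(6); the function name and the use of fractions in the range 0–1 are our own conventions.

```python
def hydrate_volume_fractions(phi, s_h):
    """Volume fractions for the gas hydrate model, eqs. (2)-(6).

    phi : sediment porosity (fraction, 0-1)
    s_h : gas hydrate saturation of the pore space (fraction, 0-1)
    """
    phi_h = s_h * phi                      # hydrate volume fraction, from (4)
    phi_w = phi - phi_h                    # water fills the remaining pores, (3)
    phi_s = 1.0 - phi                      # rock grains, from (2) and (3)
    s_s_prime = phi_s / (phi_s + phi_h)    # eq. (5)
    s_h_prime = phi_h / (phi_s + phi_h)    # eq. (6)
    return phi_s, phi_h, phi_w, s_s_prime, s_h_prime
```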
#### 5.1.2. The Parameter Determination for Numerical Simulation of Gas Hydrate Model Based on the TPT
In order to apply the TPT to the gas hydrate model, some parameters in (1) must be known. They can be determined from the derivations of Tinivella and Schon [35, 54].

(1) Effective porosity (ϕeff) can be written as

(7) $\phi_{eff} = (1 - S_h)\,\phi$.

(2) Average density of the sediments (ρm), density of the solid phase (ρb), and density of the fluid phase (ρf) can be written as

(8) $\rho_m = (1 - \phi_{eff})\,\rho_b + \phi_{eff}\,\rho_f$,
(9) $\rho_b = S_s'\,\rho_s + S_h'\,\rho_h$,
(10) $\rho_f = \rho_w$,

where ρs is the density of the rock grains, ρh is the gas hydrate density, and ρw is the water density.
(3) The solid compressibility is assumed to lie between the Voigt and Reuss averages [54], so Cb is taken as their mean:

(11) $C_b = \frac{1}{2}\left(S_s' C_s + S_h' C_h\right) + \frac{1}{2}\left(\frac{S_s'}{C_s} + \frac{S_h'}{C_h}\right)^{-1}$,

where Cs is the rock grain compressibility and Ch is the gas hydrate compressibility.
(4) Compressibility of the fluid phase (Cf) is

(12) $C_f = C_w$,

where Cw is the water compressibility.
(5) Compressibility of the matrix (Cm), that is, the compressibility of the sediments without water, is

(13) $C_m = (1 - \phi_{eff})\,C_b + \phi_{eff}\,C_p$,

where Cp is the pore compressibility, calculated following [35] as

(14) $C_p = \dfrac{1 - \phi_{eff}/\phi_0}{P_d}$,

where ϕ0 is the sediment porosity at the sea bottom and Pd is the differential pressure.
(6) Proportionality coefficient (β) is

(15) $\beta = C_b / C_m$.
(7) Shear modulus (μ), indicating the average rigidity of the skeleton, is

(16) $\mu = (\phi_s + \phi_h)\left(\dfrac{S_s'}{\mu_{sm}} + \dfrac{S_h'}{\mu_h}\right)^{-1}$,

where μh is the gas hydrate rigidity and μsm is the shear modulus of the solid matrix with gas hydrate [55], calculated as

(17) $\mu_{sm} = (\mu_{smKT} - \mu_{sm0})\left(\dfrac{\phi_h}{1 - \phi_s}\right)^{3.8} + \mu_{sm0}$,

where μsmKT is Kuster and Toksoz's shear modulus [56] and μsm0 is the shear modulus of the solid matrix without gas hydrate.
#### 5.1.3. Implementation Steps of Numerical Simulation of Gas Hydrate Model Based on the TPT
The simulation proceeds as follows (a code sketch of these steps is given after the list).

(1) Given ϕ, or calculate it by an empirical formula.
(2) Given Sh and Sw, calculate ϕs, ϕh, ϕw, Ss′, and Sh′ according to (2)–(6).
(3) According to (7), calculate ϕeff.
(4) According to (8)–(10), calculate ρb, ρf, and ρm.
(5) According to (11)–(14), calculate Cb, Cf, and Cm.
(6) According to (15), calculate β.
(7) According to (16) and (17), calculate μ.
(8) According to (1), calculate Vp.
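The sketch below strings steps (1)–(8) together for a single depth sample, reusing the `hydrate_volume_fractions` helper defined earlier. All function and parameter names are ours, the reconstruction of (16) above is carried over as written, and the final substitution into the TPT velocity relation (1), which is not reproduced in this section, is indicated only by the returned parameter set.

```python
def hydrate_model_parameters(phi, s_h, p):
    """Steps (1)-(7) of the gas hydrate model, eqs. (2)-(17), in SI units.

    `p` is a dict with the Table 1 constants: rho_s, rho_h, rho_w (kg/m^3),
    c_s, c_h, c_w (1/Pa), mu_sm0, mu_sm_kt, mu_h (Pa), plus the seafloor
    porosity phi_0 and the differential pressure p_d (Pa).
    """
    # steps (1)-(2): volume fractions, eqs. (2)-(6)
    phi_s, phi_h, phi_w, ss, sh = hydrate_volume_fractions(phi, s_h)

    phi_eff = (1.0 - s_h) * phi                              # eq. (7)
    rho_b = ss * p["rho_s"] + sh * p["rho_h"]                # eq. (9)
    rho_f = p["rho_w"]                                       # eq. (10)
    rho_m = (1.0 - phi_eff) * rho_b + phi_eff * rho_f        # eq. (8)

    # eq. (11): mean of the Voigt and Reuss bounds on solid compressibility
    c_b = 0.5 * (ss * p["c_s"] + sh * p["c_h"]) \
        + 0.5 / (ss / p["c_s"] + sh / p["c_h"])

    c_f = p["c_w"]                                           # eq. (12)
    c_p = (1.0 - phi_eff / p["phi_0"]) / p["p_d"]            # eq. (14)
    c_m = (1.0 - phi_eff) * c_b + phi_eff * c_p              # eq. (13)
    beta = c_b / c_m                                         # eq. (15)

    # eqs. (16)-(17), with (16) as reconstructed in the text above
    mu_sm = (p["mu_sm_kt"] - p["mu_sm0"]) \
        * (phi_h / (1.0 - phi_s)) ** 3.8 + p["mu_sm0"]
    mu = (phi_s + phi_h) / (ss / mu_sm + sh / p["mu_h"])

    # step (8): these quantities are what the TPT relation (1) needs for Vp
    return dict(phi_eff=phi_eff, rho_m=rho_m, rho_f=rho_f,
                c_b=c_b, c_f=c_f, c_m=c_m, beta=beta, mu=mu)
```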
### 5.2. Free Gas Model
#### 5.2.1. Establishment of Free Gas Model
The free gas model assumes that the sediments are composed of rock grains, water, and free gas, and it can be expressed as

(18) $\phi_s + \phi_w + \phi_g = 1$,
(19) $\phi = \phi_w + \phi_g$.

Water saturation (Sw) and free gas saturation (Sg) can be expressed as
(20) $S_w = \phi_w/\phi$,
(21) $S_g = \phi_g/\phi$.
#### 5.2.2. The Parameter Determination for Numerical Simulation of Free Gas Model Based on the TPT
(1) ϕeff can be written as

(22) $\phi_{eff} = \phi$.

(2) ρm, ρb, and ρf can be written as

(23) $\rho_m = (1 - \phi_{eff})\,\rho_b + \phi_{eff}\,\rho_f = (1 - \phi)\,\rho_b + \phi\,\rho_f$,
(24) $\rho_b = \rho_s$,
(25) $\rho_f = S_w \rho_w + S_g \rho_g$,

where ρg is the free gas density.
(3) Cb can be written as

(26) $C_b = C_s$.

(4) The fluid compressibility is assumed to lie between the Voigt and Reuss averages [54], so Cf is taken as their mean:

(27) $C_f = \frac{1}{2}\left(S_w C_w + S_g C_g\right) + \frac{1}{2}\left(\frac{S_w}{C_w} + \frac{S_g}{C_g}\right)^{-1}$,

where Cg is the compressibility of the free gas.

(5) Cm is calculated by the same equation as in the gas hydrate model.

(6) β is calculated by the same equation as in the gas hydrate model.
(7) μ can be written as

(28) $\mu = \mu_s$, with $\mu_s = \dfrac{\mu_{sm0}}{1 - \phi}$,

where μs is the rock grain rigidity.

(8) The variation range of k is 1–∞.
#### 5.2.3. Implementation Steps of Numerical Simulation of Free Gas Model Based on the TPT
The simulation proceeds as follows (a sketch of the fluid-mixing step follows the list).

(1) Given ϕ, or calculate it by an empirical formula.
(2) Given Sg and Sw, calculate ϕs, ϕw, and ϕg according to (18)–(21).
(3) According to (22), calculate ϕeff.
(4) According to (23)–(25), calculate ρb, ρf, and ρm.
(5) According to (26), (27), and (13)–(14), calculate Cb, Cf, and Cm.
(6) According to (15), calculate β.
(7) According to (28), calculate μ.
(8) According to (1), calculate Vp.
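As a complement, here is a minimal sketch of the fluid-mixing step that distinguishes the free gas model, eqs. (25) and (27); the function name and interface are our own conventions.

```python
def free_gas_fluid_properties(s_w, s_g, c_w, c_g, rho_w, rho_g):
    """Fluid-phase density and compressibility for the free gas model.

    Density follows eq. (25); compressibility follows eq. (27), the mean
    of the Voigt and Reuss bounds. Units: kg/m^3 and 1/Pa.
    """
    rho_f = s_w * rho_w + s_g * rho_g            # eq. (25)
    c_voigt = s_w * c_w + s_g * c_g
    c_reuss = 1.0 / (s_w / c_w + s_g / c_g)
    c_f = 0.5 * (c_voigt + c_reuss)              # eq. (27)
    return rho_f, c_f
```

With the Table 1 values (Cw = 4.79 × 10⁻¹⁰ Pa⁻¹, Cg = 4.24 × 10⁻⁸ Pa⁻¹), even Sg = 0.1 raises Cf to roughly five times the water value, which is why the free gas model predicts a rapid drop in P wave velocity at low gas saturations.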
### 5.3. Estimation of Sediment Porosity
Sediment porosity is a key parameter for the estimation of gas hydrate saturation in both the gas hydrate and free gas models. Therefore, appropriate log data should be selected to estimate the sediment porosity at site SH2. The logs that can be applied to determine sediment porosity are the density, acoustic, resistivity, and neutron logs.

For porosity determination from the acoustic log, the data need to be calibrated against regional core data because the seafloor sediments are loose silt and silty clay. However, core data are always insufficient, and the compaction correction coefficient is difficult to determine, so the acoustic log is not suitable here. For porosity determination from the resistivity log, the Archie formula must be used, which requires the Archie constants and the formation water resistivity [57]. However, these two parameters are generally determined from empirical equations, and the resulting porosity estimation error is significant. Porosity determination from the neutron log cannot be realized here because neutron log data were not acquired.

Compared with the resistivity and acoustic logs, the density log is less affected by gas hydrate in the reservoir and generally reflects the sediment porosity, so we select the density log data to estimate the sediment porosity in this study. The density log measures the intensity of scattered gamma rays, which reflects the electron density of the strata and the bulk density of the rock (ρb). The estimation of ϕ from density log data can be expressed as [58]
(29) $\phi = \dfrac{\rho_{ma} - \rho_b}{\rho_{ma} - \rho_f}$,
where ρma is the matrix density and ρf is the fluid density. Considering the effect of shaly sediments, (29) can be rewritten as [58]
(30) $\phi = \dfrac{\rho_{ma} - \rho_b}{\rho_{ma} - \rho_f} - V_{sh}\,\dfrac{\rho_{ma} - \rho_{sh}}{\rho_{ma} - \rho_f}$,
(31) $V_{sh} = \dfrac{2^{GCUR \times SH} - 1}{2^{GCUR} - 1}$,
(32) $SH = \dfrac{GR - GR_{min}}{GR_{max} - GR_{min}}$,
where Vsh is the shale volume content, SH is the shale content index, ρsh is the shale density, GR is the natural gamma log value in the interval under study, GRmin is the natural gamma log value in a pure sandstone interval, GRmax is the natural gamma log value in a pure mudstone interval, and GCUR is the Hilchie index, which is 3.7 for the Tertiary of North America and 2 for older strata [59].

Equations (29) and (30) are used to calculate the sediment porosity at site SH2 with the following processing parameters: ρma = 2.65 g/cm³, ρf = 1.04 g/cm³, ρsh = 2.70 g/cm³ [12, 35]. The porosities calculated by (29) and (30) are close to each other (Figure 3), varying in the range of 30% to 55% with an average value of 45%. The result indicates that the sediments at site SH2 have high porosity.

Figure 3
Sediment porosity calculated from the density log data at site SH2. (The red line is the porosity estimated from the density log data; the bright green line is the estimate that considers the effect of argillaceous sediments.)
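For concreteness, here is a sketch of the shale-corrected density porosity of eqs. (29)–(32), using the quoted processing parameters as defaults; the function name and interface are ours.

```python
def density_porosity(rho_b, gr, gr_min, gr_max,
                     rho_ma=2.65, rho_f=1.04, rho_sh=2.70, gcur=3.7):
    """Porosity from the density log with shale correction, eqs. (29)-(32).

    rho_b : bulk density log reading (g/cm^3)
    gr    : natural gamma ray reading; gr_min / gr_max are the pure
            sandstone / pure mudstone reference values
    Defaults follow the processing parameters quoted in the text;
    gcur = 3.7 is the Tertiary value of the Hilchie index.
    """
    sh = (gr - gr_min) / (gr_max - gr_min)                     # eq. (32)
    v_sh = (2.0 ** (gcur * sh) - 1.0) / (2.0 ** gcur - 1.0)    # eq. (31)
    phi_clean = (rho_ma - rho_b) / (rho_ma - rho_f)            # eq. (29)
    return phi_clean - v_sh * (rho_ma - rho_sh) / (rho_ma - rho_f)  # eq. (30)
```

For example, a bulk density reading of 1.90 g/cm³ gives an uncorrected porosity of (2.65 − 1.90)/(2.65 − 1.04) ≈ 0.47 from (29), consistent with the 30–55% range reported above.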
## 6. Results
The gas hydrate model of the TPT is used to forward simulate the P wave velocity of the sediment formation for different gas hydrate saturations in the depth range of 50 to 245 mbsf at site SH2. Table 1 shows the values of the main parameters used to evaluate the velocity. When Sh = 0, we obtain the P wave velocity of water-saturated sediments forward simulated by the gas hydrate model based on the TPT. From Figure 4, the trend of the measured P wave velocity log and the simulated water-saturated velocity curve are almost consistent above the gas hydrate interval (shallower than 195 mbsf), so the model and its parameters are reasonable for numerical simulation in this study. The difference between the measured P wave velocity and the simulated water-saturated velocity reflects the gas hydrate or free gas saturation, which can be used to qualitatively identify the gas hydrate reservoir. The specific response characteristics are as follows: gas hydrate is most likely present where the measured P wave velocity is higher than the water-saturated velocity, and free gas is most likely present where the measured P wave velocity is lower than the water-saturated velocity [35]. In the depth range of 195 to 220 mbsf, the measured P wave velocity is significantly higher than the water-saturated velocity, so this interval is the gas hydrate stability zone. In the depth range of 220 to 245 mbsf, the measured P wave velocity also increases relative to the water-saturated velocity; however, Figure 2 does not show a corresponding increase in resistivity. Without coring analysis data in this interval, whether the anomaly is caused by gas hydrate cannot be ascertained and requires further study.

Table 1
Values of the main parameters of the gas hydrate and free gas models of the TPT.

| Parameters | Details | References |
| --- | --- | --- |
| $C_s$ | $2.7 \times 10^{-11}$ Pa⁻¹ | [60] |
| $C_h$ | $1.79 \times 10^{-10}$ Pa⁻¹ | [1] |
| $C_w$ | $4.79 \times 10^{-10}$ Pa⁻¹ | [61] |
| $C_g$ | $4.24 \times 10^{-8}$ Pa⁻¹ | [62] |
| $\rho_s$ | 2650 kg/m³ | [60] |
| $\rho_w$ | 1040 kg/m³ | [1] |
| $\rho_g$ | 88.48 kg/m³ | [62] |
| $\rho_h$ | 767 kg/m³ | [1] |
| $V_s$ | $116 + 4.65z$ m/s; $237 + 1.28z$ m/s; $332 + 0.58z$ m/s | [63] |
| $\mu_{sm0}$ | $\rho_m V_s^2$ Pa | [35] |
| $\mu_h$ | $3.7 \times 10^9$ Pa | [35] |
| $k$ | 2.3 | [61] |

Figure 4
Forward simulated P wave velocity of the sediment formation at site SH2. (The red line is the measured P wave velocity log at site SH2; the blue, sky blue, bright green, pink, green, dark blue, and black lines are the P wave velocities forward modeled with the gas hydrate model for gas hydrate saturations of 0, 5%, 10%, 15%, 20%, 25%, and 30%, resp.)

As Sh gradually increases, the forward simulated P wave velocity also increases; when Sh > 15%, the P wave velocity increases significantly, and when Sh = 30% the simulated velocity curve lies to the right of the measured P wave velocity log. These results indicate that gas hydrate saturation is basically in the range of 0–30% in the depth range of 50 to 245 mbsf at site SH2.

To study the dependence of P wave velocity on sediment porosity and gas hydrate saturation, gas hydrate saturation is varied from 0 to 1 in increments of 0.1. Using the gas hydrate model of the TPT to compute the corresponding P wave velocities, the relation surface of these three properties can be formed, as Figure 5 shows. With increasing burial depth in the range of 50 to 245 mbsf at site SH2, the porosity shows a decreasing trend, except for anomalies caused by borehole conditions in some intervals, and the forward simulated P wave velocity (Sh = 0) slowly increases from 1743 to 1795 m/s. With increasing gas hydrate saturation, however, the rate of increase of P wave velocity accelerates markedly, and the P wave velocity (at a burial depth of 51 mbsf) increases from 1743 to 3961 m/s. From the above analysis, the general rule relating the forward simulated P wave velocity to sediment porosity and gas hydrate saturation at site SH2 is: the smaller the sediment porosity, the greater the P wave velocity; the higher the gas hydrate saturation, the greater the P wave velocity. This result is basically in accordance with the research result of Tinivella [35].

Figure 5
Relation between P wave velocity, sediment porosity, and gas hydrate saturation at site SH2. (Because of variations in borehole conditions and in the actual sediments, the sediment porosity in some intervals does not decrease with increasing depth, which makes the surface grow unevenly. In the gas hydrate reservoir at 195 to 220 mbsf, the forward simulated P wave velocity surface subsides as the sediment porosity relatively increases.)

Similarly, free gas saturation is varied from 0 to 1 in increments of 0.1 to study the relation between P wave velocity and free gas saturation. Using the free gas model of the TPT to compute the corresponding P wave velocities, the relation surface of P wave velocity, sediment porosity, and free gas saturation can be formed, as Figure 6 shows. In the depth range of 50 to 245 mbsf at site SH2, as free gas saturation increases, the P wave velocity (at a burial depth of 51 mbsf) decreases sharply from 1773 to 597 m/s, the latter value corresponding to 100% free gas saturation. Considering the depth effect, the rate of decrease of P wave velocity slows down with increasing burial depth. From the above analysis, the general rule relating the forward simulated P wave velocity to free gas saturation at site SH2 is: the higher the free gas saturation, the lower the P wave velocity. This result is also basically in accordance with the research result of Tinivella [35].

Figure 6
Relation between P wave velocity, sediment porosity, and free gas saturation at site SH2. (P wave velocity is strongly affected by free gas saturation: as free gas saturation increases, the P wave velocity forward simulated by the free gas model of the TPT decreases rapidly.)

The estimation of gas hydrate saturation is of great significance for gas hydrate reservoir evaluation. To estimate gas hydrate saturation in the sediments, it is necessary to match the measured P wave velocity log with the P wave velocity of the gas hydrate model based on the TPT. Given an initial gas hydrate saturation, the difference between the forward simulated P wave velocity for that saturation and the measured P wave velocity is computed from the gas hydrate model of the TPT. If the difference lies within the allowable error, the saturation is treated as the actual saturation; if not, the gas hydrate saturation is adjusted until the error tolerance is met. Using the gas hydrate model of the TPT to invert gas hydrate saturation at site SH2, with the main parameter values listed in Table 1, yields the result shown in Figure 7. In the interval of 50 to 90 mbsf, gas hydrate saturation ranges from 0 to 17.5% with an average of 4.8%; as the shallow sediments are influenced by variable borehole conditions, the estimation error in this interval is significant, which should be kept in mind in the analysis. In the interval of 90 to 195 mbsf, gas hydrate saturation ranges from 0 to 18.9% with an average of 7%. In the interval of 195 to 220 mbsf, gas hydrate saturation ranges from 7% to 31.5% with an average of 23.2%; with increasing burial depth it gradually increases and reaches a peak value of 31.5% at 208 mbsf. It then decreases slowly with further burial, ranging from 0 to 25.8% in the interval of 220 to 245 mbsf, with an average of 15.5%.

Figure 7
Estimation of gas hydrate saturation at site SH2.
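The matching procedure just described can be sketched as a one-dimensional search. Below is a minimal illustration in which bisection stands in for the "adjust until the error tolerance is met" step; the forward model is passed in as a callable because the TPT velocity relation (1) is not reproduced in this section, and all names are ours. The search relies on P wave velocity increasing monotonically with Sh, as Figure 5 shows.

```python
def invert_saturation(vp_obs, phi, forward_vp, tol=1.0, s_lo=0.0, s_hi=1.0):
    """Invert gas hydrate saturation at one depth sample by matching the
    forward modeled Vp to the measured Vp (bisection search).

    vp_obs     : measured P wave velocity (m/s)
    phi        : sediment porosity at that depth (fraction)
    forward_vp : callable (phi, s_h) -> simulated Vp; in this study this
                 would be the TPT gas hydrate model, eq. (1)
    tol        : allowable velocity misfit (m/s)
    """
    if vp_obs <= forward_vp(phi, s_lo):
        return s_lo                        # no velocity excess: no hydrate
    for _ in range(60):                    # 60 halvings is ample precision
        s_mid = 0.5 * (s_lo + s_hi)
        misfit = forward_vp(phi, s_mid) - vp_obs
        if abs(misfit) < tol:
            break
        if misfit < 0:
            s_lo = s_mid                   # model too slow: more hydrate
        else:
            s_hi = s_mid                   # model too fast: less hydrate
    return s_mid
```

Applied sample by sample down the well, this yields a saturation curve like the one in Figure 7.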
## 7. Discussion
It is very important to determine the porosity for the evaluation of gas hydrate saturation. In order to assess the accuracy of the porosity estimated from the density log, we compare it with the porosity estimated from the resistivity log via the Archie formula [57]; the comparison is shown in Figure 8. The porosity estimated from the resistivity log generally varies in the range of 30 to 50%, with an average of 43% [12, 64]. In the interval of 50 to 195 mbsf, the density porosity and resistivity porosity curves are approximately coincident, although the former fluctuates due to borehole effects. In the interval of 195 to 220 mbsf, the sediment contains gas hydrate, and the resistivity porosity curve decreases significantly relative to the density porosity curve because of the marked increase in resistivity (Figure 2); the porosity calculated from the resistivity log would therefore need to be corrected to exclude the influence of the additional skeleton component. In the interval of 220 to 245 mbsf, the two curves are approximately coincident again. These results indicate that using the density log data to estimate porosity in the gas hydrate stability zone at site SH2 is relatively more reliable. According to previous studies, the core porosity from laboratory analysis in this interval was 40–55%, almost coincident with the porosity estimated from the density log in this study [13]. As Figure 8 shows, the core porosity points follow the density porosity curve, which confirms that the porosity estimated from the density log can meet the requirement for evaluating gas hydrate saturation at site SH2.

Figure 8
Comparison of sediment porosity estimates made by different methods at site SH2. (The red curve is the porosity estimated from the density log data; the bright green curve is the porosity considering the effect of shaly sediments; the blue curve is the porosity estimated from the resistivity log combined with the Archie formula; the black dots are core porosities measured in the laboratory.)

In order to verify the accuracy of the gas hydrate saturation estimated by the TPT, we compare the gas hydrate saturation of this study with that estimated by Wang et al. [50] in the gas hydrate occurrence (195 to 220 mbsf) at site SH2 (Figure 9). The gas hydrate saturation curve from the TPT first increases and then decreases with burial depth, reaching a peak value of 31.5% at 208 mbsf before decreasing gradually. The peak value from the TPT is slightly smaller than those from the resistivity log (Rt) method (40.5%), the simplified three-phase equation (STPE) method (41%), and the effective medium theory (EMT) method (38.5%). However, the trends of the saturation curves estimated by the TPT and the other three methods are basically consistent, differing only in amplitude, which indicates that the TPT method is feasible for estimating gas hydrate saturation at site SH2.

Figure 9
Comparison of estimation of gas hydrate saturation made by different methods in depth of 195 to 220 mbsf at site SH2. (The red curve, black curve, blue curve, and violet curve represent gas hydrate saturations estimated by the TPT, the Rt, the STPE, and the EMT method, respectively; the bright green dots represent gas hydrate saturations calculated by chloride anomaly method.)The average value of gas hydrate saturations calculated by chloride anomaly method is 25%, and the peak value is 45% [9]. The peak value of gas hydrate saturation estimated by the TPT is relatively lower than that estimated by chloride anomaly method, but the average values estimated by the two methods are basically same, and most distribution dots of gas hydrate saturations obtained by chloride anomaly method correspond with the curve of gas hydrate saturation estimated by the TPT method (Figure 9). This also indicates that using the TPT method to estimate gas hydrate saturation at site SH2 is available.
## 8. Conclusions
In summary, the relationships between P wave velocity and gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 are studied by means of numerical simulation of elastic wave velocity based on the TPT; the gas hydrate model and free gas model are established, and the sediment porosity is estimated, in order to determine the gas hydrate saturation of the research area. The conclusions are as follows.

(1) Using the difference between the water-saturated P wave velocity and the measured P wave velocity log, whether the sediment contains gas hydrate can be identified quickly. In the interval of 195 to 220 mbsf at site SH2, the measured P wave velocity increases significantly relative to the forward simulated water-saturated velocity, so this interval is determined to contain gas hydrate.

(2) By means of numerical simulation of elastic wave velocity based on the TPT, combined with the log data, the dependence of P wave velocity on gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 can be analyzed. In the interval of 50 to 245 mbsf, P wave velocity increases as sediment porosity decreases, increases as gas hydrate saturation increases, and decreases as free gas saturation increases.

(3) The log data can be used to calculate gas hydrate saturation over the whole well, with better coverage than the core data. The average gas hydrate saturation estimated by the TPT is 23.2% and the peak value is 31.5%, which is basically in accordance with the values estimated by the STPE model, the EMT model, the Rt model, and the chloride anomaly method.
---
*Source: 101459-2013-07-01.xml*
(2013) | Medical & Health Sciences | Hindawi Publishing Corporation | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2013/101459 | 101459-2013-07-01.xml | ---
## Abstract
Gas hydrate model and free gas model are established, and two-phase theory (TPT) for numerical simulation of elastic wave velocity is adopted to investigate the unconsolidated deep-water sedimentary strata in Shenhu area, South China Sea. The relationships between compression wave (P wave) velocity and gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 are studied, respectively, and gas hydrate saturation of research area is estimated by gas hydrate model. In depth of 50 to 245 m below seafloor (mbsf), as sediment porosity decreases, P wave velocity increases gradually; as gas hydrate saturation increases, P wave velocity increases gradually; as free gas saturation increases, P wave velocity decreases. This rule is almost consistent with the previous research result. In depth of 195 to 220 mbsf, the actual measurement of P wave velocity increases significantly relative to the P wave velocity of saturated water modeling, and this layer is determined to be rich in gas hydrate. The average value of gas hydrate saturation estimated from the TPT model is 23.2%, and the maximum saturation is 31.5%, which is basically in accordance with simplified three-phase equation (STPE), effective medium theory (EMT), resistivity log (Rt), and chloride anomaly method.
---
## Body
## 1. Introduction
Gas hydrate mainly exists in the seafloor and polar permafrost [1], and it owns the cage structure of solid crystal, which is formed by water molecules and natural gas (usually dominated by methane). The formation of gas hydrate needs a low-temperature and high-pressure environment, and the concentration of methane must exceed its solubility in the pore water. So gas hydrate is commonly distributed in the water depth greater than 300 m in the continental slope belt [2, 3]. The submarine gas hydrate reserves mainly depend on the distribution of gas hydrate area, the thickness of gas hydrate stability zone, the porosity of sedimentary layer, the saturation of gas hydrate, and so on. However, the accurate estimation of gas hydrate reserves is very difficult due to the lack of research on the determination of gas hydrate distribution and gas hydrate saturation. As gas hydrate is rich in methane, it is associated with a series of scientific issues [4], including the global carbon cycle [5], global temperature changes [6], the sea-level rise [7], and future energy supply [8]. Therefore, researches on determining gas hydrate distribution and estimating gas hydrate saturation have become the focus of the scientists all over the world. In May 2007, the gas hydrate samples and various log data of gas hydrate zone were firstly obtained in Shenhu area, South China Sea, which made a significant breakthrough in exploration of gas hydrate in China. Meanwhile, it provided a great convenience for investigating the properties of the gas hydrate reservoir [9].Geophysical logging is an important tool for evaluating gas hydrate saturation, and valuable information can be obtained by studying the resistivity and acoustic velocity log data. According to the log data in Shenhu area, South China Sea, Chinese scholars have utilized some theoretical models and empirical formulas to estimate gas hydrate saturation [10–14], and the results of these researches have greatly promoted the process of studying gas hydrate saturation by using log data in China. However, previous studies mainly focused on the estimation of the gas hydrate saturation, and the discrepancy of the estimation of gas hydrate saturation was large because of the application of various methods. Besides, previous studies did not systematically study the relationship between any two of the gas hydrate saturation, elastic wave velocity, and sediment porosity, which led to incomplete understanding of the log data.Foreign scholars have carried out researches on the evaluation of the marine gas hydrate saturation earlier. A variety of theoretical models or experimental models have been proposed to estimate gas hydrate saturation, such as Wyllie et al. [15] time average equation with the seismic velocity [16–18], the effective medium theory [19–23], Biot-Gassmann theory model [24–27], compression wave (P wave) velocity of thermal-elastic theory [28], the three-phase equation (TPE) [29–34], and velocity model theory based on the two-phase theory (TPT) model [35]. Moreover, Tinivella et al. [36] made a research to compare the TPT model with the TPE model for evaluating gas hydrate saturations in marine sediments, and the comparison showed that the two theoretical approaches were in very good agreement. 
Based on this, the TPT model has been applied to verify the TPE model and estimate the gas hydrate and free gas saturations in several different areas [37–39].In this work, based on the log data at site SH2 in Shenhu Area, we first establish gas hydrate model and free gas model by applying elastic wave velocity numerical model of the TPT method, then study the dependence of the P wave velocity on gas hydrate saturation, free gas saturation, and sediment porosity, and finally choose the gas hydrate model to estimate gas hydrate saturation at site SH2.
## 2. Numerical Simulation of Elastic Wave Velocity by the TPT Model
The TPT model [41, 42], which supposes that the rock solid part is composed of rock matrix and gas hydrate and that the rock pore fluid is composed of free gas and water, can be used to study the elastic wave velocity model about the elastic characteristics of marine sand-shale reservoirs. Based on this theory and using the reported in Tinivella [35], the velocity relation between P wave velocity (Vp) and shear wave (S wave) velocity (Vs) is as follows:
(1)Vp={[(ϕeff/k)(ρm/ρf)+(1-β-2(ϕeff/k))(1-β)(1-ϕeff-β)Cb+ϕeffCf(1Cm+43μ)+(ϕeff/k)(ρm/ρf)+(1-β-2(ϕeff/k))(1-β)(1-ϕeff-β)Cb+ϕeffCf]·1ρm(1-(ϕeff/k)(ρf/ρm))}1/2,Vs={μ{ρm[1-(ϕeff·ρf)/(k·ρm)]}}1/2.
Where ϕeff is the effective porosity, μ is the average rigidity of the skeleton, ρm is the average density, ρf is the density of the fluid phase, β is the proportionality coefficient, k is the coupling factor, and Cb, Cf and Cm are the compressibility of the solid phase, the fluid phase, and the matrix, respectively.
## 3. Geological Setting
Shenhu area is considered as one of the occurrences of gas hydrate, which is located in the middle of the northern slope of the South China Sea, between the Xisha Trough and the Dongsha Islands and Baiyun Sag of Zhu II depression of the Pearl River Mouth Basin (Figure1). Shenhu area experienced a geological evolution process similar to the northern margin of the South China Sea and eventually formed the regional sedimentary sequences in which marine sediments were the dominant composition [43]. Taking the Cenozoic sedimentary strata in the Shenhu area, for example, it is about 1000–7000 m, and the organic matter content is 0.46%–1.9% [44–46], which can provide the material base for gas hydrate. In recent years, some studies have confirmed that mud volcanoes, seafloor slips, mud diapers and other special structural units beneficial to the formation of gas hydrate are widely being developed in Shenhu area [47], and the bottom simulating reflections (BSRs) have also been identified using various geophysical methods in the northern South China Sea by Guangzhou Marine Geological Survey, China Geological Survey [48].Figure 1
Areas of gas hydrate exploration and drilling area with drilling sites in the northern part of the South China Sea [40]. (Red dots, gas hydrate samples obtained; dark purple dots, no gas hydrate samples obtained.)During April–June 2007, eight sites were drilled in Shenhu area (see Figure2), among which, sites of SH2, SH3, and SH7 in water depth of 1105 to 1423 m were determined to contain gas hydrate in recovered core samples. The thickness of gas hydrate stability zone was about 10 to 25 m [9, 49], and the sediment lithology in and above the zone was silt and silty clay respectively, according to the core data.Figure 2
The conventional logs in site SH2 [10]. (The area delineated by a pink line is the occurrence of gas hydrate reservoir, and the depth range is 195 to 220 mbsf.)
## 4. Data Set Used in the Study
As previous researchers have done much work on site SH2 [10–14, 40, 50], many valuable references can be used to obtain better results and verify the reliability of our research, so we select site SH2 as the research object. The water depth of site SH2 is 1232 m, and the maximum drilling depth is 245 mbsf. Figure 2 shows the conventional logs of site SH2. The measurement depth range is from 50 to 245 mbsf, and the measurement projects include caliper, density, natural gamma ray, acoustic, and resistivity logging. The occurrence of gas hydrate reservoir in this site appears to be “response characteristics of two high and two low” in the log curve, namely high resistivity, high natural gamma ray and low density, low acoustic time, especially for resistivity and acoustic logs. Besides, when the layer of well diameter changes, caliper curve can be used as the effective parameter to identify gas hydrate reservoir because the abnormality of other logs has nothing to do with the well condition.Based on the previous research methods for the thickness of gas hydrate stability zone [51–53] and combined with the analysis of conventional log data, the gas hydrate stability zone of site SH2 is determined to be at the depth of 195 to 220 mbsf [40].
## 5. Methodology
Seafloor sediments containing gas hydrate are generally composed of rock grain, gas hydrate, water, and natural gas. In order to research the characteristics of gas hydrate reservoir, gas hydrate model and free gas model have been established in this section, and based on these two models, the numerical simulation method and the TPT are used to study the dependence of the elastic wave velocity on sediment porosity, gas hydrate saturation and free gas saturation.
### 5.1. Gas Hydrate Model
#### 5.1.1. Establishment of Gas Hydrate Model
The gas hydrate model assumes that the sediments are composed of rock grain, gas hydrate and water, and gas hydrate, is in the pore space, which is regarded as a part of the rock matrix. Supposing thatϕs, ϕh, ϕw, and ϕg represent the volume percentage of rock grain, gas hydrate, water, and free gas in the sediments respectively, the gas hydrate model can be expressed as
(2)ϕs+ϕh+ϕw=1,(3)ϕ=ϕh+ϕw,
where ϕ is sediment porosity.Gas hydrate saturation (Sh) and water saturation (Sw) can be written as
(4)Sh=ϕhϕ,Sw=ϕwϕ.The volume percentages of rock grain in solid phase (Ss′) and gas hydrate in the solid phase (Sh′) can be written, respectively, as
(5)Ss′=ϕs(ϕs+ϕh),(6)Sh′=ϕh(ϕs+ϕh).
#### 5.1.2. The Parameter Determination for Numerical Simulation of Gas Hydrate Model Based on the TPT
In order to apply the TPT to gas hydrate model, some parameters in (1) should be known. The parameters can be determined by Tinivella and Schon’s derivation formula [35, 54].(1)
Effective porosity (ϕeff) can be written as
(7)ϕeff=(1-Sh)ϕ.(2)
Average density of sediments (ρm), density of the solid phase (ρb), and density of the fluid phase (ρf) can be written as
(8)ρm=(1-ϕeff)ρb+ϕeffρf,(9)ρb=Ss′ρs+Sh′ρh,(10)ρf=ρw,
where ρs is density of the rock grain, ρh is gas hydrate density and ρw is water density.(3)
Assume that the solid compressibility lies between the Voigt and Reuss averages [54]. Cb can be written as
(11)Cb=12(Ss′Cs+Sh′Ch)+12(Ss′Cs+Sh′Ch)-1,
where Cs is rock grain compressibility and Ch is gas hydrate compressibility.(4)
Compressibility of the fluid phase (Cf) is
(12)Cf=Cw,
where Cw is water compressibility.(5)
Compressibility of the matrix (Cm) indicates the compressibility of sediments without water, and it can be calculated by the following equation:
(13)Cm=(1-ϕeff)Cb+ϕeffCp,
where Cp is pore compressibility. The algorithm [35] to calculate Cp is:
(14)Cp=(1-ϕeff/ϕ0)Pd,
where ϕ0 is the sediment porosity at the sea bottom and Pd is differential pressure.(6)
Proportional coefficient (β) is
(15)β=CbCm.(7)
Shear modulus (μ) indicates average rigidity of the skeleton, and it can be calculated by the following equation:
(16)μ=(ϕs+ϕh)[ϕsSs′μsm+Sh′μh]-1,
where μh is gas hydrate rigidity and μsm is the shear modulus of solid matrix with gas hydrate [55], which can be calculated by the following equation:
(17)μsm=(μsmKT-μsm0)[ϕh(1-ϕs)]3.8+μsm0,
where μsmKT is Kuster and Toksoz’s shear modulus [56] and μsm0 is the shear modulus of solid matrix without gas hydrate.
#### 5.1.3. Implementation Steps of Numerical Simulation of Gas Hydrate Model Based on the TPT
Consider the following.(1)
Given theϕ, or calculate it by the empirical formula.(2)
Given theSh and Sw, calculate ϕs, ϕh, ϕw, Ss′, and Sh′ according to (2)–(6).(3)
According to (7), calculate ϕeff.(4)
According to (8) and (10), calculate ρb, ρf, and ρm.(5)
According to (11)–(14), calculate Cb, Cf, and Cm.(6)
According to (15), calculate β.(7)
According to (16) and (17), calculate μ.(8)
According to (1), calculate Vp.
### 5.2. Free Gas Model
#### 5.2.1. Establishment of Free Gas Model
Free gas model assumes that the sediments are composed of rock grain, water, and free gas, and it can be expressed as(18)ϕs+ϕw+ϕg=1,(19)ϕ=ϕw+ϕg.Water saturation (Sw) and free gas saturation (Sg) can be expressed as
(20)Sw=ϕwϕ,(21)Sg=ϕgϕ.
#### 5.2.2. The Parameter Determination for Numerical Simulation of Free Gas Model Based on the TPT
Consider the following.(1)
ϕeff can be written as
(22)ϕeff=ϕ.(2)
ρm, ρb, and ρf can be written as
(23)ρm=(1-ϕeff)ρb+ϕeffρf=(1-ϕ)ρb+ϕρf,(24)ρb=ρs,(25)ρf=Swρw+Sgρg,
where ρg is free gas density.(3)
Cb can be written as
(26)Cb=Cs.(4)
Assume that the fluid compressibility lies between the Voigt and Reuss averages [54]. Cf can be written as
(27)Cf=12(SwCw+SgCg)+12(SwCw+SgCg)-1,
where Cg is the compressibility of free gas.(5)
Cm is calculated by the same equation as that used in gas hydrate model.(6)
β is calculated by the same equation as that used in gas hydrate model.(7)
μ can be written as
(28)μ=μs,μs=μsm01-ϕ,
where μs is rock grain rigidity.(8)
Variation range ofk is 1–∞.
#### 5.2.3. Implementation Steps of Numerical Simulation of Free Gas Model Based on the TPT
Consider the following.(1)
Given theϕ, or calculate it by the empirical formula.(2)
Given theSg and Sw, calculate ϕs, ϕw, and ϕg according to (18)–(21).(3)
According to (22), calculate ϕeff.(4)
According to (23) and (25), calculate ρb, ρf and ρm.(5)
According to (11)–(14), calculate Cb, Cf, and Cm.(6)
According to (15), calculate β.(7)
According to (16) and (17), calculate μ.(8)
According to (1), calculate Vp.
### 5.3. Estimation of Sediment Porosity
Sediment porosity is a key parameter for the estimation of gas hydrate saturation for both gas hydrate and free gas models. Therefore, appropriate log data should be selected to estimate the sediment porosity at site SH2. The log data that can be applied to determine the sediment porosity include density, acoustic, resistivity, and neutron logging.When it comes to the determination of sediment porosity by acoustic log, the data need to be corrected by the regional core data because seafloor sediments are always loose silt and silty clay. However, the core data is always insufficient, and it is difficult to determine the compaction correction coefficient, so acoustic log is not available. When it comes to the determination of sediment porosity by resistivity log, the Archie formula should be used to calculate porosity, and the Archie constants and formation water resistivity need to be known for the Archie formula [57]. However, the previous two parameters are generally determined by some empirical equations, and the estimation error of the sediment porosity is significant. As for the determination of sediment porosity by neutron log, it usually cannot be realized because of the lack of the neutron log data.Compared with the resistivity, the acoustic and the density logs are less affected in gas hydrate reservoirs, and can generally reflect the situation of sediment porosity, so we select the density log data to estimate the sediment porosity in this study. The intensity of scattering gamma ray can be measured by the density log, which reflects electron density of the strata and volume density of rock (ρb). The estimation of ϕ by density log data can be expressed as [58]
(29)ϕ=(ρma-ρb)(ρma-ρf),
where ρma is matrix density and ρf is fluid density. Considering the effect of shaly sediments, (29) can be written as [58]
(30)ϕ=(ρma-ρb)(ρma-ρf)-Vsh(ρma-ρsh)(ρma-ρf),(31)Vsh=(2GCUR×SH-1)(2GCUR-1),(32)SH=(GR-GRmin)(GRmax-GRmin),
where Vsh is the volume content of the shale, SH is the content index of the shale, ρsh is the density of the shale, GR is the value of natural gamma log in the research interval, GRmin is the value of natural gamma log in pure sandstone interval, GRmax the is value of natural gamma log in pure mud interval, and GCUR is the Hilchie index, which is 3.7 in the Tertiary of North America and 2 in old stratum [59].Equations (29) and (30) are used to calculate the sediment porosity at site SH2, and processing parameters can be set as follows: ρma=2.65 g/cm3, ρf=1.04 g/cm3, ρsh=2.70 g/cm3 [12, 35]. The porosity calculated by (29) and (30) is close to each other (Figure 3), which varies in the range of 30% to 55%, and the average value is 45%. The result indicates that sediments at site SH2 are of high porosity.Figure 3
Result of sediment porosity calculated by density log data at site SH2. (Red line is the sediment porosity estimation used density log data; bright green line is the sediment porosity estimation which considered the effect of argillaceous sediments.)
## 5.1. Gas Hydrate Model
### 5.1.1. Establishment of Gas Hydrate Model
The gas hydrate model assumes that the sediments are composed of rock grain, gas hydrate and water, and gas hydrate, is in the pore space, which is regarded as a part of the rock matrix. Supposing thatϕs, ϕh, ϕw, and ϕg represent the volume percentage of rock grain, gas hydrate, water, and free gas in the sediments respectively, the gas hydrate model can be expressed as
(2)ϕs+ϕh+ϕw=1,(3)ϕ=ϕh+ϕw,
where ϕ is sediment porosity.Gas hydrate saturation (Sh) and water saturation (Sw) can be written as
(4)Sh=ϕhϕ,Sw=ϕwϕ.The volume percentages of rock grain in solid phase (Ss′) and gas hydrate in the solid phase (Sh′) can be written, respectively, as
(5)Ss′=ϕs(ϕs+ϕh),(6)Sh′=ϕh(ϕs+ϕh).
### 5.1.2. The Parameter Determination for Numerical Simulation of Gas Hydrate Model Based on the TPT
In order to apply the TPT to gas hydrate model, some parameters in (1) should be known. The parameters can be determined by Tinivella and Schon’s derivation formula [35, 54].(1)
Effective porosity (ϕeff) can be written as
(7)ϕeff=(1-Sh)ϕ.(2)
Average density of sediments (ρm), density of the solid phase (ρb), and density of the fluid phase (ρf) can be written as
(8)ρm=(1-ϕeff)ρb+ϕeffρf,(9)ρb=Ss′ρs+Sh′ρh,(10)ρf=ρw,
where ρs is density of the rock grain, ρh is gas hydrate density and ρw is water density.(3)
Assume that the solid compressibility lies between the Voigt and Reuss averages [54]. Cb can be written as
(11)Cb=12(Ss′Cs+Sh′Ch)+12(Ss′Cs+Sh′Ch)-1,
where Cs is rock grain compressibility and Ch is gas hydrate compressibility.(4)
Compressibility of the fluid phase (Cf) is
(12)Cf=Cw,
where Cw is water compressibility.(5)
Compressibility of the matrix (Cm) indicates the compressibility of sediments without water, and it can be calculated by the following equation:
(13)Cm=(1-ϕeff)Cb+ϕeffCp,
where Cp is pore compressibility. The algorithm [35] to calculate Cp is:
(14)Cp=(1-ϕeff/ϕ0)Pd,
where ϕ0 is the sediment porosity at the sea bottom and Pd is differential pressure.(6)
Proportional coefficient (β) is
(15)β=CbCm.(7)
Shear modulus (μ) indicates average rigidity of the skeleton, and it can be calculated by the following equation:
(16)μ=(ϕs+ϕh)[ϕsSs′μsm+Sh′μh]-1,
where μh is gas hydrate rigidity and μsm is the shear modulus of solid matrix with gas hydrate [55], which can be calculated by the following equation:
(17)μsm=(μsmKT-μsm0)[ϕh(1-ϕs)]3.8+μsm0,
where μsmKT is Kuster and Toksoz’s shear modulus [56] and μsm0 is the shear modulus of solid matrix without gas hydrate.
### 5.1.3. Implementation Steps of Numerical Simulation of Gas Hydrate Model Based on the TPT
Consider the following.(1)
Given theϕ, or calculate it by the empirical formula.(2)
Given theSh and Sw, calculate ϕs, ϕh, ϕw, Ss′, and Sh′ according to (2)–(6).(3)
According to (7), calculate ϕeff.(4)
According to (8) and (10), calculate ρb, ρf, and ρm.(5)
According to (11)–(14), calculate Cb, Cf, and Cm.(6)
According to (15), calculate β.(7)
According to (16) and (17), calculate μ.(8)
According to (1), calculate Vp.
## 5.2. Free Gas Model
### 5.2.1. Establishment of Free Gas Model
The free gas model assumes that the sediments are composed of rock grain, water, and free gas, and it can be expressed as
(18) ϕs + ϕw + ϕg = 1,
(19) ϕ = ϕw + ϕg.
Water saturation (Sw) and free gas saturation (Sg) can be expressed as
(20) Sw = ϕw/ϕ,
(21) Sg = ϕg/ϕ.
### 5.2.2. The Parameter Determination for Numerical Simulation of Free Gas Model Based on the TPT
Consider the following.
(1) Effective porosity (ϕeff) can be written as
(22) ϕeff = ϕ.
(2) ρm, ρb, and ρf can be written as
(23) ρm = (1 - ϕeff)ρb + ϕeffρf = (1 - ϕ)ρb + ϕρf,
(24) ρb = ρs,
(25) ρf = Swρw + Sgρg,
where ρg is the free gas density.
(3) Cb can be written as
(26) Cb = Cs.
(4) Assume that the fluid compressibility lies between the Voigt and Reuss averages [54]. Cf can be written as
(27) Cf = (1/2)(SwCw + SgCg) + (1/2)[Sw/Cw + Sg/Cg]^-1,
where Cg is the compressibility of free gas.
(5) Cm is calculated by the same equations as those used in the gas hydrate model, (13) and (14).
(6) β is calculated by the same equation as that used in the gas hydrate model, (15).
(7) μ can be written as
(28) μ = μs, μs = μsm0/(1 - ϕ),
where μs is the rock grain rigidity.
(8) The variation range of k is 1–∞.
### 5.2.3. Implementation Steps of Numerical Simulation of Free Gas Model Based on the TPT
Consider the following steps (a numerical sketch of the fluid-phase quantities follows the list).
(1) Given ϕ, or calculate it by the empirical formula.
(2) Given Sg and Sw, calculate ϕs, ϕw, and ϕg according to (18)–(21).
(3) According to (22), calculate ϕeff.
(4) According to (23)–(25), calculate ρb, ρf, and ρm.
(5) According to (26), (27), (13), and (14), calculate Cb, Cf, and Cm.
(6) According to (15), calculate β.
(7) According to (28), calculate μ.
(8) According to (1), calculate Vp.
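As a counterpart to the gas hydrate sketch above, the short sketch below evaluates only the fluid-phase quantities that differ in the free gas model: the mixture density (25) and the Voigt-Reuss averaged compressibility (27). The water and gas constants are the Table 1 values; the water saturation passed in is an arbitrary illustrative input.

```python
# Fluid phase of the free gas model; keyword defaults are Table 1 values.
def free_gas_fluid(s_w, c_w=4.79e-10, c_g=4.24e-8,   # Pa^-1
                   rho_w=1040.0, rho_g=88.48):       # kg/m^3
    s_g = 1.0 - s_w                                  # (20)-(21) with Sw + Sg = 1
    rho_f = s_w * rho_w + s_g * rho_g                # (25): volume-weighted density
    c_f = 0.5 * (s_w * c_w + s_g * c_g) \
        + 0.5 / (s_w / c_w + s_g / c_g)              # (27): mean of Voigt/Reuss bounds
    return rho_f, c_f

print(free_gas_fluid(s_w=0.9))   # e.g. 10% free gas in the pore space
```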
## 5.3. Estimation of Sediment Porosity
Sediment porosity is a key parameter for the estimation of gas hydrate saturation in both the gas hydrate and free gas models. Therefore, appropriate log data should be selected to estimate the sediment porosity at site SH2. The log data that can be applied to determine the sediment porosity include density, acoustic, resistivity, and neutron logging.

When the acoustic log is used to determine sediment porosity, the data need to be corrected using regional core data because seafloor sediments are usually loose silt and silty clay. However, core data are often insufficient, and the compaction correction coefficient is difficult to determine, so the acoustic log is not suitable. When the resistivity log is used, the Archie formula must be applied to calculate porosity, and the Archie constants and formation water resistivity need to be known [57]. However, these two parameters are generally determined from empirical equations, so the estimation error of the sediment porosity is significant. Determination of sediment porosity by neutron log usually cannot be realized because of the lack of neutron log data.

Compared with the resistivity and acoustic logs, the density log is less affected in gas hydrate reservoirs and generally reflects the sediment porosity, so we selected the density log data to estimate the sediment porosity in this study. The density log measures the intensity of scattered gamma rays, which reflects the electron density of the strata and the bulk density of the rock (ρb). The estimation of ϕ from density log data can be expressed as [58]
(29) ϕ = (ρma - ρb)/(ρma - ρf),
where ρma is matrix density and ρf is fluid density. Considering the effect of shaly sediments, (29) can be written as [58]
(30) ϕ = (ρma - ρb)/(ρma - ρf) - Vsh(ρma - ρsh)/(ρma - ρf),
(31) Vsh = (2^(GCUR×SH) - 1)/(2^GCUR - 1),
(32) SH = (GR - GRmin)/(GRmax - GRmin),
where Vsh is the volume content of shale, SH is the shale content index, ρsh is the density of shale, GR is the value of the natural gamma log in the research interval, GRmin is the value of the natural gamma log in a pure sandstone interval, GRmax is the value of the natural gamma log in a pure mud interval, and GCUR is the Hilchie index, which is 3.7 in the Tertiary of North America and 2 in older strata [59]. Equations (29) and (30) are used to calculate the sediment porosity at site SH2, with the processing parameters set as follows: ρma = 2.65 g/cm3, ρf = 1.04 g/cm3, ρsh = 2.70 g/cm3 [12, 35]. The porosities calculated by (29) and (30) are close to each other (Figure 3), varying in the range of 30% to 55% with an average value of 45%. This result indicates that the sediments at site SH2 are of high porosity. A short numerical sketch of (29)–(32) is given after Figure 3.
Figure 3
Result of sediment porosity calculated from density log data at site SH2. (The red line is the sediment porosity estimated from the density log data; the bright green line is the estimate that considers the effect of argillaceous sediments.)
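For concreteness, the sketch below implements the shale-corrected density porosity of (29)–(32) for one depth sample. The matrix, fluid, and shale densities and GCUR = 2 (old strata) are the processing parameters quoted above; the bulk density and gamma ray readings in the call are hypothetical stand-ins for one sample of the log.

```python
# Shale-corrected density porosity, equations (29)-(32).
def density_porosity(rho_b, gr, gr_min, gr_max, gcur=2.0,
                     rho_ma=2.65, rho_f=1.04, rho_sh=2.70):   # g/cm^3
    sh = (gr - gr_min) / (gr_max - gr_min)                    # (32): shale index
    v_sh = (2.0 ** (gcur * sh) - 1.0) / (2.0 ** gcur - 1.0)   # (31): shale volume
    # (30): density porosity minus the shale correction term
    return ((rho_ma - rho_b) - v_sh * (rho_ma - rho_sh)) / (rho_ma - rho_f)

# Hypothetical readings for one depth sample:
print(density_porosity(rho_b=1.95, gr=75.0, gr_min=20.0, gr_max=120.0))
```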
## 6. Results
The gas hydrate model of the TPT is used to forward simulate the P wave velocity of the sediment formation under different gas hydrate saturation conditions in the interval of 50 to 245 mbsf at site SH2. Table 1 shows the values of the main parameters used to evaluate the velocity. When Sh = 0, we obtain the P wave velocity of water-saturated sediments forward simulated by the gas hydrate model based on the TPT. From Figure 4, the actual P wave velocity log and the P wave velocity curve for the water-saturated condition follow almost the same trend in the gas hydrate interval (above 195 mbsf), so the model and its parameters are reasonable for numerical simulation in this study. The difference between the actual P wave velocity of the log and the P wave velocity of the water-saturated condition reflects the gas hydrate or free gas saturation, which can be used to qualitatively identify the gas hydrate reservoir. The specific response characteristics are as follows: gas hydrate is most likely present when the actual P wave velocity of the log is higher than the P wave velocity of the water-saturated condition; free gas is most likely present when the actual P wave velocity of the log is lower than the P wave velocity of the water-saturated condition [35]. In the interval of 195 to 220 mbsf, the actual P wave velocity of the log is significantly higher than the P wave velocity of the water-saturated condition, so this interval is the gas hydrate stability zone. In the interval of 220 to 245 mbsf, the actual P wave velocity of the log increases relative to the P wave velocity of the water-saturated condition. However, Figure 2 does not indicate an increase in resistivity. Without coring analysis data in this interval, whether the anomaly is caused by gas hydrate cannot be ascertained, and this should be addressed in further study.
Table 1
Values of the main parameters of the gas hydrate and free gas models of the TPT.

| Parameters | Details | References |
|---|---|---|
| Cs | 2.7 × 10^-11 Pa^-1 | [60] |
| Ch | 1.79 × 10^-10 Pa^-1 | [1] |
| Cw | 4.79 × 10^-10 Pa^-1 | [61] |
| Cg | 4.24 × 10^-8 Pa^-1 | [62] |
| ρs | 2650 kg/m3 | [60] |
| ρw | 1040 kg/m3 | [1] |
| ρg | 88.48 kg/m3 | [62] |
| ρh | 767 kg/m3 | [1] |
| Vs | 116 + 4.65z m/s; 237 + 1.28z m/s; 332 + 0.58z m/s | [63] |
| μsm0 | ρmVs^2 Pa | [35] |
| μh | 3.7 × 10^9 Pa | [35] |
| k | 2.3 | [61] |

Figure 4
Forward simulated P wave velocity of the sediment formation at site SH2. (The red line is the actual log P wave velocity at site SH2; the blue, sky blue, bright green, pink, green, dark blue, and black lines are the P wave velocities produced by forward modeling of the gas hydrate model for gas hydrate saturations of 0, 5%, 10%, 15%, 20%, 25%, and 30%, resp.)
When Sh gradually increases, the forward simulated P wave velocity also increases; when Sh > 15%, the P wave velocity increases significantly; when Sh = 30%, the P wave velocity curve lies to the right of the actual P wave velocity logging curve. These results indicate that the basic range of gas hydrate saturation is 0–30% in the interval of 50 to 245 mbsf at site SH2. In order to study the dependence of P wave velocity on sediment porosity and gas hydrate saturation, gas hydrate saturation is assumed to increase from 0 to 1 in increments of 0.1. Using the gas hydrate model of the TPT to model the corresponding P wave velocities, the relation surface of these three properties can be formed, as Figure 5 shows. With increasing burial depth in the interval of 50 to 245 mbsf at site SH2, the porosity presents a decreasing trend, except for anomalies caused by borehole conditions in some intervals, and the forward simulated P wave velocity (Sh = 0) slowly increases from 1743 to 1795 m/s. With increasing gas hydrate saturation, however, the increase of P wave velocity accelerates markedly, and the P wave velocity (at a burial depth of 51 mbsf) increases from 1743 to 3961 m/s. From the above analysis, the general rule relating the forward simulated P wave velocity to sediment porosity and gas hydrate saturation at site SH2 is the smaller the sediment porosity, the greater the P wave velocity, and the higher the gas hydrate saturation, the greater the P wave velocity. This result is basically in accordance with the research result of Tinivella [35].
Figure 5
Relation between P wave velocity, sediment porosity, and gas hydrate saturation at site SH2. (Because of variations in borehole conditions and in the actual sediments, the sediment porosity in some intervals does not decrease with increasing depth, which makes the surface grow unevenly. In the gas hydrate reservoir at 195 to 220 mbsf, the forward simulated P wave velocity surface subsides as the sediment porosity relatively increases.)
Similarly, free gas saturation is assumed to increase from 0 to 1 in increments of 0.1 in order to study the relation between P wave velocity and free gas saturation. Using the free gas model of the TPT to model the corresponding P wave velocities, the relation surface of P wave velocity, sediment porosity, and free gas saturation can be formed, as Figure 6 shows. In the interval of 50 to 245 mbsf at site SH2, with increasing free gas saturation, the P wave velocity (at a burial depth of 51 mbsf) decreases from 1773 to 597 m/s (the velocity of 597 m/s is obtained by supposing 100% free gas saturation), and the decrease rate is significant. Considering the depth effect, the decrease rate of P wave velocity slows with increasing burial depth. From the above analysis, the general rule relating the forward simulated P wave velocity to free gas saturation at site SH2 is the higher the free gas saturation, the lower the P wave velocity. This result is also basically in accordance with the research result of Tinivella [35].
Figure 6
Relation between P wave velocity, sediment porosity, and free gas saturation at site SH2. (P wave velocity is easily affected by the free gas saturation. When the free gas saturation increases, the P wave velocity forward simulated by the free gas model of the TPT decreases rapidly.)
The estimation of gas hydrate saturation is of great significance for gas hydrate reservoir evaluation. In order to estimate gas hydrate saturation in sediments, it is necessary to associate the P wave velocity of the log with the P wave velocity of the gas hydrate model based on the TPT. Given an initial gas hydrate saturation, the difference between the forward simulated P wave velocity of the gas hydrate model and the actual P wave velocity of the log at this saturation can be computed. If the difference is within the allowable error, the saturation is treated as the actual saturation; if the difference does not satisfy the error requirement, the value of gas hydrate saturation is modified until the error tolerance is met (a sketch of this iteration follows Figure 7). Using the gas hydrate model of the TPT to invert gas hydrate saturation at site SH2, with the values of the main parameters listed in Table 1, the inversion result is shown in Figure 7. In the interval of 50 to 90 mbsf at site SH2, the range of gas hydrate saturation is 0–17.5%, and the average value is 4.8%. As the shallow sediments are influenced by variations in borehole conditions, the estimation error of gas hydrate saturation in this interval is significant, which should be kept in mind during the analysis. In the interval of 90 to 195 mbsf, the range of gas hydrate saturation is 0–18.9%, and the average value is 7%. In the interval of 195 to 220 mbsf, the range of gas hydrate saturation is 7–31.5%, and the average value is 23.2%. With increasing burial depth, the gas hydrate saturation gradually increases and reaches a peak value of 31.5% at 208 mbsf. The gas hydrate saturation then decreases slowly with increasing burial depth, ranging over 0–25.8% in the interval of 220 to 245 mbsf, with an average value of 15.5%.
Figure 7
Estimation of gas hydrate saturation at site SH2.
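The saturation matching loop described above can be summarized in a few lines. In the sketch below, `forward_vp` is a hypothetical monotonic stand-in for the full forward model of Section 5.1 (in practice it would run the parameter chain and the TPT equation (1)); because the modeled velocity increases monotonically with hydrate saturation, a simple bisection recovers the saturation whose forward velocity matches the log value within the allowable error.

```python
def forward_vp(s_h, vp_water=1780.0):
    # Placeholder forward model: monotonically increasing in s_h, not the TPT.
    return vp_water + 7000.0 * s_h ** 2

def invert_saturation(vp_log, tol=1.0, lo=0.0, hi=1.0):
    # Bisection on s_h until |Vp_model - Vp_log| falls within the tolerance.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if abs(forward_vp(mid) - vp_log) < tol:
            return mid
        if forward_vp(mid) < vp_log:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(invert_saturation(vp_log=2100.0))   # a hypothetical velocity reading
```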
## 7. Discussion
It is very important to determine the porosity for the evaluation of gas hydrate saturation. In order to analyze the accuracy of the porosity estimated from density log data, we used resistivity log data combined with the Archie formula [57] to make a comparison between the estimation results, which are shown in Figure 8. The porosity estimated from resistivity log data generally varies in the range of 30 to 50%, and the average value is 43% [12, 64]. In the interval of 50 to 195 mbsf, the curves of density porosity and resistivity porosity are approximately coincident, although the former fluctuates due to the borehole effect. In the interval of 195 to 220 mbsf, the sediment contains gas hydrate, and the curve of resistivity porosity decreases significantly relative to the curve of density porosity because of the significant increase in resistivity (Figure 2), so the porosity calculated from resistivity log data needs to be corrected to exclude the influence of the increase in skeleton components. In the interval of 220 to 245 mbsf, the curves of density porosity and resistivity porosity are approximately coincident again. These results indicate that using density log data to estimate the porosity in the gas hydrate stability zone at site SH2 is relatively more reliable. According to previous studies, the range of core porosity determined by laboratory analysis in this interval was 40–55%, which almost coincides with the porosity estimated from density log data in this study [13]. As Figure 8 shows, the core porosity distribution corresponds with the density porosity curve of the well, which proves that the porosity estimated from density log data can meet the requirements for evaluating gas hydrate saturation at site SH2.
Figure 8
Comparison of sediment porosity estimates made by different methods at site SH2. (The red curve is the porosity estimated from density log data; the bright green curve is the porosity considering the effect of shaly sediments; the blue curve is the porosity estimated from the resistivity log combined with the Archie formula; the black dots are the porosities measured on cores in the laboratory.)
In order to verify the accuracy of the gas hydrate saturation estimated by the TPT, we compare the gas hydrate saturation in this study with that estimated by Wang et al. [50] in the gas hydrate occurrence interval (195 to 220 mbsf) at site SH2 (Figure 9). The curve of gas hydrate saturation obtained by the TPT first increases and then decreases with burial depth, reaching a peak value of 31.5% at 208 mbsf before decreasing gradually. The peak value of gas hydrate saturation from the TPT is slightly smaller than those from the resistivity log (Rt) method (40.5%), the simplified three-phase equation (STPE) method (41%), and the effective medium theory (EMT) method (38.5%). However, the curve trends of gas hydrate saturation estimated by the TPT and the other three methods are basically consistent, and the difference lies only in the amplitude of the curves, which indicates that using the TPT method to estimate gas hydrate saturation at site SH2 is feasible.
Figure 9
Comparison of gas hydrate saturation estimates made by different methods in the interval of 195 to 220 mbsf at site SH2. (The red, black, blue, and violet curves represent gas hydrate saturations estimated by the TPT, Rt, STPE, and EMT methods, respectively; the bright green dots represent gas hydrate saturations calculated by the chloride anomaly method.)
The average value of the gas hydrate saturations calculated by the chloride anomaly method is 25%, and the peak value is 45% [9]. The peak value of the gas hydrate saturation estimated by the TPT is somewhat lower than that estimated by the chloride anomaly method, but the average values estimated by the two methods are basically the same, and most of the gas hydrate saturation points obtained by the chloride anomaly method fall along the saturation curve estimated by the TPT method (Figure 9). This also indicates that using the TPT method to estimate gas hydrate saturation at site SH2 is feasible.
## 8. Conclusions
In summary, the relationships between P wave velocity and gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 are studied by numerical simulation of elastic wave velocity based on the TPT; the gas hydrate and free gas models are established, and the sediment porosity is estimated in order to determine the gas hydrate saturation of the research area. Some conclusions can be drawn as follows.
(1) Using the difference between the P wave velocity of the water-saturated condition and the actual P wave velocity of the log, whether the sediment contains gas hydrate can be identified quickly. In the interval of 195 to 220 mbsf at site SH2, the actual P wave velocity of the log increases significantly relative to the forward simulated P wave velocity of the water-saturated condition, so this interval is determined to contain gas hydrate.
(2) By numerical simulation of elastic wave velocity based on the TPT, combined with log data, the dependence of P wave velocity on gas hydrate saturation, free gas saturation, and sediment porosity at site SH2 can be analyzed. In the interval of 50 to 245 mbsf, as sediment porosity decreases, P wave velocity gradually increases; as gas hydrate saturation increases, P wave velocity increases; and as free gas saturation increases, P wave velocity gradually decreases.
(3) The log data can be used to calculate the gas hydrate saturation of the whole well, with better availability than coring data. The average gas hydrate saturation estimated by the TPT is 23.2%, and the peak value is 31.5%, which is basically in accordance with the values estimated by the STPE model, the EMT model, the Rt model, and the chloride anomaly method.
---
*Source: 101459-2013-07-01.xml*
# Effect of Single Injection of Recombinant Human Bone Morphogenetic Protein-2-Loaded Artificial Collagen-Like Peptide in a Mouse Segmental Bone Transport Model
**Authors:** Ryo Tazawa; Hiroaki Minehara; Terumasa Matsuura; Tadashi Kawamura; Kentaro Uchida; Gen Inoue; Wataru Saito; Masashi Takaso
**Journal:** BioMed Research International
(2019)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2019/1014594
---
## Abstract
This study aimed to investigate whether a single injection of recombinant human bone morphogenetic protein-2-loaded artificial collagen-like peptide gel (rhBMP-2/ACG) accelerates consolidation at the bone defect site and bone union at the docking site in a mouse segmental bone transport (SBT) model. A critical sized bone defect (2 mm) was created in the femur of mice and subsequently reconstructed using SBT with an external fixator. Mice were divided into four treatment groups: Group CONT (immobile control), Group 0.2 (bone segments moved 0.2 mm/day for 10 days), Group 1.0 (bone segments moved 1.0 mm/day for 2 days), and Group 1.0/BMP-2 (rhBMP-2/ACG injected into the bone defect and segments moved 1.0 mm/day for 2 days). Consolidation at the bone defect site and bone union at the docking site were evaluated radiologically and histologically over eight weeks. Bone volume and bone mineral content were significantly higher in Group 0.2 than in Group 1.0. Group 0.2 showed evidence of rebuilding of the medullary canal eight weeks after surgery at the bone defect site. However, in Group 1.0, maturation of regenerative bone at the bone defect site was poor, with the central area between the proximal and distal bone composed mainly of masses of fibrous and adipose tissue. Group 1.0/BMP-2 had higher bone volume and bone mineral content compared to Group 1.0, and all mice achieved bone union at the bone defect and docking sites. A single injection of rhBMP-2/ACG combined with SBT may be effective for enhancing bone healing in large bone defects.
---
## Body
## 1. Introduction
Surgical treatment of large bone defects has long been a challenge for orthopaedic surgeons. Segmental bone transport (SBT) using an external fixator is a standard treatment for large bone defects, with low morbidity at the donor site [1]. However, long-term application of the device is needed for bone healing. In addition, 27% of patients who receive SBT treatment fail to show bone repair and union at the docking site and require additional procedures such as a second transport operation, autogenous bone grafting, and osteosynthesis [2]. Therefore, strategies that reduce the treatment time and improve new bone formation using SBT without the need for secondary surgery are necessary for the treatment of large bone defects.

Successful SBT treatment depends on both bone regeneration at the bone defect site, called the “regenerate,” and bone union at the meeting point between the transported and distal segments at the completion of bone transport, called the “docking site.” Studies aimed at improving the bone healing process in SBT have revealed that regeneration at the defect site is improved by optimization of the transport speed [3, 4]. However, nonunion at the docking site often occurs [5–7], even when consolidation is successfully achieved using an optimized transport speed. Therefore, additional optimization of SBT is required to repair large bone defects.

Bone morphogenetic protein-2 (BMP-2) is a potent inducer of bone formation [8]. Previous studies have reported the use of recombinant human (rh) BMP-2 in the treatment of open fractures and in spinal surgery [9–11]. In a rabbit SBT model, BMP-2-loaded composite materials consisting of β-tricalcium phosphate (β-TCP) and polyethylene glycol (PEG) promoted consolidation and union at the docking site when injected percutaneously into the defect and docking sites following surgery and the completion of bone transport [12]. In clinical settings, however, repeated injection may increase the rate of complications such as ectopic bone formation, seroma formation, and wound dehiscence [13, 14].

To date, several carriers to facilitate sustained release of growth factors have been developed [15, 16]. We previously reported that the artificial collagen-like peptide poly(POG)n is a useful carrier for growth factors: injection of bFGF-loaded poly(POG)n into fracture sites stimulated periosteal bone formation in mouse fracture models [17, 18]. In the present study, we investigated whether a single injection of rhBMP-2-loaded artificial collagen-like peptide gel (rhBMP-2/ACG) accelerates consolidation at the bone defect site and bone union at the docking site in a mouse SBT model.
## 2. Materials and Methods
### 2.1. Implants
Segmental MouseDis (Figure 1(a)), an external fixator device consisting of a fixator block and five 0.45 mm diameter mounting pins, was purchased from RISystem AG (Davos, Switzerland). The bone segment to be transported was attached to the device’s transport unit via a mounting pin for movement across the defect (Figures 1(b)–1(d)).
Figure 1
Segmental MouseDis. (a) The fixator block has a transport unit (black arrow) which can be moved distally along the transport track using an activation screw (arrow head). (b) Before bone transport. (c) Turning the activation screw clockwise causes the transport unit to move 0.2 mm along the transport track. (d) Completed bone transport.
### 2.2. Chemicals
CHO cell-derived rhBMP-2 was obtained from PeproTech Inc. (Rocky Hill, NJ, USA). Collagen materials have been used for the sustained release of rhBMP-2 in clinical settings [9–11]. We previously reported that the collagen-like polypeptide poly(POG)n is superior to animal-derived collagen with regard to thermal stability and lack of pathogenicity and is a useful carrier which retains bFGF well [17, 18]. Our preliminary study showed that 2.0 μg of rhBMP-2 without poly(POG)n failed to promote bone formation in a mouse bone defect model. Therefore, to administer and retain rhBMP-2 at the bone defect site, we dissolved 2.0 μg of rhBMP-2 in 22.5 μL of the collagen-like polypeptide poly(POG)n gel, which was purchased from JNC Corporation (Tokyo, Japan).
### 2.3. Animals
All surgeries and handling were performed based on the guidelines of the Animal Ethics Committee of Kitasato University (Permission number: 2018-087). A total of 32 six-month-old male C57BL/6J mice (Charles River Laboratories Japan, Inc., Yokohama, Japan) were used for this study. The mice were fed a standard laboratory diet, CRF-1 (Oriental Yeast, Tokyo, Japan), and housed under controlled temperature (23 ± 2°C) and humidity (55 ± 10%) conditions and a 12-hour light/dark cycle. Mice were randomly divided into four treatment groups of eight mice each, namely, Group CONT (immobile control), Group 0.2, Group 1.0, and Group 1.0/BMP-2. A 2.0-mm critical sized bone defect was created in the right femur according to previous studies (see below for details) [19, 20]. In Group CONT, the defect was fixed with the Segmental MouseDis without transportation of the bone segment. Mice in Groups 0.2 and 1.0 underwent fixation with the device and the bone segment was moved 0.2 mm/day for 10 days and 1.0 mm/day for 2 days, respectively. Mice in Group 1.0/BMP-2 underwent fixation with the device and received an injection of 2.0 μg of rhBMP-2 into the bone defect site immediately after the defect-creating surgery and the bone segment was moved 1.0 mm/day for 2 days.
### 2.4. Surgical Procedure
All mice were initially sedated using isoflurane followed by an intramuscular injection of Domitor (Nippon Zenyaku Kogyo Co., Ltd., Fukushima, Japan), midazolam (Sand Co., Yamagata, Japan), and Vetorphale (Meiji Seika Kaisha, Ltd., Tokyo, Japan; 0.075 mL/100 g) at a ratio of 3 : 1 : 1. The operation was performed only on the right femur. The surgical site was prepared by hair removal and sterilization, and the surgery was performed under aseptic conditions. The skin of the lateral thigh was cut from the hip to the knee via a longitudinal incision 20 mm in length to expose the fascia lata between the gluteus superficialis and biceps femoris muscles. To implant the external fixator device, care was taken to position it at the center and parallel to the longitudinal axis of the femur. After predrilling, the first mounting pin was fixed to the distal segment of the femur by inserting it through the most distal hole in the fixator block. The second mounting pin was fixed to the proximal segment of the femur through the most proximal hole, holding the fixator block parallel to the femur. The remaining three mounting pins were inserted through the final three holes, including one in the transport unit.

After adjusting the fixator block, a transport segment was created by performing a transverse osteotomy between the second and third pins from the proximal end using a 0.22 mm diameter Gigli saw. A 2.0 mm bone defect was then created between the third and fourth pins from the proximal end using a microdrill. Only mice in Group 1.0/BMP-2 received an injection of 2.0 μg of rhBMP-2 into the 2 mm bone defect site. Finally, nonabsorbable thread was used to suture the fascia and skin. Successful completion of the surgical procedure was validated by examining radiographs. All mice were free to perform normal activities immediately after surgery. In all groups except Group CONT, bone transport was conducted routinely from two days after surgery. All animals were sacrificed at eight weeks after surgery. The femur with external fixator was carefully dissected out for radiological and histological evaluation.
### 2.5. Soft X-ray Radiographs
The process of SBT and regenerative new bone formation were monitored using soft X-ray radiographs (SOFTEX-CMB4; SOFTEX Corporation, Kanagawa, Japan) taken at an exposure of 10 seconds, a voltage of 35 kV, and a current of 3.0 mA using X-ray IX Industrial Film (Fuji Photo Film Co., Ltd., Tokyo, Japan).
### 2.6. Microcomputed Tomography (Micro-CT)
Following the sacrifice of mice eight weeks postsurgery, femurs were extracted and fixed in 4% paraformaldehyde for 48 hours at 4°C. The tissue was subsequently moved to PBS and imaged on a microfocus X-ray CT system (inspeXio SMX-90CT; Shimadzu, Tokyo, Japan). Tube voltage, tube current, and voxel size were 90 kV, 100 μA, and 30 × 30 × 30 μm, respectively. 3D imaging software (TRI/3D BON; Ratoc System Engineering Co., Ltd., Tokyo, Japan) was used to generate 3D reconstructed images at a threshold determined based on discriminant analysis. We evaluated bone union at both the bone defect and docking sites. Bone union was defined as the continuity of cortical bone over three of four images in the sagittal and coronal planes at the center of the bone defect site and docking site, respectively. We also calculated the bone volume (BV) and bone mineral content (BMC) of regenerative new bone at the bone defect site in all samples. All parameters were measured in a rectangular region of interest (ROI) that consisted of a 1500 μm length of bone mass indicative of regenerative new bone between the proximal femoral segment and transport segment.
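As an illustration of how BV and BMC can be computed on such a scan, the Python sketch below measures both quantities on a hypothetical calibrated voxel volume. The random array, ROI bounds, and bone threshold are placeholders; only the 30 μm voxel size is taken from the scan settings above, and the actual analysis in this study was performed with TRI/3D BON.

```python
import numpy as np

voxel_mm = 0.030                     # 30 um isotropic voxels (scan setting)
voxel_vol = voxel_mm ** 3            # mm^3 per voxel

rng = np.random.default_rng(0)
# Placeholder mineral-density volume in mg HA/cm^3 (hypothetical calibration).
density = rng.uniform(0.0, 1200.0, size=(50, 50, 50))
mask = np.zeros_like(density, dtype=bool)
mask[:, 10:40, 10:40] = True         # placeholder rectangular ROI

threshold = 400.0                    # placeholder bone threshold (mg HA/cm^3)
bone = mask & (density > threshold)  # voxels counted as mineralized bone

bv = bone.sum() * voxel_vol                      # bone volume, mm^3
bmc = density[bone].sum() * voxel_vol / 1000.0   # mineral content, mg
print(f"BV = {bv:.2f} mm^3, BMC = {bmc:.3f} mg")
```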
### 2.7. Histology
Following micro-CT imaging, femurs were submerged in 20% EDTA for four weeks for demineralization. The resulting tissue was embedded in paraffin using an automatic tissue processor (Tissue-Tek VIP 6; Sakura Fine Tek, Tokyo, Japan) and sectioned through the femur’s long axis at 3μm in the sagittal plane using a microtome (REM-710; Yamato Kohki industrial Co. Ltd., Saitama, Japan). All sections were then stained with hematoxylin and eosin (HE) and evaluated qualitatively.
### 2.8. Statistical Analysis
SPSS software (version 19.0; SPSS, Chicago, IL, USA) was used for all analyses. Differences between groups were analyzed using one-way ANOVA and a subsequent Bonferroni post hoc comparisons test. p < 0.05 was considered significant.
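A minimal open-source equivalent of this analysis (a sketch with placeholder per-mouse measurements, n = 8 per group, rather than the SPSS workflow actually used) might look as follows: SciPy's one-way ANOVA followed by pairwise t-tests with a Bonferroni adjustment.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder BV-like measurements for the four groups (n = 8 each).
groups = {
    "CONT":      rng.normal(1.0, 0.3, 8),
    "0.2":       rng.normal(2.5, 0.3, 8),
    "1.0":       rng.normal(1.3, 0.3, 8),
    "1.0/BMP-2": rng.normal(3.0, 0.3, 8),
}

# Omnibus test across all four groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Pairwise comparisons with Bonferroni correction.
pairs = list(itertools.combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4g}")
```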
## 3. Results
### 3.1. Radiological Evaluation
#### 3.1.1. Soft X-ray Radiographs
To evaluate the process of SBT and regenerative new bone formation, we examined soft X-ray radiographs (Figure 2). At four weeks after surgery, Group 0.2 and Group 1.0/BMP-2 showed a weak shadow indicating regenerative new bone at the bone defect site that gradually consolidated over time. At eight weeks after surgery, Group 0.2 and Group 1.0/BMP-2 showed definitive evidence of regenerative new bone formation. In contrast, Group CONT and Group 1.0 showed nonunion at the bone defect site.
Figure 2
Soft X-ray images of the process of SBT and regenerative new bone formation. (a–c) Group CONT. (d–f) Group 0.2. (g–i) Group 1.0. (j–l) Group 1.0/BMP-2. Radiographs were obtained immediately after surgery (a, d, g, j) and 4 (b, e, h, k) and 8 (c, f, i, l) weeks following surgery.
#### 3.1.2. Micro-CT
To evaluate bone union at the bone defect and docking sites, we performed micro-CT (Figure 3 and Table 1) and calculated the BV and BMC (Figure 4) at eight weeks after surgery. At the bone defect site, BV and BMC were significantly higher in Group 0.2 than in Group CONT (BV, p<0.001; BMC, p<0.001) and seven of eight mice showed bone union. In contrast, BV and BMC were not significantly different in Group 1.0 compared to Group CONT (BV, p=0.246; BMC, p=0.098) and two of eight mice showed bone union. At the docking site, four of eight mice in Group 0.2 and five of eight mice in Group 1.0 showed bone union. On the other hand, Group 1.0/BMP-2 showed significantly higher BV and BMC than Group CONT and Group 1.0 (BV, p<0.001 and p<0.001; BMC, p<0.001 and p=0.007, respectively). In addition, all mice in Group 1.0/BMP-2 showed bone union at the bone defect and docking sites.
Figure 3
Microcomputed tomography images of untreated and treated femurs in mice with a 2 mm bone defect at 8 weeks after surgery. (a) Group CONT. (b) Group 0.2. (c) Group 1.0. (d) Group 1.0/BMP-2. The scale bar indicates 4000 μm.
Table 1
Number of mice in each group showing consolidation at the 2 mm bone defect and union at the docking site.

| Group | Consolidation rate at the bone defect site | Union rate at the docking site |
|---|---|---|
| CONT | 0/8 | — |
| 0.2 | 7/8 | 4/8 |
| 1.0 | 2/8 | 5/8 |
| 1.0/BMP-2 | 8/8 | 8/8 |

Figure 4
Analysis of new bone formation in mouse femurs at 8 weeks after surgery from microcomputed tomography images. (a) Bone volume (BV) and (b) bone mineral content (BMC) at the bone defect site. Data show mean ± standard error (S.E.). n = 8. ∗p < 0.05.
### 3.2. Histological Evaluation
We performed a histological examination to evaluate new bone formation (Figure 5). HE staining of tissue from Group 0.2 and Group 1.0/BMP-2 showed large amounts of longitudinal trabecular bone and regenerative new bone and evidence of rebuilding of the medullary canal at eight weeks after surgery at the bone defect site. Meanwhile, in Group CONT and Group 1.0, maturation of regenerative bone at the bone defect site was poor, with the central area between the proximal and distal bone composed mainly of masses of fibrous and adipose tissue. At the docking site, all mice in Group 1.0/BMP-2 showed bone union between the distal end of the transported segment and distal bone. In contrast, Group 0.2 and Group 1.0 showed discontinuity between the transported segment and distal bone with fibrocartilaginous tissue.
Figure 5
Hematoxylin and eosin-stained tissue sections of femurs showing new bone formation at the bone defect site and union at the docking site at 8 weeks after surgery. (a–c) Group CONT. (d–f) Group 0.2. (g–i) Group 1.0. (j–l) Group 1.0/BMP-2. The scale bars indicate 1000 μm (a, d, g, j) and 100 μm (b, c, e, f, h, i, k, l).
## 4. Discussion
We used a mouse bone transport model to study the effects of a single injection of rhBMP-2/ACG on bone regeneration at the bone defect site and bone union at the docking site. All mice treated with a single injection of rhBMP-2/ACG exhibited accelerated bone healing at both the defect and docking sites, even with rapid bone transport.

Previous studies have shown that optimization of the transport speed in SBT improves the consolidation of regenerative new bone. For example, Ilizarov [3, 4] demonstrated in a canine tibia SBT model that the optimal distraction rate was 1.0 mm/day, while a rate of 0.5 mm/day resulted in premature bone healing and a rate of 2.0 mm/day produced only fibrous connective tissue at the distal ends of the bone without osteogenesis. In contrast, increased time to docking often results in nonunion at the docking site with interposed fibrocartilaginous tissue at the bone ends [5]. Consistent with these reports in human and large animal studies, our mouse model showed that consolidation was achieved using a distraction rate of 0.2 mm/day, while consolidation was insufficient using a rate of 1.0 mm/day. Union at the docking site was inadequate at the faster distraction rate, and fibrocartilaginous tissue was observed histologically. Our results suggest that findings from mouse SBT models may be extrapolated to larger animal models.

rhBMP-2 administration is safe, increases the rate of fracture and wound healing, and reduces infection rates in patients with open tibia fractures [9]. A previous study reported the effectiveness of a BMP-2-loaded β-TCP/PEG composite injected into the distraction gap at the start and into the docking site at the end of distraction in rabbit models [12]. In the present study, we demonstrated that a single injection of rhBMP-2/ACG combined with SBT accelerated bone healing at both the defect and docking sites in a mouse critical sized bone defect model, even at a faster distraction rate. rhBMP-2/ACG combined with SBT may therefore be effective for enhancing bone healing in large bone defects and may reduce the treatment period. However, nonclinical research in nonhuman primates followed by clinical trials is needed to optimize the rhBMP-2 dose required to facilitate distraction osteogenesis in patients with diaphyseal bone defects.
## 5. Conclusions
A single injection of rhBMP-2/ACG accelerated bone healing at both the defect and docking sites, even with rapid bone transport, in a mouse SBT model. The use of rhBMP-2/ACG combined with SBT may accelerate bone healing in large bone defects without the need for repeated BMP-2 injections.
---
*Source: 1014594-2019-12-24.xml*
(2019) | Medical & Health Sciences | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2019/1014594 | 1014594-2019-12-24.xml | ---
## Abstract
This study aimed to investigate whether a single injection of recombinant human bone morphogenetic protein-2-loaded artificial collagen-like peptide gel (rhBMP-2/ACG) accelerates consolidation at the bone defect site and bone union at the docking site in a mouse segmental bone transport (SBT) model. A critical sized bone defect (2 mm) was created in the femur of mice and subsequently reconstructed using SBT with an external fixator. Mice were divided into four treatment groups: Group CONT (immobile control), Group 0.2 (bone segments moved 0.2 mm/day for 10 days), Group 1.0 (bone segments moved 1.0 mm/day for 2 days), and Group 1.0/BMP-2 (rhBMP-2/ACG injected into the bone defect and segments moved 1.0 mm/day for 2 days). Consolidation at the bone defect site and bone union at the docking site was evaluated radiologically and histologically across eight weeks. Bone volume and bone mineral content were significantly higher in Group 0.2 than in Group 1.0. Group 0.2 showed evidence of rebuilding of the medullary canal eight weeks after surgery at the bone defect site. However, in Group 1.0, maturation of regenerative bone at the bone defect site was poor, with the central area between the proximal and distal bone composed mainly of masses of fibrous and adipose tissue. Group 1.0/BMP-2 had higher bone volume and bone mineral content compared to Group 1.0, and all mice achieved bone union at the bone defect and docking sites. Single injection of rhBMP-2/ACG combined with SBT may be effective for enhancing bone healing in large bone defects.
---
## Body
## 1. Introduction
Surgical treatment of large bone defects has long been a challenge for orthopaedic surgeons. Segmental bone transport (SBT) using an external fixator is a standard treatment for large-diameter bone defects at the donor site with low morbidity [1]. However, the long-term application of the device is needed for bone healing. In addition, 27% of patients who receive SBT treatment fail to show bone repair and union at the docking site and require additional procedures such as a second transport operation, autogenous bone grafting, and osteosynthesis [2]. Therefore, strategies that reduce the treatment time and improve new bone formation using SBT without the need for secondary surgery are necessary for the treatment of large bone defects.Successful SBT treatment depends on both bone regeneration at the bone defect site, called the “regenerate,” and bone union at the meeting point between the transported and distal segments at the completion of bone transport, called the “docking site.” Studies aimed at improving the bone healing process in SBT have revealed that regeneration at the defect site is improved by the optimization of transport speed [3, 4]. However, nonunion at the docking site often occurs [5–7], even when consolidation is successfully achieved using an optimized transport speed. Therefore, additional optimization of SBT is required to repair large bone defects.Bone morphogenetic protein-2 (BMP-2) is a potent inducer of bone formation [8]. Previous studies have reported the use of recombinant human (rh) BMP-2 in the treatment of open fractures and spinal surgery [9–11]. In a rabbit SBT model, BMP-2-loaded composite materials consisting of β-tricalcium phosphate (β-TCP) and polyethylene glycol (PEG) promoted consolidation and union at the docking site when injected percutaneously into the defect and docking sites following surgery and the completion of bone transport [12]. In clinical settings, however, repeated injection may increase the rate of such complications such as ectopic bone formation, seroma formation, and wound dehiscence [13, 14].To date, several carriers to facilitate sustained release of growth factors have been developed [15, 16]. We previously reported that the artificial collagen-like peptide poly(POG)n was a useful carrier for growth factors. Injection of bFGF-loaded poly(POG)n into fracture sites stimulated periosteal bone formation in mouse fracture models [17, 18]. In the present study, we investigated whether a single injection of rhBMP-2-loaded artificial collagen-like peptide gel (rhBMP-2/ACG) accelerates consolidation at the bone defect site and bone union at the docking site in a mouse SBT model.
## 2. Materials and Methods
### 2.1. Implants
Segmental MouseDis (Figure 1(a)), an external fixator device consisting of a fixator block and five 0.45 mm diameter mounting pins, was purchased from RISystem AG (Davos, Switzerland). The bone segment to be transported was attached to the device’s transport unit via a mounting pin for movement across the defect (Figures 1(b)–1(d)).

Figure 1
Segmental MouseDis. (a) The fixator block has a transport unit (black arrow) which can be moved distally along the transport track using an activation screw (arrow head). (b) Before bone transport. (c) Turning the activation screw clockwise causes the transport unit to move 0.2 mm along the transport track. (d) Completed bone transport.
### 2.2. Chemicals
CHO cell-derived rhBMP-2 was obtained from PeproTech Inc. (Rocky Hill, NJ, USA). Collagen materials have been used for the sustained release of rhBMP-2 in clinical settings [9–11]. We previously reported that the collagen-like polypeptide poly(POG)n is superior to animal-derived collagen with regard to thermal stability and lack of pathogenicity and is a useful carrier that retains bFGF well [17, 18]. Our preliminary study showed that 2.0 μg of rhBMP-2 without poly(POG)n failed to promote bone formation in a mouse bone defect model. Therefore, to administer and retain rhBMP-2 at the bone defect site, we dissolved 2.0 μg of rhBMP-2 in 22.5 μL of the collagen-like polypeptide poly(POG)n gel, which was purchased from JNC Corporation (Tokyo, Japan).
### 2.3. Animals
All surgeries and handling were performed in accordance with the guidelines of the Animal Ethics Committee of Kitasato University (permission number: 2018-087). A total of 32 six-month-old male C57BL/6J mice (Charles River Laboratories Japan, Inc., Yokohama, Japan) were used for this study. The mice were fed a standard laboratory diet, CRF-1 (Oriental Yeast, Tokyo, Japan), and housed under controlled temperature (23 ± 2°C) and humidity (55 ± 10%) conditions with a 12-hour light/dark cycle. Mice were randomly divided into four treatment groups of eight mice each: Group CONT (immobile control), Group 0.2, Group 1.0, and Group 1.0/BMP-2. A 2.0 mm critical-sized bone defect was created in the right femur according to previous studies (see below for details) [19, 20]. In Group CONT, the defect was fixed with the Segmental MouseDis without transportation of the bone segment. In Groups 0.2 and 1.0, the femur was fixed with the device and the bone segment was moved 0.2 mm/day for 10 days or 1.0 mm/day for 2 days, respectively. Mice in Group 1.0/BMP-2 underwent fixation with the device, received an injection of 2.0 μg of rhBMP-2 into the bone defect site immediately after the defect-creating surgery, and had the bone segment moved 1.0 mm/day for 2 days.
### 2.4. Surgical Procedure
All mice were initially sedated with isoflurane, followed by an intramuscular injection of Domitor (Nippon Zenyaku Kogyo Co., Ltd., Fukushima, Japan), midazolam (Sand Co., Yamagata, Japan), and Vetorphale (Meiji Seika Kaisha, Ltd., Tokyo, Japan; 0.075 mL/100 g) at a ratio of 3 : 1 : 1. The operation was performed only on the right femur. The surgical site was prepared by hair removal and sterilization, and the surgery was performed under aseptic conditions. The skin of the lateral thigh was cut from the hip to the knee via a 20 mm longitudinal incision to expose the fascia lata between the gluteus superficialis and biceps femoris muscles. Care was taken to position the external fixator device at the center of, and parallel to, the longitudinal axis of the femur. After predrilling, the first mounting pin was fixed to the distal segment of the femur by inserting it through the most distal hole in the fixator block. The second mounting pin was fixed to the proximal segment of the femur through the most proximal hole, holding the fixator block parallel to the femur. The remaining three mounting pins were inserted through the final three holes, including one in the transport unit.

After adjusting the fixator block, a transport segment was created by performing a transverse osteotomy between the second and third pins from the proximal end using a 0.22 mm diameter Gigli saw. A 2.0 mm bone defect was then created between the third and fourth pins from the proximal end using a microdrill. Only mice in Group 1.0/BMP-2 received an injection of 2.0 μg of rhBMP-2 into the 2 mm bone defect site. Finally, nonabsorbable thread was used to suture the fascia and skin. Successful completion of the surgical procedure was validated by examining radiographs. All mice were free to perform normal activities immediately after surgery. In all groups except Group CONT, bone transport was conducted routinely from two days after surgery. All animals were sacrificed at eight weeks after surgery, and the femur with the external fixator was carefully dissected out for radiological and histological evaluation.
### 2.5. Soft X-ray Radiographs
The process of SBT and regenerative new bone formation were monitored using soft X-ray radiographs (SOFTEX-CMB4; SOFTEX Corporation, Kanagawa, Japan) taken at an exposure of 10 seconds, a voltage of 35 kV, and a current of 3.0 mA using X-ray IX Industrial Film (Fuji Photo Film Co., Ltd., Tokyo, Japan).
### 2.6. Microcomputed Tomography (Micro-CT)
Following the sacrifice of mice eight weeks postsurgery, femurs were extracted and fixed in 4% paraformaldehyde for 48 hours at 4°C. The tissue was subsequently moved to PBS and imaged on a microfocus X-ray CT system (inspeXio SMX-90CT; Shimadzu, Tokyo, Japan). Tube voltage, tube current, and voxel size were 90 kV, 100 μA, and 30 × 30 × 30 μm, respectively. 3D imaging software (TRI/3D BON; Ratoc System Engineering Co., Ltd., Tokyo, Japan) was used to generate 3D reconstructed images at a threshold determined by discriminant analysis. We evaluated bone union at both the bone defect and docking sites; bone union was defined as continuity of the cortical bone in three of four images in the sagittal and coronal planes at the center of the bone defect site and docking site, respectively. We also calculated the bone volume (BV) and bone mineral content (BMC) of regenerative new bone at the bone defect site in all samples. All parameters were measured in a rectangular region of interest (ROI) spanning a 1500 μm length of the bone mass indicative of regenerative new bone between the proximal femoral segment and the transport segment.
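Although BV and BMC were obtained with the TRI/3D BON software, the quantities themselves are straightforward to express. The following is a minimal sketch in Python/NumPy of how they could be computed over a ROI, assuming a calibrated CT volume and a hypothetical `hu_to_density` phantom calibration function (neither the function nor the variable names come from the paper).

```python
import numpy as np

VOXEL_MM3 = 0.03 ** 3  # volume of one 30 x 30 x 30 um voxel, in mm^3

def bv_bmc(volume, bone_threshold, roi_slices, hu_to_density):
    """Bone volume (mm^3) and bone mineral content (mg) inside a ROI.

    volume        : 3-D numpy array of calibrated CT values
    bone_threshold: scalar cut-off separating bone from background
                    (chosen by discriminant analysis, per the text)
    roi_slices    : tuple of slice objects delimiting the 1500 um ROI
    hu_to_density : callable mapping CT values to mineral density
                    (mg/mm^3), from a phantom calibration (hypothetical)
    """
    roi = volume[roi_slices]
    bone = roi > bone_threshold        # binarize: bone vs background
    bv = bone.sum() * VOXEL_MM3        # bone volume
    bmd = hu_to_density(roi[bone])     # per-voxel mineral density
    bmc = bmd.sum() * VOXEL_MM3        # bone mineral content
    return bv, bmc
```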
### 2.7. Histology
Following micro-CT imaging, femurs were submerged in 20% EDTA for four weeks for demineralization. The resulting tissue was embedded in paraffin using an automatic tissue processor (Tissue-Tek VIP 6; Sakura Fine Tek, Tokyo, Japan) and sectioned along the femur’s long axis at 3 μm in the sagittal plane using a microtome (REM-710; Yamato Kohki Industrial Co., Ltd., Saitama, Japan). All sections were then stained with hematoxylin and eosin (HE) and evaluated qualitatively.
### 2.8. Statistical Analysis
SPSS software (version 19.0; SPSS, Chicago, IL, USA) was used for all analyses. Differences between groups were analyzed using one-way ANOVA followed by Bonferroni post hoc comparisons. p < 0.05 was considered significant.
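For readers without SPSS, the same analysis pipeline (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched with SciPy. The group arrays below are random placeholders standing in for the measured values, not the study data.

```python
import numpy as np
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
# Placeholder data standing in for the measured BV values (n = 8 per group)
groups = {name: rng.normal(loc, 0.2, 8)
          for name, loc in [("CONT", 1.0), ("0.2", 2.0),
                            ("1.0", 1.2), ("1.0/BMP-2", 2.4)]}

# One-way ANOVA across the four groups
f_stat, p_anova = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni correction: multiply each pairwise p-value by the number of pairs
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = ttest_ind(groups[a], groups[b])
    p_corr = min(p * len(pairs), 1.0)
    print(f"{a} vs {b}: corrected p = {p_corr:.4f}"
          + (" *" if p_corr < 0.05 else ""))
```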
## 3. Results
### 3.1. Radiological Evaluation
#### 3.1.1. Soft X-ray Radiographs
To evaluate the process of SBT and regenerative new bone formation, we examined soft X-ray radiographs (Figure 2). At four weeks after surgery, Group 0.2 and Group 1.0/BMP-2 showed a weak shadow indicating regenerative new bone at the bone defect site that gradually consolidated over time. At eight weeks after surgery, Group 0.2 and Group 1.0/BMP-2 showed definitive evidence of regenerative new bone formation. In contrast, Group CONT and Group 1.0 showed nonunion at the bone defect site.

Figure 2
Soft X-ray images of the process of SBT and regenerative new bone formation. (a–c) Group CONT. (d–f) Group 0.2. (g–i) Group 1.0. (j–l) Group 1.0/BMP-2. Radiographs were obtained immediately after surgery (a, d, g, j) and 4 (b, e, h, k) and 8 (c, f, i, l) weeks following surgery.
#### 3.1.2. Micro-CT
To evaluate bone union at the bone defect and docking sites, we performed micro-CT (Figure 3 and Table 1) and calculated the BV and BMC (Figure 4) at eight weeks after surgery. At the bone defect site, BV and BMC were significantly higher in Group 0.2 than in Group CONT (BV, p<0.001; BMC, p<0.001), and seven of eight mice showed bone union. In contrast, BV and BMC in Group 1.0 were not significantly different from Group CONT (BV, p=0.246; BMC, p=0.098), and two of eight mice showed bone union. At the docking site, four of eight mice in Group 0.2 and five of eight mice in Group 1.0 showed bone union. On the other hand, Group 1.0/BMP-2 showed significantly higher BV and BMC than Group CONT and Group 1.0 (BV, p<0.001 and p<0.001; BMC, p<0.001 and p=0.007, respectively). In addition, all mice in Group 1.0/BMP-2 showed bone union at both the bone defect and docking sites.

Figure 3
Microcomputed tomography images of untreated and treated femurs in mice with a 2 mm bone defect at 8 weeks after surgery. (a) Group CONT. (b) Group 0.2. (c) Group 1.0. (d) Group 1.0/BMP-2. The scale bar indicates 4000μm.
Table 1
Number of mice in each group showing consolidation at the 2 mm bone defect and union at the docking site.

| Group | Consolidation rate at the bone defect site | Union rate at the docking site |
| --- | --- | --- |
| CONT | 0/8 | — |
| 0.2 | 7/8 | 4/8 |
| 1.0 | 2/8 | 5/8 |
| 1.0/BMP-2 | 8/8 | 8/8 |

Figure 4
Analysis of new bone formation in mouse femurs at 8 weeks after surgery from microcomputed tomography images. (a) Bone volume (BV) and (b) bone mineral content (BMC) at the bone defect site. Data show mean ± standard error (SE). n = 8. *p < 0.05.
### 3.2. Histological Evaluation
We performed a histological examination to evaluate new bone formation (Figure 5). HE staining of tissue from Group 0.2 and Group 1.0/BMP-2 showed large amounts of longitudinal trabecular bone and regenerative new bone and evidence of rebuilding of the medullary canal at eight weeks after surgery at the bone defect site. Meanwhile, in Group CONT and Group 1.0, maturation of regenerative bone at the bone defect site was poor, with the central area between the proximal and distal bone composed mainly of masses of fibrous and adipose tissue. At the docking site, all mice in Group 1.0/BMP-2 showed bone union between the distal end of the transported segment and the distal bone. In contrast, Group 0.2 and Group 1.0 showed discontinuity between the transported segment and distal bone, with fibrocartilaginous tissue.

Figure 5
Hematoxylin and eosin-stained tissue sections of femurs showing new bone formation at the bone defect site and union at the docking site at 8 weeks after surgery. (a–c) Group CONT. (d–f) Group 0.2. (g–i) Group 1.0. (j–l) Group 1.0/BMP-2. The scale bars indicate 1000μm (a, d, g, j) and 100 μm (b, c, e, f, h, i, k, l).
## 4. Discussion
We used a mouse bone transport model to study the effects of a single injection of rhBMP-2/ACG on bone regeneration at the bone defect site and bone union at the docking site. All mice treated with a single injection of rhBMP-2/ACG exhibited accelerated bone healing at both the defect and docking sites, even with rapid bone transport.

Previous studies have shown that optimizing the transport speed in SBT improves the consolidation of regenerative new bone. For example, Ilizarov [3, 4] demonstrated in a canine tibia SBT model that the optimal distraction rate was 1.0 mm/day, whereas a rate of 0.5 mm/day resulted in premature bone healing and a rate of 2.0 mm/day produced only fibrous connective tissue at the distal bone ends without osteogenesis. On the other hand, increased time to docking often results in nonunion at the docking site, with fibrocartilaginous tissue interposed between the bone ends [5]. Consistent with these reports in human and large-animal studies, our mouse model showed that consolidation was achieved at a distraction rate of 0.2 mm/day but was insufficient at 1.0 mm/day. Union at the docking site was inadequate at the faster distraction rate, and fibrocartilaginous tissue was observed histologically. Our results suggest that findings from mouse SBT models may be extrapolated to larger animal models.

rhBMP-2 administration is safe, increases the rate of fracture and wound healing, and reduces infection rates in patients with open tibia fractures [9]. A previous study reported the effectiveness of a BMP-2-loaded β-TCP/PEG composite injected into the distraction gap at the start of distraction and into the docking site at its end in rabbit models [12]. In the present study, we demonstrated that a single injection of rhBMP-2/ACG combined with SBT accelerated bone healing at both the defect and docking sites in a mouse critical-sized bone defect model, even at a faster distraction rate. rhBMP-2/ACG combined with SBT may therefore be effective for enhancing bone healing in large bone defects and for reducing the treatment period. However, nonclinical research in nonhuman primates followed by clinical trials is needed to optimize the rhBMP-2 dose required to facilitate distraction osteogenesis in patients with diaphyseal bone defects.
## 5. Conclusions
A single injection of rhBMP-2/ACG accelerated bone healing at both the defect and docking sites, even with rapid bone transport, in a mouse SBT model. The use of rhBMP-2/ACG combined with SBT may accelerate bone healing in large bone defects without the need for repeated BMP-2 injections.
---
*Source: 1014594-2019-12-24.xml* | 2019 |
# Saliency Aggregation: Multifeature and Neighbor Based Salient Region Detection for Social Images
**Authors:** Ye Liang; Congyan Lang; Jian Yu; Hongzhe Liu; Nan Ma
**Journal:** Applied Computational Intelligence and Soft Computing
(2018)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2018/1014595
---
## Abstract
The popularity of social networks has brought rapid growth in social images, which have become an increasingly important image type. One of the most distinctive attributes of social images is the tag. However, state-of-the-art methods fail to fully exploit tag information for saliency detection. This paper therefore focuses on salient region detection of social images using both image appearance features and image tag cues. First, a deep convolutional neural network is built that considers both appearance features and tag features. Second, tag-neighbor and appearance-neighbor based saliency aggregation terms are added to the saliency model to enhance salient regions. The aggregation method depends on individual images and accounts for performance gaps appropriately. Finally, we have also constructed a new large dataset of challenging social images with pixel-wise saliency annotations to promote further research and evaluation of visual saliency models. Extensive experiments show that the proposed method performs well not only on the new dataset but also on several state-of-the-art saliency datasets.
---
## Body
## 1. Introduction
Images and videos are two of the main media for social entertainment and communication. With the popularity of photo-sharing websites, social images have become an important image type. Their most distinctive feature is that they typically carry several tags describing their contents. How to use tags for multimedia tasks, such as image indexing and retrieval [1, 2], has attracted increasing attention [3]. However, tags are seldom considered in state-of-the-art salient region detection models. Therefore, in this paper, we focus on salient region detection of social images using both appearance features and tag features.

With the development of saliency detection, a large number of saliency detection algorithms have been proposed [4–6]. It has been found that relying only on low-level features cannot achieve satisfactory results, and research has shown that hierarchical and deep architectures [7–12] are very effective for salient region detection. Thus, a salient region detection method based on deep learning is proposed in this paper. In addition, various priors are also very important in salient region detection [13], for example, face [14–16], car [17], color [14], center bias [13], and objectness [18–20]. Intuitively, tags could be important high-level semantic cues for salient region detection [16, 21]. Thus, tags are incorporated into our salient region detection models.

It is observed that different methods perform differently in saliency analysis [22]; performance varies with individual images. This also holds between deep feature based methods and handcrafted feature based methods, so handcrafted feature based detection methods can be considered complementary to deep feature based ones. However, the fusion process proceeds without ground truth, and it is nontrivial to determine which saliency map is better. A good saliency aggregation model should work on each individual image and account for the performance gaps appropriately. Therefore, how to fuse the saliency maps of different detection methods is a key issue addressed in this paper.

The framework of salient region detection is shown in Figure 1. It includes two parts: deep learning based salient region detection and handcrafted feature based salient region detection. Deep features include CNN (convolutional neural network) features and tag features. Finally, the spatial coherence of the saliency maps is optimized through a fully connected conditional random field model.

Figure 1
Framework.

There are a variety of saliency detection benchmark datasets, from either the saliency detection field [7, 8, 23–26] or the image segmentation field [27–29]. To promote further research and evaluation of visual saliency detection for social images, it is necessary to construct a new dataset of social images.

This paper focuses on salient region detection of social images, and its contributions are twofold. First, a deep learning based salient region detection method for social images is proposed, considering both appearance features and tag features. Second, a tag-neighbor and appearance-neighbor based saliency aggregation method is proposed, which fuses state-of-the-art handcrafted feature based detection methods with our deep learning based detection method. The aggregation method is dependent on each individual image and accounts for the saliency performance gaps appropriately, so the detection model takes full advantage of image tags.

The rest of the paper is organized as follows. The deep learning based model is proposed in Section 2. Section 3 discusses the handcrafted feature based detection models. In Section 4, the saliency aggregation method is proposed. Spatial coherence optimization is discussed in Section 5. In Section 6, the new saliency dataset of social images is introduced. In Section 7, extensive experiments are performed and analyzed. Finally, conclusions are given in Section 8.
## 2. Deep Learning Based Salient Region Detection
Deep learning based salient region detection uses two types of features: appearance-based CNN (convolutional neural network) features and social image tag features. They are discussed in the following subsections.
### 2.1. CNN Based Salient Region Detection
#### 2.1.1. Network Architecture
The deep network for appearance feature extraction has 8 layers [30], as shown in Figure 2. It includes 5 convolution layers, 2 fully connected layers, and 1 output layer. The bottom layer represents the input image, and the adjacent upper layer represents the regions for deep feature extraction.

Figure 2
Architecture of network.

The convolution layers are responsible for multiscale feature extraction. To achieve translation invariance, a max pooling operation is performed after each convolution operation. The learned feature is composed of 4096 elements. The fully connected layers are followed by ReLU (Rectified Linear Units) for nonlinear mapping, and a dropout procedure is used to avoid overfitting. ReLU performs the following operation on each element:

$$R(x_i) = \max(0, x_i), \quad 1 \le i \le 4096, \tag{1}$$

where $x$ is the 4096-element feature; if $x_i \ge 0$ then $\max(0, x_i) = x_i$, otherwise $\max(0, x_i) = 0$.

The output layer uses softmax regression to calculate the probability of image patches being salient.
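To make the structure concrete, below is a minimal sketch of such a network, written in PyTorch for brevity (the original work used Caffe). The text above fixes only the layer counts, the 4096-dimensional feature, ReLU, dropout, and the two-way softmax output; the filter counts, kernel sizes, and pooling placement here are assumptions chosen so that a 51 × 51 patch yields a 4096-dimensional feature.

```python
import torch
import torch.nn as nn

class PatchSaliencyNet(nn.Module):
    """Sketch of the 8-layer network: 5 conv + 2 fully connected + output."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, pool):
            layers = [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU()]
            if pool:
                layers.append(nn.MaxPool2d(2))  # pooling for translation invariance
            return layers
        self.features = nn.Sequential(           # 5 convolution layers
            *block(3, 96, True), *block(96, 256, True), *block(256, 384, False),
            *block(384, 384, False), *block(384, 256, True))
        self.classifier = nn.Sequential(         # 2 FC layers + output layer
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(4096, 2))                  # softmax is applied in the loss

    def forward(self, x):                        # x: (B, 3, 51, 51)
        return self.classifier(self.features(x))

# Usage: logits = PatchSaliencyNet()(torch.randn(1, 3, 51, 51))
```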
#### 2.1.2. Multiscale CNN Feature Computation
In an image, salient regions have uniqueness, scarcity, and obvious differences from their neighborhoods. Inspired by [8], to compute saliency effectively, three types of differences are computed: the difference between the region and its neighborhoods, the difference between the region and the whole image, and the difference between the region and the image boundaries. To compute these differences, four types of regions are extracted: (1) a rectangle sample taken in a sliding-window fashion; (2) the neighborhoods of the rectangle sample; (3) the boundaries of the image; (4) the image area except the rectangle sample. The four types of regions are shown in Figure 3.

Figure 3
Four types of regions. (a) The red region denotes the rectangle sample; (b) the blue regions denote the neighborhoods of the rectangle sample; (c) the blue regions denote the boundaries of the image; (d) the blue regions denote the image area except the rectangle sample.
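As a concrete illustration of the four region types, the sketch below builds binary masks for one sliding-window sample. The neighborhood margin and border width are illustrative choices, not values fixed by the paper.

```python
import numpy as np

def region_masks(h, w, y, x, win=51, margin=25, border=10):
    """Binary masks for the four region types around one 51 x 51 sample.

    (y, x) is the top-left corner of the sample; `margin` and `border`
    are assumed widths for the neighborhood ring and boundary strip.
    """
    sample = np.zeros((h, w), bool)                # (1) rectangle sample
    sample[y:y + win, x:x + win] = True

    neighborhood = np.zeros((h, w), bool)          # (2) ring around the sample
    neighborhood[max(0, y - margin):y + win + margin,
                 max(0, x - margin):x + win + margin] = True
    neighborhood &= ~sample

    boundary = np.zeros((h, w), bool)              # (3) image border strip
    boundary[:border] = boundary[-border:] = True
    boundary[:, :border] = boundary[:, -border:] = True

    rest = ~sample                                 # (4) image except the sample
    return sample, neighborhood, boundary, rest
```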
#### 2.1.3. Training of CNN Network
Caffe [30], an open-source framework, is used for CNN training and testing. The deep convolutional neural network is originally trained on the ImageNet dataset; we extract multiscale features for each region and fine-tune the network parameters. For each image in the training set, we crop samples into 51 × 51 RGB patches in a sliding-window fashion with a stride of 10 pixels. To label the sample patches, if more than 70% of the pixels in a patch are salient, the sample label is 1; otherwise it is 0. Using this annotation strategy, we obtain sample regions $B_i$ and corresponding labels $l_i$.

In the fine-tuning process, the cost function is the softmax loss with weight decay, given by

$$L(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=0}^{1} \mathbf{1}\{l_i = j\}\log P(l_i = j \mid \theta) + \lambda\sum_{k=1}^{8}\lVert W_k \rVert_F^2, \tag{2}$$

where $\theta$ denotes the learnable parameters of the convolutional neural network, including the biases and weights of all layers; $\mathbf{1}\{\cdot\}$ is the indicator function; $P(l_i = j \mid \theta)$ is the probability of the $i$th sample being salient; $\lambda$ is the weight decay parameter; and $W_k$ is the weight of the $k$th layer. We use stochastic gradient descent to train the network with batch size $m = 256$ and $\lambda = 0.0005$. The initial learning rate is 0.01; when the cost stabilizes, the learning rate is decreased by a factor of 0.1. Training is run for 80 epochs. The dropout rate is set to 0.5 to avoid overfitting.
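The patch extraction and the 70% labeling rule lend themselves to a short sketch. A minimal version in Python/NumPy, assuming `image` is an H × W × 3 array and `mask` the binary ground truth:

```python
import numpy as np

def sample_patches(image, mask, win=51, stride=10, pos_frac=0.7):
    """Crop training patches and label them by the 70% saliency rule."""
    patches, labels = [], []
    h, w = mask.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patches.append(image[y:y + win, x:x + win])
            frac = mask[y:y + win, x:x + win].mean()  # share of salient pixels
            labels.append(1 if frac > pos_frac else 0)
    return np.stack(patches), np.array(labels)
```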
### 2.2. Tag Semantic Feature Computation
Because objects are closely related to salient regions, we use object tags to compute semantic features. The probability that a region is a particular object reflects, to some extent, its possibility of being a salient region. Therefore, the probabilities that regions are specific objects can be regarded as priors.

RCNN (Regions with CNN) [31] is based on deep learning and has been widely used because of its excellent object detection accuracy. In this paper, RCNN is used to detect objects; tag semantics are thus transformed into RCNN features.

Suppose there are $X$ object detectors. For the $k$th detector, the detection process is as follows.

(1) Select the $N$ proposals most likely to contain the specific object.

(2) Compute the probability $p_{ki}$ of the $i$th proposal being the $k$th object, $1 \le k \le X$, $1 \le i \le N$. Each pixel in the $i$th proposal is assigned the same probability $p_{ki}$.

(3) Over the $N$ proposals, each pixel receives the score $\sum_{i=1}^{N} p_{ki} f_{ki}$ for the $k$th object, where $f_{ki} = 1$ if the pixel is contained in the $i$th proposal and $f_{ki} = 0$ otherwise.

After all $X$ detectors have been run, an $X$-dimensional feature is obtained for each pixel and normalized as $f \in \mathbb{R}^X$; each dimension of $f$ indicates the probability of being a specific object.
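Step (3) reduces to accumulating each proposal's probability over the pixels it covers. A minimal sketch for one detector, assuming axis-aligned proposal boxes (the normalization across detectors is left out, since the text does not specify its exact form):

```python
import numpy as np

def object_probability_map(h, w, proposals, probs):
    """Per-pixel score of being one specific object (one RCNN detector).

    proposals: list of (y0, x0, y1, x1) boxes for the N best proposals
    probs    : p_ki, probability of each proposal being object k
    """
    score = np.zeros((h, w))
    for (y0, x0, y1, x1), p in zip(proposals, probs):
        score[y0:y1, x0:x1] += p        # f_ki = 1 inside the proposal, 0 outside
    return score

# Stacking X such maps and normalizing yields the per-pixel feature f in R^X.
```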
### 2.3. Fusion of CNN Based Saliency and Tag Semantic Features
Assume that the saliency map is $S_D$ and the RCNN-based semantic feature map is $T$; the fusion is

$$S = S_D \cdot \exp(T). \tag{3}$$

Tag features act as priors and serve as weights in the fusion; $S$ is the fused saliency map.
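Equation (3) is a per-pixel reweighting. A minimal sketch, assuming the X-dimensional tag feature is reduced to a scalar prior per pixel by summing over object channels (the reduction is an assumption, as the text does not specify it):

```python
import numpy as np

def fuse(saliency_map, tag_features):
    """Equation (3): S = S_D * exp(T), element-wise per pixel.

    saliency_map: H x W map S_D in [0, 1]
    tag_features: H x W x X per-pixel object probabilities
    """
    t = tag_features.sum(axis=-1)      # assumed reduction over object channels
    fused = saliency_map * np.exp(t)   # tag prior acts as a multiplicative weight
    return fused / fused.max()         # rescale back to [0, 1]
```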
## 3. Handcrafted Feature Based Salient Region Detection
It is observed that different methods perform differently in saliency analysis [22]. Although the overall detection performance of deep features is better than that of handcrafted features, differences still exist on individual images, so handcrafted feature based saliency maps can be considered complementary to deep feature based saliency maps. In Figure 4, the first column shows the original social images; the second shows the ground truth masks; the third shows the saliency maps of the DRFI method [25], which is based on handcrafted features; and the last shows the saliency maps of the MDF method [8], which is based on deep features. The last column includes incomplete parts, unclear boundaries, and false detections. Therefore, in this paper, some state-of-the-art handcrafted feature based salient region detection methods are selected as complementarities to our proposed deep detection method.

Figure 4
Examples of saliency detection results. Images in each column are original images, ground truth masks, salient maps of method DRFI [25], and salient maps of method MDF [8], respectively.
## 4. Saliency Aggregation
### 4.1. Main Idea
It is observed that if a salient region detection method performs well on a social image, it is likely to perform well on similar images. The main idea of the aggregation is based on this assumption.

In the training process, sort lists of all detection methods on all images are obtained; these sort lists serve as priors in testing.

In the testing process, we search for the KNN (K nearest neighbor) images similar to the test image in the training set. The sort lists of the KNN images are known from the training stage, so the KNN images can vote for detection methods through their sort lists. The test image thereby obtains its own sort list based on voting, and its saliency map is computed by aggregating the saliency maps of the different methods using this sort list.

The training and testing processes are shown in Figures 5 and 6.

Figure 5
Training process.

Figure 6
Testing process.
### 4.2. Training Process
Given an image $I$ in the training set, its ground truth is denoted by $G$, and its saliency maps from the different detection methods are denoted by $S = \{S_1, S_2, \ldots, S_i, \ldots, S_M\}$, where $M$ is the number of detection methods and $S_i$ is the saliency map of the $i$th method.

For every detection method, its saliency maps can be compared with the ground truth $G$ to yield AUC (area under the ROC curve) values; the greater the AUC value, the better the saliency detection performance. After the AUC values are computed, sort lists of all methods can be obtained.

For convenience, assume there are four detection methods. The sort lists are shown in Figure 7. The data structure is a singly linked list: the data domain of the header node denotes the image, and its pointer domain points to the first data node. Each nonheader node includes three domains: the AUC value, the method index, and a pointer.

Figure 7
Images and their sort lists.
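Building one image's sort list amounts to scoring each method by AUC and sorting. A minimal sketch using scikit-learn's `roc_auc_score` (a plain sorted list of tuples stands in for the linked-list structure of Figure 7):

```python
from sklearn.metrics import roc_auc_score

def sort_list(ground_truth, saliency_maps):
    """Rank detection methods on one training image by AUC (best first).

    ground_truth : binary mask; saliency_maps : dict {method_name: map}
    """
    y = ground_truth.ravel().astype(int)
    scored = [(roc_auc_score(y, m.ravel()), name)
              for name, m in saliency_maps.items()]
    return sorted(scored, reverse=True)   # analogue of the linked sort list
```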
### 4.3. Testing Process
A social image has two parts: the image and its corresponding tags. In the testing set, an image $I$ and its tag set $T = \{t_1, t_2, \ldots, t_i, \ldots, t_N\}$ are given, where $N$ is the number of tags. We search for its neighbors through tag semantics and image appearance; the sort lists of the neighbors then vote for the saliency maps of image $I$.
#### 4.3.1. Tag Based Neighbor Search
There are two types of tags: object tags and scene tags. Because objects are closely related to salient regions, object tags are used in the semantic search.

There are 37 object tags in the new dataset, including animal, bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, whale, vehicles, boats, cars, plane, train, person, police, military, tattoo, computer, coral, flowers, flags, tower, statue, sign, book, sun, leaf, sand, tree, food, rocks, and toy.

In these categories, animal is a superclass of bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, and whale; vehicles is a superclass of boats, cars, plane, and train; and person is a superclass of police, military, and tattoo.

Although a superclass and its subclasses are closely related by definition, many subclasses differ widely in environment and appearance. Therefore, for the animal and vehicles classes, subclasses require exact matching to find neighbors; because of the particularity of the person class, if there is no exact subclass match, matching can be performed at the person level.
#### 4.3.2. Appearance Based Neighbor Search
A 256-dimensional histogram in RGB color space is used, and the χ² distance between histograms is computed.
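A minimal sketch of this distance follows. The paper does not specify the quantization, so a joint 8 × 8 × 4 = 256-bin histogram is assumed here, and the χ² distance uses the common convention with a factor of 1/2.

```python
import numpy as np

def rgb_histogram(img, bins=(8, 8, 4)):
    """256-bin joint RGB histogram (8 * 8 * 4 = 256); binning is an assumption."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```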
### 4.4. Vote Based Saliency Maps Aggregation
Suppose the test image is $I$, the number of tag neighbors is $k$, and the number of appearance neighbors is $k$.

After the tag-based search in the training set, the number of detected neighbors is $y$. If $y$ is larger than $k$, then $k$ images are selected from the $y$ images according to appearance similarity. The tag-based neighbor set is

$$\mathrm{Img}^T = \{\mathrm{Img}^T_1, \mathrm{Img}^T_2, \ldots, \mathrm{Img}^T_i, \ldots, \mathrm{Img}^T_x\}, \tag{4}$$

where $x$ is the final number of neighbors: if $y \ge k$ then $x = k$; otherwise $x = y$.

After the appearance-based similarity computation in the training set, the $k$ nearest neighbors are selected as

$$\mathrm{Img}^A = \{\mathrm{Img}^A_1, \mathrm{Img}^A_2, \ldots, \mathrm{Img}^A_i, \ldots, \mathrm{Img}^A_k\}. \tag{5}$$

Merging sets (4) and (5) gives

$$\mathrm{Img} = \{\mathrm{Img}_1, \mathrm{Img}_2, \ldots, \mathrm{Img}_x, \ldots, \mathrm{Img}_{x+k}\}. \tag{6}$$

Each neighbor image has a sort list containing the AUC values of all detection methods. These AUC values vote for each detection method, and the vote weights are summed as

$$\mathrm{auc} = \left(\sum_{i=1}^{x+k} \mathrm{auc}_{i1}, \sum_{i=1}^{x+k} \mathrm{auc}_{i2}, \ldots, \sum_{i=1}^{x+k} \mathrm{auc}_{ij}, \ldots, \sum_{i=1}^{x+k} \mathrm{auc}_{iM}\right), \tag{7}$$

where in $\mathrm{auc}_{ij}$, $i$ indexes the $i$th neighbor and $j$ the $j$th detection method, and $M$ is the number of detection models.

The saliency map set of image $I$ is

$$S(I) = \{S_1(I), S_2(I), \ldots, S_j(I), \ldots, S_M(I)\}, \tag{8}$$

where $S_j(I)$ is the saliency map of the $j$th detection method. The fused saliency map is then computed as

$$S_F(I) = S(I) \cdot \mathrm{auc}^T. \tag{9}$$
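Equations (7) and (9) amount to summing the neighbors' AUC values per method and taking an AUC-weighted combination of the test image's saliency maps. A minimal sketch, with normalization to [0, 1] added as an assumption:

```python
import numpy as np

def aggregate(saliency_maps, neighbor_auc):
    """Equations (7)-(9): AUC-weighted fusion of M saliency maps.

    saliency_maps: array of shape (M, H, W) for the test image
    neighbor_auc : array of shape (x + k, M), the neighbors' AUC values
    """
    auc = neighbor_auc.sum(axis=0)              # equation (7): vote weights
    fused = np.tensordot(auc, saliency_maps, 1) # equation (9): S_F = S . auc^T
    return fused / fused.max()                  # normalize to [0, 1] (assumed)
```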
## 4.1. Main Idea
It is observed that if a salient region detection method has good effects on a social image, this method has great possibility to get sound effect on similar images. The main idea of aggregation is based on this assumption.In training process, sort lists of all detection methods on all images can be achieved. Sort lists can be seen as priors in testing.In testing process, we searchKNN (K nearest neighbors) images similar to the test image in the training set. Moreover, sort lists of KNN images are known in the training stage. KNN images can vote for detection methods through sort lists. Thus, the test image is able to obtain its sort list based on voting. Salient map of test image can be computed by aggregating its salient maps of different methods using sort lists.Training process and testing process are shown in Figures5 and 6.Figure 5
Training process.Figure 6
Testing process.
## 4.2. Training Process
Given an imageI in the training set, its ground truth is given by G; its salient maps using different detection methods is denoted as S=S1,S2,S3,…,Si,…,SM. In this saliency map set, M is the number of detection methods, and Si is the salient map of the ith method.For every detection method, its salient maps can be compared with ground truthG and yield AUC (Area under ROC Curve) values. The greater the AUC value, the better the saliency detection performance. After AUC value computation, sort lists of all methods can be obtained.For convenience, it is assumed that there are four detection methods. Sort lists are shown in Figure7. The data structure is single linked list. Data domain of header node denotes image and pointer domain of header node points to data node. Nonheader node includes three domains: the first domain is the AUC value, the second domain is the method index, and the last domain is a pointer.Figure 7
Images and their sort lists.
## 4.3. Testing Process
A social image has two parts: image and corresponding tags. In the testing set, imageI and its tag set T=t1,t2,…,ti,…,tN are given, where N is the number of tags. We search its neighbors through tag semantics and image appearance. Sort lists of neighbors can vote for salient maps of image I.
### 4.3.1. Tag Based Neighbor Search
There are two types of tags: object tags and scene tags. Because objects are closely related to salient regions, object tags are used in semantic search.There are 37 object tags in the new dataset, including animal, bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, whale, vehicles, boats, cars, plane, train, person, police, military, tattoo, computer, coral, flowers, flags, tower, statue, sign, book, sun, leaf, sand, tree, food, rocks, and toy.In these categories, animal has super class and subclass relationship with bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, and whale; vehicles have super class and subclass relationship with boats, cars, plane, and train; person has super class and subclass relationship with police, military, and tattoo.Although super class and subclass have great relevance in the class definition, many subclasses have a variety of differences in environment and appearance. So, for animal class, subclasses need exact matching to find neighbors; for vehicles class, subclasses need exact matching to find neighbors; because of particularity of class people, if there is no exact matching of subclass, matching can be performed at person level.
### 4.3.2. Appearance Based Neighbor Search
256 dimensional histogram of RGB color space is used andχ2 distance is computed.
### 4.4. Vote Based Saliency Maps Aggregation
Suppose the test image is I, the number of tag neighbors is k, and the number of appearance neighbors is also k. After the tag based search in the training set, the number of detected neighbors is y. If y is bigger than k, then k images are selected from the y images according to appearance similarity. The tag based neighbor set is then given as

(4) Img^T = {Img_1^T, Img_2^T, …, Img_i^T, …, Img_x^T},

where x is the final number of neighbors: if y ≥ k, then x = k; otherwise, x = y.

After the appearance based similarity computation in the training set, the k nearest neighbors are selected as

(5) Img^A = {Img_1^A, Img_2^A, …, Img_i^A, …, Img_k^A}.

Merging sets (4) and (5) gives

(6) Img = {Img_1, Img_2, …, Img_x, …, Img_{x+k}}.

Each neighbor image has a sort list containing the AUC values of all detection methods, and these AUC values vote for each detection method. The vote weights are summed as

(7) auc = (∑_{i=1}^{x+k} auc_{i1}, ∑_{i=1}^{x+k} auc_{i2}, …, ∑_{i=1}^{x+k} auc_{ij}, …, ∑_{i=1}^{x+k} auc_{iM}),

where i indexes the neighbors, j indexes the detection methods, and M is the number of detection models.

The salient map set of image I is

(8) S(I) = {S_1(I), S_2(I), …, S_j(I), …, S_M(I)},

where S_j(I) is the saliency map of the jth detection method. The fused saliency map is then computed as

(9) S_F(I) = S(I) · auc^T.
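The voting step in (7)-(9) reduces to a weighted sum of the per-method maps. A minimal NumPy sketch, assuming the neighbors' sort lists have already been collected into an array (the final rescaling is added here for display and is not part of the paper's formulation):

```python
import numpy as np

def aggregate_saliency(saliency_maps, neighbor_aucs):
    """Fuse the M saliency maps of a test image by neighbor voting.
    saliency_maps: list of M float arrays (H, W), one per detection method.
    neighbor_aucs: array (x + k, M); row i holds the AUC values of all M
    methods on the ith neighbor image, taken from its sort list."""
    auc = np.asarray(neighbor_aucs).sum(axis=0)    # Eq. (7): vote weights
    stack = np.stack(saliency_maps, axis=-1)       # shape (H, W, M)
    fused = stack @ auc                            # Eq. (9): S(I) · auc^T
    return fused / (fused.max() + 1e-10)           # rescale to [0, 1]
```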
## 5. Spatial Coherence Optimization
In the saliency computation, the spatial relationship between adjacent regions is not considered, which introduces noise into the salient regions. In the field of image segmentation, researchers use the fully connected CRF (conditional random field) model [49] to achieve better segmentation results. Therefore, we use the fully connected CRF model to optimize the spatial coherence of the saliency maps. The objective function is defined as

(10) S(L) = -∑_i log P(l_i) + ∑_{i,j} θ_{i,j}(l_i, l_j),

where L is the binary labeling (salient or not) and P(l_i) is the probability of pixel x_i being salient; initially, P(1) = S_i and P(0) = 1 - S_i, where S_i is the saliency of pixel i. The pairwise potential θ_{i,j} is defined as

(11) θ_{i,j} = u(l_i, l_j) [ω_1 exp(-‖p_i - p_j‖²/2σ_1² - ‖I_i - I_j‖²/2σ_2²) + ω_2 exp(-‖p_i - p_j‖²/2σ_3²)],

where u(l_i, l_j) = 1 if l_i ≠ l_j and 0 otherwise. Both position and color information are considered in θ_{i,j}: p_i and p_j are the positions of pixels i and j, and I_i and I_j are their colors. The first kernel, weighted by ω_1, encourages adjacent pixels with similar colors to take similar saliency labels; σ_1 and σ_2 control the spatial proximity and the color similarity, respectively. The second kernel, weighted by ω_2, considers position information only; its purpose is to remove small isolated areas.
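One way to realize this optimization in code is the publicly available pydensecrf library, which implements mean-field inference for exactly this two-kernel energy. The sketch below is illustrative: the kernel parameters (sxy, srgb, compat) stand in for the paper's unreported ω and σ values.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_saliency(image, saliency, n_iters=5):
    """Spatially refine a saliency map with a fully connected CRF.
    image: uint8 RGB array (H, W, 3); saliency: float map (H, W) in [0, 1]."""
    h, w = saliency.shape
    s = np.clip(saliency, 1e-6, 1.0 - 1e-6)
    probs = np.stack([1.0 - s, s]).astype(np.float32)  # P(0), P(1) per pixel
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))        # -log P(l_i) term
    # omega_2 kernel: position only (removes small isolated regions)
    d.addPairwiseGaussian(sxy=3, compat=3)
    # omega_1 kernel: position + color similarity
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(image), compat=5)
    q = np.asarray(d.inference(n_iters))               # mean-field inference
    return q[1].reshape(h, w)                          # P(salient) per pixel
```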
## 6. Construction of Saliency Dataset of Social Images
The paper focuses on salient region detection in social images, so it is necessary to construct a new dataset of social images to promote further research on and evaluation of visual saliency models. The construction of the dataset is described in detail below.
### 6.1. Data Source
The NUS-WIDE dataset [50] is a web image dataset constructed by the NUS lab for media search; its images and tags come from Flickr, a popular social website. We randomly select 10000 images from the NUS-WIDE dataset. The images come from thirty-eight folders of the NUS-WIDE dataset, including carvings, castle, cat, cell phones, chairs, chrysanthemums, classroom, cliff, computers, cooling tower, coral, cordless, cougar, courthouse, cow, coyote, dance, dancing, deer, den, desert, detail, diver, dock, close-up, cloverleaf, cubs, doll, dog, dogs, fish, flag, eagle, elephant, elk, f-16, facade, and fawn.
### 6.2. Salient Region Annotation
Since bounding boxes for salient regions are rough and cannot reveal region boundaries, we adopt pixel-wise annotation. In the annotation process, nine subjects are asked to mark the attractive regions according to their first glance at the image.

To reduce label inconsistency across the annotation results, a pixel consistency score is computed: a pixel is considered salient if at least 50% of the subjects have selected it [23]. Finally, two subjects use Adobe Photoshop to segment the salient regions.
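As a sketch, this majority-vote consistency check amounts to thresholding per-pixel vote counts (the function name is illustrative):

```python
import numpy as np

def consensus_mask(annotations, ratio=0.5):
    """Pixels selected by at least `ratio` of the subjects.
    annotations: boolean array (n_subjects, H, W), one mask per subject."""
    n_subjects = annotations.shape[0]
    votes = annotations.sum(axis=0)                 # per-pixel vote count
    return votes >= np.ceil(ratio * n_subjects)     # e.g., 5 of 9 subjects
```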
### 6.3. Image Selection
First, 10000 images are randomly selected from the NUS-WIDE dataset. Then, the images are further filtered by the following criteria (a small sketch of the size grading in criterion (2) follows the list):

(1) The color contrast between any salient region and the corresponding image is less than 0.7.
(2) Salient regions are rich in size: the proportion of a salient region to the corresponding image covers 10 grades, [0, 0.1), [0.1, 0.2), [0.2, 0.3), [0.3, 0.4), [0.4, 0.5), [0.5, 0.6), [0.6, 0.7), [0.7, 0.8), [0.8, 0.9), [0.9, 1].
(3) At least ten percent of the salient regions are connected with the image boundaries.

After 5 rounds of selection, the dataset contains 5429 images. In the new dataset, images have one or more salient regions, and the positions of salient regions are not limited to image centers. The sizes of salient regions vary, and a great number of images have complex or cluttered backgrounds. There are 78 tags, drawn from the 81 tags of the NUS-WIDE dataset. All of these properties make salient region detection challenging.
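A minimal sketch of assigning a region's size grade under the 0.1-wide binning of criterion (2) (the function name is illustrative):

```python
def size_grade(mask):
    """Grade 0-9 of the salient-region-to-image area proportion (0.1 bins).
    mask: boolean NumPy array (H, W) marking one salient region."""
    ratio = mask.sum() / mask.size
    return min(int(ratio * 10), 9)   # the closed interval [0.9, 1] maps to 9
```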
### 6.4. Typical Images of the New Dataset
Typical examples of images, ground truth masks, and tags are shown below. Images can have one or multiple salient regions (Figure 8), may have cluttered and complex backgrounds (Figure 9), and contain salient regions at various size levels (Figure 10).

Figure 8: Images with one or multiple salient regions.
Figure 9: Images with cluttered and complex backgrounds.
Figure 10: Images in various size levels.
## 7. Experiments
### 7.1. Experimental Setup
#### 7.1.1. Experiments on the New Dataset
The aim of the paper is salient region detection in social images, so the main experimental dataset is our new dataset, abbreviated as TBD (Tag Based Dataset).

We selected 20 object tags: bear, birds, boats, buildings, cars, cat, computer, coral, cow, dog, elk, fish, flowers, fox, horses, person, plane, tiger, train, and zebra. Correspondingly, 20 RCNN object detectors were chosen to extract RCNN features; the top 1000 proposals of each detector were used to compute them.

The proposed deep learning based detection method is abbreviated as DBS (Deep Based Saliency). DBS is compared with 27 state-of-the-art methods in Section 7.2.1: CB [34], FT [23], SEG [44], RC [14], SVO [17], LRR [39], SF [45], GS [37], CA [33], SS [47], HS [7], TD [48], MR [24], DRFI [25], PCA [41], HM [38], GC [36], MC [40], DSR [35], SBF [43], BD [42], SMD [46], BL [32], MCDL [9], MDF [8], LEGS [10], and RFCN [11]. These methods are not only widely used but also cover many method types. In addition, we verify the performance of the aggregation method in Section 7.2.2.
#### 7.1.2. Experiments on State-of-the-Art Datasets
We also carried out experiments on six state-of-the-art datasets to validate our method: MSRA1000 [23], DUT-OMRON [24], ECSSD [7], HKU-IS [8], PASCAL-S [51], and SOD [27]. Among them, SOD [27] comes from the segmentation field, while the others come from the saliency field. Because these datasets have no image-level tags, we extract the objectness feature [19] instead. Objectness is a kind of high-level semantic cue and is thus similar in spirit to the tag feature. The variant of DBS that uses the objectness feature instead of the tag feature is abbreviated as OBS (Objectness Based Saliency). OBS is compared with 11 state-of-the-art methods: FT [23], RC [14], SF [45], HS [7], MR [24], DRFI [25], GC [36], MC [40], BD [42], MDF [8], and LEGS [10].
#### 7.1.3. Evaluation Criteria
We adopt popular performance measures to quantitatively evaluate the results: PR (Precision-Recall) curves, ROC (Receiver Operating Characteristic) curves, the F-measure, the AUC (Area under ROC Curve), and the MAE (Mean Absolute Error).
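As a sketch, MAE and the F-measure can be computed as below; the β² = 0.3 weighting and the 2×-mean adaptive threshold are common conventions in the saliency literature rather than values stated in the paper:

```python
import numpy as np

def mae(saliency, gt):
    """Mean Absolute Error between a saliency map and a binary mask in [0, 1]."""
    return np.mean(np.abs(saliency.astype(float) - gt.astype(float)))

def f_measure(saliency, gt, beta2=0.3, threshold=None):
    """F-measure with precision weighted by beta^2 = 0.3."""
    if threshold is None:
        threshold = 2.0 * saliency.mean()          # common adaptive threshold
    pred = saliency >= threshold
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
```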
### 7.2. Experiments on the New Dataset TBD
#### 7.2.1. Experiments on the Deep Learning Based Detection Method
DBS is compared with 27 state-of-the-art methods. The results are given in Table 1 and Figure 11.

Table 1: F-measure, AUC, and MAE of DBS and 27 state-of-the-art methods.

| Method | F-measure | AUC | MAE |
| --- | --- | --- | --- |
| CB | 0.5472 | 0.7971 | 0.2662 |
| SEG | 0.4917 | 0.7588 | 0.3592 |
| SVO | 0.3498 | 0.8361 | 0.409 |
| SF | 0.3659 | 0.7541 | 0.2077 |
| CA | 0.5161 | 0.8287 | 0.2778 |
| TD | 0.5432 | 0.8081 | 0.2333 |
| SS | 0.2516 | 0.6714 | 0.2499 |
| HS | 0.5576 | 0.7883 | 0.2747 |
| DRFI | 0.5897 | 0.8623 | 0.2063 |
| HM | 0.4892 | 0.7945 | 0.2263 |
| BD | 0.5443 | 0.8185 | 0.1955 |
| BL | 0.5823 | 0.8562 | 0.266 |
| MR | 0.5084 | 0.7753 | 0.229 |
| PCA | 0.5392 | 0.8439 | 0.2778 |
| FT | 0.3559 | 0.6126 | 0.2808 |
| RC | 0.5307 | 0.8105 | 0.3128 |
| LRR | 0.5124 | 0.7956 | 0.3067 |
| GS | 0.5164 | 0.8136 | 0.2056 |
| SMD | 0.6033 | 0.8437 | 0.1976 |
| GC | 0.5063 | 0.7511 | 0.2596 |
| DSR | 0.5035 | 0.8139 | 0.2105 |
| MC | 0.574 | 0.8427 | 0.2287 |
| SBF | 0.493 | 0.848 | 0.2325 |
| MCDL | 0.6559 | 0.8813 | 0.1457 |
| LEGS | 0.6124 | 0.8193 | 0.1844 |
| RFCN | 0.6768 | 0.8803 | 0.1476 |
| MDF | 0.6574 | 0.8483 | 0.1556 |
| DBS | 0.6621 | 0.8917 | 0.1505 |

Figure 11: Visual comparisons of DBS with 27 state-of-the-art methods. The order of images is: original image, ground truth mask, BL [32], CA [33], CB [34], DRFI [25], DSR [35], FT [23], GC [36], GS [37], HM [38], HS [7], LEGS [10], LRR [39], MC [40], MCDL [9], MR [24], PCA [41], BD [42], RC [14], RFCN [11], SBF [43], SEG [44], SF [45], SMD [46], MDF [8], SS [47], SVO [17], TD [48], and DBS.

Among the 28 methods in Table 1, the top four are all deep learning based: MCDL [9], RFCN [11], MDF [8], and DBS. To some extent, deep learning based detection methods outperform handcrafted feature based methods in terms of both the completeness and the accuracy of the saliency maps. The AUC value of DBS is the highest, its F-measure is slightly lower than that of RFCN [11], and its MAE is the third lowest, so the overall performance of DBS is good. Typical saliency maps are shown in Figure 11.
#### 7.2.2. Experiments on the Aggregation Method
The handcrafted feature based detection methods used as complementarities to DBS are DRFI [25], SMD [46], BL [32], and MC [40]. In neighbor searching, the number of tag neighbors is 4 and the number of appearance neighbors is 4.

In order to verify the effect of the neighbors, an appearance neighbor based method and a tag neighbor based method are evaluated separately. The appearance neighbor based aggregation method is abbreviated as ABS (Appearance Based Saliency), the tag neighbor based aggregation method as TBS (Tag Based Saliency), and the combined tag and appearance neighbor based aggregation method as FBS (Fusion Based Saliency). The detection performances of DBS, ABS, TBS, and FBS are compared in Table 2.

Table 2: F-measure, AUC, and MAE of DBS, ABS, TBS, and FBS.

| Metric | DBS | ABS | TBS | FBS |
| --- | --- | --- | --- | --- |
| F-measure | 0.6621 | 0.6652 | 0.6688 | 0.6712 |
| AUC | 0.8917 | 0.9061 | 0.9113 | 0.9166 |
| MAE | 0.1505 | 0.1497 | 0.1474 | 0.1452 |

The performance of TBS is better than that of ABS for the following reason. ABS relies on appearance based neighbor search, and images with similar appearance cannot guarantee similar saliency maps. TBS, in contrast, uses object information, and the same or similar objects ensure similar salient regions to some extent.

PR and ROC curves are shown in Figures 12 and 13. The PR and ROC curves of FBS are higher than those of the 27 state-of-the-art methods.

Figure 12: PR curves of FBS and 27 state-of-the-art methods.
Figure 13: ROC curves of FBS and 27 state-of-the-art methods.

Examples of typical saliency maps of the FBS and DBS methods are shown in Figure 14. It can be seen that the aggregation results are more complete and the details are better.

Figure 14: Visual comparisons of FBS with DBS. The order of images is: original image, ground truth mask, FBS, and DBS.
### 7.3. Experiments on State-of-the-Art Datasets
The experimental results are given in Table 3. The AUC values of OBS are the highest on all six datasets, its F-measure values are the highest on five of the six (on MSRA1000, MC is marginally higher), and its MAE values are the lowest or close to the lowest, so the overall performance of OBS is the best. However, the improvements of OBS are modest because the objectness feature is only an approximation of an accurate tag feature. We therefore believe the results would improve markedly with accurate tag annotations of the images.

Table 3: F-measure, AUC, and MAE of OBS and 11 state-of-the-art methods on six state-of-the-art datasets.

| Dataset | Method | AUC | F-measure | MAE |
| --- | --- | --- | --- | --- |
| MSRA1000 | FT | 0.766 | 0.579 | 0.241 |
| MSRA1000 | DRFI | 0.966 | 0.845 | 0.112 |
| MSRA1000 | RC | 0.937 | 0.817 | 0.138 |
| MSRA1000 | GC | 0.863 | 0.719 | 0.159 |
| MSRA1000 | HS | 0.93 | 0.813 | 0.161 |
| MSRA1000 | MC | 0.975 | 0.894 | 0.054 |
| MSRA1000 | MR | 0.941 | 0.824 | 0.127 |
| MSRA1000 | SF | 0.886 | 0.7 | 0.166 |
| MSRA1000 | BD | 0.948 | 0.82 | 0.11 |
| MSRA1000 | MDF | 0.978 | 0.888 | 0.066 |
| MSRA1000 | LEGS | 0.958 | 0.87 | 0.081 |
| MSRA1000 | OBS | 0.984 | 0.893 | 0.061 |
| HKU-IS | FT | 0.71 | 0.477 | 0.244 |
| HKU-IS | DRFI | 0.95 | 0.776 | 0.167 |
| HKU-IS | RC | 0.903 | 0.726 | 0.165 |
| HKU-IS | GC | 0.777 | 0.588 | 0.211 |
| HKU-IS | HS | 0.884 | 0.71 | 0.213 |
| HKU-IS | MC | 0.928 | 0.798 | 0.102 |
| HKU-IS | MR | 0.87 | 0.714 | 0.174 |
| HKU-IS | SF | 0.828 | 0.59 | 0.173 |
| HKU-IS | BD | 0.91 | 0.726 | 0.14 |
| HKU-IS | MDF | 0.971 | 0.869 | 0.072 |
| HKU-IS | LEGS | 0.907 | 0.77 | 0.118 |
| HKU-IS | OBS | 0.976 | 0.871 | 0.078 |
| PASCAL-S | FT | 0.627 | 0.413 | 0.309 |
| PASCAL-S | DRFI | 0.899 | 0.69 | 0.21 |
| PASCAL-S | RC | 0.84 | 0.644 | 0.227 |
| PASCAL-S | GC | 0.727 | 0.539 | 0.266 |
| PASCAL-S | HS | 0.838 | 0.641 | 0.264 |
| PASCAL-S | MC | 0.907 | 0.74 | 0.145 |
| PASCAL-S | MR | 0.852 | 0.661 | 0.223 |
| PASCAL-S | SF | 0.746 | 0.493 | 0.24 |
| PASCAL-S | BD | 0.866 | 0.655 | 0.201 |
| PASCAL-S | MDF | 0.921 | 0.771 | 0.146 |
| PASCAL-S | LEGS | 0.891 | 0.752 | 0.157 |
| PASCAL-S | OBS | 0.927 | 0.778 | 0.141 |
| ECSSD | FT | 0.663 | 0.43 | 0.289 |
| ECSSD | DRFI | 0.943 | 0.782 | 0.17 |
| ECSSD | RC | 0.893 | 0.738 | 0.186 |
| ECSSD | GC | 0.767 | 0.597 | 0.233 |
| ECSSD | HS | 0.885 | 0.727 | 0.228 |
| ECSSD | MC | 0.948 | 0.837 | 0.1 |
| ECSSD | MR | 0.888 | 0.736 | 0.189 |
| ECSSD | SF | 0.793 | 0.548 | 0.219 |
| ECSSD | BD | 0.896 | 0.716 | 0.171 |
| ECSSD | MDF | 0.957 | 0.847 | 0.106 |
| ECSSD | LEGS | 0.925 | 0.827 | 0.118 |
| ECSSD | OBS | 0.968 | 0.856 | 0.112 |
| DUT-OMRON | FT | 0.682 | 0.381 | 0.25 |
| DUT-OMRON | DRFI | 0.931 | 0.664 | 0.15 |
| DUT-OMRON | RC | 0.859 | 0.599 | 0.189 |
| DUT-OMRON | GC | 0.757 | 0.495 | 0.218 |
| DUT-OMRON | HS | 0.86 | 0.616 | 0.227 |
| DUT-OMRON | MC | 0.929 | 0.703 | 0.088 |
| DUT-OMRON | MR | 0.853 | 0.61 | 0.187 |
| DUT-OMRON | SF | 0.81 | 0.495 | 0.147 |
| DUT-OMRON | BD | 0.894 | 0.63 | 0.144 |
| DUT-OMRON | MDF | 0.935 | 0.728 | 0.088 |
| DUT-OMRON | LEGS | 0.885 | 0.669 | 0.133 |
| DUT-OMRON | OBS | 0.943 | 0.731 | 0.091 |
| SOD | FT | 0.607 | 0.441 | 0.323 |
| SOD | DRFI | 0.89 | 0.699 | 0.223 |
| SOD | RC | 0.828 | 0.657 | 0.242 |
| SOD | GC | 0.692 | 0.526 | 0.284 |
| SOD | HS | 0.817 | 0.646 | 0.283 |
| SOD | MC | 0.868 | 0.727 | 0.179 |
| SOD | MR | 0.812 | 0.636 | 0.259 |
| SOD | SF | 0.714 | 0.516 | 0.267 |
| SOD | BD | 0.827 | 0.653 | 0.229 |
| SOD | MDF | 0.899 | 0.793 | 0.157 |
| SOD | LEGS | 0.836 | 0.732 | 0.195 |
| SOD | OBS | 0.907 | 0.801 | 0.163 |

Experiments on these state-of-the-art datasets validate the effectiveness of the proposed approach.
## 8. Conclusions
The paper focuses on salient region detection in social images. First, a deep learning based salient region detection method is proposed that considers both appearance features and tag features, with tag features detected by RCNN models. Second, tag neighbor and appearance neighbor features are added to the saliency aggregation model. Finally, a new dataset of challenging social images with pixel-wise saliency annotations is constructed, which can promote further research on and evaluation of visual saliency models.
---
*Source: 1014595-2018-01-01.xml* | 1014595-2018-01-01_1014595-2018-01-01.md | 62,125 | Saliency Aggregation: Multifeature and Neighbor Based Salient Region Detection for Social Images | Ye Liang; Congyan Lang; Jian Yu; Hongzhe Liu; Nan Ma | Applied Computational Intelligence and Soft Computing
(2018) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2018/1014595 | 1014595-2018-01-01.xml | ---
## Abstract
The popularity of social networks has brought the rapid growth of social images which have become an increasingly important image type. One of the most obvious attributes of social images is the tag. However, the sate-of-the-art methods fail to fully exploit the tag information for saliency detection. Thus this paper focuses on salient region detection of social images using both image appearance features and image tag cues. First, a deep convolution neural network is built, which considers both appearance features and tag features. Second, tag neighbor and appearance neighbor based saliency aggregation terms are added to the saliency model to enhance salient regions. The aggregation method is dependent on individual images and considers the performance gaps appropriately. Finally, we also have constructed a new large dataset of challenging social images and pixel-wise saliency annotations to promote further researches and evaluations of visual saliency models. Extensive experiments show that the proposed method performs well on not only the new dataset but also several state-of-the-art saliency datasets.
---
## Body
## 1. Introduction
Images and videos are two of the main ways for social entertainments and communications. With the popularity of photo sharing websites, social images have become an important type. The most obvious feature of social images is that they typically have several tags to describe the contents. How to use the tags for multimedia tasks, such as image indexing and retrieval [1, 2], has attracted increasing attention these days [3]. However, tags are seldom considered in state-of-the-art salient region detection models. Therefore, in this paper, we focus on salient region detection of social images using both appearance features and tag features.With the development of saliency detection, a large number of saliency detection algorithms have been developed [4–6]. It has been found that only relying on low-level features cannot achieve satisfactory results. The researches have proved that the hierarchical and deep architectures [7–12] for salient region detection are very effective. Thus, a salient region detection method based on deep learning is proposed in this paper. In addition, various priors are also very important in salient region detection [13], for example, face [14–16], car [17], color [14], center bias [13], and objectness [18–20]. Intuitively, the tags could potentially be important high-level semantic cues for salient region detection [16, 21]. Thus, tags are incorporated into our salient region detection models.It is observed that different methods perform differently in saliency analysis [22]. The performance of saliency varies with individual images. The problem also exists in deep feature based methods and handcrafted feature based methods. So handcrafted feature based detection methods can be considered as complementarities to deep feature based detection methods. However, the fusion process is without ground truth. It is nontrivial to determine which saliency map is better. The good saliency aggregation model should work on each individual image and be able to consider the performance gaps appropriately. Therefore, how to fuse saliency maps of different detection methods is a key issue to be solved in the paper.The framework of salient region detection is shown in Figure1. It includes two parts: deep learning based salient region detection and handcrafted feature based salient region detection. Deep features include CNN (convolution neural network) features and tag features. Finally, the spatial coherence of saliency maps is optimized through the fully connected conditional random field model.Figure 1
Framework.There are a variety of saliency detection benchmark datasets, either from saliency detection field [7, 8, 23–26] or from image segmentation field [27–29]. To promote further researches and evaluations on visual saliency detection for social images, it is necessary to construct a new dataset of social images.The paper focuses on salient region detection of social images. The contributions of this paper are twofold. First, a deep learning based salient region detection method for social images is proposed, considering both appearance features and tag features. Second, tag neighbor and appearance neighbor based saliency aggregation method is proposed, which fuses state-of-the-art handcrafted feature based detection methods with our deep learning based detection method. The aggregation method is dependent on each specific individual image and considers the saliency performance gaps appropriately. So the detection model has fully taken advantage of image tags.The rest of the paper is organized as follows. The deep learning based model is proposed in Section2. Section 3 discusses the handcrafted feature based detection models. In Section 4, the saliency aggregation method is proposed. Spatial coherence optimization is discussed in Section 5. In Section 6, the new saliency dataset of social images is introduced. In Section 7, extensive experiments are performed and analyzed. Finally, conclusions are given in Section 8.
## 2. Deep Learning Based Salient Region Detection
Deep learning based salient region detection uses two types of features, appearance based CNN (convolution neural network) features and social image tag features. They are discussed in the following subsections.
### 2.1. CNN Based Salient Region Detection
#### 2.1.1. Network Architecture
The deep network for appearance feature extraction has 8 layers [30] as shown in Figure 2. It includes 5 convolution layers, 2 fully connected layers, and 1 output layer. The bottom layer represents the input image and the adjacent upper layer represents the regions for deep feature extraction.Figure 2
Architecture of network.The convolution layers are responsible for the multiscale feature extraction. In order to achieve translation invariance, max pooling operation is performed after convolution operation. The learned feature is composed of 4096 elements. Fully connected layers are followed by ReLU (Rectified Linear Units) for nonlinear mapping. The dropout procedure is to avoid overfitting. ReLU performs the operation for each element in the following.(1)Rxi=max0,xi,where x is the feature of 4096 elements; if xi≥0, then max(0,xi)=xi; otherwise max(0,xi)=0,1≤i≤4096.The output layer uses softmax regression to calculate the probability of image patches being salient.
#### 2.1.2. Multiscale CNN Feature Computation
In an image, salient regions have uniqueness, scarcity, and obvious difference with their neighborhoods. Inspired by literature [8], in order to effectively compute the saliency, three types of differences are computed, that is, the difference between the region and its neighborhoods, the difference between the region and the whole image, and the difference between the region and image boundaries. To compute these differences, four types of regions are extracted: (1) rectangle sample in a sliding window fashion; (2) neighborhoods of rectangle sample; (3) boundaries of the image; (4) image area except rectangle sample. Four types of regions are shown in Figure 3.Figure 3
Four types of regions. (a) The red region denotes rectangle sample, (b) The blue regions denote neighborhoods of rectangle sample, (c) The blue regions denote boundaries of the image, (d) The blue regions denote image area except rectangle sample.
(a)
(b)
(c)
(d)
#### 2.1.3. Training of CNN Network
Caffe [30], an open source framework, is used for CNN training and testing. The deep convolution neural network is originally trained on the ImageNet dataset. We extract multiscale features for each region and fine-tune the network parameters. For each image in the training set, we crop samples into 51×51 RGB patches in a sliding window fashion with a stride of 10 pixels. To label the sample patches, if more than 70% pixels in the example are salient, then this sample label is 1; otherwise it is 0. Using this annotation strategy, we obtain sample regions Bi and corresponding labels li.In fine-tuning process, the cost function is the softmax loss with weight decay given by(2)Lθ=-1m∑i=1m∑j=01lli=jlogPli=j∣θ+λ∑k=18WkF2,where θ is the learnable parameter of convolution neural network, including the bias and weights of all layers; l· is the indicator function; P(li=j∣θ) is the probability of the ith sample being salient; λ is the parameter of weight decay; Wk is the weight of the kth layer. We use stochastic gradient descent to train the network with batch size m=256, λ=0.0005. The initial learning rate is 0.01. When the cost is stabilized, the learning rate is decreased by a factor of 0.1. 80 epochs are repeated for the training process. The dropout rate is set to 0.5 to avoid overfitting.
### 2.2. Tag Semantic Feature Computation
Due to the fact that objects are closely related to salient regions, we use object tags to compute semantic features. The probability that a region is a particular object reflects the possibility being a salient region to some extent. Therefore, the probabilities that regions are specific objects can be regarded as priors.RCNN (Regions with CNN) [31] is based on deep learning and has been widely used because of its excellent object detection accuracy. In the paper, RCNN is used to detect objects; thus tag semantics are transformed into RCNN features.Suppose there areX object detectors. For the kth detector, the detection process is as follows.(1) SelectN proposals which are more likely to contain the specific object.(2) Compute theith proposal probability pki of the ith proposal being the kth object, 1≤k≤X, 1≤i≤N. At the same time, each pixel in the ith proposal also has the same probability pki.(3) ForN proposals, each pixel has the score ∑i=1Npki∗fki being the kth object. If the pixel is contained by ith proposal, then fki=1, else fki=0.X dimension feature is obtained for each pixel after X objects detector detection. X dimension feature is normalized as f, f∈RX. Each dimension of f indicates probability being a specific object.
### 2.3. Fusion of CNN Based Saliency and Tag Semantic Features
Assume that the saliency map isSD and RCNN based semantic features is T; the fusion is (3)S=SD·expT.Tags are priors and play weights in fusion.S represents the fused saliency map.
## 2.1. CNN Based Salient Region Detection
### 2.1.1. Network Architecture
The deep network for appearance feature extraction has 8 layers [30] as shown in Figure 2. It includes 5 convolution layers, 2 fully connected layers, and 1 output layer. The bottom layer represents the input image and the adjacent upper layer represents the regions for deep feature extraction.Figure 2
Architecture of network.The convolution layers are responsible for the multiscale feature extraction. In order to achieve translation invariance, max pooling operation is performed after convolution operation. The learned feature is composed of 4096 elements. Fully connected layers are followed by ReLU (Rectified Linear Units) for nonlinear mapping. The dropout procedure is to avoid overfitting. ReLU performs the operation for each element in the following.(1)Rxi=max0,xi,where x is the feature of 4096 elements; if xi≥0, then max(0,xi)=xi; otherwise max(0,xi)=0,1≤i≤4096.The output layer uses softmax regression to calculate the probability of image patches being salient.
### 2.1.2. Multiscale CNN Feature Computation
In an image, salient regions have uniqueness, scarcity, and obvious difference with their neighborhoods. Inspired by literature [8], in order to effectively compute the saliency, three types of differences are computed, that is, the difference between the region and its neighborhoods, the difference between the region and the whole image, and the difference between the region and image boundaries. To compute these differences, four types of regions are extracted: (1) rectangle sample in a sliding window fashion; (2) neighborhoods of rectangle sample; (3) boundaries of the image; (4) image area except rectangle sample. Four types of regions are shown in Figure 3.Figure 3
Four types of regions. (a) The red region denotes rectangle sample, (b) The blue regions denote neighborhoods of rectangle sample, (c) The blue regions denote boundaries of the image, (d) The blue regions denote image area except rectangle sample.
(a)
(b)
(c)
(d)
### 2.1.3. Training of CNN Network
Caffe [30], an open source framework, is used for CNN training and testing. The deep convolution neural network is originally trained on the ImageNet dataset. We extract multiscale features for each region and fine-tune the network parameters. For each image in the training set, we crop samples into 51×51 RGB patches in a sliding window fashion with a stride of 10 pixels. To label the sample patches, if more than 70% pixels in the example are salient, then this sample label is 1; otherwise it is 0. Using this annotation strategy, we obtain sample regions Bi and corresponding labels li.In fine-tuning process, the cost function is the softmax loss with weight decay given by(2)Lθ=-1m∑i=1m∑j=01lli=jlogPli=j∣θ+λ∑k=18WkF2,where θ is the learnable parameter of convolution neural network, including the bias and weights of all layers; l· is the indicator function; P(li=j∣θ) is the probability of the ith sample being salient; λ is the parameter of weight decay; Wk is the weight of the kth layer. We use stochastic gradient descent to train the network with batch size m=256, λ=0.0005. The initial learning rate is 0.01. When the cost is stabilized, the learning rate is decreased by a factor of 0.1. 80 epochs are repeated for the training process. The dropout rate is set to 0.5 to avoid overfitting.
## 2.1.1. Network Architecture
The deep network for appearance feature extraction has 8 layers [30] as shown in Figure 2. It includes 5 convolution layers, 2 fully connected layers, and 1 output layer. The bottom layer represents the input image and the adjacent upper layer represents the regions for deep feature extraction.Figure 2
Architecture of network.The convolution layers are responsible for the multiscale feature extraction. In order to achieve translation invariance, max pooling operation is performed after convolution operation. The learned feature is composed of 4096 elements. Fully connected layers are followed by ReLU (Rectified Linear Units) for nonlinear mapping. The dropout procedure is to avoid overfitting. ReLU performs the operation for each element in the following.(1)Rxi=max0,xi,where x is the feature of 4096 elements; if xi≥0, then max(0,xi)=xi; otherwise max(0,xi)=0,1≤i≤4096.The output layer uses softmax regression to calculate the probability of image patches being salient.
## 2.1.2. Multiscale CNN Feature Computation
In an image, salient regions have uniqueness, scarcity, and obvious difference with their neighborhoods. Inspired by literature [8], in order to effectively compute the saliency, three types of differences are computed, that is, the difference between the region and its neighborhoods, the difference between the region and the whole image, and the difference between the region and image boundaries. To compute these differences, four types of regions are extracted: (1) rectangle sample in a sliding window fashion; (2) neighborhoods of rectangle sample; (3) boundaries of the image; (4) image area except rectangle sample. Four types of regions are shown in Figure 3.Figure 3
Four types of regions. (a) The red region denotes rectangle sample, (b) The blue regions denote neighborhoods of rectangle sample, (c) The blue regions denote boundaries of the image, (d) The blue regions denote image area except rectangle sample.
(a)
(b)
(c)
(d)
## 2.1.3. Training of CNN Network
Caffe [30], an open source framework, is used for CNN training and testing. The deep convolution neural network is originally trained on the ImageNet dataset. We extract multiscale features for each region and fine-tune the network parameters. For each image in the training set, we crop samples into 51×51 RGB patches in a sliding window fashion with a stride of 10 pixels. To label the sample patches, if more than 70% pixels in the example are salient, then this sample label is 1; otherwise it is 0. Using this annotation strategy, we obtain sample regions Bi and corresponding labels li.In fine-tuning process, the cost function is the softmax loss with weight decay given by(2)Lθ=-1m∑i=1m∑j=01lli=jlogPli=j∣θ+λ∑k=18WkF2,where θ is the learnable parameter of convolution neural network, including the bias and weights of all layers; l· is the indicator function; P(li=j∣θ) is the probability of the ith sample being salient; λ is the parameter of weight decay; Wk is the weight of the kth layer. We use stochastic gradient descent to train the network with batch size m=256, λ=0.0005. The initial learning rate is 0.01. When the cost is stabilized, the learning rate is decreased by a factor of 0.1. 80 epochs are repeated for the training process. The dropout rate is set to 0.5 to avoid overfitting.
## 2.2. Tag Semantic Feature Computation
Due to the fact that objects are closely related to salient regions, we use object tags to compute semantic features. The probability that a region is a particular object reflects the possibility being a salient region to some extent. Therefore, the probabilities that regions are specific objects can be regarded as priors.RCNN (Regions with CNN) [31] is based on deep learning and has been widely used because of its excellent object detection accuracy. In the paper, RCNN is used to detect objects; thus tag semantics are transformed into RCNN features.Suppose there areX object detectors. For the kth detector, the detection process is as follows.(1) SelectN proposals which are more likely to contain the specific object.(2) Compute theith proposal probability pki of the ith proposal being the kth object, 1≤k≤X, 1≤i≤N. At the same time, each pixel in the ith proposal also has the same probability pki.(3) ForN proposals, each pixel has the score ∑i=1Npki∗fki being the kth object. If the pixel is contained by ith proposal, then fki=1, else fki=0.X dimension feature is obtained for each pixel after X objects detector detection. X dimension feature is normalized as f, f∈RX. Each dimension of f indicates probability being a specific object.
## 2.3. Fusion of CNN Based Saliency and Tag Semantic Features
Assume that the saliency map isSD and RCNN based semantic features is T; the fusion is (3)S=SD·expT.Tags are priors and play weights in fusion.S represents the fused saliency map.
## 3. Handcrafted Feature Based Salient Region Detection
It is observed that different methods perform differently in saliency analysis [22]. Although the overall detection effect based on deep features is better than that based on handcrafted features, the differences still exist on individual images. So handcrafted feature based salient maps can be considered as complementarities to deep feature based saliency maps. In Figure 4, the first column shows the original social images; the second shows the ground truth masks; the third shows the salient maps of DRFI method [25] which is based on handcrafted features; the last represents the salient maps of MDF method [8], which are based on deep features. We can see that the last column includes incomplete parts, unclear boundaries, and false detections. So in the paper, some state-of-the-art salient region detection methods based on handcrafted features are selected as complementarities to our proposed deep detection method.Figure 4
Examples of saliency detection results. Images in each column are original images, ground truth masks, salient maps of method DRFI [25], and salient maps of method MDF [8], respectively.
## 4. Saliency Aggregation
### 4.1. Main Idea
It is observed that if a salient region detection method has good effects on a social image, this method has great possibility to get sound effect on similar images. The main idea of aggregation is based on this assumption.In training process, sort lists of all detection methods on all images can be achieved. Sort lists can be seen as priors in testing.In testing process, we searchKNN (K nearest neighbors) images similar to the test image in the training set. Moreover, sort lists of KNN images are known in the training stage. KNN images can vote for detection methods through sort lists. Thus, the test image is able to obtain its sort list based on voting. Salient map of test image can be computed by aggregating its salient maps of different methods using sort lists.Training process and testing process are shown in Figures5 and 6.Figure 5
Training process.Figure 6
Testing process.
### 4.2. Training Process
Given an imageI in the training set, its ground truth is given by G; its salient maps using different detection methods is denoted as S=S1,S2,S3,…,Si,…,SM. In this saliency map set, M is the number of detection methods, and Si is the salient map of the ith method.For every detection method, its salient maps can be compared with ground truthG and yield AUC (Area under ROC Curve) values. The greater the AUC value, the better the saliency detection performance. After AUC value computation, sort lists of all methods can be obtained.For convenience, it is assumed that there are four detection methods. Sort lists are shown in Figure7. The data structure is single linked list. Data domain of header node denotes image and pointer domain of header node points to data node. Nonheader node includes three domains: the first domain is the AUC value, the second domain is the method index, and the last domain is a pointer.Figure 7
Images and their sort lists.
### 4.3. Testing Process
A social image has two parts: image and corresponding tags. In the testing set, imageI and its tag set T=t1,t2,…,ti,…,tN are given, where N is the number of tags. We search its neighbors through tag semantics and image appearance. Sort lists of neighbors can vote for salient maps of image I.
#### 4.3.1. Tag Based Neighbor Search
There are two types of tags: object tags and scene tags. Because objects are closely related to salient regions, object tags are used in semantic search.There are 37 object tags in the new dataset, including animal, bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, whale, vehicles, boats, cars, plane, train, person, police, military, tattoo, computer, coral, flowers, flags, tower, statue, sign, book, sun, leaf, sand, tree, food, rocks, and toy.In these categories, animal has super class and subclass relationship with bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, and whale; vehicles have super class and subclass relationship with boats, cars, plane, and train; person has super class and subclass relationship with police, military, and tattoo.Although super class and subclass have great relevance in the class definition, many subclasses have a variety of differences in environment and appearance. So, for animal class, subclasses need exact matching to find neighbors; for vehicles class, subclasses need exact matching to find neighbors; because of particularity of class people, if there is no exact matching of subclass, matching can be performed at person level.
#### 4.3.2. Appearance Based Neighbor Search
256 dimensional histogram of RGB color space is used andχ2 distance is computed.
### 4.4. Vote Based Saliency Maps Aggregation
Suppose the test image isI, the number of tag neighbors is k, and the number of appearance neighbors is k.After tag based search in the training set, the detected neighbor number isy. If y is bigger than k, then k images are selected according to appearance similarities from y images. Finally, tag based neighbor set is given as(4)ImgT=Img1T,Img2T,…,ImgiT,…,ImgxT,where x is the final number of neighbors; if y≥k, then x=k; otherwise, x=y.After appearance based similarity computation in the training set,k nearest neighbors are selected as(5)ImgA=Img1A,Img2A,…,ImgiA,…,ImgkA.Merge sets (4) and (5) and get the set as (6)Img=Img1,Img2,…,Imgx,…,Imgx+k.Each neighbor image has a sort list and contains the AUC values of all detection methods. The AUC values can vote for each detection method. Vote weights are summed as(7)auc=∑i=1x+kauci1,∑i=1x+kauci2,…,∑i=1x+kaucij,…,∑i=1x+kauciM.In∑i=1x+kaucij, i is the ith neighbor and j is the jth detection method. M is the number of detection models.The salient map set of imageI is(8)SI=S1I,S2I,…,Sip,…,SMI,where Sj(I) is the saliency map of the jth detection method.The fused saliency map can be computed as follows.(9)SFI=SI·aucT.
## 4.1. Main Idea
It is observed that if a salient region detection method has good effects on a social image, this method has great possibility to get sound effect on similar images. The main idea of aggregation is based on this assumption.In training process, sort lists of all detection methods on all images can be achieved. Sort lists can be seen as priors in testing.In testing process, we searchKNN (K nearest neighbors) images similar to the test image in the training set. Moreover, sort lists of KNN images are known in the training stage. KNN images can vote for detection methods through sort lists. Thus, the test image is able to obtain its sort list based on voting. Salient map of test image can be computed by aggregating its salient maps of different methods using sort lists.Training process and testing process are shown in Figures5 and 6.Figure 5
Training process.Figure 6
Testing process.
## 4.2. Training Process
Given an image I in the training set, its ground truth is denoted by G, and its saliency maps from the different detection methods are denoted as S = {S1, S2, S3, ..., Si, ..., SM}, where M is the number of detection methods and Si is the saliency map of the ith method.For every detection method, its saliency map is compared with the ground truth G to yield an AUC (Area under ROC Curve) value; the greater the AUC value, the better the saliency detection performance. After the AUC values are computed, the sort lists of all methods can be obtained.For convenience, assume there are four detection methods; their sort lists are shown in Figure 7. The data structure is a singly linked list: the data domain of the header node denotes the image, and its pointer domain points to the first data node. Each non-header node has three domains: the AUC value, the method index, and a pointer.Figure 7
Images and their sort lists.
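To make the training bookkeeping concrete, the sketch below (Python, with hypothetical names; it assumes binary ground-truth masks and saliency maps in [0, 1] stored as NumPy arrays, with both salient and background pixels present in each mask) computes the per-image AUC of each method and keeps the methods sorted by AUC, playing the role of one linked sort list in Figure 7.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def build_sort_list(gt_mask, saliency_maps):
    """Rank detection methods on one training image by AUC.

    gt_mask: (H, W) binary ground-truth array.
    saliency_maps: dict mapping method name -> (H, W) map in [0, 1].
    Returns a list of (auc, method) pairs, best method first.
    """
    labels = gt_mask.reshape(-1).astype(int)
    entries = []
    for method, smap in saliency_maps.items():
        scores = smap.reshape(-1)
        entries.append((roc_auc_score(labels, scores), method))
    entries.sort(reverse=True)  # highest AUC first, as in the sort lists
    return entries

# One sort list per training image, mirroring Figure 7:
# sort_lists = {img_id: build_sort_list(gt[img_id], maps[img_id])
#               for img_id in training_ids}
```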
## 4.3. Testing Process
A social image has two parts: the image itself and its corresponding tags. In the testing set, an image I and its tag set T = {t1, t2, ..., ti, ..., tN} are given, where N is the number of tags. We search for its neighbors through tag semantics and image appearance; the sort lists of the neighbors then vote for the saliency maps of image I.
### 4.3.1. Tag Based Neighbor Search
There are two types of tags: object tags and scene tags. Because objects are closely related to salient regions, object tags are used in the semantic search.There are 37 object tags in the new dataset: animal, bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, whale, vehicles, boats, cars, plane, train, person, police, military, tattoo, computer, coral, flowers, flags, tower, statue, sign, book, sun, leaf, sand, tree, food, rocks, and toy.Among these categories, animal is the superclass of bear, birds, cat, fox, zebra, horses, tiger, cow, dog, elk, fish, and whale; vehicles is the superclass of boats, cars, plane, and train; person is the superclass of police, military, and tattoo.Although a superclass and its subclasses are closely related by definition, many subclasses differ greatly in environment and appearance. Therefore, subclasses of animal require exact subclass matching to find neighbors, as do subclasses of vehicles; owing to the particularity of the person class, if there is no exact subclass match, matching can be performed at the person level.
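The matching rule can be sketched as follows; the tag sets and the person-level fallback logic are our reading of the rules in this subsection, not code from the paper.

```python
# Hierarchical tag matching (a sketch under the stated assumptions).
ANIMAL = {"bear", "birds", "cat", "fox", "zebra", "horses", "tiger",
          "cow", "dog", "elk", "fish", "whale"}
VEHICLES = {"boats", "cars", "plane", "train"}
PERSON = {"police", "military", "tattoo"}

def tags_match(query_tags, candidate_tags):
    """Return True if the candidate image is a tag-based neighbor."""
    q, c = set(query_tags), set(candidate_tags)
    # Animal and vehicle subclasses require an exact subclass match.
    if (q & ANIMAL) & c:
        return True
    if (q & VEHICLES) & c:
        return True
    # Person subclasses: prefer an exact subclass match, otherwise
    # fall back to matching at the "person" level.
    if q & (PERSON | {"person"}):
        if (q & PERSON) & c:
            return True
        return "person" in c or bool(PERSON & c)
    # Remaining object tags (computer, coral, flowers, ...) match exactly.
    return bool(q & c)
```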
### 4.3.2. Appearance Based Neighbor Search
A 256-dimensional histogram in RGB color space is used as the appearance feature, and the χ2 distance between histograms measures appearance similarity.
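A minimal sketch of the appearance feature and distance, assuming an 8 × 8 × 4 = 256 bin layout over the RGB channels (the paper does not specify the per-channel split, so this layout is an assumption):

```python
import numpy as np

def rgb_histogram(img, bins_per_channel=(8, 8, 4)):
    """256-bin RGB color histogram of an (H, W, 3) uint8 image."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=bins_per_channel,
                             range=[(0, 256)] * 3)
    return hist.reshape(-1)

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (normalized first)."""
    h1 = h1 / (h1.sum() + eps)
    h2 = h2 / (h2.sum() + eps)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```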
## 4.4. Vote Based Saliency Maps Aggregation
Suppose the test image is I, the number of tag neighbors is k, and the number of appearance neighbors is also k.After the tag-based search in the training set, the number of detected neighbors is y. If y is larger than k, then k images are selected from the y images according to appearance similarity. The tag-based neighbor set is

(4) $\mathrm{Img}^T = \{\mathrm{Img}_1^T, \mathrm{Img}_2^T, \ldots, \mathrm{Img}_i^T, \ldots, \mathrm{Img}_x^T\}$,

where x is the final number of neighbors: if y ≥ k, then x = k; otherwise, x = y.After the appearance-based similarity computation in the training set, the k nearest neighbors are selected as

(5) $\mathrm{Img}^A = \{\mathrm{Img}_1^A, \mathrm{Img}_2^A, \ldots, \mathrm{Img}_i^A, \ldots, \mathrm{Img}_k^A\}$.

Merging sets (4) and (5) gives

(6) $\mathrm{Img} = \{\mathrm{Img}_1, \mathrm{Img}_2, \ldots, \mathrm{Img}_x, \ldots, \mathrm{Img}_{x+k}\}$.

Each neighbor image has a sort list containing the AUC values of all detection methods, and these AUC values vote for each detection method. The vote weights are summed as

(7) $\mathbf{auc} = \left( \sum_{i=1}^{x+k} \mathrm{auc}_{i1}, \sum_{i=1}^{x+k} \mathrm{auc}_{i2}, \ldots, \sum_{i=1}^{x+k} \mathrm{auc}_{ij}, \ldots, \sum_{i=1}^{x+k} \mathrm{auc}_{iM} \right)$,

where, in $\sum_{i=1}^{x+k} \mathrm{auc}_{ij}$, i indexes the ith neighbor, j indexes the jth detection method, and M is the number of detection models.The saliency map set of image I is

(8) $S_I = \{S_1(I), S_2(I), \ldots, S_j(I), \ldots, S_M(I)\}$,

where $S_j(I)$ is the saliency map of the jth detection method.The fused saliency map is then computed as

(9) $S_F(I) = S_I \cdot \mathbf{auc}^T$.
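A minimal sketch of the voting and fusion step follows; the normalization of the vote weights and the final rescaling to [0, 1] are assumptions, since the paper leaves them implicit.

```python
import numpy as np

def fuse_saliency(neighbor_sort_lists, test_maps, methods):
    """AUC-voted fusion of the test image's saliency maps, eqs. (7)-(9).

    neighbor_sort_lists: list over the x+k neighbors; each entry maps
        method name -> AUC value recorded in the training stage.
    test_maps: dict mapping method name -> (H, W) saliency map of the
        test image.
    """
    # Equation (7): sum each method's AUC over all neighbors.
    weights = np.array([sum(sl[m] for sl in neighbor_sort_lists)
                        for m in methods])
    weights /= weights.sum()                 # normalize votes (assumption)
    # Equation (9): weighted sum of the M saliency maps.
    fused = sum(w * test_maps[m] for w, m in zip(weights, methods))
    return fused / (fused.max() + 1e-10)     # rescale to [0, 1] (assumption)
```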
## 5. Spatial Coherence Optimization
In the saliency computation, the spatial relationship of adjacent regions is not considered, which introduces noise in the salient regions. In the field of image segmentation, researchers use the fully connected CRF (conditional random field) model [49] to achieve better segmentation results; we therefore use the fully connected CRF model to optimize the spatial coherence of the saliency maps.The objective function is defined as

(10) $S(L) = -\sum_i \log P(l_i) + \sum_{i,j} \theta_{i,j}(l_i, l_j)$,

where L is the binary labeling (salient or not) and $P(l_i)$ is the probability of pixel $x_i$ being salient. Initially, $P(1) = S_i$ and $P(0) = 1 - S_i$, where $S_i$ is the saliency of pixel i.The pairwise term $\theta_{i,j}$ is defined as

(11) $\theta_{i,j} = u(l_i, l_j)\left[ \omega_1 \exp\left( -\frac{\lVert p_i - p_j \rVert^2}{2\sigma_1^2} - \frac{\lVert I_i - I_j \rVert^2}{2\sigma_2^2} \right) + \omega_2 \exp\left( -\frac{\lVert p_i - p_j \rVert^2}{2\sigma_3^2} \right) \right]$,

where $u(l_i, l_j) = 1$ if $l_i \neq l_j$ and 0 otherwise. Both position and color information are considered in $\theta_{i,j}$: $p_i$ and $p_j$ are the positions of pixels i and j, and $I_i$ and $I_j$ are their colors. The first (appearance) kernel encourages adjacent pixels with similar colors to have similar saliency, with $\sigma_1$ controlling distance proximity and $\sigma_2$ controlling color similarity. The second (smoothness) kernel considers position information only; its purpose is to remove small isolated areas.
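For illustration, the pairwise term of (11) for a single pixel pair can be written as below. In practice the fully connected CRF of [49] is evaluated with efficient approximate inference rather than pair by pair, and the weights and sigmas here are illustrative values, not the paper's settings.

```python
import numpy as np

def pairwise_potential(p_i, p_j, I_i, I_j, l_i, l_j,
                       w1=10.0, w2=3.0, s1=60.0, s2=10.0, s3=3.0):
    """Pairwise term theta_{i,j} of eq. (11) for one pixel pair.

    p_*: pixel positions, I_*: pixel colors (NumPy vectors);
    w1, w2, s1, s2, s3 are illustrative parameter values.
    """
    if l_i == l_j:                 # u(l_i, l_j) = 0 when labels agree
        return 0.0
    dp2 = np.sum((p_i - p_j) ** 2)
    dI2 = np.sum((I_i - I_j) ** 2)
    appearance = w1 * np.exp(-dp2 / (2 * s1 ** 2) - dI2 / (2 * s2 ** 2))
    smoothness = w2 * np.exp(-dp2 / (2 * s3 ** 2))
    return appearance + smoothness
```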
## 6. Construction of Saliency Dataset of Social Images
The paper focuses on salient region detection in social images, so it is necessary to construct a new dataset of social images to promote further research on and evaluation of visual saliency models. The construction is discussed in detail below.
### 6.1. Data Source
The NUS-WIDE dataset [50] is a web image dataset constructed by the NUS lab for media search. The images and tags of this dataset come from Flickr, a popular social website. We randomly select 10000 images from the NUS-WIDE dataset. The images come from thirty-eight folders of the NUS-WIDE dataset: carvings, castle, cat, cell phones, chairs, chrysanthemums, classroom, cliff, computers, cooling tower, coral, cordless, cougar, courthouse, cow, coyote, dance, dancing, deer, den, desert, detail, diver, dock, close-up, cloverleaf, cubs, doll, dog, dogs, fish, flag, eagle, elephant, elk, f-16, facade, and fawn.
### 6.2. Salient Region Annotation
Since bounding boxes for salient regions are rough and cannot reveal region boundaries, we adopt pixel-wise annotation. In the annotation process, nine subjects are asked to mark the regions that attract them at first glance at each image.To reduce label inconsistency across annotators, a pixel consistency score is computed: a pixel is considered salient if at least 50% of the subjects selected it [23].Finally, two subjects use Adobe Photoshop to segment the salient regions.
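A minimal sketch of the 50% agreement rule, assuming the nine annotations are stacked as a (subjects, H, W) binary array:

```python
import numpy as np

def consistency_mask(annotations, threshold=0.5):
    """Pixels marked salient by at least `threshold` of the subjects.

    annotations: (num_subjects, H, W) binary arrays, one per subject.
    Returns an (H, W) boolean mask of consistently salient pixels.
    """
    votes = np.mean(annotations.astype(float), axis=0)
    return votes >= threshold   # the 50% agreement rule of the paper
```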
### 6.3. Image Selection
First, 10000 images are randomly selected from the NUS-WIDE dataset. Then, the images are further filtered by the following criteria.

(1) The color contrast between any salient region and the corresponding image is less than 0.7.

(2) Salient regions are rich in size: the proportion of a salient region to the corresponding image covers 10 grades, [0, 0.1), [0.1, 0.2), [0.2, 0.3), [0.3, 0.4), [0.4, 0.5), [0.5, 0.6), [0.6, 0.7), [0.7, 0.8), [0.8, 0.9), [0.9, 1].

(3) At least ten percent of the salient regions are connected to the image boundaries.

After five rounds of selection, the dataset contains 5429 images.In the new dataset, images have one or more salient regions, and the positions of the salient regions are not limited to the image centers. The sizes of the salient regions vary widely, and a great number of images have complex or cluttered backgrounds. There are 78 tags, drawn from the 81 tags of the NUS-WIDE dataset. All of these properties make salient region detection challenging.
### 6.4. Typical Images of the New Dataset
In this section, typical examples of images, ground truth masks, and tags are presented. Images can have one or multiple salient regions (Figure 8), images may have cluttered and complex backgrounds (Figure 9), and salient regions come in a wide range of sizes (Figure 10).Figure 8
Images with one or multiple salient regions.Figure 9
Images with cluttered and complex backgrounds.Figure 10
Images in various size levels.
## 7. Experiments
### 7.1. Experimental Setup
#### 7.1.1. Experiments on the New Dataset
The aim of the paper is salient region detection in social images, so the main experimental dataset is our new dataset, abbreviated as TBD (Tag Based Dataset).We selected 20 object tags: bear, birds, boats, buildings, cars, cat, computer, coral, cow, dog, elk, fish, flowers, fox, horses, person, plane, tiger, train, and zebra. Correspondingly, 20 RCNN object detectors were used to extract RCNN features, with the top 1000 proposals of each detector used to compute the features.The proposed deep learning based detection method is abbreviated as DBS (Deep Based Saliency). DBS is compared with 27 state-of-the-art methods in Section 7.2.1: CB [34], FT [23], SEG [44], RC [14], SVO [17], LRR [39], SF [45], GS [37], CA [33], SS [47], HS [7], TD [48], MR [24], DRFI [25], PCA [41], HM [38], GC [36], MC [40], DSR [35], SBF [43], BD [42], SMD [46], BL [32], MCDL [9], MDF [8], LEGS [10], and RFCN [11]. These methods are not only widely used but also cover many types of approaches.In addition, we verify the performance of the aggregation method in Section 7.2.2.
#### 7.1.2. Experiments on State-of-the-Art Datasets
We also carried out experiments on six state-of-the-art datasets to validate our method: MSRA1000 [23], DUT-OMRON [24], ECSSD [7], HKU-IS [8], PASCAL-S [51], and SOD [27]. Among these, SOD [27] originates from the segmentation field, while the others come from the saliency field. Because these datasets have no image-level tags, we extract an objectness feature [19] instead. Objectness is a high-level semantic cue and thus plays a role similar to the tag feature. The variant of DBS that uses the objectness feature instead of the tag feature is abbreviated as OBS (Objectness Based Saliency).OBS is compared with 11 state-of-the-art methods: FT [23], RC [14], SF [45], HS [7], MR [24], DRFI [25], GC [36], MC [40], BD [42], MDF [8], and LEGS [10].
#### 7.1.3. Evaluation Criteria
We adopt popular quantitative performance measures: PR (Precision-Recall) curves, ROC (Receiver Operating Characteristic) curves, the F-measure, the AUC (Area under ROC Curve), and the MAE (Mean Absolute Error).
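Minimal sketches of the two scalar measures are given below; the adaptive threshold used to binarize the saliency map for the F-measure (twice the mean saliency) is a common convention in the saliency literature and an assumption here, since the paper does not state its binarization policy.

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a [0, 1] map and a binary mask."""
    return np.mean(np.abs(saliency - gt.astype(float)))

def f_measure(saliency, gt, beta2=0.3):
    """Weighted F-measure with beta^2 = 0.3, the common saliency setting."""
    thresh = min(2 * saliency.mean(), 1.0)   # adaptive threshold (assumption)
    pred = saliency >= thresh
    tp = np.logical_and(pred, gt).sum()
    precision = tp / (pred.sum() + 1e-10)
    recall = tp / (gt.sum() + 1e-10)
    return ((1 + beta2) * precision * recall /
            (beta2 * precision + recall + 1e-10))
```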
### 7.2. Experiments on the New Dataset TBD
#### 7.2.1. Experiments of Deep Learning Based Detection Method
DBS is compared with 27 state-of-the-art methods. The results are given in Table 1 and Figure 11.

Table 1: F-measure, AUC, and MAE of DBS and 27 state-of-the-art methods.

| Method | F-measure | AUC | MAE |
| --- | --- | --- | --- |
| CB | 0.5472 | 0.7971 | 0.2662 |
| SEG | 0.4917 | 0.7588 | 0.3592 |
| SVO | 0.3498 | 0.8361 | 0.409 |
| SF | 0.3659 | 0.7541 | 0.2077 |
| CA | 0.5161 | 0.8287 | 0.2778 |
| TD | 0.5432 | 0.8081 | 0.2333 |
| SS | 0.2516 | 0.6714 | 0.2499 |
| HS | 0.5576 | 0.7883 | 0.2747 |
| DRFI | 0.5897 | 0.8623 | 0.2063 |
| HM | 0.4892 | 0.7945 | 0.2263 |
| BD | 0.5443 | 0.8185 | 0.1955 |
| BL | 0.5823 | 0.8562 | 0.266 |
| MR | 0.5084 | 0.7753 | 0.229 |
| PCA | 0.5392 | 0.8439 | 0.2778 |
| FT | 0.3559 | 0.6126 | 0.2808 |
| RC | 0.5307 | 0.8105 | 0.3128 |
| LRR | 0.5124 | 0.7956 | 0.3067 |
| GS | 0.5164 | 0.8136 | 0.2056 |
| SMD | 0.6033 | 0.8437 | 0.1976 |
| GC | 0.5063 | 0.7511 | 0.2596 |
| DSR | 0.5035 | 0.8139 | 0.2105 |
| MC | 0.574 | 0.8427 | 0.2287 |
| SBF | 0.493 | 0.848 | 0.2325 |
| MCDL | 0.6559 | 0.8813 | 0.1457 |
| LEGS | 0.6124 | 0.8193 | 0.1844 |
| RFCN | 0.6768 | 0.8803 | 0.1476 |
| MDF | 0.6574 | 0.8483 | 0.1556 |
| DBS | 0.6621 | 0.8917 | 0.1505 |

Figure 11: Visual comparisons of DBS with 27 state-of-the-art methods. The order of images is original image, ground truth mask, BL [32], CA [33], CB [34], DRFI [25], DSR [35], FT [23], GC [36], GS [37], HM [38], HS [7], LEGS [10], LRR [39], MC [40], MCDL [9], MR [24], PCA [41], BD [42], RC [14], RFCN [11], SBF [43], SEG [44], SF [45], SMD [46], MDF [8], SS [47], SVO [17], TD [48], and DBS.

Among the 28 methods in Table 1, the top four are all deep learning based: MCDL [9], RFCN [11], MDF [8], and DBS. To some extent, deep learning based detection methods outperform handcrafted feature based methods in terms of both the completeness and the accuracy of the saliency maps. The AUC value of DBS is the highest, the F-measure of DBS is slightly lower than that of RFCN [11], and the MAE of DBS is the third lowest, so the overall performance of DBS is good. Typical saliency maps are shown in Figure 11.
#### 7.2.2. Experiments of Aggregation Method
The handcrafted feature based detection methods used as complements to DBS are DRFI [25], SMD [46], BL [32], and MC [40].In neighbor searching, the number of tag neighbors is 4 and the number of appearance neighbors is 4.To verify the effect of each type of neighbor, the appearance neighbor based and tag neighbor based methods are evaluated separately. The appearance neighbor based aggregation method is abbreviated as ABS (Appearance Based Saliency), the tag neighbor based aggregation method as TBS (Tag Based Saliency), and the combined tag and appearance neighbor based aggregation method as FBS (Fusion Based Saliency).The detection performances of DBS, ABS, TBS, and FBS are compared in Table 2.

Table 2: F-measure, AUC, and MAE of DBS, ABS, TBS, and FBS.

| Metric | DBS | ABS | TBS | FBS |
| --- | --- | --- | --- | --- |
| F-measure | 0.6621 | 0.6652 | 0.6688 | 0.6712 |
| AUC | 0.8917 | 0.9061 | 0.9113 | 0.9166 |
| MAE | 0.1505 | 0.1497 | 0.1474 | 0.1452 |

TBS performs better than ABS for the following reason: ABS relies on appearance based neighbor search, and images with similar appearance do not guarantee similar saliency maps, whereas TBS uses object information, and the same or similar objects ensure similar salient regions to some extent.PR and ROC curves are shown in Figures 12 and 13; the PR and ROC curves of FBS are higher than those of the 27 state-of-the-art methods.Figure 12
PR curves of FBS and 27 state-of-the-art methods.Figure 13
ROC curves of FBS and 27 state-of-the-art methods.Typical saliency maps of the FBS and DBS methods are shown in Figure 14; the aggregation results are more complete and preserve details better.Figure 14
Visual comparisons of FBS with DBS. The order of images is original image, ground truth mask, FBS, and DBS.
### 7.3. Experiments on State-of-the-Art Datasets
The experimental results are given in Table 3. The AUC values of OBS are the highest on all six datasets, its F-measure values are the highest or within 0.001 of the highest, and its MAE values are the lowest or close to the lowest, so the overall performance of OBS is the best. However, the improvements of OBS are not dramatic, because the objectness feature is a less accurate proxy than true tag annotation; we therefore expect the results to improve markedly given accurate tag annotations of the images.

Table 3: F-measure, AUC, and MAE of OBS and 11 state-of-the-art methods on six state-of-the-art datasets.

**MSRA1000**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.766 | 0.579 | 0.241 |
| DRFI | 0.966 | 0.845 | 0.112 |
| RC | 0.937 | 0.817 | 0.138 |
| GC | 0.863 | 0.719 | 0.159 |
| HS | 0.93 | 0.813 | 0.161 |
| MC | 0.975 | 0.894 | 0.054 |
| MR | 0.941 | 0.824 | 0.127 |
| SF | 0.886 | 0.7 | 0.166 |
| BD | 0.948 | 0.82 | 0.11 |
| MDF | 0.978 | 0.888 | 0.066 |
| LEGS | 0.958 | 0.87 | 0.081 |
| OBS | 0.984 | 0.893 | 0.061 |

**HKU-IS**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.71 | 0.477 | 0.244 |
| DRFI | 0.95 | 0.776 | 0.167 |
| RC | 0.903 | 0.726 | 0.165 |
| GC | 0.777 | 0.588 | 0.211 |
| HS | 0.884 | 0.71 | 0.213 |
| MC | 0.928 | 0.798 | 0.102 |
| MR | 0.87 | 0.714 | 0.174 |
| SF | 0.828 | 0.59 | 0.173 |
| BD | 0.91 | 0.726 | 0.14 |
| MDF | 0.971 | 0.869 | 0.072 |
| LEGS | 0.907 | 0.77 | 0.118 |
| OBS | 0.976 | 0.871 | 0.078 |

**PASCAL-S**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.627 | 0.413 | 0.309 |
| DRFI | 0.899 | 0.69 | 0.21 |
| RC | 0.84 | 0.644 | 0.227 |
| GC | 0.727 | 0.539 | 0.266 |
| HS | 0.838 | 0.641 | 0.264 |
| MC | 0.907 | 0.74 | 0.145 |
| MR | 0.852 | 0.661 | 0.223 |
| SF | 0.746 | 0.493 | 0.24 |
| BD | 0.866 | 0.655 | 0.201 |
| MDF | 0.921 | 0.771 | 0.146 |
| LEGS | 0.891 | 0.752 | 0.157 |
| OBS | 0.927 | 0.778 | 0.141 |

**ECSSD**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.663 | 0.43 | 0.289 |
| DRFI | 0.943 | 0.782 | 0.17 |
| RC | 0.893 | 0.738 | 0.186 |
| GC | 0.767 | 0.597 | 0.233 |
| HS | 0.885 | 0.727 | 0.228 |
| MC | 0.948 | 0.837 | 0.1 |
| MR | 0.888 | 0.736 | 0.189 |
| SF | 0.793 | 0.548 | 0.219 |
| BD | 0.896 | 0.716 | 0.171 |
| MDF | 0.957 | 0.847 | 0.106 |
| LEGS | 0.925 | 0.827 | 0.118 |
| OBS | 0.968 | 0.856 | 0.112 |

**DUT-OMRON**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.682 | 0.381 | 0.25 |
| DRFI | 0.931 | 0.664 | 0.15 |
| RC | 0.859 | 0.599 | 0.189 |
| GC | 0.757 | 0.495 | 0.218 |
| HS | 0.86 | 0.616 | 0.227 |
| MC | 0.929 | 0.703 | 0.088 |
| MR | 0.853 | 0.61 | 0.187 |
| SF | 0.81 | 0.495 | 0.147 |
| BD | 0.894 | 0.63 | 0.144 |
| MDF | 0.935 | 0.728 | 0.088 |
| LEGS | 0.885 | 0.669 | 0.133 |
| OBS | 0.943 | 0.731 | 0.091 |

**SOD**

| Method | AUC | F-measure | MAE |
| --- | --- | --- | --- |
| FT | 0.607 | 0.441 | 0.323 |
| DRFI | 0.89 | 0.699 | 0.223 |
| RC | 0.828 | 0.657 | 0.242 |
| GC | 0.692 | 0.526 | 0.284 |
| HS | 0.817 | 0.646 | 0.283 |
| MC | 0.868 | 0.727 | 0.179 |
| MR | 0.812 | 0.636 | 0.259 |
| SF | 0.714 | 0.516 | 0.267 |
| BD | 0.827 | 0.653 | 0.229 |
| MDF | 0.899 | 0.793 | 0.157 |
| LEGS | 0.836 | 0.732 | 0.195 |
| OBS | 0.907 | 0.801 | 0.163 |

Experiments on these state-of-the-art datasets validate the effectiveness of our proposed approach.
## 8. Conclusions
The paper focuses on salient region detection in social images. First, the proposed deep learning based salient region detection method considers both appearance features and tag features, with tag features detected by RCNN models. Second, tag neighbor features and appearance neighbor features are incorporated into the saliency aggregation model. Finally, a new dataset of challenging social images with pixel-wise saliency annotations is constructed, which can promote further research on and evaluation of visual saliency models.
---
*Source: 1014595-2018-01-01.xml* | 2018 |
# Coupling Strategies Investigation of Hybrid Atomistic-Continuum Method Based on State Variable Coupling
**Authors:** Qian Wang; Xiao-Guang Ren; Xin-Hai Xu; Chao Li; Hong-Yu Ji; Xue-Jun Yang
**Journal:** Advances in Materials Science and Engineering
(2017)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2017/1014636
---
## Abstract
Different configurations of coupling strategies influence greatly the accuracy and convergence of the simulation results in the hybrid atomistic-continuum method. This study aims to quantitatively investigate this effect and offer the guidance on how to choose the proper configuration of coupling strategies in the hybrid atomistic-continuum method. We first propose a hybrid molecular dynamics- (MD-) continuum solver in LAMMPS and OpenFOAM that exchanges state variables between the atomistic region and the continuum region and evaluate different configurations of coupling strategies using the sudden start Couette flow, aiming to find the preferable configuration that delivers better accuracy and efficiency. The major findings are as follows:(1) the C→A region plays the most important role in the overlap region and the “4-layer-1” combination achieves the best precision with a fixed width of the overlap region; (2) the data exchanging operation only needs a few sampling points closer to the occasions of interactions and decreasing the coupling exchange operations can reduce the computational load with acceptable errors; (3) the nonperiodic boundary force model with a smoothing parameter of 0.1 and a finer parameter of 20 can not only achieve the minimum disturbance near the MD-continuum interface but also keep the simulation precision.
---
## Body
## 1. Introduction
In recent years, with the rapid development of nanotechnology, microscale/nanoscale devices such as microelectromechanical system (MEMS) devices and lab-on-a-chip devices have been widely used. The fluid flows in these devices involve a broad range of scales, from the atomistic to the macroscopic [1]. Generally speaking, fluid simulation based on the continuum assumption uses the Navier-Stokes (NS) equations to investigate fluid dynamics at the macroscopic scale. As the characteristic scale decreases, fluid flows at the microscale/nanoscale exhibit properties quite different from flows at the macroscale, such as the breakdown of the continuum assumption [2] and increased viscosity in nanochannels [3]. Molecular dynamics (MD), one of the widely used microfluid simulation methods, resolves fluid features at the microscale/nanoscale. However, it is computation-intensive in both simulation time and memory usage, limiting simulations to nanometers in length and nanoseconds in time. In order to simulate physical problems at large length scales while capturing microscopic physical phenomena, many multiscale simulation methods have been proposed. For solid simulation, the "bridging domain" and "bridging scale" methods [4, 5] use Lagrange multipliers and a solution projection method to seamlessly couple two solvers at different scales with few spurious effects. For dense liquid simulation, the hybrid atomistic-continuum (HAC) method has been proposed [6–8]. HAC applies molecular dynamics in regions where the atomistic description is needed, for example, boundary regions and corner vortex regions, while using the continuum method in the remaining regions, obtaining both computational efficiency and simulation accuracy. In this paper, we focus on the simulation of dense fluids.There are two types of coupling approaches: the flux-based method [9, 10] and the state variable-based method [6]. The former uses mass, momentum, and energy fluxes to exchange data between the continuum method and molecular dynamics, thus satisfying the conservation laws naturally. The latter couples the two methods using mass and momentum state variables. For incompressible problems, the computational cost of calculating the molecular fluxes across a surface is much higher than that of calculating state variables alone [11]; therefore, we focus on the state variable-based HAC method in this paper.The first hybrid method combining molecular dynamics with the continuum method for dense fluids was proposed by O'Connell and Thompson [6]. For the one-dimensional Couette flow, the domain was split into an atomistic region and a continuum region, with an overlap region used to alleviate dramatic density oscillations and to couple the results of the two regions. The overlap region contains a nonperiodic boundary force region (npbf region), an atomistic-coupled-to-continuum region (A→C region), and a continuum-coupled-to-atomistic region (C→A region). However, this method has the limitation that it does not handle mass transfer across the MD-continuum interface.Later HAC models emerged that differ in their coupling strategies, boundary condition extraction, and nonperiodic boundary force models [12–16].
In order to cope with the mass flux transfer, some researchers [17, 18] introduced the mass flow region; other researchers [14, 19] brought forward the buffer region to further relax the fluctuations between the MD results and the continuum results; still other researchers [20–22] proposed different expressions for the nonperiodic boundary force. A more detailed review is given by Mohamed and Mohamad [23].Nevertheless, over the past 20 years, several issues have remained open in the hybrid atomistic-continuum method based on state variables. The configurations of the spatial coupling, the temporal coupling, and the associated parameters have a significant influence on simulation efficiency and accuracy. Existing research simply configured these strategies from each author's own point of view, and a detailed analysis of these coupling strategies has never been performed. Firstly, for HAC methods based on domain decomposition, the spatial configurations of the overlap region differ between existing approaches, which mostly set the functional regions to the same width; only Yen et al. [8] explored the appropriate size of the pure MD region and the overlap region. Secondly, regarding the occasions of data exchange, existing approaches generally average over all sampling points of the subsequent MD steps to alleviate the thermal noise arising from finite space and time sampling in the MD region. However, while a larger number of samples suppresses thermal noise better, it also introduces a time lag in the averaged results transferred to the continuum region, so it is important to explore how the number of sampling points and the occasions of data exchange affect the convergence of the coupled simulation. Finally, when dealing with nonperiodic boundaries in the MD region, Issa and Poesio [13] proposed the FoReV algorithm and configured its smoothing parameter empirically; when the MD region is coupled to the continuum method, the effect of different parameter choices on the local liquid structure near the MD-continuum interface has to be investigated.Based on the above analysis, several coupling strategies are worthy of further investigation. In this paper, we design a domain decomposition type of hybrid MD-continuum solver using the open source software LAMMPS [24] and OpenFOAM [25] and investigate the coupling strategy issues of the HAC simulation using Couette channel flow as the model flow. The main contributions of this paper are summarized as follows:

(i) We analyze the effects of different spatial strategies for configuring the overlap region on the model flow. We find that, when the functional regions are given equal width, the "5-layer" combination obtains the best numerical precision as the width of the overlap region varies, whereas the "4-layer-1" combination proves to be the best setting for a fixed width of the overlap region. We also find that enlarging the C→A region results in better simulation accuracy on the model flow.

(ii) We investigate more efficient temporal strategies for data exchange, studying both the number of samples and the time points used for exchanging data. The practical conclusion is that the A→C operation only needs a few sampling points close to the occasions of interaction to guarantee modeling efficiency and to reduce the amount of sampling. We also find that, within acceptable error, timely data exchange performs better than the other settings, while decreasing the number of coupling exchange operations can further reduce the computational load.

(iii) We analyze the parameters of the nonperiodic boundary force model on the model flow. We add a finer parameter to the force model in order to apply the FoReV algorithm effectively in the HAC model. The results indicate that, under domain decomposition along the flow direction, a proper combination of the smoothing parameter and the finer parameter can not only minimize the disturbance of the local structure near the MD-continuum interface but also preserve simulation precision.

The remainder of the paper is organized as follows: the hybrid atomistic-continuum simulation methodology is presented in Section 2, and the discussion of mutable parameters for the coupling strategies is given in Section 3. In Section 4, we apply these coupling parameters to the benchmark problems and compare the convergence and accuracy of the numerical results. Section 5 concludes the paper.
## 2. Model Configuration and Methodology
In this section, we present the MD-continuum solver based on the HAC physical model and coupling strategies, implemented with LAMMPS and OpenFOAM. OpenFOAM serves as the main framework, being highly modular and elegantly extensible [26], and LAMMPS is built as a library to be called from it. Section 2.1 introduces the decomposition of the simulation domain; Section 2.2 covers the numerical methods of the atomistic region and the continuum region; Section 2.3 gives the configuration of the overlap region and the associated coupling operations; and Section 2.4 introduces the temporal coupling.
### 2.1. Domain Decomposition
In this paper, we investigate coupling strategy issues contained in the HAC simulation using Couette flows as the model flows which are under incompressible constant temperature condition. We take two kinds of boundary conditions into account, that is, the no-slip and the slip boundary condition. The computational domain and the coordinate system in the current HAC simulation are shown in Figures1 and 2. Under the no-slip boundary condition, the simulation domain is split into an atomistic region, a continuum region, and an overlap region (left), while there are two atomistic regions and two overlap regions under the slip boundary condition (right). The outer boundary of the continuum region which resides in the atomistic region is called hybrid solution interface (HSI). The atomistic regions include liquid fluid atom regions and wall atom regions and are located near the wall regions in order to provide accurate boundary conditions to the continuum part. In the current study, a two-dimensional simulation is performed in the continuum region, that is, only in xy plane, while a three-dimensional simulation is performed in the atomistic regions with z axis as the extension direction.Figure 1
Simulation domain decomposition and coordinates of the no-slip boundary condition;H is the height of the channel.Figure 2
Simulation domain decomposition and coordinates of the slip boundary condition;H is the height of the channel.
### 2.2. Atomistic Region and Continuum Region
In this section, we introduce the numerical methods used in the atomistic region and the continuum region. Section 2.2.1 describes the molecular dynamics simulation method and physical parameters, and Section 2.2.2 gives the continuum solution using the finite volume method (FVM).
#### 2.2.1. Atomistic Region
The truncated and shifted Lennard-Jones (LJ) potential is used to model the interactions between fluid atoms as well as wall atoms in the atomistic region. The potential is given by

(1) $\phi(r) = \begin{cases} 4\epsilon \left[ \left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6} - \left(\frac{\sigma}{r_c}\right)^{12} + \left(\frac{\sigma}{r_c}\right)^{6} \right], & r \le r_c, \\ 0, & r > r_c, \end{cases}$

where r is the distance between atoms i and j, $r_c$ is the cutoff radius, and ϵ and σ are the characteristic molecular energy and the molecular length scale, respectively. In this study, we choose liquid Argon as the model fluid, with LJ parameters σ = 0.34 nm, ϵ = 1.67 × 10⁻²¹ J, and atom mass m = 6.63 × 10⁻²⁶ kg, giving a well-defined liquid phase of Argon with $T k_B \epsilon^{-1} = 1.1$, $\rho\sigma^3 = 0.81$, and dynamic viscosity $\mu = 2.14\,\epsilon\tau\sigma^{-3}$, where $k_B$ is the Boltzmann constant and $\tau = (m\sigma^2/\epsilon)^{1/2}$ is the characteristic time of the Lennard-Jones potential. We choose a cutoff distance $r_c = 2.2\sigma$ to save computation time. The wall atoms are modeled by two (111) planes with the x direction of the lattice along the [11$\bar{2}$] orientation. For the two-dimensional continuum simulation, there is no flow in the z direction; therefore, in the atomistic region, the x and z directions use periodic boundary conditions while the y direction does not.The constant temperature of the simulation system is maintained by a Langevin thermostat [28, 29], which couples the system to a thermal reservoir through the addition of Gaussian noise and frictional terms. The thermostat applies only to the z direction, which is perpendicular to the bulk flow direction, and does not influence the bulk flow velocity. The equation of motion for atom i is

(2) $m\ddot{y}_i = -\sum_{j \neq i} \frac{\partial \phi}{\partial y_j} - m\Gamma\dot{y}_i + \eta_i$,

where the summation denotes all pair-interaction forces on atom i and Γ is the damping ratio. $\eta_i$ denotes a Gaussian-distributed force with zero mean and variance $2 m k_B T \Gamma$. We choose the Langevin damping ratio Γ = 1.0τ⁻¹. Initially, the liquid atoms are arranged on a lattice and each velocity component conforms to a Gaussian distribution. The Newtonian equations of motion are integrated with the velocity Verlet algorithm with time step Δt = 0.005τ.
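A minimal sketch of the pair potential of (1) in reduced LJ units (σ = ϵ = 1, r_c = 2.2σ), assuming r > 0:

```python
import numpy as np

def lj_truncated_shifted(r, epsilon=1.0, sigma=1.0, rc=2.2):
    """Truncated and shifted LJ potential of eq. (1), reduced units.

    Vanishes continuously at r = rc and is zero beyond the cutoff.
    """
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6
    src6 = (sigma / rc) ** 6
    phi = 4.0 * epsilon * (sr6 ** 2 - sr6 - (src6 ** 2 - src6))
    return np.where(r <= rc, phi, 0.0)
```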
#### 2.2.2. Continuum Region
In the continuum region, we model the system using the incompressible Navier-Stokes equations. The continuity and momentum equations are

(3) $\nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \nabla \cdot (\mathbf{u}\mathbf{u}) = -\frac{1}{\rho}\nabla P + \nu \nabla^2 \mathbf{u}$,

where u is the bulk flow velocity, P is the pressure, and ν is the kinematic viscosity. These equations are solved numerically with the finite volume method. The density in the continuum region is the same as in the atomistic region, and a periodic boundary condition is applied in the x direction. We solve the two-dimensional NS equations using the PISO algorithm with the icoFoam solver in OpenFOAM [25].
### 2.3. Overlap Region
We categorize the existing configurations of the overlap region into five functional regions: the A→C region, the C→A region, the buffer region, the nonperiodic boundary force region (npbf region), and the mass flow region. The latter two are often unified into the control region. The schematic of the overlap region is shown in Figure 3.Figure 3
Detailed schematic diagram of the overlap region.
#### 2.3.1. A→C Region
The A→C region is located on the outer boundary of the continuum region, centered on the HSI in Figure 3. In this region, the A→C operation transfers the spatial and temporal average of the atom velocities to the continuum region as the new velocity boundary. The region is divided into bins in the x and y directions to match the mesh cells of the continuum region. For the ith bin, the boundary condition for the continuum velocity u is given by the average

(4) $\mathbf{u}_i = \left\langle \frac{1}{N_i} \sum_{k}^{N_i} \mathbf{v}_k \right\rangle$,

where $\mathbf{v}_k$ is the velocity of the kth atom in bin i, $N_i$ is the number of atoms in bin i, and the brackets denote the temporal average. The most widely used sampling average methods are SAM and CAM [30]. We use the CAM method, with the temporal average performed over S sampling points:

(5) $\mathbf{u}_i^{\mathrm{CAM}} = \frac{\sum_{j=1}^{S} \sum_{k}^{N_j} \mathbf{v}_k}{\sum_{j=1}^{S} N_j}.$

However, sampling over finite spatial and temporal scales incurs statistical fluctuations. The HSI is the portion of the continuum boundary that receives data from the atomistic region, and because of the inherent statistical fluctuations in this data, the boundary conditions on the complete continuum boundary may not exactly conserve mass. The most common remedy [31] is to apply a correction:

(6) $\left( \mathbf{v}_{\mathrm{HSI}} \cdot \mathbf{n} \right)_{\mathrm{corrected}} = \mathbf{v}_{\mathrm{HSI}} \cdot \mathbf{n} - \frac{\int_{\phi} \mathbf{v}_{\phi} \cdot \mathbf{n}\, dS}{\int_{\mathrm{HSI}} dS},$

where $\mathbf{v}_{\mathrm{HSI}}$ is the velocity on the HSI calculated with the CAM sampling technique, n is the normal vector of the boundary, dS is a boundary element, and ϕ is the whole boundary of the continuum region.
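A minimal sketch of the CAM average of (5) and the mass-conservation correction of (6) follows; for simplicity the sketch evaluates the flux integral over the supplied boundary faces only, which matches (6) when the remaining boundary contributes no net flux (an assumption).

```python
import numpy as np

def cam_average(samples):
    """CAM average of eq. (5) for one A->C bin.

    samples: list of (sum_of_atom_velocities, atom_count) pairs, one
    pair per sampling point; S = len(samples).
    """
    vel_sum = sum(v for v, _ in samples)
    n_total = sum(n for _, n in samples)
    return vel_sum / max(n_total, 1)

def mass_conserve_correction(v_hsi, normals, areas):
    """Correction of eq. (6): subtract the net volume flux per unit
    area so the continuum boundary conserves mass exactly.

    v_hsi: (nfaces, dim) CAM-sampled velocities on the HSI faces;
    normals: (nfaces, dim) unit face normals; areas: (nfaces,) areas.
    Returns the corrected normal velocity component per face.
    """
    vn = np.einsum('fd,fd->f', v_hsi, normals)   # v . n on each face
    net_flux = np.sum(vn * areas)                # flux over supplied faces
    return vn - net_flux / areas.sum()
```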
#### 2.3.2. C→A Region
The C→A region transfers the velocities of the continuum mesh cells to the atoms located in them through crude constrained Lagrangian dynamics (CCLD) [6], thereby providing boundary conditions for the atomistic region. Like the A→C region, the C→A region is divided into bins in the x and y directions to match the continuum mesh cells.CCLD requires that the mean atomistic velocity in bin i equal the average continuum velocity in that bin, that is,

(7) $\overline{\mathbf{u}}_i = \frac{1}{N_i} \sum_{p}^{N_i} \mathbf{v}_p$,

where $N_i$ is the number of atoms in bin i. In this paper, the volume of a C→A bin equals that of a cell in the continuum region. Through CCLD, the new velocity of atom j located in bin i is

(8) $\dot{x}_j^{\alpha} = v_j^{\alpha} + \xi^{\alpha}\left( u_i^{\alpha} - \frac{1}{N_i} \sum_{k}^{N_i} v_k^{\alpha} \right)$,

where α denotes the x, y, or z direction and ξ is the constraint strength. The constraint strength was 0.01 in O'Connell and Thompson [6] and 1 in Nie et al. [7, 8]. In this paper, we set ξ = 1.0 in the x and y directions and ξ = 0 in the z direction.
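A minimal sketch of the CCLD update of (8) for one C→A bin, with the per-direction constraint strength used in this paper:

```python
import numpy as np

def ccld_update(v_atoms, u_cell, xi=(1.0, 1.0, 0.0)):
    """CCLD velocity constraint of eq. (8) for the atoms in one bin.

    v_atoms: (N, 3) atom velocities in the bin; u_cell: (3,) continuum
    cell velocity; xi: constraint strength per direction (1 in x and y,
    0 in z, as chosen in this paper).
    """
    xi = np.asarray(xi)
    mean_v = v_atoms.mean(axis=0)          # (1/N_i) sum_k v_k
    return v_atoms + xi * (u_cell - mean_v)
```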
#### 2.3.3. Nonperiodic Boundary Force Region
In order to prevent atoms from drifting out of the atomistic region, keep the number of atoms unchanged, and remedy the nonperiodic boundary in the HAC simulation, existing research has proposed many external force models [7, 20, 21, 27]. Figure 4 shows the atoms that miss interaction forces near the MD-continuum interface. In this paper, based on the FoReV algorithm proposed by Issa and Poesio [13], we model an external force for these atoms to alleviate the density fluctuation caused by the nonperiodic boundary condition and the reflecting wall.Figure 4
Missing interaction region due to the nonperiodic boundary at the MD-continuum interface $y_{\max}$.Each atom traveling through $y_{\max}$ is reflected back into the atomistic region with the same displacement across the interface and a reversed velocity, which guarantees a constant number of atoms in the atomistic region. We divide the region near the MD-continuum interface into several bins. As shown in Figure 5, we accumulate the total external force experienced by the atoms residing in bin k and apply a feedback force to the atoms in the missing-interaction region.Figure 5
Detailed schematic of the npbf region residing in the control region.First, we calculate the average external force experienced by bin k:

(9) $\mathbf{F}_{\mathrm{bin},k} = \frac{\sum_{i \in k,\, j \notin k} \mathbf{F}_{ij}}{N_k}$,

where $\mathbf{F}_{ij}$ is the interaction force between atoms i and j and $N_k$ is the number of atoms in bin k. Through a feedback mechanism, we apply a reversed normal force to each atom residing within the cutoff distance of the MD-continuum interface, which makes $F^{y}_{\mathrm{bin},k} = 0$. This reversed force $-\mathbf{F}_{b,k}^{n+1}$ is constructed with simple exponential smoothing using a smoothing parameter α:

(10) $\mathbf{F}_{b,k}^{n+1} = \alpha \mathbf{F}_{\mathrm{bin},k}^{n+1} + (1 - \alpha)\mathbf{F}_{b,k}^{n}.$

In each MD time step, we calculate $\mathbf{F}_{\mathrm{bin},k}$ in the npbf region bins, use (10) to construct the feedback force $\mathbf{F}_{b,k}$, and apply it to the associated atoms to remedy the nonperiodic boundary effect.
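The exponential smoothing of (10) reduces to a one-line update per bin; the force applied to atoms in the missing-interaction region is the negative of the returned value, as described above.

```python
def update_feedback_force(f_prev, f_bin_measured, alpha=0.1):
    """Exponential smoothing of eq. (10):
    F_b^{n+1} = alpha * F_bin^{n+1} + (1 - alpha) * F_b^n.

    f_prev: feedback force from the previous MD step; f_bin_measured:
    the average missing-interaction force F_bin,k measured this step;
    alpha = 0.1 is the smoothing parameter this paper recommends.
    The applied force is -F_b^{n+1}.
    """
    return alpha * f_bin_measured + (1.0 - alpha) * f_prev
```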
#### 2.3.4. Mass Flow Region
For the incompressible condition, the total number of atoms in the atomistic region should remain constant. In order to simulate the mass flux across the MD-continuum interface, we introduce the mass flow region and use it to handle atom insertion and deletion. The mass flow region can be split into several bins matching the continuum cells. In one continuum time step $\Delta t_{\mathrm{CFD}}$, the number of atoms to be inserted into or deleted from mass flow bin i is

(11) $n = \frac{A_i\, \rho\, u_i^{\alpha}\, \Delta t_{\mathrm{CFD}}}{m}$,

where $A_i = (\Delta x \times \Delta z) \cdot \mathbf{n}$ is the bin area normal to the y direction, n is the face normal vector pointing into the atomistic region, $u_i^{\alpha}$ is the continuum velocity, ρ is the liquid density, and m is the atom mass. If n is negative, atoms are deleted from the mass flow region; if n is positive, atoms are inserted. Since an atom is indivisible, the nearest integer is taken and the remaining fraction is carried over to the next insert/delete operation. As described in Section 2.3.3, the reflecting-wall boundary condition prevents atoms from drifting away freely, so the number of atoms in the atomistic region can only change through the insert and delete operations.Atom insertion uses the USHER algorithm [32], which searches for a point in a given bin whose potential energy equals the average potential of the bin; the initial insertion point is chosen randomly and refined by the Newton-Raphson method. Atom deletion removes atoms near the MD-continuum interface $y_{\max}$, choosing the atoms most likely to leave the atomistic region according to their distance to $y_{\max}$ and their velocity normal to $y_{\max}$.
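A minimal sketch of the exchange count of (11), carrying the fractional remainder to the next operation as the text describes (function and argument names are ours):

```python
def atoms_to_exchange(area, rho, u_normal, dt_cfd, m_atom, carry=0.0):
    """Atom exchange count of eq. (11) for one mass flow bin.

    Positive n means insertion, negative n means deletion. Because an
    atom is indivisible, the fractional remainder is returned and fed
    back in as `carry` at the next insert/delete operation.
    """
    raw = area * rho * u_normal * dt_cfd / m_atom + carry
    n = int(round(raw))
    return n, raw - n
```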
#### 2.3.5. Buffer Region
Thebuffer region is located between the A→C region and the C→A region or between thecontrol region and the C→A region. We use it to alleviate the fluctuation of the artificial operations on atoms. In thebuffer region, there is no atom insertion or deletion and no constraint on atom velocities. But we have to narrow the width of thebuffer region to decrease the computational load but keep a certain width to relax the results of the atomistic region and the continuum region before being coupled together. We will discuss how to set thebuffer region in later sections.
### 2.4. Temporal Coupling
There are three time variables to be considered in the HAC simulation [33]: integration time steps of Newtonian equation ΔtMD, integration time steps of Navier-Stokes equation ΔtCFD, and sampling average time Δtave. As we have mentioned before, ΔtMD is chosen as 0.005τ in our paper. In order to solve NS equation accurately, firstly, ΔtCFD must be far less than characteristic time on the mesh, that is, diffusion time ρΔxΔy/μ, where μ is the dynamic viscosity and μ=2.14ϵτσ-3. Secondly, it must be larger than the decay time of velocity autocorrelation function tvv=0.14ρ-1T-1/2 so as to reduce thermal noise from HSI [6]. Finally, it should meet the CFL condition [34], uflowΔtCFD<Δx/2. The CFD time step we choose in this paper is ΔtCFD=M×ΔtMD=0.5τ, where M is 100. The time advancing mechanism in the HAC simulation is shown in Figure 6, which is sequential coupling.Figure 6
Time coupling and advancing mechanism in the current HAC method. In one coupling cycle, it includes four steps, that is, the CFD advancing, theC→A operation, the MD advancing, and the A→C operation.In each coupling simulation time step, CFD advances a certain time, for example, oneΔtCFD, and transfers continuum velocities to the atomistic region through CCLD in the C→A operation. Then, MD advances the same time interval, that is, M×ΔtMD, samples and averages atom velocities, and then passes them to the continuum region in the A→C operation, thus finishing one cycle of HAC simulation.Indeed, due to the small time stepΔtMD, the HAC simulation time of a benchmark case will take a long time. However, the efficiency of the HAC method is defined by comparing the full MD simulation for the same scale of the benchmark case. The HAC method only applies atomistic simulation in part of the simulation domain. Obviously the HAC method is much more efficient than the full MD method with simulation of the full domain.
## 2.1. Domain Decomposition
In this paper, we investigate coupling strategy issues in the HAC simulation using Couette flows, under incompressible and constant-temperature conditions, as the model flows. We consider two kinds of boundary conditions, namely, the no-slip and the slip boundary condition. The computational domain and the coordinate system of the current HAC simulation are shown in Figures 1 and 2. Under the no-slip boundary condition, the simulation domain is split into an atomistic region, a continuum region, and an overlap region (Figure 1), while under the slip boundary condition there are two atomistic regions and two overlap regions (Figure 2). The outer boundary of the continuum region that resides in the atomistic region is called the hybrid solution interface (HSI). The atomistic regions include liquid atom regions and wall atom regions and are located near the walls in order to provide accurate boundary conditions to the continuum part. In the current study, a two-dimensional simulation is performed in the continuum region, that is, only in the xy plane, while a three-dimensional simulation is performed in the atomistic regions, with the z axis as the extension direction.
Figure 1
Simulation domain decomposition and coordinates for the no-slip boundary condition; H is the height of the channel.
Figure 2
Simulation domain decomposition and coordinates for the slip boundary condition; H is the height of the channel.
## 2.2. Atomistic Region and Continuum Region
In this section, we introduce the numerical methods used in the atomistic region and the continuum region. Section 2.2.1 describes the molecular dynamics simulation method and its physical parameters, and Section 2.2.2 gives the continuum solution using the finite volume method (FVM).
### 2.2.1. Atomistic Region
The truncated and shifted Lennard-Jones (LJ) potential is used to model the interactions between fluid atoms as well as wall atoms in the atomistic region. The potential is given by

(1) $\phi(r)=4\epsilon\left[(\sigma/r)^{12}-(\sigma/r)^{6}-(\sigma/r_c)^{12}+(\sigma/r_c)^{6}\right]$ for $r\le r_c$, and $\phi(r)=0$ for $r>r_c$,

where $r$ is the distance between atoms $i$ and $j$, $r_c$ is the cutoff radius, and $\epsilon$ and $\sigma$ are the characteristic molecular energy and the molecular length scale, respectively. In this study, we choose liquid argon as the model liquid, with LJ parameters $\sigma=0.34\,\mathrm{nm}$, $\epsilon=1.67\times10^{-21}\,\mathrm{J}$, and atom mass $m=6.63\times10^{-26}\,\mathrm{kg}$. A well-defined liquid phase of argon is obtained with $Tk_B\epsilon^{-1}=1.1$, $\rho\sigma^3=0.81$, and dynamic viscosity $\mu=2.14\,\epsilon\tau\sigma^{-3}$, where $k_B$ is the Boltzmann constant and $\tau=(m\sigma^2/\epsilon)^{1/2}$ is the characteristic time of the Lennard-Jones potential. We choose a cutoff distance $r_c=2.2\sigma$ to save computation time. The wall atoms are modeled by two (111) planes with the x direction of the lattice along the $[11\bar{2}]$ orientation. For the two-dimensional continuum simulation, there is no flow in the z direction. Therefore, in the atomistic region, the x and z directions are under periodic boundary conditions while the y direction is not.

The constant temperature of the system is maintained by a Langevin thermostat [28, 29], which couples the system to a thermal reservoir through the addition of Gaussian noise and frictional terms. The thermostat is applied only in the z direction, which is perpendicular to the bulk flow direction, and therefore does not influence the bulk flow velocity. The equation of motion for atom $i$ is given by

(2) $m\ddot{y}_i=-\sum_{j\neq i}\frac{\partial\phi}{\partial y_j}-m\Gamma\dot{y}_i+\eta_i$,

where the summation denotes all pair-interaction forces on atom $i$, $\Gamma$ is the damping ratio, and $\eta_i$ denotes a Gaussian-distributed force with zero mean and variance $2mk_BT\Gamma$. We choose the damping ratio of the Langevin thermostat $\Gamma=1.0\tau^{-1}$. Initially, the liquid atoms are arranged on a lattice and each velocity component is drawn from a Gaussian distribution. The Newtonian equations of motion are integrated with the velocity Verlet algorithm with time step $\Delta t=0.005\tau$.
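To make the integration scheme concrete, the sketch below shows one velocity Verlet step with the Langevin friction and noise terms added only along z, as the text specifies. This is a minimal illustration in reduced LJ units and not the authors' code; the conservative force routine `lj_forces`, the discrete noise scaling, and all names are our own assumptions.

```python
import numpy as np

# Reduced LJ units assumed: m = 1, kB*T = 1.1 (since T*kB/eps = 1.1), Gamma = 1/tau.
M_ATOM, KB_T, GAMMA, DT = 1.0, 1.1, 1.0, 0.005

def thermostat_force(vz, rng):
    """Langevin friction plus Gaussian noise on the z component only.
    The stated variance 2*m*kB*T*Gamma is divided by dt for a discrete step."""
    eta = rng.normal(0.0, np.sqrt(2.0 * M_ATOM * KB_T * GAMMA / DT), vz.shape)
    return -M_ATOM * GAMMA * vz + eta

def velocity_verlet_step(x, v, f, lj_forces, rng):
    """One velocity Verlet step; x, v, f are (N, 3) arrays."""
    f = f.copy()
    f[:, 2] += thermostat_force(v[:, 2], rng)      # thermostat z direction only
    v_half = v + 0.5 * DT * f / M_ATOM             # first half kick
    x_new = x + DT * v_half                        # drift
    f_new = lj_forces(x_new)                       # truncated/shifted LJ, Eq. (1)
    f_new[:, 2] += thermostat_force(v_half[:, 2], rng)
    v_new = v_half + 0.5 * DT * f_new / M_ATOM     # second half kick
    return x_new, v_new, f_new
```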
### 2.2.2. Continuum Region
In the continuum region, we model the system with the incompressible Navier-Stokes (NS) equation. The continuity and NS equations are given by

(3) $\nabla\cdot\mathbf{u}=0$, $\quad\dfrac{\partial\mathbf{u}}{\partial t}+\nabla\cdot(\mathbf{u}\mathbf{u})=-\dfrac{1}{\rho}\nabla P+\nu\nabla^2\mathbf{u}$,

where $\mathbf{u}$ is the bulk flow velocity, $P$ is the pressure, and $\nu$ is the kinematic viscosity. These equations are solved numerically with the finite volume method. The density in the continuum region is the same as in the atomistic region, and a periodic boundary condition is applied in the x direction. We solve the two-dimensional NS equation using the PISO algorithm with the icoFoam solver in OpenFOAM [25].
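For intuition about the continuum half, the snippet below advances the reduced one-dimensional form of (3) for plane Couette flow (no pressure gradient, $u=u(y)$), which is the limit the start-up problem of Section 4.1.1 obeys. It is only a sketch under that simplification, not the PISO/icoFoam setup actually used in the paper.

```python
import numpy as np

def couette_diffusion_step(u, nu, dy, dt, u_wall):
    """Explicit update of du/dt = nu * d2u/dy2 on cell-centered velocities u.
    Stable for dt <= dy**2 / (2 * nu)."""
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + nu * dt / dy**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u_new[0] = 0.0        # stationary wall (or the MD-supplied HSI value)
    u_new[-1] = u_wall    # sliding wall
    return u_new
```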
## 2.3. Overlap Region
We categorize the existing configurations of the overlap region into five functional regions: the A→C region, the C→A region, the buffer region, the nonperiodic boundary force region (npbf region), and the mass flow region. The latter two are often unified into the control region. The schematic of the overlap region is shown in Figure 3.
Figure 3
Detailed schematic diagram of the overlap region.
### 2.3.1. A→C Region
The A→C region is located on the outer boundary of the continuum region, centered on the HSI in Figure 3. In this region, the A→C operation transfers the spatial and temporal average of atom velocities to the continuum region as the new velocity boundary. This region can be further divided into bins in the x and y directions to match the mesh cells in the continuum region. For the ith bin, the boundary condition for the continuum velocity $u$ is given by the following average:

(4) $u_i=\left\langle\dfrac{1}{N_i}\sum_{k=1}^{N_i}v_k\right\rangle$,

where $v_k$ is the velocity of the kth atom in bin $i$, $N_i$ is the number of atoms in bin $i$, and the brackets represent the temporal average. The most widely used sampling average methods are SAM and CAM [30]. We use the CAM method in our simulation, with the temporal average performed over $S$ sampling points:

(5) $u_i^{CAM}=\dfrac{\sum_{j=1}^{S}\sum_{k=1}^{N_k}v_k}{\sum_{j=1}^{S}N_k}$.

However, sampling over finite spatial and temporal scales incurs statistical fluctuations. The HSI is the portion of the continuum region boundary that receives data from the atomistic region. Due to the inherent statistical fluctuations in these data, the boundary conditions on the complete continuum region boundary may not exactly conserve mass. The most common remedy [31] is to apply a correction factor as follows:

(6) $\left(v_{HSI}\cdot n\right)_{corrected}=v_{HSI}\cdot n-\dfrac{\int_{\phi}v_{\phi}\cdot n\,dS}{\int_{HSI}dS}$,

where $v_{HSI}$ is the velocity calculated on the HSI using the CAM sampling technique, $n$ is the normal vector to the boundary, $dS$ is an element of the boundary, and $\phi$ is the whole boundary of the continuum region.
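The following sketch illustrates the CAM average of (5) for one bin and the flux correction of (6); the function names and data layout are illustrative assumptions, not the paper's implementation.

```python
# samples: one velocity array per snapshot, holding the velocity component of
# the atoms found in this A->C bin at that snapshot (atom counts may differ).
def cam_average(samples):
    """Eq. (5): sum velocities and atom counts over all S snapshots first and
    divide once, rather than averaging each snapshot separately (as SAM does)."""
    total_v = sum(float(v.sum()) for v in samples)
    total_n = sum(len(v) for v in samples)
    return total_v / total_n

def correct_hsi_velocity(v_hsi_normal, net_boundary_flux, hsi_area):
    """Eq. (6): subtract the net mass flux through the whole continuum region
    boundary, spread uniformly over the HSI, to restore mass conservation."""
    return v_hsi_normal - net_boundary_flux / hsi_area
```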
### 2.3.2. C→A Region
The C→A region transfers the velocities of continuum mesh cells to the atoms located in them through crude constraint Lagrangian dynamics (CCLD) [6] and thereby provides boundary conditions for the atomistic region. Similar to the A→C region, the C→A region can be divided into bins in the x and y directions to match the mesh cells in the continuum region.

CCLD requires that the mean atomistic velocity in bin $i$ equal the average continuum velocity in bin $i$; that is,

(7) $\bar{u}_i=\dfrac{1}{N_i}\sum_{p=1}^{N_i}v_p$,

where $N_i$ is the number of atoms in bin $i$. In this paper, the volume of the C→A region bins is equal to that of the cells in the continuum region. Through CCLD, the new velocities of the atoms located in bin $i$ are

(8) $\dot{x}_j^{\alpha}=v_j^{\alpha}+\xi^{\alpha}\left(u_j^{\alpha}-\dfrac{1}{N_i}\sum_{k=1}^{N_i}v_k^{\alpha}\right)$,

where $\alpha$ is the x, y, or z direction and $\xi$ is the constraint strength. O'Connell and Thompson [6] used a constraint strength of 0.01, while Nie et al. [7, 8] used 1. In this paper, we set $\xi=1.0$ in the x and y directions and $\xi=0$ in the z direction.
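A compact sketch of the CCLD update of (8) for a single bin is given below; it simply shifts each atom velocity toward the continuum cell velocity with per-axis strength $\xi$. The names are our own.

```python
import numpy as np

def ccld_constrain(v_atoms, u_cell, xi=(1.0, 1.0, 0.0)):
    """Eq. (8): v_atoms is (N, 3), u_cell is (3,). With xi = (1, 1, 0) the x
    and y components are constrained while z is left free, as in the text."""
    v_mean = v_atoms.mean(axis=0)                       # (1/N_i) * sum_k v_k
    return v_atoms + np.asarray(xi) * (u_cell - v_mean)
```

Note that with $\xi=1$ the bin mean becomes exactly the cell velocity after one application, which is why the constraint strength controls how strongly the continuum drives the atoms.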
### 2.3.3. Nonperiodic Boundary Force Region
In order to prevent atoms from drifting away from the atomistic region, keep the number of atoms unchanged, and remedy the nonperiodic boundary in the HAC simulation, existing research has proposed many external force models [7, 20, 21, 27]. Figure 4 shows the atoms missing interaction forces near the MD-continuum interface. In this paper, we model an external force for these atoms to alleviate the density fluctuation caused by the nonperiodic boundary condition and the reflecting wall, based on the FoReV algorithm proposed by Issa and Poesio [13].
Figure 4
Missing interaction region due to the nonperiodic boundary at the MD-continuum interface $y_{max}$.

Each atom traveling through $y_{max}$ is reflected back into the atomistic region with the same displacement across the interface and a reversed velocity. This guarantees a constant number of atoms in the atomistic region. We divide the region near the MD-continuum interface into several bins. As illustrated in Figure 5, we sum the total external force experienced by the atoms residing in bin $k$ and apply a feedback force to the atoms in the missing-interaction region.
Figure 5
Detailed schematic of the npbf region residing in the control region.

Firstly, we calculate the average external force experienced by bin $k$; that is,

(9) $F_{bin,k}=\dfrac{\sum_{i\in k,\,j\notin k}F_{ij}}{N_k}$,

where $F_{ij}$ is the interaction force between atoms $i$ and $j$, $N_k$ is the number of atoms in bin $k$, and $F_{bin,k}$ is the average external force experienced by atoms in bin $k$. Through a feedback mechanism, we apply a reversed normal force on each atom residing within the cutoff distance of the MD-continuum interface, which makes $F^y_{bin,k}=0$. This reversed force $-F_{b,k}^{n+1}$ is constructed with simple exponential smoothing, with smoothing parameter $\alpha$:

(10) $F_{b,k}^{n+1}=\alpha F_{bin,k}^{n+1}+(1-\alpha)F_{b,k}^{n}$.

In each MD time step, we calculate $F_{bin,k}$ in the npbf region bins, use (10) to construct the feedback force $F_{b,k}$, and apply it to the associated atoms to remedy the nonperiodic boundary effect.
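The feedback loop of (9)-(10) reduces to a few lines; the sketch below keeps one smoothed force per npbf bin and returns the reversed force to apply. The class and attribute names are assumptions.

```python
import numpy as np

class NpbfFeedback:
    """Exponentially smoothed feedback force per npbf bin, Eqs. (9)-(10)."""

    def __init__(self, n_bins, alpha=1e-4):   # alpha as in Issa and Poesio [13]
        self.alpha = alpha                     # smoothing parameter of Eq. (10)
        self.f_b = np.zeros(n_bins)            # smoothed force F_b per bin

    def update(self, f_bin):
        """f_bin[k] is the bin-averaged y-force on atoms in bin k from atoms
        outside it, Eq. (9). Returns -F_b, the force applied to those atoms."""
        self.f_b = self.alpha * f_bin + (1.0 - self.alpha) * self.f_b
        return -self.f_b
```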
### 2.3.4. Mass Flow Region
Under the incompressible condition, the total number of atoms in the atomistic region should be kept unchanged. In order to simulate the mass flux across the MD-continuum interface, we introduce the mass flow region and use it to handle atom insertion and deletion. The mass flow region can be split into several bins that match the continuum cells. In one continuum time step $\Delta t_{CFD}$, the number of atoms to be inserted into or deleted from mass flow bin $i$ is given by

(11) $n=\dfrac{A_i\,\rho\,u_i^{\alpha}\,\Delta t_{CFD}}{m}$,

where $A_i=(\Delta x\times\Delta z)\cdot\mathbf{n}$ is the bin area normal to the y direction, $\mathbf{n}$ is the face normal vector pointing into the atomistic region, $u_i^{\alpha}$ is the continuum velocity, $\rho$ is the liquid density, and $m$ is the atom mass. If $n$ is negative, atoms are deleted from the mass flow region; if $n$ is positive, atoms are inserted into it. Since an atom is indivisible, the nearest integer is taken and the remaining fraction is carried over to the next insert/delete operation. As described in Section 2.3.3, the reflecting-wall boundary condition prevents atoms from drifting away freely, so the number of atoms in the atomistic region can only be changed through the insert and delete operations.

Atom insertion uses the USHER algorithm [32], which searches a given bin for a point whose potential energy equals the average potential of the bin. The initial insertion point is chosen randomly and then refined by the Newton-Raphson method. Atom deletion removes atoms near the MD-continuum interface $y_{max}$. The main idea of the deletion algorithm is to choose the atoms that are most likely to leave the atomistic region, based on their distance to $y_{max}$ and their velocity normal to $y_{max}$.
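The bookkeeping of (11), with the indivisible-atom remainder carried forward, and the described deletion rule can be sketched as follows; the scoring function is our own simple reading of that rule, not the paper's exact formula.

```python
import numpy as np

class MassFlowBin:
    """Insert/delete counter for one mass flow bin, Eq. (11)."""

    def __init__(self, area, rho, m_atom, dt_cfd):
        self.coef = area * rho * dt_cfd / m_atom
        self.remainder = 0.0                   # leftover fraction of an atom

    def atoms_to_move(self, u_normal):
        """u_normal > 0 means flux into the MD region (insert); < 0, delete."""
        n_exact = self.coef * u_normal + self.remainder
        n_int = int(round(n_exact))            # nearest integer, per the text
        self.remainder = n_exact - n_int       # carry fraction to next step
        return n_int

def deletion_candidates(y, v_y, y_max, k):
    """Pick the k atoms most likely to cross y_max: close to the interface
    and moving toward it (illustrative score, smaller = more likely)."""
    score = (y_max - y) - v_y
    return np.argsort(score)[:k]
```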
### 2.3.5. Buffer Region
The buffer region is located between the A→C region and the C→A region, or between the control region and the C→A region. We use it to alleviate the fluctuations caused by the artificial operations on atoms. In the buffer region, there is no atom insertion or deletion and no constraint on atom velocities. The buffer region should be kept as narrow as possible to limit the computational load, yet wide enough to relax the results of the atomistic region and the continuum region before they are coupled together. We discuss how to configure the buffer region in later sections.
## 2.4. Temporal Coupling
There are three time variables to consider in the HAC simulation [33]: the integration time step of the Newtonian equations $\Delta t_{MD}$, the integration time step of the Navier-Stokes equation $\Delta t_{CFD}$, and the sampling average time $\Delta t_{ave}$. As mentioned before, $\Delta t_{MD}$ is chosen as $0.005\tau$ in this paper. In order to solve the NS equation accurately, firstly, $\Delta t_{CFD}$ must be far less than the characteristic time on the mesh, that is, the diffusion time $\rho\Delta x\Delta y/\mu$, where $\mu$ is the dynamic viscosity and $\mu=2.14\,\epsilon\tau\sigma^{-3}$. Secondly, it must be larger than the decay time of the velocity autocorrelation function, $t_{vv}=0.14\rho^{-1}T^{-1/2}$, so as to reduce thermal noise at the HSI [6]. Finally, it should satisfy the CFL condition [34], $u_{flow}\Delta t_{CFD}<\Delta x/2$. The CFD time step chosen in this paper is $\Delta t_{CFD}=M\times\Delta t_{MD}=0.5\tau$, where $M=100$. The time advancing mechanism of the HAC simulation, a sequential coupling scheme, is shown in Figure 6.
Figure 6
Time coupling and advancing mechanism in the current HAC method. One coupling cycle includes four steps: the CFD advancing, the C→A operation, the MD advancing, and the A→C operation.

In each coupling time step, the CFD solver advances a certain time, for example, one $\Delta t_{CFD}$, and transfers continuum velocities to the atomistic region through CCLD in the C→A operation. Then, MD advances the same time interval, that is, $M\times\Delta t_{MD}$, samples and averages the atom velocities, and passes them to the continuum region in the A→C operation, thus finishing one cycle of the HAC simulation.

Admittedly, due to the small time step $\Delta t_{MD}$, the HAC simulation of a benchmark case still takes a long time. However, the efficiency of the HAC method should be judged against a full MD simulation of a benchmark case at the same scale. The HAC method applies atomistic simulation in only part of the simulation domain, so it is much more efficient than a full MD simulation of the whole domain.
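The sequential cycle of Figure 6 can be summarized as the skeleton below; `cfd` and `md` stand for the two solvers, and every method name is a placeholder, since the paper does not publish code.

```python
M = 100  # MD steps per CFD step, so dt_CFD = M * dt_MD = 0.5 tau

def hac_cycle(cfd, md, cam_average_bins):
    """One coupling cycle: CFD advance, C->A, MD advance, A->C."""
    cfd.advance(n_steps=1)                     # 1) CFD advances one dt_CFD
    u_cells = cfd.velocities_in_ca_region()
    md.apply_ccld(u_cells)                     # 2) C->A via CCLD, Eq. (8)
    samples = []
    for _ in range(M):                         # 3) MD advances M * dt_MD
        md.step()
        samples.append(md.sample_ac_bins())
    u_bins = cam_average_bins(samples)         # CAM average per bin, Eq. (5)
    cfd.set_hsi_boundary(u_bins)               # 4) A->C: new HSI boundary
```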
## 3. Mutable Parameters in Coupling Strategies
Concentrating on how different coupling strategies affect the accuracy and efficiency of the HAC simulation, in this section we embody the coupling strategies in the following parameters: the configuration of the functional regions in Section 3.1, the variables of the data exchanging operation in Section 3.2, and the coefficients of the nonperiodic boundary force model in Section 3.3.
### 3.1. Parameters of Functional Regions: Layer Number and Layer Width
In the HAC simulation, the physical results of the atomistic region and the continuum region should remain consistent in the overlap region. A valid configuration of the overlap region exchanges data between the continuum solver and the atomistic solver correctly and alleviates the spurious effects of the artificial operations. The width of each component of the overlap region also affects the simulation accuracy. Therefore, the configuration of the functional regions must be carefully designed and tested.

Generally speaking, the overlap region should be located at a certain distance away from the solid wall. In Section 2.3, we described five functional regions that carry out the data coupling. The existing research uses different combinations of the functional regions and different widths for them. We summarize the layer number (LN) and the layer width (LW) in Table 1.
Table 1
Configurations of functional regions in the overlap region.
| Research | LN | Configuration details (region) | LW |
| --- | --- | --- | --- |
| O'Connell and Thompson 1995 [6] | 3 | npbf, A→C, C→A | Equal |
| Nie et al. 2004 [7] | 3 | control (npbf, mass flow), A→C, C→A | Equal |
| Zhou et al. 2014 [27] | 3 | control (npbf, mass flow, C→A), buffer, A→C | Equal |
| Cosden and Lukes 2013 [18] | 4 | mass flow, C→A, buffer, A→C | Equal |
| Sun et al. 2010 [19] | 4 | depress (extending to the continuum region, npbf), C→A, buffer, A→C | Equal |
| Yen et al. 2007 [8] | 5 | depress (extending to the continuum region, npbf), buffer, C→A, buffer, A→C | Equal |
| Ko et al. 2010 [14] | 5 | control (npbf, mass flow), buffer, C→A, buffer, A→C | Equal |

For completeness of the comparison, we add an alternative configuration with four layers: the control region (the npbf region and the mass flow region), the buffer region, the C→A region, and the A→C region. Configurations in Table 1 that contain a depress region extending into the continuum region are not considered in the current comparison. Note also that the functional regions have equal widths in the previous research.

In this paper, the mass flow region is located at the top of the atomistic region, near the MD-continuum interface, and its width equals one CFD mesh cell width. The npbf region is located within a cutoff distance of the MD-continuum interface. The widths of these two regions remain unchanged.

For the C→A region, the larger its width, the stronger the control exerted on the atomistic region by the continuum region, and the more pronounced the effect of CCLD on the data exchange in this region. The function of the buffer region is to alleviate the fluctuations between the other kernel functional regions; increasing its width increases the computational load, so it is better to minimize its width or remove it. For the A→C region, because of the statistical thermal fluctuations in the atomistic region, its extent markedly influences the sampling average results and thus its capability to provide accurate boundary conditions to the continuum region.

On the premise of an accurate HAC simulation, different combinations of functional regions and different widths may influence the simulation results. In Section 4.2, we test and discuss such influences and give suggestions on how to configure the overlap region properly.
### 3.2. Parameters of the Data Exchanging Operation: N and ex_steps
In Section 3.1, we discussed the spatial average in the A→C region; in this section we consider the temporal average parameters of the A→C operation. Generally speaking, in order to correctly extract the macroscopic velocity, averages are taken over microscopic variables within a control volume and over time, which is called the binning method [35]. Specifically, in our HAC model, the control volume is bin $i$ in the A→C region and the averaging time interval for the continuum boundary extraction is $\Delta t_{ave}$. Exchanging sampled atomistic data with the continuum solver promptly and correctly is a kernel operation of the HAC method. Note that a temporal average over all sample points introduces time-lagged boundary results and does not exchange the coupling information promptly. Because of the cost of the sampling operations, we should also ask whether reducing the number of coupling data exchange operations could further reduce the sampling times and the computational load within acceptable errors.

There are several parameters involved in the temporal average operation:

(i) ex_steps: the number of CFD time steps corresponding to one HAC data exchange operation $\Delta t_{ave}$, with $\Delta t_{ave}=ex\_steps\times\Delta t_{CFD}$;

(ii) N: the number of MD sampling points in one data exchange operation;

(iii) interval: the interval between sampling points in one data exchange operation.

In the previous research [7, 8], data was exchanged between the continuum region and the atomistic region in each CFD time step, and the temporal average was performed over all $M$ time steps in one $\Delta t_{CFD}$; that is, $ex\_steps=1$, $N=M$, and $interval=1$. In Section 4.3, we discuss and test the proper sampling number $N$ and exchange period $ex\_steps$ for one data exchange operation and provide guidance on the macroscopic velocity extraction; a short scheduling sketch follows.
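One plausible reading of how these three parameters interact is sketched below: each exchange window spans ex_steps × M MD steps, and the N samples are taken interval steps apart at the end of the window, so the freshest data enters the average. This scheduling function is our own illustration, not code from the paper.

```python
def sample_steps(ex_steps, N, interval, M=100):
    """MD step indices (within one exchange window) at which to sample."""
    window = ex_steps * M                      # MD steps per data exchange
    steps = [window - 1 - i * interval for i in range(N)]
    return sorted(s for s in steps if s >= 0)

# ex_steps=1, N=M, interval=1 reproduces the all-steps averaging of [7, 8];
# sample_steps(1, 32, 1) keeps only the 32 samples nearest the exchange.
```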
### 3.3. Parameters of the Nonperiodic Boundary Force Model: α and β
Because of the missing interactions with absent atoms, the atoms near the nonperiodic boundary exhibit unphysical effects: the nonperiodic boundary renders the density inhomogeneous and the force nonuniform near the MD-continuum interface. In existing HAC models, the nonperiodic boundary is usually remedied with an external force model applied to the atoms near the boundary. This remedy maintains the correct pressure in the atomistic region and alleviates the density fluctuation.

We construct the npbf region within a cutoff radius of the MD-continuum interface. We then refine the npbf region into npbf bins of width $bin_y^{npbf}$ based on the CFD mesh width $L_y^{cell}$ in the y direction; that is, $L_y^{cell}=\beta\times bin_y^{npbf}$, where $\beta$ is the refinement parameter. Issa and Poesio [13] only provide a nonperiodic boundary force model with an empirical smoothing parameter $\alpha=1\times10^{-4}$. A proper combination of the smoothing parameter $\alpha$ and the refinement parameter $\beta$ that minimizes the disturbance of the local liquid structure near the MD-continuum interface is therefore worth investigating. In Section 4.4, we provide numerical verification of different combinations of $\alpha$ and $\beta$ and discuss their appropriate values.
## 4. Results and Discussion
In this section, we use the model Couette flow with either the no-slip or the slip boundary condition to examine how different parameter choices in the coupling strategies affect the HAC simulation. We run the simulations on a high-performance cluster in which each computing node contains 12 Intel Xeon E5-2620 2.10 GHz CPU cores and 16 GB of main memory [36]. Section 4.1 validates the correctness of our HAC solver; Section 4.2 tests the effects of different spatial configurations of the overlap region on a Couette flow; Section 4.3 investigates the effects of different temporal data exchanging strategies on the model flow; and Section 4.4 discusses the proper combination of parameters in the nonperiodic boundary force model.
### 4.1. Verification and Efficiency
In this section, two benchmark cases are carried out to test the validity and practical performance of our HAC solver by comparison with either the analytical solution or the full MD results.
#### 4.1.1. No-Slip Sudden Start Couette Flow
We first consider the typical no-slip sudden-start Couette flow proposed by O'Connell and Thompson [6], as shown in Figure 1. The analytical solution for a sudden-start Couette flow is given by

(12) $u(y,t)=\dfrac{u_{wall}\,y}{L_y}+\dfrac{2u_{wall}}{\pi}\sum_{n=1}^{\infty}\dfrac{\cos n\pi}{n}\sin\left(\dfrac{n\pi y}{L_y}\right)\exp\left(-\dfrac{\nu n^2\pi^2 t}{L_y^2}\right)$,

where $L_y$ is the distance between the two walls, $u_{wall}$ is the sliding velocity, and $\nu$ is the kinematic viscosity. We compare the velocity profiles from our HAC model with this analytical solution.

The height of the channel is $H=44\sigma$, as used by Yen et al. [8]. The sliding velocity is $U_w=1.0\sigma/\tau$ at the upper wall, $y=L_y$. The simulation domain is split into the atomistic region near the lower wall and the continuum region near the upper wall. In this case, the no-slip boundary is set for the sliding wall in the continuum region as well as for the stationary wall in the atomistic region. The wall-fluid interaction parameters $\sigma_{wf}/\sigma=1.0$, $\epsilon_{wf}/\epsilon=0.6$, and $\rho_{wf}/\rho=1.0$ are employed, as used by Thompson and Troian [29]. The heights of the pure continuum region, the pure MD region, and the overlap region are $24\sigma$, $12\sigma$, and $8\sigma$, respectively, and the combination of functional regions has four layers: the control region, the C→A region, the buffer region, and the A→C region. The continuum region is divided into $n_x\times n_y=5\times16$ cells for the numerical calculation, that is, $\Delta x\times\Delta y=3\sigma\times2\sigma$. The three-dimensional MD region is likewise divided into bins matching the cells in the continuum region, that is, $n_x\times n_y\times n_z=5\times11\times1$ bins, with $10\sigma$ in the z direction.

Initially, the mean fluid velocity is zero in the whole simulation domain. At $t=0$, the upper wall in the continuum region begins to move at $U_w=1.0\sigma/\tau$, while the lower stationary wall in the atomistic region remains still. The results are averaged over the five time intervals indicated in Figure 7. The transient velocity profiles of our HAC model match the analytical solution well, especially in the overlap region, and the steady-state profile is linear as expected. This case therefore demonstrates the correctness of the proposed HAC model.
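For reference, the series of (12) is straightforward to evaluate; the snippet below truncates it at n_max terms and is only meant to illustrate how the analytical curves of Figure 7 can be generated.

```python
import numpy as np

def couette_analytical(y, t, u_wall, L_y, nu, n_max=200):
    """Transient Couette profile of Eq. (12), truncated at n_max modes."""
    y = np.atleast_1d(np.asarray(y, dtype=float))
    n = np.arange(1, n_max + 1)[:, None]       # column of mode numbers
    series = (np.cos(n * np.pi) / n) * np.sin(n * np.pi * y / L_y) \
             * np.exp(-nu * n**2 * np.pi**2 * t / L_y**2)
    return u_wall * y / L_y + 2.0 * u_wall / np.pi * series.sum(axis=0)
```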
Figure 7
Velocity evolution profiles averaged over five time intervals compared with the analytical solution.

We also compare the efficiency of our HAC method with a full MD simulation of this benchmark case in serial mode. The detailed simulation time consumption is listed in Table 2. The total simulation time of the HAC method is only 39.25% of that of the full MD simulation, which demonstrates considerable efficiency.
Table 2
Detailed time consumption for the HAC method and the full MD simulation.
| Method | MD domain (σ³) | Number of particles | CFD domain (σ³) | Time (s) |
| --- | --- | --- | --- | --- |
| HAC | 15×20×10 | 2592 | 15×32×10 | 3649.32 |
| Full MD | 15×44×10 | 5616 | — | 9295.87 |
#### 4.1.2. Slip Sudden Start Couette Flow
The second case is the slip sudden-start Couette flow, with wall-fluid interaction parameters $\sigma_{wf}/\sigma=0.75$, $\epsilon_{wf}/\epsilon=0.6$, and $\rho_{wf}/\rho=4.0$, as used by Thompson and Troian [29]. The velocity of the sliding wall is $U_w=1.0\sigma/\tau$. In this case, the height of the channel is $H=44\sigma$, and there are two MD regions, near the sliding wall and near the stationary wall, respectively, while the continuum region occupies the middle part of the channel, as shown in Figure 2. The heights of the two MD regions and the continuum region are $22\sigma$ and $20\sigma$, respectively. The partitions of cells and bins are the same as in the previous case. We run ten independent realizations of the same system with the same configuration, for both the HAC simulation and the full MD simulation, to better suppress thermal noise, as in Nie et al. [7]. The comparison of the evolving velocity profiles predicted by our HAC simulation and the full MD is presented in Figure 8. As we can see, the results of the two solutions agree quite well with each other, with a small discrepancy during the evolution that diminishes at the final steady state.
Figure 8
Velocity evolution profiles averaged over four time intervals compared with the full MD results.
### 4.2. Effects of Functional Region Configurations on Couette Flows
Based on the discussion of the mutable parameters in Section 3.1, we test and discuss here the effects of different combinations and different widths of the functional regions on the convergence and accuracy of the Couette flow simulation. The channel height of the Couette flow is $H=100\sigma$ and the sliding velocity is $U_w=1.0\sigma/\tau$ for both the no-slip and slip boundary conditions, as shown in Figures 1 and 2, while the other test conditions are the same as those in Sections 4.1.1 and 4.1.2.
#### 4.2.1. Effects of Different Functional Region Combinations on Couette Flows
In Section 3.1, we summarized the existing configurations of the overlap region. Five of the combinations are presented in Table 3 for comparison of their influence on the Couette flow simulation.
Table 3
Details of the different combinations of functional regions to be compared.
| Type of combination | Detailed combination of functional regions |
| --- | --- |
| 3-layer-1 | control (npbf, mass flow), A→C, C→A |
| 3-layer-2 | control (npbf, mass flow, C→A), buffer, A→C |
| 4-layer-1 | control (npbf, mass flow), C→A, buffer, A→C |
| 4-layer-2 | control (npbf, mass flow), buffer, C→A, A→C |
| 5-layer | control (npbf, mass flow), buffer, C→A, buffer, A→C |

Firstly, we consider the tests with the no-slip boundary condition, with the same domain decomposition as in Section 4.1.1 and with the channel height extended to $100\sigma$. The width of each functional region is $2\sigma$ in the y direction; therefore, the total width of the overlap region in these five combinations ranges from $6\sigma$ to $10\sigma$. The cumulative mean error between the results of the HAC simulation and the analytical solution is defined as

(13) $err=\dfrac{1}{N}\sum_{i=1}^{N}\dfrac{\left|U_i^{HAC}-U_i^{analysis}\right|}{U_i^{analysis}}$,

where $N$ is the number of points in each resulting profile, equal to 50 when $H=100\sigma$. Since this is a time-stepping simulation, we take five time intervals into account. In order to clearly distinguish these five combinations, we depict the results of the five intervals separately and average the total difference over these intervals to obtain the average deviation among the configurations, using the first combination, "3-layer-1," as the baseline, as shown in Figure 9. In the following sections, we focus on the minimum average deviation, that is, the last panel in Figure 9, and use the same kind of data conversion to clearly explain the distinctions among different configurations.
Figure 9
Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

In these plots we find that, at the beginning of the HAC simulation, the results in the first three intervals deviate greatly from the analytical solution because of the thermal noise in the atomistic region. This deviation vanishes as the simulation reaches the final steady state.

When the width of each functional region is $2\sigma$ and the width of the overlap region varies from $6\sigma$ to $10\sigma$, the "5-layer" combination obtains the minimum average deviation among the five. This is because "5-layer" is configured with the most atoms and the widest overlap region, $10\sigma$. In such a combination, the fluctuations between the results of the two different numerical methods can be sufficiently alleviated, in line with the conclusion given by Yen et al. [8].

Secondly, we consider the slip boundary condition for the above five combinations, with parameters the same as those of Section 4.1.2. The slip length is defined as [37]

(14) $L_s=\dfrac{U_w}{(\partial u/\partial n)_w}$.

For our model Couette flow, the simplified form of (14) is given by [29]

(15) $L_s=\dfrac{U_w}{\dot{\gamma}}-\dfrac{H}{2}$,

where $\dot{\gamma}$ is the shear rate and $H$ is the channel height. We consider the slip lengths at the stationary wall, $L_{stationary}$, and at the sliding wall, $L_{sliding}$, once the simulations reach the steady state, and define the relative errors between the HAC simulation results and the full MD results as

(16) $err_{stationary}=\dfrac{\left|L_{stationary}^{HAC}-L_{stationary}^{fullMD}\right|}{L_{stationary}^{fullMD}}, \quad err_{sliding}=\dfrac{\left|L_{sliding}^{HAC}-L_{sliding}^{fullMD}\right|}{L_{sliding}^{fullMD}}$,

and the unified relative error between them as

(17) $err_{HAC}=\dfrac{err_{stationary}+err_{sliding}}{2}$.

The full MD results for the slip length at the stationary wall and the sliding wall are $L_{stationary}=2.6409\sigma$ and $L_{sliding}=2.3619\sigma$, respectively, and the ideal slip length calculated by (15) is $L_s=2.5014\sigma$.

The unified relative errors of $L_{stationary}$ and $L_{sliding}$ are shown in Table 4. The HAC results of all five combinations differ from the full MD results, while the "5-layer" combination obtains the minimum error of 0.3735, consistent with the conclusion under the no-slip boundary condition.
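Both accuracy metrics used in this section are short computations in practice; the sketch below evaluates the cumulative mean error of (13) and the unified slip-length error of (16)-(17). The input conventions are our own.

```python
import numpy as np

def cumulative_mean_error(u_hac, u_analytic):
    """Eq. (13): mean relative deviation over the N profile points
    (points where the analytical velocity is zero must be excluded)."""
    return float(np.mean(np.abs(u_hac - u_analytic) / np.abs(u_analytic)))

def unified_slip_error(l_hac, l_full_md):
    """Eqs. (16)-(17): l_hac and l_full_md are (L_stationary, L_sliding)
    pairs; returns the average of the two relative errors."""
    errs = [abs(h - f) / f for h, f in zip(l_hac, l_full_md)]
    return sum(errs) / 2.0
```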
Table 4
Slip length prediction of five combinations and unified relative errors with different widths of the overlap region.
| Situation | 3-layer-1 | 3-layer-2 | 4-layer-1 | 4-layer-2 | 5-layer |
| --- | --- | --- | --- | --- | --- |
| $L_{stationary}$ | 4.3012σ | 4.8011σ | 4.0739σ | 4.9841σ | 4.0947σ |
| $L_{sliding}$ | 2.6735σ | 1.7497σ | 3.2287σ | 3.5236σ | 2.826σ |
| $err_{HAC}$ | 0.3804 | 0.5386 | 0.4548 | 0.68955 | 0.3735 |

In the above two tests, the widths of the overlap regions differ between combinations. Next, we carry out another two tests, under the no-slip and slip boundary conditions, with the width of the overlap region fixed at $10\sigma$. The functional regions within each combination still have equal widths; therefore, the width of each functional region is $3.33\sigma$ in the "3-layer" combinations, $2.5\sigma$ in the "4-layer" combinations, and $2\sigma$ in the "5-layer" combination.

Using the same data processing method as in the previous test, we depict the cumulative mean errors of the five combinations and the average deviation, as shown in Figure 10.
Figure 10
Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

From Figure 10, the "4-layer-1" combination achieves the minimum average deviation among the five, and also a better time-stepping performance than the "5-layer" combination. We also test the slip boundary condition, and the relative errors of the slip length are listed in Table 5. The "4-layer-1" combination shows the smallest deviation from the full MD results, with an error of 0.3599.
Table 5
Slip length prediction of five combinations and unified relative errors with the fixed width of the overlap region.
| Situation | 3-layer-1 | 3-layer-2 | 4-layer-1 | 4-layer-2 | 5-layer |
| --- | --- | --- | --- | --- | --- |
| $L_{stationary}$ | 4.8303σ | 4.7316σ | 4.1067σ | 4.4892σ | 4.0947σ |
| $L_{sliding}$ | 2.9541σ | 3.8346σ | 2.7511σ | 3.2528σ | 2.826σ |
| $err_{HAC}$ | 0.53985 | 0.7076 | 0.3599 | 0.53855 | 0.3735 |

Based on the above four tests, we draw the following conclusions. When the functional regions have equal widths and the width of the overlap region varies, the "5-layer" combination obtains the best accuracy with the minimum average deviation; when the width of the overlap region is fixed, the "4-layer-1" combination is the best of the five configurations, as it gives the functional regions a reasonable width and alleviates the fluctuations sufficiently. Furthermore, we conclude that it is reasonable to place a buffer region between the C→A region and the A→C region, which effectively relaxes the data exchange between the continuum region and the atomistic region, while the buffer region between the control region and the C→A region can be omitted to relieve the computational load.
#### 4.2.2. Effects of Different Widths of Functional Regions on Couette Flows
In Section 4.2.1, the widths of the functional regions were equal; in this section we discuss the effects of different widths of the functional regions on the Couette flow simulation. Following the results of the previous section, we use the "4-layer-1" type of combination to deploy the overlap region with a fixed total width, but widen the C→A region, the buffer region, and the A→C region separately. The settings of the different widths are listed in Table 6. We choose another two situations from Section 4.2.1 for comparison.
Table 6
Configuration situations of different widths of the functional regions.
| Situation | C→A region | buffer region | A→C region |
| --- | --- | --- | --- |
| wider C→A | 4σ | 2σ | 2σ |
| wider buffer | 2σ | 4σ | 2σ |
| wider A→C | 2σ | 2σ | 4σ |
| compare-1 | 2σ | 2σ | 2σ |
| compare-2 | 2.5σ | 2.5σ | 2.5σ |

We run the tests under the no-slip and slip boundary conditions and compare the simulation accuracy and convergence of these five situations. Under the no-slip boundary condition, the cumulative mean errors and the average deviation are depicted in Figure 11.
Figure 11
Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

Figure 11 shows that, under the no-slip boundary condition, "wider C→A" achieves a better accuracy than "compare-2." For the slip boundary condition, the relative errors are listed in Table 7; "wider C→A" again achieves the best accuracy among the five situations, with an error of 0.30942.
Table 7
Slip length prediction of five situations and unified relative errors with different widths of the functional regions.
| Situation | wider C→A | wider buffer | wider A→C | compare-1 | compare-2 |
| --- | --- | --- | --- | --- | --- |
| $L_{stationary}$ | 4.0375σ | 5.2954σ | 4.9102σ | 4.0739σ | 4.1067σ |
| $L_{sliding}$ | 2.1705σ | 3.2045σ | 1.7241σ | 3.2287σ | 2.7511σ |
| $err_{HAC}$ | 0.30942 | 0.680925 | 0.5647 | 0.4548 | 0.3599 |

In our model flow under the slip boundary condition, the atomistic regions provide the more accurate boundary conditions for the continuum region, while the continuum region merely serves as a transmission container that transfers data between the two atomistic regions. In "wider C→A," enlarging the C→A region gives CCLD more atoms to act on. Therefore, the continuum region of "wider C→A" transfers data more effectively, and "wider C→A" predicts the slip length with the minimum error.

From the above two tests, "wider C→A," which enlarges the C→A region, performs best both in matching the analytical solution and in predicting the slip length. We conclude that, for a fixed width of the overlap region, widening the C→A region improves the simulation accuracy.
### 4.3. Effects of Sampling Parameters on Couette Flows
In this section, we test different groups of the sampling and averaging parameters introduced in Section 3.2, under the no-slip and slip boundary conditions, with channel height $H=100\sigma$ as shown in Figures 1 and 2.
#### 4.3.1. Effects of Different Sampling Number on Couette Flows
We first set $ex\_steps=1$ and $interval=1$ and vary the sampling number $N$ under the no-slip and slip boundary conditions. The parameter list is shown in Table 8.
Table 8
Parameter list of different sampling numbers.
| Situation | N-1 | N-2 | N-4 | N-8 | N-16 | N-32 | N-64 | compare |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| N | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 100 |

The first test is under the no-slip boundary condition; the cumulative mean errors during the five time intervals and the average deviation are depicted in Figure 12. The "N-32" situation, with $N=32$ sampling points taken close to the data exchange, obtains the minimum cumulative mean error. The second test is performed under the slip boundary condition, and the relative errors are listed in Table 9.
Table 9
Slip length prediction of eight situations and unified relative errors with different sampling numbers.
| Situation | N-1 | N-2 | N-4 | N-8 | N-16 | N-32 | N-64 | compare |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $L_{stationary}$ | 4.1917σ | 4.7359σ | 4.4798σ | 4.1362σ | 4.1975σ | 3.8371σ | 4.4089σ | 4.1067σ |
| $L_{sliding}$ | 3.0493σ | 2.459σ | 3.1436σ | 3.1479σ | 3.1785σ | 2.751σ | 2.7515σ | 2.7511σ |
| $err_{HAC}$ | 0.43915 | 0.4172 | 0.51365 | 0.4495 | 0.46755 | 0.30885 | 0.4172 | 0.3599 |

Figure 12
Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

As the conclusion of Section 4.2.2 suggests, in the current model flow, sampling points taken closer to the moment of data exchange carry more relevant information for the simulation evolution. Therefore, "N-32" provides better data exchange boundary conditions than the other situations while still ensuring a sufficient number of sampling points.

From these two tests, we find that "N-32," which provides 32 sampling points, achieves the best performance, with the smallest error of 0.30885, extracts the most useful information from the atomistic region, and also reduces the number of sampling operations.
#### 4.3.2. Effects of Different Data Exchanging Times on Couette Flows
The number of data exchange steps $ex\_steps$ determines the total number of C→A and A→C operations in one HAC simulation. In this section, we discuss the proper frequency of data exchange.

Firstly, we vary $ex\_steps$ while sampling and averaging over all MD time steps in one $\Delta t_{ave}$, under the no-slip boundary condition, to check the effect of $ex\_steps$ on the Couette flow. The parameters are listed in Table 10.
Table 10
Parameter list of different data exchanging times and sampling numbers.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare ex_steps 2 4 8 16 32 1 N 200 400 800 1600 3200 100In this section, the time intervals are multiples of the data exchanging time, and we consider four time intervals in this test. The cumulative mean errors andaverage deviation are plotted in Figure 13.Figure 13
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and final average deviation.We can figure out that, when theex_steps changes, the accuracy of the first five situations is lower than the last one with ex_steps=1. In Figure 13, the accuracy of “ex-2” and “ex-4” is very much closer to “compare” with only 1% difference, while the total times of data exchanging operations are a half or a quarter of the last one.Secondly, we test the slip boundary condition, and the results are listed in Table11 with different sampling numbers; “compare” reaches the highest accuracy, but “ex-2” and “ex-4” have only deviated from “compare” with a percentage of 5, that is, 0.36691 and 0.3993, respectively.Table 11
Slip length prediction of six situations and unified relative errors with differentex_steps and N.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare Lstationary 4.092σ 4.3663σ 4.5941σ 5.2711σ 4.5607σ 4.1067σ Lsliding 2.7973σ 2.708σ 2.3235σ 2.0588σ 2.5892σ 2.7511σ errHAC 0.36691 0.39994 0.37795 0.5621 0.41155 0.3599In the above two tests, the total number of sampling points is different. As we have discussed in Section4.3, sampling number with N=32 performs better than all MD time steps averaging. Next, we take another two cases into account with the same sampling points. The parameters are shown in Table 12.Table 12
Parameter list of different exchanging times but of equal sampling number.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare ex_steps 2 4 8 16 32 1 N 32 32 32 32 32 32We perform the tests under the no-slip and slip boundary conditions and plot the normalized errors under the no-slip boundary condition in Figure14 and the relative errors under the slip boundary condition in Table 13.Table 13
Slip length prediction of six situations and unified relative errors with differentex_steps and equal N.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare Lstationary 3.8645σ 3.8896σ 4.8903σ 4.1258σ 4.3519σ 3.8371σ Lsliding 2.8012σ 2.7598σ 2.2585σ 2.6894σ 3.4859σ 2.751σ errHAC 0.32645 0.32065 0.4478 0.3505 0.5619 0.30885Figure 14
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and final average deviation.From Figure14 and Table 13, we can see that the first two situations perform better than “ex-8,” “ex-16,” and “ex-32” but worse than the last one under the no-slip boundary condition as well as the slip boundary condition. But the first two situations are worse than the last situation only by 1% and 2% in these two tests with error of 0.32645 and 0.32065, respectively.From the results of all the tests, we can conclude that timely data exchanging between the atomistic region and the continuum region performs better than otherex_steps settings. But with acceptable errors, we can choose a half or a quarter of the times of data exchanging to reduce the computational load. Therefore, “ex-2” and “ex-4” can be better choices.
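To make the interplay of ex_steps and N concrete, here is a schematic coupling loop under our reading of the parameters: MD advances in averaging periods of 100 steps, and the A→C/C→A exchange fires only every ex_steps periods, averaging the last N accumulated samples. All functions other than the averaging are trivial stand-ins for the solver's LAMMPS/OpenFOAM calls, so the sketch runs but does no physics.

```python
import numpy as np

# Trivial stand-ins so the sketch runs; the real solver calls
# LAMMPS (MD side) and OpenFOAM (continuum side) here.
def advance_md_one_step(): pass
def sample_bins(): return np.zeros(5)    # instantaneous bin velocities
def advance_continuum(): pass
def a_to_c(u_bins): pass                 # impose MD averages on HSI cells
def c_to_a(): pass                       # constrain atoms in the C->A region

def run_hac(n_periods, ex_steps, n_samples, md_steps_per_period=100):
    """Schematic HAC driver: exchange data only every ex_steps averaging
    periods, averaging the last n_samples MD samples accumulated since
    the previous exchange."""
    samples = []
    for period in range(1, n_periods + 1):
        for _ in range(md_steps_per_period):
            advance_md_one_step()
            samples.append(sample_bins())
        advance_continuum()
        if period % ex_steps == 0:       # exchange every ex_steps periods
            u_exchange = np.mean(samples[-n_samples:], axis=0)
            a_to_c(u_exchange)
            c_to_a()
            samples.clear()

run_hac(n_periods=4, ex_steps=2, n_samples=200)  # Table 10's "ex-2"
run_hac(n_periods=4, ex_steps=2, n_samples=32)   # Table 12's "ex-2"
```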
### 4.4. Effects of Parameters of the Nonperiodic Boundary Force Model on Couette Flows
The effects of the smoothing parameter α and the finer parameter β on the Couette flow simulation are discussed in this section. We use two tests, under the no-slip and slip boundary conditions shown in Figures 1 and 2, to measure these effects. The combinations of parameters are listed in Table 14. These 16 situations are tested to pick out the combination that causes the least disturbance to the local liquid structure near the MD-continuum interface.

Table 14. Combinations of parameters in the nonperiodic boundary force model with 16 situations.

| Situation | α | β |
|---|---|---|
| STNI-1 | 1×10⁻⁴ | 20 |
| STNI-2 | 0.001 | 20 |
| STNI-3 | 0.01 | 20 |
| STNI-4 | 0.1 | 20 |
| STNI-5 | 1×10⁻⁴ | 10 |
| STNI-6 | 0.001 | 10 |
| STNI-7 | 0.01 | 10 |
| STNI-8 | 0.1 | 10 |
| STNI-9 | 1×10⁻⁴ | 5 |
| STNI-10 | 0.001 | 5 |
| STNI-11 | 0.01 | 5 |
| STNI-12 | 0.1 | 5 |
| STNI-13 | 1×10⁻⁴ | 1 |
| STNI-14 | 0.001 | 1 |
| STNI-15 | 0.01 | 1 |
| STNI-16 | 0.1 | 1 |

Figure 15 shows the normalized cumulative mean errors under the no-slip boundary condition. Figures 16 and 17 show the relative errors of the slip length and of the density near the MD-continuum interface under the slip boundary condition.

Figure 15. Differences among the sixteen situations using the results of "STNI-1" as the normalized base (a) and summation errors over the five time intervals (b).

Figure 16. Relative errors of the slip length under the 16 situations.

Figure 17. Relative errors of the density under the 16 situations; the base density is ρσ³=0.81.

Figure 17 shows that, as the finer parameter β decreases, the density near the MD-continuum interface drifts increasingly away from the base, and "STNI-4" achieves the minimum disturbance to the local liquid structure. In Figure 15, "STNI-4" matches the analytical solution best, and in Figure 16 "STNI-4" predicts the slip length better than all other situations except "STNI-6" and "STNI-10", which differ by 4%.

Based on the above analysis, we conclude that, in our current model flow, the smoothing parameter α and the finer parameter β can be chosen as 0.1 and 20, respectively, providing the best accuracy and prediction capability.
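The paper's nonperiodic boundary force model is specified earlier, in Section 2, and is not restated here. Purely as an illustration of how α and β enter, the sketch below assumes a FoReV-style running average (our assumption, not the authors' exact formula), in which α weights the newest force measurement and the finer parameter β sets how many sublayers the boundary region is divided into.

```python
import numpy as np

def smooth_boundary_force(f_avg, f_inst, alpha=0.1):
    """ASSUMED FoReV-style relaxation: f <- (1 - alpha)*f + alpha*f_inst.
    A small alpha smooths heavily; alpha = 0.1 performed best in
    Section 4.4. The arrays hold one mean force per sublayer."""
    return (1.0 - alpha) * f_avg + alpha * f_inst

beta = 20                               # 'finer' parameter: sublayer count
f_avg = np.zeros(beta)                  # running mean force per sublayer
f_inst = np.linspace(0.0, 1.0, beta)    # placeholder instantaneous forces
f_avg = smooth_boundary_force(f_avg, f_inst, alpha=0.1)
```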
## 5. Conclusions
In this paper, we design a domain decomposition type of hybrid MD-continuum solver using the open source software LAMMPS and OpenFOAM, and we use Couette channel flow as our model flow to investigate coupling strategy issues. For a fixed channel height and sliding velocity, under the no-slip and slip boundary conditions, we analyze in depth different combinations and widths of the functional regions, different data exchanging parameters, and various combinations of parameters in the nonperiodic boundary force model.

For the layer number and layer width, we find that, given equal widths of the functional regions, the "5-layer" combination obtains the best accuracy as the width of the overlap region varies, while for a fixed width of the overlap region the "4-layer-1" combination is the best of the five settings, having a reasonable width of the functional regions and alleviating the fluctuations sufficiently. Furthermore, we conclude that it is reasonable to place a buffer region between the C→A region and the A→C region, which effectively relaxes the data exchanging between the continuum region and the atomistic region, whereas the buffer region between the control region and the C→A region can be removed to save computational load. We also find that, for a fixed width of the overlap region, widening the C→A region yields a better simulation result in our model flow.

As to the sampling parameters of the temporal average, the present results disclose that the data exchanging operation only needs a few sampling points close to the occasions of interaction, which guarantees modeling efficiency and reduces the number of sampling operations. Timely data exchanging between the atomistic region and the continuum region performs best among all settings, but, with acceptable errors, a half or a quarter of the data exchanging frequency can be chosen to reduce the computational load. The discussion of the parameters of the nonperiodic boundary force model shows that, under domain decomposition along the flow direction, a smoothing parameter of 0.1 and a finer parameter of 20 achieve the minimum disturbance to the local structure near the MD-continuum interface while keeping the simulation accuracy.

In this paper, we mainly focus on the HAC method based on geometrical coupling. There are other coupling techniques, such as embedded coupling [38]. The microscopic method can also be used to calibrate the parameters in the continuum mechanics model [39, 40]. In our future work, we aim to extend the simulation power of our framework to support more kinds of multiscale coupling.
---

*Source: 1014636-2017-02-14.xml*

# Coupling Strategies Investigation of Hybrid Atomistic-Continuum Method Based on State Variable Coupling

**Authors:** Qian Wang; Xiao-Guang Ren; Xin-Hai Xu; Chao Li; Hong-Yu Ji; Xue-Jun Yang

**Journal:** Advances in Materials Science and Engineering
(2017)

**Publisher:** Hindawi

**License:** http://creativecommons.org/licenses/by/4.0/

**DOI:** 10.1155/2017/1014636

---
## Abstract
Different configurations of coupling strategies greatly influence the accuracy and convergence of the simulation results in the hybrid atomistic-continuum method. This study aims to quantitatively investigate this effect and to offer guidance on how to choose the proper configuration of coupling strategies in the hybrid atomistic-continuum method. We first propose a hybrid molecular dynamics- (MD-) continuum solver in LAMMPS and OpenFOAM that exchanges state variables between the atomistic region and the continuum region and then evaluate different configurations of coupling strategies using the sudden start Couette flow, aiming to find the preferable configuration that delivers better accuracy and efficiency. The major findings are as follows: (1) the C→A region plays the most important role in the overlap region, and the "4-layer-1" combination achieves the best precision for a fixed width of the overlap region; (2) the data exchanging operation only needs a few sampling points close to the occasions of interaction, and decreasing the number of coupling exchange operations can reduce the computational load with acceptable errors; (3) the nonperiodic boundary force model with a smoothing parameter of 0.1 and a finer parameter of 20 can not only achieve the minimum disturbance near the MD-continuum interface but also keep the simulation precision.
---
## Body
## 1. Introduction
In recent years, with the rapid development of nanotechnology, microscale/nanoscale devices such as microelectromechanical system (MEMS) devices and lab-on-a-chip devices have been widely used. The fluid flows in these devices involve a broad range of scales, from the atomistic scale to the macroscopic scale [1]. Generally speaking, fluid simulation based on the continuum assumption uses the Navier-Stokes (NS) equations to investigate fluid dynamics at the macroscopic scale. As the characteristic scale decreases, fluid flows at the microscale/nanoscale exhibit properties quite different from flows at the macroscale, such as the invalidity of the continuum assumption [2] and increased viscosity in nanochannels [3]. Molecular dynamics (MD), one of the widely used microfluidic simulation methods, resolves fluid features at the microscale/nanoscale. However, its computation-intensive nature burdens both simulation time and memory usage, limiting the simulation scale to nanometers in length and nanoseconds in time. In order to simulate physical problems at larger length scales while still capturing microscopic physical phenomena, many multiscale simulation methods have been proposed. For solid simulation, the "bridging domain" and "bridging scale" methods [4, 5] use Lagrange multipliers and a solution projection method to seamlessly couple two solvers at different scales with few spurious effects. For dense liquid simulation, the hybrid atomistic-continuum (HAC) method has been proposed [6–8]. HAC applies molecular dynamics in regions where the atomistic description is needed, for example, boundary regions and corner vortex regions, while using the continuum method in the remaining regions, to obtain both computational efficiency and simulation accuracy. In this paper, we focus on the simulation of dense fluids.

There are two types of coupling approaches available: the flux-based method [9, 10] and the state variable-based method [6]. The former uses mass flux, momentum flux, and energy flux to exchange data between the continuum method and molecular dynamics, thus satisfying the conservation laws naturally. The latter couples the two methods using mass and momentum state variables. For incompressible problems, the computational cost of calculating the molecular fluxes across a surface is much higher than that of calculating state variables only [11]. Therefore, we focus on the state variable-based HAC method in this paper.

The first hybrid method combining molecular dynamics with the continuum method for dense fluids was proposed by O'Connell and Thompson [6]. For the one-dimensional Couette flow, the domain was split into an atomistic region and a continuum region, using an overlap region to alleviate dramatic density oscillations and to couple the results of the two regions. The overlap region contains a nonperiodic boundary force region (npbf region), an atomistic-coupled-to-continuum region (A→C region), and a continuum-coupled-to-atomistic region (C→A region). However, this method has the limitation that it does not cope with mass transfer across the MD-continuum interface.

Later, HAC models emerged that differ in their coupling strategies, boundary condition extraction, and nonperiodic boundary force models [12–16].
In order to cope with the mass flux transfer, some researchers [17, 18] introduced themass flow region; other researchers [14, 19] brought forward thebuffer region to further relax the fluctuations between the MD results and the continuum results; still other researchers [20–22] proposed different expressions for the nonperiodic boundary force. More detailed review is given by Mohamed and Mohamad [23].Nevertheless, over the past 20 years, several issues still remained to be investigated in the hybrid atomistic-continuum method based on state variables. Strategy configurations of the spatial coupling, temporal coupling, and associated parameters have significant influences on simulation efficiency and accuracy. The existing research simply configured these strategies from their own point of view, while detailed analysis of these coupling strategies has never been performed. Firstly, for the HAC methods based on domain decomposition, spatial configurations of the overlap region in the existing approaches differ from each other and mostly set functional regions with the same width. Only Yen et al. [8] explored the appropriate size of the pure MD region and the overlap region. Secondly, on the occasions of data exchanging, existing approaches generally use the average of all sampling points of subsequent MD steps, to alleviate thermal noise from finite space and time sampling in the MD region. However, the increased quantity of samples brings better elimination of thermal noise and results in time lagging on the average results transferred to the continuum region. It is important to explore the effects of different quantities of sampling points and occasions for data exchanging on the convergence of the coupled simulation. Finally, when dealing with nonperiodic boundaries in the MD region, Issa and Poesio [13] proposed the FoReV algorithm and empirically configured a smoothing parameter. But when coupled to the continuum method, the effects of choosing different parameters on the local liquid structure near the MD-continuum interface have to be investigated.Based on the above analysis, we could conclude that there exist several coupling strategies worthy of further investigation. In this paper, we design a domain decomposition type of hybrid MD-continuum solver using open source software LAMMPS [24] and OpenFOAM [25] and investigate the coupling strategy issues of the HAC simulation usingCouette channel flow as the model flow. The main contributions of this paper are summarized as follows:(i)
We analyze the effects of different spatial strategies for configuring the overlap region on the model flow. We find that, when the functional regions have equal width, the "5-layer" combination achieves the best numerical precision as the width of the overlap region varies. By contrast, the "4-layer-1" combination proves to be the best setting when the total width of the overlap region is fixed. We also find that enlarging the C→A region can improve simulation accuracy in the model flow.(ii)
We investigate more efficient temporal strategies for data exchange by studying the number of samples and the time points at which data are exchanged. The practical conclusion is that the A→C operation only needs a few sampling points close to the occasions of interaction, which preserves modeling accuracy while reducing the number of sampling operations. Within acceptable errors, we also find that timely data exchange performs better than the other settings, whereas decreasing the number of coupling exchange operations can further reduce the computational load.(iii)
We analyze the parameters of the nonperiodic boundary force model on the model flow. We add a finer parameter to the force model in order to apply the FoReV algorithm effectively in the HAC model. The results indicate that, under domain decomposition along the flow direction, a proper combination of the smoothing parameter and the finer parameter not only minimizes the disturbance to the local structure near the MD-continuum interface but also preserves simulation precision.

The remainder of the paper is organized as follows: the hybrid atomistic-continuum simulation methodology is presented in Section 2, and the discussion of mutable parameters for the coupling strategies is given in Section 3. In Section 4, we apply these coupling parameters to the benchmark problems and compare the convergence and accuracy of the numerical tests. Section 5 concludes the paper.
## 2. Model Configuration and Methodology
In this section, we describe the MD-continuum solver based on the HAC physical model and coupling strategies, built with LAMMPS and OpenFOAM. OpenFOAM serves as the main framework, being highly modular and easily extensible [26], while LAMMPS is compiled as a library that the framework calls. Section 2.1 introduces the decomposition of the simulation domain; Section 2.2 presents the numerical methods of the atomistic region and the continuum region; Section 2.3 gives the configuration of the overlap region and the associated coupling operations; and Section 2.4 introduces the temporal coupling.
### 2.1. Domain Decomposition
In this paper, we investigate the coupling strategy issues of the HAC simulation using Couette flows, under incompressible and constant-temperature conditions, as the model flows. We consider two kinds of boundary conditions, namely, the no-slip and the slip boundary condition. The computational domain and the coordinate system of the current HAC simulation are shown in Figures 1 and 2. Under the no-slip boundary condition, the simulation domain is split into an atomistic region, a continuum region, and an overlap region (left), while under the slip boundary condition there are two atomistic regions and two overlap regions (right). The outer boundary of the continuum region, which resides in the atomistic region, is called the hybrid solution interface (HSI). The atomistic regions include liquid fluid atom regions and wall atom regions and are located near the walls in order to provide accurate boundary conditions to the continuum part. In the current study, a two-dimensional simulation is performed in the continuum region, that is, only in the xy plane, while a three-dimensional simulation is performed in the atomistic regions, with the z axis as the extension direction.Figure 1
Simulation domain decomposition and coordinates of the no-slip boundary condition; H is the height of the channel.Figure 2
Simulation domain decomposition and coordinates of the slip boundary condition; H is the height of the channel.
### 2.2. Atomistic Region and Continuum Region
In this section, we introduce the numerical methods used in the atomistic region and the continuum region. Section 2.2.1 presents the molecular dynamics simulation method and physical parameters, and Section 2.2.2 gives the continuum solution using the finite volume method (FVM).
#### 2.2.1. Atomistic Region
The truncated and shifted Lennard-Jones (LJ) potential is used to model the interactions between fluid atoms as well as wall atoms in the atomistic region. The potential is given by

$$\phi(r) = \begin{cases} 4\epsilon\left[\left(\dfrac{\sigma}{r}\right)^{12} - \left(\dfrac{\sigma}{r}\right)^{6} - \left(\dfrac{\sigma}{r_c}\right)^{12} + \left(\dfrac{\sigma}{r_c}\right)^{6}\right], & r \le r_c,\\[4pt] 0, & r > r_c, \end{cases} \tag{1}$$

where $r$ is the distance between atoms $i$ and $j$, $r_c$ is the cutoff radius, and $\epsilon$ and $\sigma$ are the characteristic molecular energy and the molecular length scale, respectively. In this study, we choose liquid argon as the model liquid, with LJ parameters $\sigma = 0.34\,\mathrm{nm}$, $\epsilon = 1.67\times10^{-21}\,\mathrm{J}$, and atom mass $m = 6.63\times10^{-26}\,\mathrm{kg}$, in a well-defined liquid phase with $k_B T/\epsilon = 1.1$, $\rho\sigma^3 = 0.81$, and dynamic viscosity $\mu = 2.14\,\epsilon\tau\sigma^{-3}$, where $k_B$ is the Boltzmann constant and $\tau = (m\sigma^2/\epsilon)^{1/2}$ is the characteristic time of the Lennard-Jones potential. We choose a cutoff distance $r_c = 2.2\sigma$ to save computation time. The wall atoms are modeled by two (111) planes with the x direction of the lattice along the $[11\bar{2}]$ orientation. For the two-dimensional continuum simulation, there is no flow in the z direction. Therefore, in the atomistic region, the x and z directions are under periodic boundary conditions while the y direction is not.

The constant temperature of the simulated system is maintained by a Langevin thermostat [28, 29], which couples the system to a thermal reservoir through the addition of Gaussian noise and frictional terms. The thermostat is applied only in the z direction, which is perpendicular to the bulk flow direction, so it does not influence the bulk flow velocity. The equation of motion for atom $i$ is given by

$$m\ddot{y}_i = -\sum_{j \ne i}\frac{\partial \phi}{\partial y_j} - m\Gamma\dot{y}_i + \eta_i, \tag{2}$$

where the summation denotes all pair-interaction forces on atom $i$, $\Gamma$ is the damping ratio, and $\eta_i$ denotes a Gaussian-distributed force with zero mean and variance $2mk_BT\Gamma$. We choose the damping ratio of the Langevin thermostat as $\Gamma = 1.0\tau^{-1}$. Initially, the liquid atoms are arranged on a lattice and each velocity component conforms to the Gaussian distribution. The Newtonian equations of motion are integrated with the velocity Verlet algorithm with time step $\Delta t = 0.005\tau$.
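As a quick numerical check, the following minimal Python sketch evaluates the truncated and shifted LJ potential of (1) in reduced units ($\sigma = \epsilon = 1$); the vectorized helper and its test values are illustrative, not part of the solver.

```python
import numpy as np

RC = 2.2  # cutoff radius r_c in units of sigma, as used in the paper

def lj_truncated_shifted(r, rc=RC):
    """Truncated and shifted Lennard-Jones potential, Eq. (1), reduced units."""
    r = np.asarray(r, dtype=float)
    sr6 = (1.0 / r) ** 6      # (sigma/r)^6 with sigma = 1
    src6 = (1.0 / rc) ** 6    # (sigma/r_c)^6
    phi = 4.0 * (sr6**2 - sr6 - src6**2 + src6)
    return np.where(r <= rc, phi, 0.0)

# phi(r_c) == 0 by construction, and phi vanishes beyond the cutoff.
print(lj_truncated_shifted([1.0, 1.5, 2.2, 3.0]))
```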
#### 2.2.2. Continuum Region
In the continuum region, we model the system by the incompressible Navier-Stokes equations. The continuity and momentum equations are given by

$$\nabla\cdot\mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \nabla\cdot(\mathbf{u}\mathbf{u}) = -\frac{1}{\rho}\nabla P + \nu\nabla^2\mathbf{u}, \tag{3}$$

where $\mathbf{u}$ is the bulk flow velocity, $P$ is the pressure, and $\nu$ is the kinematic viscosity. These equations are solved numerically with the finite volume method. The density in the continuum region is the same as in the atomistic region, and a periodic boundary condition is applied in the x direction. We solve the two-dimensional NS equations using the PISO algorithm with the icoFoam solver in OpenFOAM [25].
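In our solver the continuum step is performed by icoFoam, but the sudden-start Couette problem reduces to one-dimensional diffusion of the streamwise velocity, $\partial u/\partial t = \nu\,\partial^2 u/\partial y^2$. The sketch below, a stand-in rather than the PISO solve, advances that reduced equation with an explicit cell-centred finite-volume update; the grid, the time step, and $\nu = \mu/\rho \approx 2.64$ (derived from the paper's $\mu$ and $\rho$ in reduced units) are illustrative choices.

```python
import numpy as np

def couette_fvm_step(u, nu, dy, dt, u_lower, u_upper):
    """One explicit cell-centred finite-volume update of du/dt = nu*d2u/dy2,
    with Dirichlet wall values imposed through linearly extrapolated ghost cells."""
    lo = 2.0 * u_lower - u[0]    # ghost cell below the stationary wall
    hi = 2.0 * u_upper - u[-1]   # ghost cell above the sliding wall
    ext = np.concatenate(([lo], u, [hi]))
    lap = (ext[2:] - 2.0 * ext[1:-1] + ext[:-2]) / dy**2
    return u + dt * nu * lap

# Usage: 16 cells across a 32-sigma continuum region, upper wall sliding.
u = np.zeros(16)
for _ in range(1000):            # advance to t = 50 tau (dt well within stability)
    u = couette_fvm_step(u, nu=2.64, dy=2.0, dt=0.05, u_lower=0.0, u_upper=1.0)
print(u)                         # profile relaxing toward the linear steady state
```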
### 2.3. Overlap Region
We categorize the existing configurations of the overlap region into five functional regions: the A→C region, the C→A region, the buffer region, the nonperiodic boundary force region (npbf region), and the mass flow region. The latter two are often unified into the control region. The schematic of the overlap region is shown in Figure 3.Figure 3
Detailed schematic diagram of the overlap region.
#### 2.3.1. A→C Region
The A→C region is located on the outer boundary of the continuum region, centered on the HSI in Figure 3. In this region, the A→C operation transfers the spatial and temporal average of the atom velocities to the continuum region as the new velocity boundary. The region can be further divided into bins in the x and y directions to match the mesh cells of the continuum region. For the $i$th bin, the boundary condition for the continuum velocity $\mathbf{u}$ is given by the averaging equation

$$\mathbf{u}_i = \left\langle \frac{1}{N_i}\sum_{k}^{N_i}\mathbf{v}_k \right\rangle, \tag{4}$$

where $\mathbf{v}_k$ is the velocity of the $k$th atom in bin $i$, $N_i$ is the number of atoms in bin $i$, and the angle bracket denotes the temporal average. The most widely used sampling average methods are SAM and CAM [30]. We use the CAM method in our simulation, with the temporal average performed over $S$ sampling points:

$$\mathbf{u}_i^{\mathrm{CAM}} = \frac{\sum_{j=1}^{S}\sum_{k=1}^{N_j}\mathbf{v}_k}{\sum_{j=1}^{S}N_j}, \tag{5}$$

where $N_j$ is the number of atoms in bin $i$ at the $j$th sampling point. However, sampling over finite spatial and temporal scales incurs statistical fluctuations. The HSI is the portion of the continuum region boundary that receives data from the atomistic region. Because of the inherent statistical fluctuations in these data, the boundary conditions over the complete continuum region boundary may not exactly conserve mass. The most common remedy [31] is to apply a correction as follows:

$$\left(\mathbf{v}_{\mathrm{HSI}}\cdot\mathbf{n}\right)_{\mathrm{corrected}} = \mathbf{v}_{\mathrm{HSI}}\cdot\mathbf{n} - \frac{\int_{\phi}\mathbf{v}_{\phi}\cdot\mathbf{n}\,dS}{\int_{\mathrm{HSI}}dS}, \tag{6}$$

where $\mathbf{v}_{\mathrm{HSI}}$ is the velocity calculated on the HSI using the CAM sampling technique, $\mathbf{n}$ is the normal vector to the boundary, $dS$ is an element of the boundary, and $\phi$ is the whole boundary of the continuum region.
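The following Python sketch illustrates the CAM accumulation of (5) for one velocity component in one A→C bin; the class name and interface are our own illustration, assuming one call per MD sampling point.

```python
import numpy as np

class CAMSampler:
    """Cumulative averaging (CAM), Eq. (5): sum of all atom velocities over
    S sampling points divided by the total atom count over those points."""
    def __init__(self):
        self.vel_sum = 0.0
        self.count = 0

    def add_sample(self, atom_velocities):
        """atom_velocities: one velocity component of atoms currently in the bin."""
        v = np.asarray(atom_velocities, dtype=float)
        self.vel_sum += v.sum()
        self.count += len(v)

    def average(self):
        return self.vel_sum / self.count if self.count else 0.0

sampler = CAMSampler()
for sample in ([0.9, 1.1, 1.0], [1.2, 0.8], [1.05, 0.95, 1.0, 1.0]):
    sampler.add_sample(sample)   # one MD sampling point per call
print(sampler.average())         # CAM estimate of u_i for this bin
```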
#### 2.3.2. C→A Region
The C→A region transfers the velocities of the continuum mesh cells to the atoms located in them through crude constraint Lagrangian dynamics (CCLD) [6] and thereby provides boundary conditions for the atomistic region. As with the A→C region, the C→A region can be divided into bins in the x and y directions to match the mesh cells of the continuum region.

CCLD requires that the mean atomistic velocity in bin $i$ equal the average continuum velocity in that bin, that is,

$$\bar{\mathbf{u}}_i = \frac{1}{N_i}\sum_{p}^{N_i}\mathbf{v}_p, \tag{7}$$

where $N_i$ is the number of atoms in bin $i$. In this paper, the volume of the C→A bins is equal to that of the cells in the continuum region. Through CCLD, the new velocity of each atom $j$ in bin $i$ is

$$\dot{x}_j^{\alpha} = v_j^{\alpha} + \xi^{\alpha}\left(u_j^{\alpha} - \frac{1}{N_i}\sum_{k}^{N_i}v_k^{\alpha}\right), \tag{8}$$

where $\alpha$ denotes the x, y, or z direction and $\xi$ is the constraint strength. O'Connell and Thompson [6] used a constraint strength of 0.01, whereas Nie et al. [7, 8] used 1. In this paper, we set $\xi = 1.0$ in the x and y directions and $\xi = 0$ in the z direction.
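A minimal sketch of the CCLD velocity shift of (8) for one C→A bin, assuming the per-axis constraint strengths used in this paper ($\xi = 1$ in x and y, $\xi = 0$ in z); the function signature is illustrative.

```python
import numpy as np

def ccld_update(v_atoms, u_cell, xi=np.array([1.0, 1.0, 0.0])):
    """Crude constraint Lagrangian dynamics, Eq. (8): shift each atom's
    velocity toward the continuum cell velocity by the bin-mean defect.
    v_atoms: (N, 3) atom velocities in one bin; u_cell: (3,) cell velocity;
    xi: per-axis constraint strength."""
    v_mean = v_atoms.mean(axis=0)
    return v_atoms + xi * (u_cell - v_mean)

v = np.random.default_rng(0).normal(0.0, 0.5, size=(8, 3))
v_new = ccld_update(v, u_cell=np.array([1.0, 0.0, 0.0]))
print(v_new.mean(axis=0))  # bin means in x and y now match the cell velocity
```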
#### 2.3.3. Nonperiodic Boundary Force Region
In order to prevent atoms from drifting away from the atomistic region, keep the number of atoms unchanged, and remedy the nonperiodic boundary in the HAC simulation, existing research has proposed many external force models [7, 20, 21, 27]. Figure 4 shows the atoms that miss interaction forces near the MD-continuum interface. In this paper, we model an external force for these atoms, based on the FoReV algorithm proposed by Issa and Poesio [13] together with a reflecting wall, to alleviate the density fluctuation caused by the nonperiodic boundary condition.Figure 4
Missing interaction region due to the nonperiodic boundary at the MD-continuum interface $y_{max}$.

Each atom traveling through $y_{max}$ is reflected back into the atomistic region with the same displacement across the interface and a reversed velocity. This guarantees a constant number of atoms in the atomistic region. We divide the region near the MD-continuum interface into several bins. As shown in Figure 5, we accumulate the total external force experienced by the atoms residing in bin $k$ and apply a feedback force to the atoms in the missing-interaction region.Figure 5
Detailed schematic of the npbf region residing in the control region.

Firstly, we calculate the average external force experienced by bin $k$, that is,

$$\mathbf{F}_{\mathrm{bin},k} = \frac{\sum_{i\in k,\, j\notin k}\mathbf{F}_{ij}}{N_k}, \tag{9}$$

where $\mathbf{F}_{ij}$ is the interaction force between atoms $i$ and $j$, $N_k$ is the number of atoms in bin $k$, and $\mathbf{F}_{\mathrm{bin},k}$ is the average external force experienced by the atoms in bin $k$. Through a feedback mechanism, we apply a reversed normal force on each atom residing within the cutoff distance of the MD-continuum interface, which makes $F^{y}_{\mathrm{bin},k} = 0$. This reversed force $-F_{b,k}^{n+1}$ is constructed with simple exponential smoothing, using a smoothing parameter $\alpha$:

$$F_{b,k}^{n+1} = \alpha F_{\mathrm{bin},k}^{n+1} + (1-\alpha)F_{b,k}^{n}. \tag{10}$$

In each MD time step, we calculate $\mathbf{F}_{\mathrm{bin},k}$ in the npbf bins, use (10) to construct the feedback force $F_{b,k}$, and apply it to the associated atoms to remedy the nonperiodic boundary effect.
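The feedback construction of (9)-(10) amounts to an exponentially smoothed estimate of the mean missing force per bin, applied with reversed sign at each MD step. A minimal sketch follows, with an illustrative class interface and the empirical $\alpha$ of [13].

```python
import numpy as np

class NpbfFeedback:
    """Per-bin feedback boundary force with simple exponential smoothing,
    Eq. (10): F_b^{n+1} = alpha * F_bin^{n+1} + (1 - alpha) * F_b^n."""
    def __init__(self, n_bins, alpha=1e-4):
        self.alpha = alpha
        self.f_b = np.zeros(n_bins)   # smoothed force estimate per bin

    def update(self, f_bin_measured):
        """f_bin_measured: mean external y-force on atoms in each bin, Eq. (9).
        Returns the reversed force to apply to atoms in each bin."""
        f = np.asarray(f_bin_measured, dtype=float)
        self.f_b = self.alpha * f + (1.0 - self.alpha) * self.f_b
        return -self.f_b

fb = NpbfFeedback(n_bins=4, alpha=1e-4)
for _ in range(3):                    # called once per MD time step
    applied = fb.update([0.8, 0.5, 0.3, 0.1])
print(applied)
```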
#### 2.3.4. Mass Flow Region
Under the incompressible condition, the total number of atoms in the atomistic region should remain unchanged. In order to simulate the mass flux across the MD-continuum interface, we introduce the mass flow region and use it to handle atom insertion and deletion. The mass flow region can be split into several bins to match the continuum cells. In one continuum time step $\Delta t_{CFD}$, the number of atoms to be inserted into or deleted from mass flow bin $i$ is given by

$$n = \frac{A_i\,\rho\,u_i^{\alpha}\,\Delta t_{CFD}}{m}, \tag{11}$$

where $A_i = (\Delta x \times \Delta z)\cdot\mathbf{n}$ is the bin area normal to the y direction, $\mathbf{n}$ is the face normal vector pointing into the atomistic region, $u_i^{\alpha}$ is the continuum velocity, $\rho$ is the liquid density, and $m$ is the atom mass. If $n$ is negative, atoms are deleted from the mass flow region; if $n$ is positive, atoms are inserted into it. Since an atom is indivisible, the nearest integer is taken and the remaining fraction is carried over to the next insert/delete operation. As described in Section 2.3.3, the reflecting-wall boundary condition prevents atoms from drifting away freely, so the number of atoms in the atomistic region can only change through the insert and delete operations.

Atom insertion uses the USHER algorithm [32], which searches for a point in a given bin whose potential energy equals the average potential of the bin; the initial insertion point is chosen randomly and updated by the Newton-Raphson method. Atom deletion removes atoms near the MD-continuum interface $y_{max}$; the algorithm selects the atoms most likely to leave the atomistic region based on their distance to $y_{max}$ and their velocity normal to $y_{max}$.
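The insert/delete bookkeeping of (11), including the carry-over of the fractional remainder, can be sketched as follows; the class and its parameter values (reduced units, $m = 1$) are illustrative, and USHER itself is not reproduced here.

```python
class MassFlowBin:
    """Atom insertion/deletion count for one mass-flow bin, Eq. (11),
    carrying the fractional remainder to the next continuum step."""
    def __init__(self, area, rho, mass, dt_cfd):
        self.coeff = area * rho * dt_cfd / mass
        self.remainder = 0.0

    def atoms_to_move(self, u_normal):
        """u_normal: continuum velocity along the inward face normal.
        Positive result -> insert atoms; negative -> delete atoms."""
        exact = self.coeff * u_normal + self.remainder
        n = round(exact)               # atoms are indivisible
        self.remainder = exact - n     # keep the fraction for next time
        return n

# 3 sigma x 10 sigma bin face, rho = 0.81, dt_CFD = 0.5 tau (reduced units).
bin_i = MassFlowBin(area=3.0 * 10.0, rho=0.81, mass=1.0, dt_cfd=0.5)
print([bin_i.atoms_to_move(u) for u in (0.1, 0.1, -0.2)])  # e.g. [1, 1, -2]
```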
#### 2.3.5. Buffer Region
The buffer region is located between the A→C region and the C→A region, or between the control region and the C→A region. We use it to alleviate the fluctuations caused by the artificial operations on atoms. Within the buffer region, there is no atom insertion or deletion and no constraint on atom velocities. The buffer region should be kept as narrow as possible to limit the computational load, yet wide enough to relax the results of the atomistic region and the continuum region before they are coupled. We discuss how to set the buffer region in later sections.
### 2.4. Temporal Coupling
Three time variables must be considered in the HAC simulation [33]: the integration time step of the Newtonian equations $\Delta t_{MD}$, the integration time step of the Navier-Stokes equations $\Delta t_{CFD}$, and the sampling average time $\Delta t_{ave}$. As mentioned before, $\Delta t_{MD}$ is chosen as $0.005\tau$ in this paper. In order to solve the NS equations accurately, $\Delta t_{CFD}$ must, firstly, be far smaller than the characteristic time of the mesh, that is, the diffusion time $\rho\Delta x\Delta y/\mu$, where $\mu = 2.14\,\epsilon\tau\sigma^{-3}$ is the dynamic viscosity. Secondly, it must be larger than the decay time of the velocity autocorrelation function, $t_{vv} = 0.14\rho^{-1}T^{-1/2}$, so as to reduce the thermal noise from the HSI [6]. Finally, it should meet the CFL condition [34], $u_{flow}\Delta t_{CFD} < \Delta x/2$. The CFD time step chosen in this paper is $\Delta t_{CFD} = M\times\Delta t_{MD} = 0.5\tau$, with $M = 100$. The time-advancing mechanism of the HAC simulation, a sequential coupling, is shown in Figure 6.Figure 6
Time coupling and advancing mechanism in the current HAC method. One coupling cycle includes four steps: the CFD advancing, the C→A operation, the MD advancing, and the A→C operation.

In each coupling time step, the CFD solver advances a certain time, for example, one $\Delta t_{CFD}$, and transfers the continuum velocities to the atomistic region through CCLD in the C→A operation. Then, MD advances the same time interval, that is, $M\times\Delta t_{MD}$, samples and averages the atom velocities, and passes them to the continuum region in the A→C operation, completing one cycle of the HAC simulation.

Indeed, because of the small time step $\Delta t_{MD}$, the HAC simulation of a benchmark case still takes a long time. However, the efficiency of the HAC method is judged by comparison with a full MD simulation of the same benchmark case: the HAC method applies atomistic simulation in only part of the domain and is therefore much more efficient than a full MD simulation of the whole domain.
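A schematic rendering of one sequential coupling cycle of Figure 6; the `StubCFD` and `StubMD` classes and all their methods are hypothetical stand-ins for the OpenFOAM and LAMMPS calls, included only so the loop structure is runnable.

```python
import numpy as np

class StubCFD:
    """Stand-in for the OpenFOAM side; all methods are hypothetical."""
    def __init__(self, n_bins):
        self.hsi = np.zeros(n_bins)
    def advance(self, steps):                 # one dt_CFD of PISO in the real solver
        pass
    def velocities_in_CA_region(self):
        return np.full(5, 0.1)                # cell velocities handed to CCLD
    def set_hsi_boundary(self, u_bins):
        self.hsi = u_bins

class StubMD:
    """Stand-in for the LAMMPS side; all methods are hypothetical."""
    def apply_ccld_constraint(self, u_cells):
        pass
    def advance_and_sample(self, steps):
        rng = np.random.default_rng(0)
        return rng.normal(0.1, 0.05, size=(steps, 5))  # per-step bin samples

M = 100  # MD steps per CFD step: dt_CFD = M * dt_MD

def coupling_cycle(cfd, md):
    cfd.advance(steps=1)                                      # 1. CFD advancing
    md.apply_ccld_constraint(cfd.velocities_in_CA_region())   # 2. C->A (CCLD)
    samples = md.advance_and_sample(steps=M)                  # 3. MD advancing
    cfd.set_hsi_boundary(samples.mean(axis=0))                # 4. A->C (average)

cfd, md = StubCFD(5), StubMD()
coupling_cycle(cfd, md)
print(cfd.hsi)
```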
## 3. Mutable Parameters in Coupling Strategies
This section focuses on how different coupling-strategy configurations influence the accuracy and efficiency of the HAC simulation. The strategies are embodied in the following parameters: configurations of the functional regions (Section 3.1), variables of the data exchange operation (Section 3.2), and coefficients of the nonperiodic boundary force model (Section 3.3).
### 3.1. Parameters of Functional Regions: Layer Number and Layer Width
In the HAC simulation, the physical results of the atomistic region and the continuum region should remain consistent in the overlap region. A proper configuration of the overlap region exchanges data between the continuum solver and the atomistic solver correctly and alleviates the spurious effects of the artificial operations. The width of each component of the overlap region leads to different simulation accuracy. Therefore, the configuration of the functional regions must be carefully designed and tested.

Generally speaking, the overlap region should be located a certain distance away from the solid wall. In Section 2.3, we described five functional regions that carry out the data coupling. Existing research uses different combinations of functional regions with different widths. We summarize the layer number (LN) and the layer width (LW) in Table 1.Table 1
Configurations of functional regions in the overlap region.
| Research | LN | Configuration details (regions) | LW |
| --- | --- | --- | --- |
| O'Connell and Thompson 1995 [6] | 3 | npbf, A→C, C→A | Equal |
| Nie et al. 2004 [7] | 3 | control (npbf, mass flow), A→C, C→A | Equal |
| Zhou et al. 2014 [27] | 3 | control (npbf, mass flow, C→A), buffer, A→C | Equal |
| Cosden and Lukes 2013 [18] | 4 | mass flow, C→A, buffer, A→C | Equal |
| Sun et al. 2010 [19] | 4 | depress (extending into the continuum region, npbf), C→A, buffer, A→C | Equal |
| Yen et al. 2007 [8] | 5 | depress (extending into the continuum region, npbf), buffer, C→A, buffer, A→C | Equal |
| Ko et al. 2010 [14] | 5 | control (npbf, mass flow), buffer, C→A, buffer, A→C | Equal |

For completeness of the comparison, we add an alternative configuration with four layers: the control region (the npbf region and the mass flow region), the buffer region, the C→A region, and the A→C region. Configurations containing a depress region extending into the continuum region (Table 1) are not considered in the current comparison. Note also that the functional regions have equal width in all previous research.

In this paper, the mass flow region is located at the top of the atomistic region, near the MD-continuum interface, and its width equals one CFD mesh cell width. The npbf region is located within one cutoff distance of the MD-continuum interface. The widths of these two regions remain unchanged.

For the C→A region, the larger its width, the stronger the control exerted on the atomistic region by the continuum region, and the more pronounced the effect of CCLD on the data exchange in this region. The function of the buffer region is to relax the fluctuations between the other kernel functional regions; increasing its width increases the computational load, so its width should be minimized or the region removed entirely. For the A→C region, owing to the statistical thermal fluctuations in the atomistic region, its extent clearly influences the sampling averages and hence its ability to provide accurate boundary conditions to the continuum region. A hypothetical sketch of how such layer layouts can be generated appears below.

Provided the HAC simulation remains accurate, different combinations of functional regions and different widths may still influence the simulation results. In Section 4.2, we test and discuss these influences and give suggestions on how to configure the overlap region properly.
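As a hypothetical illustration (the naming and the top-down ordering from the MD-continuum interface are our assumptions), the following helper turns a layer combination into y-extents of equal width.

```python
def layer_extents(combination, y_top, width):
    """Return [(region, y_low, y_high), ...], laying equal-width layers
    downward from y_top (taken here as the MD-continuum interface y_max)."""
    extents, y = [], y_top
    for name in combination:
        extents.append((name, y - width, y))
        y -= width
    return extents

layout_4_1 = ["control (npbf, mass flow)", "C->A", "buffer", "A->C"]
for name, lo, hi in layer_extents(layout_4_1, y_top=20.0, width=2.0):
    print(f"{name:28s} [{lo:5.1f}, {hi:5.1f}] sigma")
```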
### 3.2. Parameters of the Data Exchanging Operation: N and ex_steps
Section 3.1 dealt with the spatial average in the A→C region; in this section we consider the temporal average parameters of the A→C operation. Generally speaking, in order to extract the macroscopic velocity correctly, averages are taken over microscopic variables within a control volume and over time, which is called the binning method [35]. Specifically, in our HAC model, the control volume is bin $i$ in the A→C region, and the averaging time interval for the continuum boundary extraction is $\Delta t_{ave}$. Exchanging the sampled atomistic data with the continuum solver promptly and correctly is a kernel operation of the HAC method. Note that a temporal average over all sample points introduces time-lagged boundary results and delays the exchange of coupling information. Given the cost of the sampling operations, we should also ask whether reducing the number of coupling data exchange operations could further reduce the sampling effort and the computational load within acceptable errors.

Several parameters are involved in the temporal average operation (a small sketch of the resulting sampling schedule follows the list):(i)
ex_steps: the number of CFD time steps corresponding to one HAC data exchange operation $\Delta t_{ave}$, with $\Delta t_{ave} = ex\_steps\times\Delta t_{CFD}$(ii)
N: the number of sampling MD points in one data exchange operation(iii)
interval: the interval between sampling points in one data exchange operation.

In previous research [7, 8], data were often exchanged between the continuum region and the atomistic region in every CFD time step, and the temporal average was performed over all M time steps within one $\Delta t_{CFD}$; that is, ex_steps = 1, N = M, and interval = 1. In Section 4.3, we discuss and test the proper sampling number N and exchange frequency ex_steps per data exchange operation, and we provide guidance on the macroscopic velocity extraction.
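Under these definitions, the MD steps whose samples enter one exchange can be enumerated as below; the helper and its defaults are our own illustration, taking the N points spaced by `interval` that end at the exchange occasion.

```python
def sampling_schedule(M, ex_steps=1, N=None, interval=1):
    """Return the MD step indices (within one exchange window of
    ex_steps * M MD steps) whose samples enter the temporal average,
    choosing the N points closest to the exchange occasion."""
    window = ex_steps * M
    if N is None:
        N = window                       # classic choice: average every step
    assert (N - 1) * interval < window, "schedule exceeds the exchange window"
    first = window - (N - 1) * interval
    return list(range(first, window + 1, interval))

print(sampling_schedule(M=100))                          # all 100 points
print(sampling_schedule(M=100, N=10))                    # 10 points near exchange
print(sampling_schedule(M=100, ex_steps=5, N=20, interval=2))
```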
### 3.3. Parameters of the Nonperiodic Boundary Force Model: α and β
Owing to the missing interactions with absent atoms, the atoms near the nonperiodic boundary exhibit unphysical effects: the nonperiodic boundary renders the density inhomogeneous and the force nonuniform near the MD-continuum interface. In existing HAC models, the nonperiodic boundary is usually remedied by applying an external force model to the atoms near the boundary. This remedy keeps the pressure in the atomistic region correct and alleviates the density fluctuation.

We construct the npbf region within one cutoff radius of the MD-continuum interface. We then refine the npbf region into npbf bins of width $bin_y^{npbf}$ based on the CFD mesh width $L_y^{cell}$ in the y direction; that is, $L_y^{cell} = \beta\times bin_y^{npbf}$, where $\beta$ is the finer parameter. Issa and Poesio [13] provide only a nonperiodic boundary force model with an empirical smoothing parameter $\alpha = 1\times10^{-4}$. A proper combination of the smoothing parameter $\alpha$ and the finer parameter $\beta$ that minimizes the disturbance to the local liquid structure near the MD-continuum interface is therefore worth investigating. In Section 4.4, we provide numerical verification of different combinations of $\alpha$ and $\beta$ and discuss their appropriate values.
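As an illustration of the refinement relation $L_y^{cell} = \beta\times bin_y^{npbf}$, the following hypothetical helper lays out the npbf bins covering one cutoff distance below the interface; the constants reuse the paper's cutoff and a $2\sigma$ mesh width, but the function itself is our sketch.

```python
import numpy as np

RC = 2.2        # cutoff radius in sigma (paper's value)
L_Y_CELL = 2.0  # CFD mesh width in y, in sigma (illustrative)

def npbf_bins(y_max, beta):
    """Return (y_low, y_high) extents of npbf bins of width L_y_cell/beta,
    covering one cutoff distance below the MD-continuum interface y_max."""
    bin_w = L_Y_CELL / beta
    n = int(np.ceil(RC / bin_w))    # bins needed to span the cutoff
    return [(y_max - (k + 1) * bin_w, y_max - k * bin_w) for k in range(n)]

for beta in (1, 2, 4):              # finer beta -> narrower bins, more of them
    print(beta, npbf_bins(y_max=20.0, beta=beta))
```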
## 4. Results and Discussion
In this section, we use the model Couette flow with either the no-slip or the slip boundary condition to verify the effects of different coupling-strategy parameters on the HAC simulation. We run the simulations on a high performance cluster in which each computing node contains 12 Intel Xeon E5-2620 2.10 GHz CPU cores and 16 GB of main memory [36]. Section 4.1 validates the correctness of our HAC solver; Section 4.2 tests the effects of different spatial configurations of the overlap region on a Couette flow; Section 4.3 investigates the effects of different temporal data-exchange strategies on the model flow; and Section 4.4 discusses the proper combination of parameters in the nonperiodic boundary force model.
### 4.1. Verification and Efficiency
In this section, two benchmark cases are carried out to test the validity and practical performance of our HAC solver by comparing with either the analytical solution or the full MD results.
#### 4.1.1. No-Slip Sudden Start Couette Flow
We first consider the typical no-slip sudden-start Couette flow proposed by O'Connell and Thompson [6], shown in Figure 1. The analytical solution for a sudden-start Couette flow is

$$u(y,t) = u_{wall}\frac{y}{L_y} + \frac{2u_{wall}}{\pi}\sum_{n=1}^{\infty}\frac{\cos n\pi}{n}\sin\!\left(\frac{n\pi y}{L_y}\right)\exp\!\left(-\frac{\nu n^2\pi^2 t}{L_y^2}\right), \tag{12}$$

where $L_y$ is the distance between the two walls, $u_{wall}$ is the sliding velocity, and $\nu$ is the kinematic viscosity (a numerical evaluation of this series is sketched at the end of this subsection). We compare the velocity profiles from our HAC model with this analytical solution.

The height of the channel is $H = 44\sigma$, as used by Yen et al. [8]. The sliding velocity is $U_w = 1.0\sigma/\tau$ at $y = L_y$. The simulation domain is split into the atomistic region near the lower wall and the continuum region near the upper wall. In this case, the no-slip boundary is set for the sliding wall in the continuum region as well as for the stationary wall in the atomistic region. The wall-fluid interaction parameters $\sigma_{wf}/\sigma = 1.0$, $\epsilon_{wf}/\epsilon = 0.6$, and $\rho_{wf}/\rho = 1.0$ are employed, as used by Thompson and Troian [29]. The heights of the pure continuum region, the pure MD region, and the overlap region are $24\sigma$, $12\sigma$, and $8\sigma$, respectively, and the combination of functional regions has 4 layers: the control region, the C→A region, the buffer region, and the A→C region. The continuum region is divided into $n_x\times n_y = 5\times16$ cells for the numerical calculation, that is, $\Delta x\times\Delta y = 3\sigma\times2\sigma$. The three-dimensional MD region is likewise divided into bins matching the continuum cells, that is, $n_x\times n_y\times n_z = 5\times11\times1$ bins, with an extent of $10\sigma$ in the z direction.

Initially, the mean fluid velocity is zero over the whole simulation domain. At $t = 0$, the upper wall in the continuum region begins to move at $U_w = 1.0\sigma/\tau$, while the lower stationary wall in the atomistic region remains still. The results are averaged over the five time intervals indicated in Figure 7. The transient velocity profiles of our HAC model match the analytical solution well, especially in the overlap region, and the steady-state profile is linear, as expected. This case therefore demonstrates the correctness of the proposed HAC model.Figure 7
Velocity evolution profiles averaged over five time intervals, compared with the analytical solution.

We also compare the efficiency of our HAC method with the full MD simulation for this benchmark case in serial mode. The detailed timing is listed in Table 2. The total simulation time of the HAC method is only 39.25% of that of the full MD simulation, which demonstrates considerable efficiency.Table 2
Detailed timing of the HAC method and the full MD simulation.
| Method | MD domain (σ³) | Number of particles | CFD domain (σ³) | Time (s) |
| --- | --- | --- | --- | --- |
| HAC | 15×20×10 | 2592 | 15×32×10 | 3649.32 |
| Full MD | 15×44×10 | 5616 | — | 9295.87 |
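For reference, the series in (12) is straightforward to evaluate numerically; the truncation at 200 terms, the sample times, and $\nu \approx 2.64$ (from the paper's $\mu$ and $\rho$ in reduced units) are illustrative choices.

```python
import numpy as np

def couette_analytical(y, t, u_wall, Ly, nu, n_terms=200):
    """Evaluate the truncated series of Eq. (12) at positions y and time t."""
    n = np.arange(1, n_terms + 1)[:, None]          # series index, column vector
    y = np.atleast_1d(np.asarray(y, dtype=float))
    series = (np.cos(n * np.pi) / n) \
        * np.sin(n * np.pi * y[None, :] / Ly) \
        * np.exp(-nu * n**2 * np.pi**2 * t / Ly**2)
    return u_wall * y / Ly + (2.0 * u_wall / np.pi) * series.sum(axis=0)

y = np.linspace(0.0, 44.0, 23)
for t in (5.0, 20.0, 100.0):                        # times in units of tau
    print(couette_analytical(y, t, u_wall=1.0, Ly=44.0, nu=2.64))
```

At $t = 0$ the series cancels the linear term exactly, giving the quiescent initial state, and for large $t$ it decays to the linear steady profile.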
#### 4.1.2. Slip Sudden Start Couette Flow
The second case is the slip sudden-start Couette flow, with wall-fluid interaction parameters σ_wf/σ = 0.75, ϵ_wf/ϵ = 0.6, and ρ_wf/ρ = 4.0, as used by Thompson and Troian [29]. The velocity of the sliding wall is Uw = 1.0σ/τ. In this case, the height of the channel is H = 44σ, and there are two MD regions, near the sliding wall and the stationary wall, respectively, while the continuum region occupies the middle part of the channel, as shown in Figure 2. The heights of the two MD regions and of the continuum region are 22σ and 20σ, respectively. The partitions of cells and bins are the same as in the previous case. We run ten independent realizations of the same system with the same configuration, for both the HAC simulation and the full MD simulation, to reduce thermal noise, as in Nie et al. [7]. The comparison of the evolving velocity profiles predicted by our HAC simulation and by full MD is presented in Figure 8. The results of the two solutions agree well with each other, with a small discrepancy during the evolution that diminishes at the final steady state.
Figure 8: Velocity evolution profiles averaged over four time intervals, compared with the full MD results.
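The tenfold realization averaging used above can be sketched as follows; the synthetic profile generator is a stand-in for an actual HAC or MD run, included only so the snippet is self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(0.0, 44.0, 22)          # bin centers across the channel

def run_realization():
    # Synthetic stand-in for one noisy steady-state HAC/MD velocity profile:
    # the linear Couette profile plus thermal-noise-like jitter.
    return y / y[-1] + rng.normal(scale=0.05, size=y.shape)

# Ensemble average over ten independent realizations, as in the text.
profiles = np.stack([run_realization() for _ in range(10)])
u_mean = profiles.mean(axis=0)                                   # averaged profile
u_sem = profiles.std(axis=0, ddof=1) / np.sqrt(len(profiles))    # residual noise
```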
### 4.2. Effects of Functional Region Configurations on Couette Flows
Based on the discussion of the mutable variables in Section 3.1, we test here the effects of different combinations and different widths of the functional regions on the convergence and accuracy of the Couette flow simulation. The channel height of the Couette flow is H = 100σ and the sliding velocity is Uw = 1.0σ/τ for both the no-slip and slip boundary conditions, as shown in Figures 1 and 2, while the other test conditions are the same as those in Sections 4.1.1 and 4.1.2.
#### 4.2.1. Effects of Different Functional Region Combinations on Couette Flows
In Section 3.1, we summarized the existing configurations of the overlap region. Five combinations are presented in Table 3 for comparison of their influence on the Couette flow simulation.
Table 3: Details of the functional region combinations compared.

| Type of combination | Detailed combination of functional regions |
|---|---|
| 3-layer-1 | control (npbf, mass flow), A→C, C→A |
| 3-layer-2 | control (npbf, mass flow, C→A), buffer, A→C |
| 4-layer-1 | control (npbf, mass flow), C→A, buffer, A→C |
| 4-layer-2 | control (npbf, mass flow), buffer, C→A, A→C |
| 5-layer | control (npbf, mass flow), buffer, C→A, buffer, A→C |

First, we consider the tests with the no-slip boundary condition, using the same domain decomposition as in Section 4.1.1 but with the extended channel height of 100σ. The width of each functional region is 2σ in the y direction, so the total width of the overlap region in these five combinations ranges from 6σ to 10σ. The cumulative mean error between the HAC simulation results and the analytical solution is defined as

$$\mathrm{err}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|U_i^{\mathrm{HAC}}-U_i^{\mathrm{analytic}}\right|}{\left|U_i^{\mathrm{analytic}}\right|},\tag{13}$$

where N is the number of points in each resulting profile, equal to 50 when H = 100σ. Since this is a time-stepping simulation, we take five time intervals into account. To distinguish the five combinations clearly, we depict the results of the five intervals separately and average the total difference over these intervals to obtain the average deviation among the configurations, using the first combination, “3-layer-1,” as the base, as shown in Figure 9. In the following sections, we focus on the minimum average deviation, that is, the last panel of Figure 9, and use the same kind of data conversion to explain the distinctions among different configurations.
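A minimal sketch of the error metric (13); the profile arrays here are illustrative stand-ins for the HAC and analytical profiles:

```python
import numpy as np

def cumulative_mean_error(u_hac, u_analytic):
    """Cumulative mean relative error of Eq. (13) for one velocity profile."""
    u_hac, u_analytic = np.asarray(u_hac), np.asarray(u_analytic)
    return float(np.mean(np.abs(u_hac - u_analytic) / np.abs(u_analytic)))

# Example with N = 50 points; sample away from y = 0, where the reference
# profile vanishes and the relative error is undefined.
y = np.linspace(1.0, 99.0, 50)
u_analytic = y / 100.0            # steady linear profile as the reference
u_hac = u_analytic + 0.005        # illustrative stand-in for an HAC result
err = cumulative_mean_error(u_hac, u_analytic)
```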
Figure 9: Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

In these figures we find that, at the beginning of the HAC simulation, the results in the first three intervals deviate greatly from the analytical solution because of the thermal noise in the atomistic region. This deviation vanishes as the simulation approaches the final steady state.

When each functional region is 2σ wide and the width of the overlap region varies from 6σ to 10σ, the “5-layer” combination attains the minimum average deviation of the five. This is because “5-layer” is configured with the most atoms and the widest overlap region, 10σ. In such a combination, the fluctuations between the results of the two different numerical methods are sufficiently alleviated, in line with the conclusion of Yen et al. [8].

Second, we consider the slip boundary condition for the above five combinations, with the same parameters as in Section 4.1.2. The slip length is defined as [37]

$$L_s=\frac{U_w}{\left.\partial u/\partial n\right|_{w}}.\tag{14}$$

For our model Couette flow, (14) simplifies to [29]

$$L_s=\frac{U_w}{\dot{\gamma}}-\frac{H}{2},\tag{15}$$

where γ̇ is the shear rate and H is the channel height. We consider the slip lengths at the stationary wall, L_stationary, and at the sliding wall, L_sliding, once the simulations reach the steady state, and define the relative errors between the HAC results and the full MD results as

$$\mathrm{err}_{\mathrm{stationary}}=\frac{\left|L_{\mathrm{stationary}}^{\mathrm{HAC}}-L_{\mathrm{stationary}}^{\mathrm{full\,MD}}\right|}{L_{\mathrm{stationary}}^{\mathrm{full\,MD}}},\qquad \mathrm{err}_{\mathrm{sliding}}=\frac{\left|L_{\mathrm{sliding}}^{\mathrm{HAC}}-L_{\mathrm{sliding}}^{\mathrm{full\,MD}}\right|}{L_{\mathrm{sliding}}^{\mathrm{full\,MD}}},\tag{16}$$

with the unified relative error between them given by

$$\mathrm{err}_{\mathrm{HAC}}=\frac{\mathrm{err}_{\mathrm{stationary}}+\mathrm{err}_{\mathrm{sliding}}}{2}.\tag{17}$$

The full MD results for the slip length at the stationary and sliding walls are L_stationary = 2.6409σ and L_sliding = 2.3619σ, respectively, and the ideal slip length calculated from (15) is Ls = 2.5014σ.

The unified relative errors are shown in Table 4. The HAC results of all five combinations differ from the full MD results, but the “5-layer” combination attains the minimum error, 0.3735, consistent with the conclusion drawn under the no-slip boundary condition.
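Equations (16) and (17) are simple to evaluate; the following sketch reproduces the “5-layer” entry of Table 4 from the full MD reference values quoted above:

```python
def unified_relative_error(L_hac_stationary, L_hac_sliding,
                           L_md_stationary=2.6409, L_md_sliding=2.3619):
    """Unified relative error of Eqs. (16)-(17); lengths in units of sigma.
    The full MD reference values default to those quoted in the text."""
    err_stationary = abs(L_hac_stationary - L_md_stationary) / L_md_stationary
    err_sliding = abs(L_hac_sliding - L_md_sliding) / L_md_sliding
    return 0.5 * (err_stationary + err_sliding)

# "5-layer" combination from Table 4: reproduces err_HAC ≈ 0.3735
print(unified_relative_error(4.0947, 2.826))
```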
Table 4: Slip length predictions of the five combinations and unified relative errors, with differing overlap region widths.

| Situation | 3-layer-1 | 3-layer-2 | 4-layer-1 | 4-layer-2 | 5-layer |
|---|---|---|---|---|---|
| L_stationary | 4.3012σ | 4.8011σ | 4.0739σ | 4.9841σ | 4.0947σ |
| L_sliding | 2.6735σ | 1.7497σ | 3.2287σ | 3.5236σ | 2.826σ |
| err_HAC | 0.3804 | 0.5386 | 0.4548 | 0.68955 | 0.3735 |

In the above two tests, the widths of the overlap regions differ from one another. Next, we carry out another two tests with the width of the overlap region fixed at 10σ, under the no-slip and slip boundary conditions. Each functional region again has equal width; it is therefore 3.33σ in the “3-layer” combinations, 2.5σ in the “4-layer” combinations, and 2σ in the “5-layer” combination. Using the same data processing method as in the previous test, we depict the cumulative mean errors of the five combinations and the average deviation in Figure 10.
Figure 10: Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

From Figure 10, the “4-layer-1” combination achieves the minimum average deviation, below that of the “5-layer” combination, and also has better time-stepping performance. We also test the slip boundary condition; the relative errors of the slip length are listed in Table 5. The “4-layer-1” combination shows the smallest deviation from the full MD results, with an error of 0.3599.
Table 5: Slip length predictions of the five combinations and unified relative errors, with the width of the overlap region fixed.

| Situation | 3-layer-1 | 3-layer-2 | 4-layer-1 | 4-layer-2 | 5-layer |
|---|---|---|---|---|---|
| L_stationary | 4.8303σ | 4.7316σ | 4.1067σ | 4.4892σ | 4.0947σ |
| L_sliding | 2.9541σ | 3.8346σ | 2.7511σ | 3.2528σ | 2.826σ |
| err_HAC | 0.53985 | 0.7076 | 0.3599 | 0.53855 | 0.3735 |

Based on the above four tests, we draw the following conclusions. When the functional regions have equal width and the width of the overlap region varies, the “5-layer” combination attains the best accuracy, with the minimum average deviation; when the width of the overlap region is fixed, “4-layer-1” is the best of the five combinations, since it has a reasonable functional region width and alleviates the fluctuations sufficiently. Furthermore, it is reasonable to place a buffer region between the C→A region and the A→C region, which effectively relaxes the data exchange between the continuum and atomistic regions, whereas the buffer region between the control region and the C→A region can be omitted to reduce the computational load.
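For concreteness, the five configurations of Table 3 can be encoded for a parameter sweep as follows; the dictionary layout and layer labels are an illustrative assumption, and the 2σ per-region width matches the first pair of tests:

```python
# Layers listed in the order given in Table 3; "control" also carries the
# nonperiodic boundary force (npbf) and the mass-flow handling.
COMBINATIONS = {
    "3-layer-1": ["control", "A->C", "C->A"],
    "3-layer-2": ["control+C->A", "buffer", "A->C"],
    "4-layer-1": ["control", "C->A", "buffer", "A->C"],
    "4-layer-2": ["control", "buffer", "C->A", "A->C"],
    "5-layer":   ["control", "buffer", "C->A", "buffer", "A->C"],
}

REGION_WIDTH = 2.0  # sigma; equal per-region width, as in the first test pair

for name, layers in COMBINATIONS.items():
    overlap = REGION_WIDTH * len(layers)   # total overlap: 6 sigma to 10 sigma
    print(f"{name}: overlap {overlap:.0f} sigma, layers {layers}")
```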
#### 4.2.2. Effects of Different Widths of Functional Regions on Couette Flows
In Section 4.2.1, all functional regions had the same width; in this section we discuss the effects of different functional region widths on the Couette flow simulation. Following the results of the previous section, we use the “4-layer-1” combination to deploy the overlap region with a fixed total width but widen the C→A region, the buffer region, and the A→C region separately. The width settings are listed in Table 6, together with two situations from Section 4.2.1 chosen for comparison.
Table 6: Configurations with different widths of the functional regions.

| Situation | C→A region | buffer region | A→C region |
|---|---|---|---|
| wider C→A | 4σ | 2σ | 2σ |
| wider buffer | 2σ | 4σ | 2σ |
| wider A→C | 2σ | 2σ | 4σ |
| compare-1 | 2σ | 2σ | 2σ |
| compare-2 | 2.5σ | 2.5σ | 2.5σ |

We run these tests under the no-slip and slip boundary conditions and compare the accuracy and convergence of the five situations. Under the no-slip boundary condition, the cumulative mean errors and the average deviation are depicted in Figure 11.
Figure 11: Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

Figure 11 shows that, under the no-slip boundary condition, “wider C→A” has better accuracy than “compare-2.” For the slip boundary condition, the relative errors are listed in Table 7; “wider C→A” again achieves the best accuracy of the five situations, with an error of 0.30942.
Table 7: Slip length predictions of the five situations and unified relative errors, with different widths of the functional regions.

| Situation | wider C→A | wider buffer | wider A→C | compare-1 | compare-2 |
|---|---|---|---|---|---|
| L_stationary | 4.0375σ | 5.2954σ | 4.9102σ | 4.0739σ | 4.1067σ |
| L_sliding | 2.1705σ | 3.2045σ | 1.7241σ | 3.2287σ | 2.7511σ |
| err_HAC | 0.30942 | 0.680925 | 0.5647 | 0.4548 | 0.3599 |

In our model flow under the slip boundary condition, the atomistic regions provide more accurate boundary conditions for the continuum region, while the continuum region merely serves as a transmission container that transfers data between the two atomistic regions. In “wider C→A,” enlarging the C→A region gives the CCLD in that region more atoms to act on. The continuum region of “wider C→A” therefore transfers data more efficiently, and “wider C→A” predicts the slip length with the minimum error.

From the above two tests, “wider C→A,” which enlarges the C→A region, performs best both in matching the analytical solution and in predicting the slip length. We conclude that, for a fixed overlap region width, widening the C→A region improves simulation accuracy.
### 4.3. Effects of Sampling Parameters on Couette Flows
In this section, we test different groups of the sampling and averaging parameters introduced in Section 3.2, under the no-slip and slip boundary conditions, with channel height H = 100σ, as shown in Figures 1 and 2.
#### 4.3.1. Effects of Different Sampling Numbers on Couette Flows
We first set ex_steps = 1 and interval = 1 and vary the sampling number N under the no-slip and slip boundary conditions. The parameter list is shown in Table 8.
Table 8: Parameter list of different sampling numbers.

| Situation | N-1 | N-2 | N-4 | N-8 | N-16 | N-32 | N-64 | compare |
|---|---|---|---|---|---|---|---|---|
| N | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 100 |

The first test is run under the no-slip boundary condition; the cumulative mean errors during the five time intervals and the average deviation are depicted in Figure 12. “N-32,” with sampling number N = 32 taken near the moment of data exchange, attains the minimum cumulative mean error. The second test is performed under the slip boundary condition, and the relative errors are listed in Table 9.
Table 9: Slip length predictions of the eight situations and unified relative errors with different sampling numbers.

| Situation | N-1 | N-2 | N-4 | N-8 | N-16 | N-32 | N-64 | compare |
|---|---|---|---|---|---|---|---|---|
| L_stationary | 4.1917σ | 4.7359σ | 4.4798σ | 4.1362σ | 4.1975σ | 3.8371σ | 4.4089σ | 4.1067σ |
| L_sliding | 3.0493σ | 2.459σ | 3.1436σ | 3.1479σ | 3.1785σ | 2.751σ | 2.7515σ | 2.7511σ |
| err_HAC | 0.43915 | 0.4172 | 0.51365 | 0.4495 | 0.46755 | 0.30885 | 0.4172 | 0.3599 |
Figure 12: Cumulative mean errors during five time intervals, (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ, and the final average deviation.

As the conclusion of Section 4.2.2 suggests, in the current model flow, sampling points taken closer to the moment of data exchange carry more relevant information for the simulation evolution. Compared with the other situations, “N-32” therefore provides better boundary conditions for the data exchange while still ensuring a sufficient number of sampling points.

From these two tests, we find that “N-32,” which provides 32 sampling points, achieves the best performance, with the least error of 0.30885; it offers the most effective information about the atomistic region while also reducing the number of sampling operations.
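A minimal sketch of the “N-32” strategy, averaging each bin's velocity over only the last N MD steps before a data exchange; the recording layout used here is a hypothetical assumption:

```python
import numpy as np

def sampled_average(bin_history, N=32):
    """Average each bin's velocity over only the last N MD steps before a
    data exchange (the "N-32" strategy), rather than over every step.

    bin_history : array (md_steps_per_exchange, n_bins) of instantaneous
        bin-averaged velocities; a hypothetical recording layout.
    """
    return np.asarray(bin_history)[-N:].mean(axis=0)

# Example: 100 MD steps per averaging period, 11 bins across the MD region.
history = np.random.default_rng(1).normal(loc=0.5, scale=0.1, size=(100, 11))
u_for_exchange = sampled_average(history, N=32)
```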
#### 4.3.2. Effects of Different Data Exchanging Times on Couette Flows
The number of data exchange steps, ex_steps, determines the total number of C→A and A→C operations in one HAC simulation. In this section we discuss the proper frequency of data exchange.

First, we change the value of ex_steps while sampling and averaging over all MD time steps within one Δt_ave, under the no-slip boundary condition, to check the effects of ex_steps on the Couette flow. The parameters are listed in Table 10.
Table 10: Parameter list of different data exchange intervals and sampling numbers.

| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
|---|---|---|---|---|---|---|
| ex_steps | 2 | 4 | 8 | 16 | 32 | 1 |
| N | 200 | 400 | 800 | 1600 | 3200 | 100 |

In this section, the time intervals are multiples of the data exchange time, and we consider four time intervals in this test. The cumulative mean errors and the average deviation are plotted in Figure 13.
Figure 13: Cumulative mean errors during four time intervals, (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ, and the final average deviation.

When ex_steps changes, the accuracy of the first five situations is lower than that of the last one, with ex_steps = 1. In Figure 13, however, the accuracy of “ex-2” and “ex-4” is very close to that of “compare,” within about 1%, while their total numbers of data exchange operations are one half and one quarter of the latter's, respectively.

Second, we test the slip boundary condition; the results with the different sampling numbers are listed in Table 11. “compare” reaches the highest accuracy, but “ex-2” and “ex-4” deviate from it only slightly, with errors of 0.36691 and 0.39994, respectively.
Table 11: Slip length predictions of the six situations and unified relative errors with different ex_steps and N.

| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
|---|---|---|---|---|---|---|
| L_stationary | 4.092σ | 4.3663σ | 4.5941σ | 5.2711σ | 4.5607σ | 4.1067σ |
| L_sliding | 2.7973σ | 2.708σ | 2.3235σ | 2.0588σ | 2.5892σ | 2.7511σ |
| err_HAC | 0.36691 | 0.39994 | 0.37795 | 0.5621 | 0.41155 | 0.3599 |

In the above two tests, the total numbers of sampling points differ. As discussed in Section 4.3.1, sampling with N = 32 performs better than averaging over all MD time steps. Next, we consider another two cases with the same number of sampling points. The parameters are shown in Table 12.
Table 12: Parameter list of different exchange intervals with equal sampling number.

| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
|---|---|---|---|---|---|---|
| ex_steps | 2 | 4 | 8 | 16 | 32 | 1 |
| N | 32 | 32 | 32 | 32 | 32 | 32 |

We perform the tests under the no-slip and slip boundary conditions, plotting the normalized errors under the no-slip boundary condition in Figure 14 and listing the relative errors under the slip boundary condition in Table 13.
Table 13: Slip length predictions of the six situations and unified relative errors with different ex_steps and equal N.

| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
|---|---|---|---|---|---|---|
| L_stationary | 3.8645σ | 3.8896σ | 4.8903σ | 4.1258σ | 4.3519σ | 3.8371σ |
| L_sliding | 2.8012σ | 2.7598σ | 2.2585σ | 2.6894σ | 3.4859σ | 2.751σ |
| err_HAC | 0.32645 | 0.32065 | 0.4478 | 0.3505 | 0.5619 | 0.30885 |
Figure 14: Cumulative mean errors during four time intervals, (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ, and the final average deviation.

From Figure 14 and Table 13, the first two situations perform better than “ex-8,” “ex-16,” and “ex-32,” but worse than the last one, under the no-slip as well as the slip boundary condition. However, the first two situations are worse than the last one by only about 1% and 2% in these two tests, with errors of 0.32645 and 0.32065, respectively.

From the results of all the tests, we conclude that timely data exchange between the atomistic region and the continuum region performs better than any other ex_steps setting. With acceptable errors, however, the number of data exchanges can be halved or quartered to reduce the computational load, so “ex-2” and “ex-4” can be better choices.
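How ex_steps and N could drive the coupling loop is sketched below; all solver routines are no-op stand-ins with hypothetical names, included only to make the scheduling logic runnable:

```python
import numpy as np

rng = np.random.default_rng(2)

# No-op stand-ins for the real solver's routines (hypothetical names).
def advance_md_one_step():        pass
def advance_continuum_one_step(): pass
def sample_bin_velocities():      return rng.normal(size=11)  # 11 bins
def apply_A_to_C(u_avg):          pass
def apply_C_to_A():               pass

def run_coupled(total_periods=8, ex_steps=2, N=32, md_steps_per_period=100):
    """Skeleton coupling loop: exchange data every `ex_steps` averaging
    periods, averaging samples taken over the last N MD steps beforehand."""
    samples = []
    for period in range(1, total_periods + 1):
        exchange_now = (period % ex_steps == 0)
        for step in range(md_steps_per_period):
            advance_md_one_step()
            # Sample only inside the window of N steps just before an exchange.
            if exchange_now and step >= md_steps_per_period - N:
                samples.append(sample_bin_velocities())
        advance_continuum_one_step()
        if exchange_now:
            apply_A_to_C(np.mean(samples, axis=0))  # A->C: pass averaged MD data
            apply_C_to_A()                          # C->A: constrain MD atoms
            samples.clear()

run_coupled()
```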
### 4.4. Effects of Parameters of the Nonperiodic Boundary Force Model on Couette Flows
The effects of the smoothing parameter α and the finer parameter β on the Couette flow simulation are discussed in this section. We use two tests, under the no-slip and slip boundary conditions, to measure these effects, as shown in Figures 1 and 2. The parameter combinations are listed in Table 14. These 16 situations are tested to pick out the combination of parameters that least disturbs the local liquid structure near the MD-continuum interface.
Table 14: Combinations of parameters in the nonperiodic boundary force model, 16 situations.

| | α = 1×10⁻⁴ | α = 0.001 | α = 0.01 | α = 0.1 |
|---|---|---|---|---|
| β = 20 | STNI-1 | STNI-2 | STNI-3 | STNI-4 |
| β = 10 | STNI-5 | STNI-6 | STNI-7 | STNI-8 |
| β = 5 | STNI-9 | STNI-10 | STNI-11 | STNI-12 |
| β = 1 | STNI-13 | STNI-14 | STNI-15 | STNI-16 |

Figure 15 shows the normalized cumulative mean errors under the no-slip boundary condition. Figures 16 and 17 show the relative errors of the slip length and of the density near the MD-continuum interface under the slip boundary condition.
Figure 15: Differences among the sixteen situations, using the results of “STNI-1” as the normalized base (a), and summation of the errors over the five time intervals (b).
Figure 16: Relative errors of the slip length for the 16 situations.
Figure 17: Relative errors of the density for the 16 situations; the base density is ρσ³ = 0.81.

Figure 17 shows that, as the finer parameter β decreases, the density near the MD-continuum interface drifts increasingly away from the base value, and “STNI-4” achieves the minimum disturbance of the local liquid structure. In Figure 15, “STNI-4” performs best in matching the analytical solution, and in Figure 16 “STNI-4” predicts the slip length better than all other situations except “STNI-6” and “STNI-10,” from which it differs by about 4%.

Based on the above analysis, we conclude that, in our current model flow, the smoothing parameter α and the finer parameter β can be chosen as 0.1 and 20, respectively, providing the best accuracy and prediction capability.
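The (α, β) selection amounts to a small grid sweep. The sketch below shows the sweep machinery only; the evaluation function is a toy surrogate standing in for running one HAC simulation and measuring the interface disturbance:

```python
import itertools

ALPHAS = [1e-4, 0.001, 0.01, 0.1]   # smoothing parameter values from Table 14
BETAS = [20, 10, 5, 1]              # finer parameter values from Table 14

def density_disturbance(alpha, beta):
    # Hypothetical stand-in: would run one HAC simulation with boundary force
    # parameters (alpha, beta) and return the relative density error near the
    # MD-continuum interface against the base rho*sigma^3 = 0.81. The toy
    # surrogate below merely makes the sweep runnable end to end.
    return (1.0 / beta) + 0.05 * abs(alpha - 0.1)

scores = {(a, b): density_disturbance(a, b)
          for a, b in itertools.product(ALPHAS, BETAS)}
best_alpha, best_beta = min(scores, key=scores.get)
print(best_alpha, best_beta)        # the paper's choice is alpha=0.1, beta=20
```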
## 4.1. Verification and Efficiency
In this section, two benchmark cases are carried out to test the validity and practical performance of our HAC solver by comparing with either the analytical solution or the full MD results.
### 4.1.1. No-Slip Sudden Start Couette Flow
We firstly consider the typical no-slip sudden start Couette flow proposed by O’Connell and Thompson [6] as shown in Figure 1. The analytical solution for a sudden start Couette flow is given by(12)uy,t=uwallyLy+2uwallπ∑n=1∞cosnπn×sinnπyLyexp-νn2π2tLy2,where Ly is the distance between the two walls, uwall is the sliding velocity, and ν is the kinematic viscosity. We compare the resulting velocity profiles from our HAC model with the analytical solution.The height of the channel isH=44σ as used by Yen et al. [8]. The sliding velocity is Uw=1.0σ/τ at y/H=Ly. The simulation domain is spilt into the atomistic region near the lower wall and the continuum region near the upper wall. In this case, the no-slip boundary is set for the sliding wall in the continuum region as well as the stationary wall in the atomistic region. The wall-fluid interaction parameters σwf/σ=1.0, ϵwf/ϵ=0.6, and ρwf/ρ=1.0 are employed as used by Thompson and Troian [29]. The height of the pure continuum region, the pure MD region, and the overlap region are 24σ, 12σ, and 8σ, respectively, and the combination of functional regions has 4 layers including thecontrol region, the C→A region, thebuffer region, and the A→C region. The continuum region is divided into nx×ny=5×16 cells for numerical calculation; that is, Δx×Δy=3σ×2σ. The three-dimensional MD region is also divided into bins matching the cells in the continuum region, which is nx×ny×nz=5×11×1 bins and 10σ in z direction.Initially, the mean fluid is zero in the whole simulation domain. Att=0, the upper wall in the continuum region begins to move at Uw=1.0σ/τ, while the lower stationary wall in the atomistic region keeps still. The results are then averaged over the five time intervals as indicated in Figure 7. The transient velocity profiles of our HAC model match well with the analytical solution, especially in the overlap region, and the steady state profile is linear as expected. Therefore, this case can be used as to demonstrate the correctness of the proposed HAC model.Figure 7
Velocity evolution profiles averaged over five time intervals compared with the analytical solution.We also want to compare the efficiency of our HAC method to the full MD simulation using this benchmark case in serial mode. We list the detailed simulation time consuming in Table2. The total simulation time of the HAC method is only 39.25% of the full MD simulation which exhibits considerable efficiency.Table 2
Detailed time consuming for the HAC method and the full MD simulation.
Method MD domain (σ3) Number of particles CFD domain (σ3) Time (s) HAC 15×20×10 2592 15×32×10 3649.32 Full MD 15 × 44 × 10 5616 — 9295.87
### 4.1.2. Slip Sudden Start Couette Flow
The second case is the slip sudden start Couette flow, with wall-fluid interaction parameters,σwf/σ=0.75, ϵwf/ϵ=0.6, and ρwf/ρ=4.0, as used by Thompson and Troian [29]. The velocity of sliding wall is Uw=1.0σ/τ. In this case, the height of the channel is H=44σ and there are two MD regions near the sliding wall and the stationary wall, respectively, while the continuum region is in the middle part of the channel as shown in Figure 2. The dimensions of the two MD regions and the continuum region are 22σ and 20σ, respectively. The partitions of cells and bins are the same as the previous case. We run independent realization of the same system ten times with the same configuration for both the HAC simulation and the full MD simulation for better thermal noise decrease as with Nie et al. [7]. The comparison of the evolutionary velocity profiles predicted by our HAC simulation and the full MD is presented in Figure 8. As we can see, the results of the two solutions agree quite well with each other with a small discrepancy in the evolution, and the deviation diminishes at the final steady state.Figure 8
Velocity evolution profiles averaged over four time intervals compared with the full MD results.
## 4.1.1. No-Slip Sudden Start Couette Flow
We firstly consider the typical no-slip sudden start Couette flow proposed by O’Connell and Thompson [6] as shown in Figure 1. The analytical solution for a sudden start Couette flow is given by(12)uy,t=uwallyLy+2uwallπ∑n=1∞cosnπn×sinnπyLyexp-νn2π2tLy2,where Ly is the distance between the two walls, uwall is the sliding velocity, and ν is the kinematic viscosity. We compare the resulting velocity profiles from our HAC model with the analytical solution.The height of the channel isH=44σ as used by Yen et al. [8]. The sliding velocity is Uw=1.0σ/τ at y/H=Ly. The simulation domain is spilt into the atomistic region near the lower wall and the continuum region near the upper wall. In this case, the no-slip boundary is set for the sliding wall in the continuum region as well as the stationary wall in the atomistic region. The wall-fluid interaction parameters σwf/σ=1.0, ϵwf/ϵ=0.6, and ρwf/ρ=1.0 are employed as used by Thompson and Troian [29]. The height of the pure continuum region, the pure MD region, and the overlap region are 24σ, 12σ, and 8σ, respectively, and the combination of functional regions has 4 layers including thecontrol region, the C→A region, thebuffer region, and the A→C region. The continuum region is divided into nx×ny=5×16 cells for numerical calculation; that is, Δx×Δy=3σ×2σ. The three-dimensional MD region is also divided into bins matching the cells in the continuum region, which is nx×ny×nz=5×11×1 bins and 10σ in z direction.Initially, the mean fluid is zero in the whole simulation domain. Att=0, the upper wall in the continuum region begins to move at Uw=1.0σ/τ, while the lower stationary wall in the atomistic region keeps still. The results are then averaged over the five time intervals as indicated in Figure 7. The transient velocity profiles of our HAC model match well with the analytical solution, especially in the overlap region, and the steady state profile is linear as expected. Therefore, this case can be used as to demonstrate the correctness of the proposed HAC model.Figure 7
Velocity evolution profiles averaged over five time intervals compared with the analytical solution.We also want to compare the efficiency of our HAC method to the full MD simulation using this benchmark case in serial mode. We list the detailed simulation time consuming in Table2. The total simulation time of the HAC method is only 39.25% of the full MD simulation which exhibits considerable efficiency.Table 2
Detailed time consuming for the HAC method and the full MD simulation.
Method MD domain (σ3) Number of particles CFD domain (σ3) Time (s) HAC 15×20×10 2592 15×32×10 3649.32 Full MD 15 × 44 × 10 5616 — 9295.87
## 4.1.2. Slip Sudden Start Couette Flow
The second case is the slip sudden start Couette flow, with wall-fluid interaction parameters,σwf/σ=0.75, ϵwf/ϵ=0.6, and ρwf/ρ=4.0, as used by Thompson and Troian [29]. The velocity of sliding wall is Uw=1.0σ/τ. In this case, the height of the channel is H=44σ and there are two MD regions near the sliding wall and the stationary wall, respectively, while the continuum region is in the middle part of the channel as shown in Figure 2. The dimensions of the two MD regions and the continuum region are 22σ and 20σ, respectively. The partitions of cells and bins are the same as the previous case. We run independent realization of the same system ten times with the same configuration for both the HAC simulation and the full MD simulation for better thermal noise decrease as with Nie et al. [7]. The comparison of the evolutionary velocity profiles predicted by our HAC simulation and the full MD is presented in Figure 8. As we can see, the results of the two solutions agree quite well with each other with a small discrepancy in the evolution, and the deviation diminishes at the final steady state.Figure 8
Velocity evolution profiles averaged over four time intervals compared with the full MD results.
## 4.2. Effects of Functional Region Configurations on Couette Flows
Based on the discussion of the mutable variables mentioned in Section3.1, we test and discuss the effects of different combinations and different widths of the functional regions on the convergence and accuracy of Couette flow simulation here. The channel height of Couette flow is H=100σ and the sliding velocity is Uw=1.0σ/τ for both the no-slip and slip boundary conditions as shown in Figure 1 and Figure 2, while other test conditions are the same as those in Section 4.1.1 and Section 4.1.2.
### 4.2.1. Effects of Different Functional Region Combinations on Couette Flows
In Section3.1, we have summarized the existing configurations of the overlap region. Five of the combinations are presented in Table 3 for the comparison of the influences on the simulation of Couette flow.Table 3
Details of different combinations of functional regions to be compared with.
Type of combinations Detailed combinations of functional regions 3-layer-1 control (npbf, mass flow), A→C, C→A 3-layer-2 control (npbf, mass flow, C→A), buffer, A→C 4-layer-1 control (npbf, mass flow), C→A, buffer, A→C 4-layer-2 control (npbf, mass flow), buffer, C→A, A→C 5-layer control (npbf, mass flow), buffer, C→A, buffer,A→CFirstly, we consider the tests with the no-slip boundary condition, with the same domain decomposition method in Section4.1.1, and with the extended channel height of 100σ. The width of functional regions is 2σ in y direction. Therefore, the total width of the overlap region in these five combinations ranges from 6σ to 10σ. The cumulative mean error between the results of the HAC simulation and the analytical solution is defined as(13)err=∑i=1NUiHAC-Uianalysis/UianalysisN,where N is the number of points in each resulting line, with the value of 50 when H=100σ. For it is a time-stepping simulation, we take five time intervals into account. In order to clearly distinguish between these five combinations, we depict the results of five intervals separately and average the total difference over these five intervals to depict theaverage deviation among these configurations using the first combination as the base, that is, “3-layer-1” as shown in Figure 9. In the following sections, we will focus on the minimumaverage deviation, that is, the last figure in Figure 9, and use the same kind of data conversion to clearly explain the distinction among different configurations.Figure 9
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.In the above figures, we find that, at the beginning of the HAC simulation, due to the thermal noise in the atomistic region, the results in the first three intervals deviate from the analytical solution greatly. However, this deviation vanishes when the simulation process reaches the final steady state.When the width of functional regions is2σ and the width of the overlap region is varying from 6σ to 10σ, the result of “5-layer” combination obtains the minimumaverage deviation compared with those of the other four. It is because of that “5-layer” is configured with the most atoms and the widest overlap region with the width of 10σ. In such combination, the fluctuations between the results from the two different numerical methods can be sufficiently alleviated, similar to the conclusion given by Yen et al. [8].Secondly, we consider the slip boundary condition of the above five combinations with parameters same as those of Section4.1.2. The slip length is defined as [37](14)Ls=Uw∂u/∂nw.In our model Couette flow, the simplified definition of (14) is given by [29](15)Ls=Uw/γ˙-H2,where γ˙ is the shearing rate and H is the channel height. We take the slip length at the stationary wall Lstationary and at the sliding wall Lsliding into consideration when the simulations reach the steady state and define the relative errors between the HAC simulation results and the full MD results as follows:(16)errstationary=LengthstationaryHAC-LengthstationaryfullMDLengthstationaryfullMD,errsliding=LengthslidingHAC-LengthslidingfullMDLengthslidingfullMDand the unified relative error between them is given by(17)errHAC=errstationary+errsliding2.The full MD results of the slip length at stationary wall and sliding wall areLstationary=2.6409σ and Lsliding=2.3619σ respectively, and the ideal slip length calculated by (15) is Ls=2.5014σ.The unified relative error ofLstationary and Lsliding is shown in Table 4. The HAC simulation results of these five combinations are all different from the full MD results, while the result of “5-layer” combination obtains the minimum error with 0.3735, which corresponds to the conclusion of that under the no-slip boundary condition.Table 4
Slip length prediction of five combinations and unified relative errors with different widths of the overlap region.
Situation 3-layer-1 3-layer-2 4-layer-1 4-layer-2 5-layer Lstationary 4.3012σ 4.8011σ 4.0739σ 4.9841σ 4.0947σ Lsliding 2.6735σ 1.7497σ 3.2287σ 3.5236σ 2.826σ errHAC 0.3804 0.5386 0.4548 0.68955 0.3735In the above two tests, the widths of the overlap regions are not the same with each other. Next, we carry out another two tests with the width of the overlap region fixed at10σ under the no-slip and slip boundary conditions. The width of each functional region is the same: therefore, the width of functional regions is 3.33σ in “3-layer” combination, 2.5σ in “4-layer” combination, and 2σ in “5-layer” combination.Using the same data processing method as the previous test, we depict the cumulative mean errors among five combinations andaverage deviation as shown in Figure 10.Figure 10
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.From Figure10, the result of “4-layer-1” combination achieves the minimumaverage derivation compared with “5-layer” one and also has a better time-stepping performance than “5-layer” one. We also test the slip boundary condition, and the relative errors of the slip length are listed in Table 5. The result of “4-layer-1” combination gains the slightest deviation from the full MD results with error of 0.3599.Table 5
Slip length prediction of five combinations and unified relative errors with the fixed width of the overlap region.
Situation 3-layer-1 3-layer-2 4-layer-1 4-layer-2 5-layer Lstationary 4.8303σ 4.7316σ 4.1067σ 4.4892σ 4.0947σ Lsliding 2.9541σ 3.8346σ 2.7511σ 3.2528σ 2.826σ errHAC 0.53985 0.7076 0.3599 0.53855 0.3735Based on the above four tests, we draw the following conclusion: under the condition of the equal width of the functional regions, when the width of the overlap region varies, “5-layer” combination obtains the best accuracy with minimumaverage deviation, while when the width of the overlap region is fixed, “4-layer-1” combination is the best configuration of these five combinations, which has a reasonable width of the functional regions and alleviates the fluctuations sufficiently. Furthermore, we draw the conclusion that it is reasonable to set abuffer region located between the C→A region and the A→C region, which can effectively relax the data exchanging between the continuum region and the atomistic region, while thebuffer region between thecontrol region and the C→A region could not be set so as to relieve the computational load.
### 4.2.2. Effects of Different Widths of Functional Regions on Couette Flows
In Section4.2.1, the widths of functional regions are the same, while in this section we discuss the effects of different widths of functional regions on the Couette flow simulation. Following the discussion results of the previous section, we use “4-layer-1” type of combination to deploy the overlap region with the fixed width but to widen the C→A region, thebuffer region, and the A→C region separately. The settings of different widths are listed in Table 6. We choose another two situations from Section 4.2.1 for comparison.Table 6
Configuration situations of different widths of the functional regions.
Situation C → A region buffer region A → C region wider C → A 4σ 2σ 2σ wider buffer 2σ 4σ 2σ wider A → C 2σ 2σ 4σ compare-1 2σ 2σ 2σ compare-2 2.5σ 2.5σ 2.5σWe simulate the tests under the no-slip and slip boundary conditions and compare the simulation accuracy and convergence of these five situations. Under the no-slip boundary condition, the cumulative mean errors andaverage deviation are depicted in Figure 11.Figure 11
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.From Figure11, the results show that under the no-slip boundary condition “widerC→A” has a better accuracy than “compare-2.” For the slip boundary condition, the relative errors are listed in Table 7. It also shows that the “widerC→A” achieves the best accuracy among these five situations with error of 0.30942.Table 7
Slip length prediction of five situations and unified relative errors with different widths of the functional regions.
Situation wider C → A wider buffer wider A → C compare-1 compare-2 Lstationary 4.0375σ 5.2954σ 4.9102σ 4.0739σ 4.1067σ Lsliding 2.1705σ 3.2045σ 1.7241σ 3.2287σ 2.7511σ errHAC 0.30942 0.680925 0.5647 0.4548 0.3599In our model flow under the slip boundary condition, the atomistic regions provide more accuracy boundary conditions for the continuum region, while the continuum region just serves as the transmission container to transfer data between these two atomistic regions. In “widerC→A,” when enlarging the C→A region, the CCLD in the C→A region has more atoms to apply on. Therefore, the continuum region of “widerC→A” transfers data more efficiently and “widerC→A” predicts the slip length with the minimum error.From the above two tests, “widerC→A” that enlarges the C→A region performs best in matching with the analytical solution and predicting the slip length. Conclusions could be drawn that under the fixed width of the overlap region widening the C→A region improves simulation accuracy.
## 4.2.1. Effects of Different Functional Region Combinations on Couette Flows
In Section3.1, we have summarized the existing configurations of the overlap region. Five of the combinations are presented in Table 3 for the comparison of the influences on the simulation of Couette flow.Table 3
Details of different combinations of functional regions to be compared with.
Type of combinations Detailed combinations of functional regions 3-layer-1 control (npbf, mass flow), A→C, C→A 3-layer-2 control (npbf, mass flow, C→A), buffer, A→C 4-layer-1 control (npbf, mass flow), C→A, buffer, A→C 4-layer-2 control (npbf, mass flow), buffer, C→A, A→C 5-layer control (npbf, mass flow), buffer, C→A, buffer,A→CFirstly, we consider the tests with the no-slip boundary condition, with the same domain decomposition method in Section4.1.1, and with the extended channel height of 100σ. The width of functional regions is 2σ in y direction. Therefore, the total width of the overlap region in these five combinations ranges from 6σ to 10σ. The cumulative mean error between the results of the HAC simulation and the analytical solution is defined as(13)err=∑i=1NUiHAC-Uianalysis/UianalysisN,where N is the number of points in each resulting line, with the value of 50 when H=100σ. For it is a time-stepping simulation, we take five time intervals into account. In order to clearly distinguish between these five combinations, we depict the results of five intervals separately and average the total difference over these five intervals to depict theaverage deviation among these configurations using the first combination as the base, that is, “3-layer-1” as shown in Figure 9. In the following sections, we will focus on the minimumaverage deviation, that is, the last figure in Figure 9, and use the same kind of data conversion to clearly explain the distinction among different configurations.Figure 9
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.In the above figures, we find that, at the beginning of the HAC simulation, due to the thermal noise in the atomistic region, the results in the first three intervals deviate from the analytical solution greatly. However, this deviation vanishes when the simulation process reaches the final steady state.When the width of functional regions is2σ and the width of the overlap region is varying from 6σ to 10σ, the result of “5-layer” combination obtains the minimumaverage deviation compared with those of the other four. It is because of that “5-layer” is configured with the most atoms and the widest overlap region with the width of 10σ. In such combination, the fluctuations between the results from the two different numerical methods can be sufficiently alleviated, similar to the conclusion given by Yen et al. [8].Secondly, we consider the slip boundary condition of the above five combinations with parameters same as those of Section4.1.2. The slip length is defined as [37](14)Ls=Uw∂u/∂nw.In our model Couette flow, the simplified definition of (14) is given by [29](15)Ls=Uw/γ˙-H2,where γ˙ is the shearing rate and H is the channel height. We take the slip length at the stationary wall Lstationary and at the sliding wall Lsliding into consideration when the simulations reach the steady state and define the relative errors between the HAC simulation results and the full MD results as follows:(16)errstationary=LengthstationaryHAC-LengthstationaryfullMDLengthstationaryfullMD,errsliding=LengthslidingHAC-LengthslidingfullMDLengthslidingfullMDand the unified relative error between them is given by(17)errHAC=errstationary+errsliding2.The full MD results of the slip length at stationary wall and sliding wall areLstationary=2.6409σ and Lsliding=2.3619σ respectively, and the ideal slip length calculated by (15) is Ls=2.5014σ.The unified relative error ofLstationary and Lsliding is shown in Table 4. The HAC simulation results of these five combinations are all different from the full MD results, while the result of “5-layer” combination obtains the minimum error with 0.3735, which corresponds to the conclusion of that under the no-slip boundary condition.Table 4
Slip length prediction of five combinations and unified relative errors with different widths of the overlap region.
Situation 3-layer-1 3-layer-2 4-layer-1 4-layer-2 5-layer Lstationary 4.3012σ 4.8011σ 4.0739σ 4.9841σ 4.0947σ Lsliding 2.6735σ 1.7497σ 3.2287σ 3.5236σ 2.826σ errHAC 0.3804 0.5386 0.4548 0.68955 0.3735In the above two tests, the widths of the overlap regions are not the same with each other. Next, we carry out another two tests with the width of the overlap region fixed at10σ under the no-slip and slip boundary conditions. The width of each functional region is the same: therefore, the width of functional regions is 3.33σ in “3-layer” combination, 2.5σ in “4-layer” combination, and 2σ in “5-layer” combination.Using the same data processing method as the previous test, we depict the cumulative mean errors among five combinations andaverage deviation as shown in Figure 10.Figure 10
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.From Figure10, the result of “4-layer-1” combination achieves the minimumaverage derivation compared with “5-layer” one and also has a better time-stepping performance than “5-layer” one. We also test the slip boundary condition, and the relative errors of the slip length are listed in Table 5. The result of “4-layer-1” combination gains the slightest deviation from the full MD results with error of 0.3599.Table 5
Slip length prediction of five combinations and unified relative errors with the fixed width of the overlap region.
Situation 3-layer-1 3-layer-2 4-layer-1 4-layer-2 5-layer Lstationary 4.8303σ 4.7316σ 4.1067σ 4.4892σ 4.0947σ Lsliding 2.9541σ 3.8346σ 2.7511σ 3.2528σ 2.826σ errHAC 0.53985 0.7076 0.3599 0.53855 0.3735Based on the above four tests, we draw the following conclusion: under the condition of the equal width of the functional regions, when the width of the overlap region varies, “5-layer” combination obtains the best accuracy with minimumaverage deviation, while when the width of the overlap region is fixed, “4-layer-1” combination is the best configuration of these five combinations, which has a reasonable width of the functional regions and alleviates the fluctuations sufficiently. Furthermore, we draw the conclusion that it is reasonable to set abuffer region located between the C→A region and the A→C region, which can effectively relax the data exchanging between the continuum region and the atomistic region, while thebuffer region between thecontrol region and the C→A region could not be set so as to relieve the computational load.
## 4.2.2. Effects of Different Widths of Functional Regions on Couette Flows
In Section4.2.1, the widths of functional regions are the same, while in this section we discuss the effects of different widths of functional regions on the Couette flow simulation. Following the discussion results of the previous section, we use “4-layer-1” type of combination to deploy the overlap region with the fixed width but to widen the C→A region, thebuffer region, and the A→C region separately. The settings of different widths are listed in Table 6. We choose another two situations from Section 4.2.1 for comparison.Table 6
Configuration situations of different widths of the functional regions.
Situation C → A region buffer region A → C region wider C → A 4σ 2σ 2σ wider buffer 2σ 4σ 2σ wider A → C 2σ 2σ 4σ compare-1 2σ 2σ 2σ compare-2 2.5σ 2.5σ 2.5σWe simulate the tests under the no-slip and slip boundary conditions and compare the simulation accuracy and convergence of these five situations. Under the no-slip boundary condition, the cumulative mean errors andaverage deviation are depicted in Figure 11.Figure 11
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.From Figure11, the results show that under the no-slip boundary condition “widerC→A” has a better accuracy than “compare-2.” For the slip boundary condition, the relative errors are listed in Table 7. It also shows that the “widerC→A” achieves the best accuracy among these five situations with error of 0.30942.Table 7
Slip length prediction of five situations and unified relative errors with different widths of the functional regions.
Situation wider C → A wider buffer wider A → C compare-1 compare-2 Lstationary 4.0375σ 5.2954σ 4.9102σ 4.0739σ 4.1067σ Lsliding 2.1705σ 3.2045σ 1.7241σ 3.2287σ 2.7511σ errHAC 0.30942 0.680925 0.5647 0.4548 0.3599In our model flow under the slip boundary condition, the atomistic regions provide more accuracy boundary conditions for the continuum region, while the continuum region just serves as the transmission container to transfer data between these two atomistic regions. In “widerC→A,” when enlarging the C→A region, the CCLD in the C→A region has more atoms to apply on. Therefore, the continuum region of “widerC→A” transfers data more efficiently and “widerC→A” predicts the slip length with the minimum error.From the above two tests, “widerC→A” that enlarges the C→A region performs best in matching with the analytical solution and predicting the slip length. Conclusions could be drawn that under the fixed width of the overlap region widening the C→A region improves simulation accuracy.
## 4.3. Effects of Sampling Parameters on Couette Flows
In this section, we test different groups of sampling and averaging parameters mentioned in Section3.2 under the no-slip and slip boundary conditions with channel height H=100σ as shown in Figures 1 and 2.
### 4.3.1. Effects of Different Sampling Number on Couette Flows
We firstly setex_steps=1 and interval=1 and vary the sampling number N under the no-slip and slip boundary conditions. The parameter list is shown in Table 8.Table 8
Parameter list of different sampling numbers.
Situation N-1 N-2 N-4 N-8 N-16 N-32 N-64 compare N 1 2 4 8 16 32 64 100The first test is under the no-slip boundary and we depict the cumulative mean errors during the five time intervals andaverage deviation as shown in Figure 12. The result of “N-32” with sample number N=32 near the occasion of data exchanging obtains the minimum cumulative mean error. The second test is performed under the slip boundary condition and the relative errors are listed in Table 9.Table 9
Slip length prediction of eight situations and unified relative errors with different sampling numbers.
Situation N-1 N-2 N-4 N-8 N-16 N-32 N-64 compare Lstationary 4.1917σ 4.7359σ 4.4798σ 4.1362σ 4.1975σ 3.8371σ 4.4089σ 4.1067σ Lsliding 3.0493σ 2.459σ 3.1436σ 3.1479σ 3.1785σ 2.751σ 2.7515σ 2.7511σ errHAC 0.43915 0.4172 0.51365 0.4495 0.46755 0.30885 0.4172 0.3599Figure 12
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.Just as the conclusion of Section4.2.2 shows, under the current model flow, sampling points closer to the occasion of data exchanging provide more important messages for the simulation evolution. Therefore, “N-32” provides better data exchanging boundary conditions while ensuring the quantity of sampling points compared with other situations.From these two tests, we find that “N-32”, which provides 32 sampling points, achieves the best performance with the least error of 0.30885, offers the most effective information of the atomistic region, and also reduces the sampling times.
### 4.3.2. Effects of Different Data Exchanging Times on Couette Flows
The data exchanging timesex_steps influence the total number of the C→A and the A→C operations in one HAC simulation. We discuss the proper times of data exchanging in the current section.Firstly, we change the value ofex_steps, sample, and average over all MD time steps in one Δtave under the no-slip boundary condition to check the effects of ex_steps on Couette flows. The parameters are listed in Table 10.Table 10
Parameter list of different data exchanging times and sampling numbers.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare ex_steps 2 4 8 16 32 1 N 200 400 800 1600 3200 100In this section, the time intervals are multiples of the data exchanging time, and we consider four time intervals in this test. The cumulative mean errors andaverage deviation are plotted in Figure 13.Figure 13
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and final average deviation.We can figure out that, when theex_steps changes, the accuracy of the first five situations is lower than the last one with ex_steps=1. In Figure 13, the accuracy of “ex-2” and “ex-4” is very much closer to “compare” with only 1% difference, while the total times of data exchanging operations are a half or a quarter of the last one.Secondly, we test the slip boundary condition, and the results are listed in Table11 with different sampling numbers; “compare” reaches the highest accuracy, but “ex-2” and “ex-4” have only deviated from “compare” with a percentage of 5, that is, 0.36691 and 0.3993, respectively.Table 11
Slip length prediction of six situations and unified relative errors with differentex_steps and N.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare Lstationary 4.092σ 4.3663σ 4.5941σ 5.2711σ 4.5607σ 4.1067σ Lsliding 2.7973σ 2.708σ 2.3235σ 2.0588σ 2.5892σ 2.7511σ errHAC 0.36691 0.39994 0.37795 0.5621 0.41155 0.3599In the above two tests, the total number of sampling points is different. As we have discussed in Section4.3, sampling number with N=32 performs better than all MD time steps averaging. Next, we take another two cases into account with the same sampling points. The parameters are shown in Table 12.Table 12
Parameter list of different exchanging times but of equal sampling number.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare ex_steps 2 4 8 16 32 1 N 32 32 32 32 32 32We perform the tests under the no-slip and slip boundary conditions and plot the normalized errors under the no-slip boundary condition in Figure14 and the relative errors under the slip boundary condition in Table 13.Table 13
Slip length prediction of six situations and unified relative errors with differentex_steps and equal N.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare Lstationary 3.8645σ 3.8896σ 4.8903σ 4.1258σ 4.3519σ 3.8371σ Lsliding 2.8012σ 2.7598σ 2.2585σ 2.6894σ 3.4859σ 2.751σ errHAC 0.32645 0.32065 0.4478 0.3505 0.5619 0.30885Figure 14
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and final average deviation.From Figure14 and Table 13, we can see that the first two situations perform better than “ex-8,” “ex-16,” and “ex-32” but worse than the last one under the no-slip boundary condition as well as the slip boundary condition. But the first two situations are worse than the last situation only by 1% and 2% in these two tests with error of 0.32645 and 0.32065, respectively.From the results of all the tests, we can conclude that timely data exchanging between the atomistic region and the continuum region performs better than otherex_steps settings. But with acceptable errors, we can choose a half or a quarter of the times of data exchanging to reduce the computational load. Therefore, “ex-2” and “ex-4” can be better choices.
## 4.3.1. Effects of Different Sampling Number on Couette Flows
We firstly setex_steps=1 and interval=1 and vary the sampling number N under the no-slip and slip boundary conditions. The parameter list is shown in Table 8.Table 8
Parameter list of different sampling numbers.
Situation N-1 N-2 N-4 N-8 N-16 N-32 N-64 compare N 1 2 4 8 16 32 64 100The first test is under the no-slip boundary and we depict the cumulative mean errors during the five time intervals andaverage deviation as shown in Figure 12. The result of “N-32” with sample number N=32 near the occasion of data exchanging obtains the minimum cumulative mean error. The second test is performed under the slip boundary condition and the relative errors are listed in Table 9.Table 9
Slip length prediction of eight situations and unified relative errors with different sampling numbers.
Situation N-1 N-2 N-4 N-8 N-16 N-32 N-64 compare Lstationary 4.1917σ 4.7359σ 4.4798σ 4.1362σ 4.1975σ 3.8371σ 4.4089σ 4.1067σ Lsliding 3.0493σ 2.459σ 3.1436σ 3.1479σ 3.1785σ 2.751σ 2.7515σ 2.7511σ errHAC 0.43915 0.4172 0.51365 0.4495 0.46755 0.30885 0.4172 0.3599Figure 12
Cumulative mean errors during five time intervals (0–50)τ, (50–100)τ, (100–150)τ, (1000–1500)τ, and (7000–8000)τ and final average deviation.Just as the conclusion of Section4.2.2 shows, under the current model flow, sampling points closer to the occasion of data exchanging provide more important messages for the simulation evolution. Therefore, “N-32” provides better data exchanging boundary conditions while ensuring the quantity of sampling points compared with other situations.From these two tests, we find that “N-32”, which provides 32 sampling points, achieves the best performance with the least error of 0.30885, offers the most effective information of the atomistic region, and also reduces the sampling times.
## 4.3.2. Effects of Different Data Exchanging Times on Couette Flows
The data exchanging timesex_steps influence the total number of the C→A and the A→C operations in one HAC simulation. We discuss the proper times of data exchanging in the current section.Firstly, we change the value ofex_steps, sample, and average over all MD time steps in one Δtave under the no-slip boundary condition to check the effects of ex_steps on Couette flows. The parameters are listed in Table 10.Table 10
Parameter list of different data exchanging times and sampling numbers.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare ex_steps 2 4 8 16 32 1 N 200 400 800 1600 3200 100In this section, the time intervals are multiples of the data exchanging time, and we consider four time intervals in this test. The cumulative mean errors andaverage deviation are plotted in Figure 13.Figure 13
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and final average deviation.We can figure out that, when theex_steps changes, the accuracy of the first five situations is lower than the last one with ex_steps=1. In Figure 13, the accuracy of “ex-2” and “ex-4” is very much closer to “compare” with only 1% difference, while the total times of data exchanging operations are a half or a quarter of the last one.Secondly, we test the slip boundary condition, and the results are listed in Table11 with different sampling numbers; “compare” reaches the highest accuracy, but “ex-2” and “ex-4” have only deviated from “compare” with a percentage of 5, that is, 0.36691 and 0.3993, respectively.Table 11
Slip length prediction of six situations and unified relative errors with differentex_steps and N.
Situation ex-2 ex-4 ex-8 ex-16 ex-32 compare Lstationary 4.092σ 4.3663σ 4.5941σ 5.2711σ 4.5607σ 4.1067σ Lsliding 2.7973σ 2.708σ 2.3235σ 2.0588σ 2.5892σ 2.7511σ errHAC 0.36691 0.39994 0.37795 0.5621 0.41155 0.3599In the above two tests, the total number of sampling points is different. As we have discussed in Section4.3, sampling number with N=32 performs better than all MD time steps averaging. Next, we take another two cases into account with the same sampling points. The parameters are shown in Table 12.Table 12
Parameter list of different exchanging times but of equal sampling number.
| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
| --- | --- | --- | --- | --- | --- | --- |
| ex_steps | 2 | 4 | 8 | 16 | 32 | 1 |
| N | 32 | 32 | 32 | 32 | 32 | 32 |

We perform the tests under the no-slip and slip boundary conditions, plot the normalized errors under the no-slip boundary condition in Figure 14, and list the relative errors under the slip boundary condition in Table 13.

Table 13
Slip length prediction of six situations and unified relative errors with different ex_steps and equal N.
| Situation | ex-2 | ex-4 | ex-8 | ex-16 | ex-32 | compare |
| --- | --- | --- | --- | --- | --- | --- |
| Lstationary | 3.8645σ | 3.8896σ | 4.8903σ | 4.1258σ | 4.3519σ | 3.8371σ |
| Lsliding | 2.8012σ | 2.7598σ | 2.2585σ | 2.6894σ | 3.4859σ | 2.751σ |
| errHAC | 0.32645 | 0.32065 | 0.4478 | 0.3505 | 0.5619 | 0.30885 |

Figure 14
Cumulative mean errors during four time intervals (64–128)τ, (256–512)τ, (960–1472)τ, and (6976–8000)τ and the final average deviation.

From Figure 14 and Table 13, we can see that the first two situations perform better than “ex-8”, “ex-16”, and “ex-32” but worse than the last one, under both the no-slip and the slip boundary conditions. However, the first two situations are worse than the last situation by only 1% and 2% in these two tests, with errors of 0.32645 and 0.32065, respectively.

From the results of all the tests, we can conclude that timely data exchanging between the atomistic region and the continuum region performs better than other ex_steps settings. With acceptable errors, however, the number of data exchanges can be halved or quartered to reduce the computational load. Therefore, “ex-2” and “ex-4” can be better choices.
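For reference, the slip lengths Lstationary and Lsliding reported in Tables 9, 11, and 13 refer to the stationary and sliding walls of the Couette channel. The relations below are the textbook Navier-slip definitions and the resulting steady linear profile; we assume, but cannot confirm from this section alone, that they match the paper's exact convention (channel height H, upper-wall speed U):

```latex
% Navier slip at each wall (assumed convention), for a channel of height H
% with a stationary lower wall and an upper wall sliding at speed U:
\[
  u(0) = L_{\mathrm{stationary}} \left.\frac{\partial u}{\partial y}\right|_{y=0},
  \qquad
  U - u(H) = L_{\mathrm{sliding}} \left.\frac{\partial u}{\partial y}\right|_{y=H}.
\]
% The steady Couette profile consistent with both conditions is linear:
\[
  u(y) = U \,\frac{y + L_{\mathrm{stationary}}}{H + L_{\mathrm{stationary}} + L_{\mathrm{sliding}}}.
\]
```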
## 4.4. Effects of Parameters of the Nonperiodic Boundary Force Model on Couette Flows
The effects of the smoothing parameter α and the finer parameter β on the Couette flow simulation are discussed in this section. We use two tests, under the no-slip and slip boundary conditions, to measure these effects, as shown in Figures 1 and 2. The combinations of parameters are listed in Table 14. These 16 situations are tested to pick out the best combination of parameters, that is, the one with the least disturbance on the local liquid structure near the MD-continuum interface.

Table 14
Combinations of parameters in the nonperiodic boundary force model with 16 situations.
The 16 situations are the combinations of four values of the smoothing parameter α with four values of the finer parameter β:

| β \ α | 1×10⁻⁴ | 0.001 | 0.01 | 0.1 |
| --- | --- | --- | --- | --- |
| 20 | STNI-1 | STNI-2 | STNI-3 | STNI-4 |
| 10 | STNI-5 | STNI-6 | STNI-7 | STNI-8 |
| 5 | STNI-9 | STNI-10 | STNI-11 | STNI-12 |
| 1 | STNI-13 | STNI-14 | STNI-15 | STNI-16 |

Figure 15 shows the normalized cumulative mean errors under the no-slip boundary condition. Figures 16 and 17 show the relative errors of the slip length and of the density near the MD-continuum interface under the slip boundary condition.

Figure 15
Differences among the sixteen situations using the results of “STNI-1” as the normalized base (a) and summation errors over the five time intervals (b).
Figure 16
Relative errors of slip length under 16 situations.

Figure 17
Relative errors of density under 16 situations; the base density is ρσ³ = 0.81.

Figure 17 shows that, as the finer parameter β decreases, the density near the MD-continuum interface drifts increasingly away from the base value, and “STNI-4” achieves the minimum disturbance on the local liquid structure. In Figure 15, “STNI-4” matches the analytical solution best, and in Figure 16 “STNI-4” predicts the slip length better than the other situations except “STNI-6” and “STNI-10”, from which it differs by 4%.

Based on the above analysis, we can conclude that, in our current model flow, the smoothing parameter α and the finer parameter β can be chosen as 0.1 and 20, respectively, providing the best accuracy and prediction capability.
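A minimal sketch of how such a parameter sweep can be organized is given below. The α-β grid reproduces Table 14, while the error values and the scoring step are placeholders standing in for the measured errHAC-style errors behind Figures 15–17 (they are not data from the paper):

```python
from itertools import product

# Parameter grid from Table 14: four smoothing parameters (alpha) times four
# finer parameters (beta) give the 16 "STNI" situations.
alphas = [1e-4, 0.001, 0.01, 0.1]
betas = [20, 10, 5, 1]

situations = {
    f"STNI-{i + 1}": {"alpha": alpha, "beta": beta}
    for i, (beta, alpha) in enumerate(product(betas, alphas))
}

# Placeholder error table: in the paper these numbers come from the HAC runs
# behind Figures 15-17 (e.g., summed cumulative mean error or slip-length
# error). Here every situation gets a dummy value except the reported winner.
errors = {name: 1.0 for name in situations}
errors["STNI-4"] = 0.1   # stand-in for the best measured result

best = min(errors, key=errors.get)
print(f"best situation: {best} -> {situations[best]}")
# prints: best situation: STNI-4 -> {'alpha': 0.1, 'beta': 20}
```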
## 5. Conclusions
In this paper, we design a domain decomposition type of hybrid MD-continuum solver using the open-source software LAMMPS and OpenFOAM. We use Couette channel flow as our model flow to investigate coupling strategy issues. Focusing on a fixed channel height and sliding velocity, under the no-slip and slip boundary conditions, we analyze in depth different combinations and widths of the functional regions, different data exchanging parameters, and various combinations of parameters in the nonperiodic boundary force model.

For layernumber and layerwidth, we find that, when the functional regions have equal width and the width of the overlap region is varied, the “5-layer” combination obtains the best accuracy, while when the width of the overlap region is fixed, the “4-layer-1” combination is the best of the five combinations, having a reasonable width of functional regions and alleviating the fluctuations sufficiently. Furthermore, we conclude that it is reasonable to set a buffer region between the C→A region and the A→C region, which effectively relaxes the data exchanged between the continuum region and the atomistic region, while the buffer region between the control region and the C→A region can be removed to save computational load. We also find that, for a fixed width of the overlap region, widening the C→A region gives a better simulation result in our model flow.

As to the sampling parameters of the temporal average, the present results show that the data exchanging operation needs only a few sampling points close to the occasions of interaction to guarantee modeling efficiency and to reduce the sampling cost. Timely data exchanging between the atomistic region and the continuum region performs best among all settings, but, with acceptable errors, a half or a quarter of the data exchanging times can be chosen to reduce the computational load. The discussion of the parameters of the nonperiodic boundary force model shows that, under domain decomposition along the flow direction, a smoothing parameter of 0.1 and a finer parameter of 20 achieve the minimum disturbance on the local structure near the MD-continuum interface while keeping the simulation accurate.

In this paper, we mainly focus on the HAC method based on geometrical coupling. There are other coupling techniques, such as embedded coupling [38]. The microscopic method can be used to calibrate the parameters in the continuum mechanics model [39, 40]. In our future work, we aim to extend the simulation power of our framework to support more kinds of multiscale coupling.
---
*Source: 1014636-2017-02-14.xml* | 2017 |
# Experimental Study on the Components in Polyvalent “Ghost” Salmonella Vaccine for Veterinary Use
**Authors:** Daniela Vasileva Pencheva; Elena Ilieva Velichkova; Denis Zdravkov Sandarov; Adrian Draganov Cardoso; Maria Hristova Mileva; Petia Dinkova Genova-Kalou; Rayna Bryaskova
**Journal:** Journal of Nanomaterials
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101464
---
## Abstract
Development of “ghost” Salmonella vaccines, inactivated by using a hybrid nanomaterial based on silver nanoparticles (AgNps) stabilized with polyvinyl alcohol (PVA), is an innovative approach in vaccine production. For this purpose, a series of experiments was performed to establish the components of the polyvalent “ghost” Salmonella vaccine and the most suitable methods for its preparation. The strains S. Enteritidis, S. Newport-Puerto Rico, and S. Typhimurium were chosen as appropriate candidates for incorporation into a polyvalent Salmonella “ghost” vaccine for veterinary use.
---
## Body
## 1. Introduction
Obtaining a “ghost” vaccine by inactivating bacteria with a hybrid material based on silver nanoparticles stabilized by polyvinyl alcohol (PVA/AgNps) is an innovative approach to whole-cell inactivated vaccines [1]. Vaccines prepared in this way have many advantages, such as preservation of the full antigenic range and creation of complex protective immunity. A large number of cases of Salmonella gastroenteritis are reported annually in many European countries and in the United States; approximately 80 deaths are recorded each year in the UK [2]. Nontyphoidal Salmonella are also known to cause a significant number of systemic and nonenteric human infections. In a study performed over a five-year period in Bulgaria, it was found that 21% of the isolates were resistant to ampicillin and gentamicin, 17.64% to tetracycline, 14.28% to nalidixic acid, and 10% to chloramphenicol [3]. About half of Salmonella outbreaks are due to contaminated poultry and poultry products. The route to poultry infection is colonization of the hen house and its pests, such as rodents, insects, and wild birds. Salmonella in the feces of laying hens contaminate the egg surface or penetrate through cracks in light shells. In hens with ovarian infection, it has been established that S. Enteritidis can reach the egg by internal vertical transmission via the reproductive tract to the yolk or albumen [4]. Historically, S. Typhimurium is the most commonly reported serotype. In 2001, the three most common Salmonella serotypes (more than 50% of all isolates) were S. Typhimurium (22%), S. Enteritidis (18%), and S. Newport (10%) [5]. S. Newport is one of the Salmonella serotypes causing disease in cattle [6]. The emergence of multidrug-resistant Salmonella strains raises the question of strengthening the measures related to prevention and protection in poultry. An inactivated Salmonella vaccine is available on the market for the active immunization of chickens, hens, and their parent flocks [7]. It contains formalin-inactivated cells of S. Enteritidis PT4 (1 × 10⁹ cells) and S. Typhimurium DT104 (1 × 10⁹ cells). This type of inactivated Salmonella vaccine cannot offer 100% protection because the bacterial cells are destroyed by the formalin treatment. An alternative could be a vaccine derived from ghost cells obtained by treatment with the hybrid material.

The aim of the present investigation is to establish the components of a polyvalent “ghost” Salmonella vaccine with preserved integrity of the cell surface, by inactivating different Salmonella strains with AgNps stabilized by PVA.
## 2. Materials and Methods
PVA/AgNps hybrid materials were prepared by adding a silver salt (AgNO3), the precursor of silver ions, to a PVA solution, which leads to coordination of the silver ions with the hydroxyl groups (-OH) of PVA. Boiling the PVA solution at 100°C for 60 min in the presence of AgNO3 results in the formation of silver nanoparticles stabilized in PVA, which protects them from agglomeration and ensures their homogeneous distribution. The formation of silver nanoparticles was confirmed by UV-Vis spectroscopy and transmission electron microscopy (TEM) [8]. The silver concentration in the PVA/AgNps solution was 174 mg/L, as determined by ICP analysis.

To determine the Minimal Bactericidal Concentration (MBC) of the synthesized samples, the following control strains from the collection of the “Laboratory for Control of In Vitro Diagnostic Medical Devices” at “BB-NCIPD” were used: S. Typhi London, S. Paratyphi B, S. Nairobi, S. Typhimurium, 79a S. Newport-Puerto Rico, S. Enteritidis, and S. Enteritidis ATCC 13076. The PVA/AgNps solution (174 mg/L) was first diluted with water for injection at a ratio of 1 : 6, giving a starting concentration of 29 mg/L. In five sterile tubes, successive twofold dilutions of the working PVA/AgNps dilution were prepared in a volume of 1 mL with water for injection, down to a concentration of 0.45 mg/L. To each tube, a quantity of bacterial suspension (prepared by a validated patented methodology) was added to provide 10⁵ to 10⁶ CFU (Colony Forming Units). From the suspension, the corresponding positive control was seeded in the same amount by the surface method on agar plates with Soybean Casein Agar (SCA). Tubes and plates were incubated for 24 hours at 32.5 ± 2.5°C. Each tube was then plated by the agar surface method on SCA plates, which were cultured in a thermostat at 32.5 ± 2.5°C in order to confirm inactivation of the bacterial suspension.

From working cultures of four control Salmonella strains (S. Typhimurium, S. Newport-Puerto Rico, S. Enteritidis, and S. Enteritidis ATCC 13076), antigens for “ghost” Salmonella vaccine immunization were prepared. To obtain the required bacterial mass of pure culture, each strain was inoculated on a plain agar slant. A standardized bacterial suspension of each strain was treated separately with the PVA/AgNps hybrid material, added in an amount giving a silver concentration of 30 mg/L in the final volume of the antigen. The confirmed inactivated bacterial suspension, standardized in a densitometer to 3 McFarland, was used as an antigen for the immunization of rabbits. Intravenous immunizations were carried out with an increasing antigenic load of 0.5 to 2 mL, following the established “BB-NCIPD” scheme, on Californian rabbits: immunization in the vena marginalis at intervals of 3 to 4 days.

The resulting sera were titrated by agglutination to establish the specific titer. The presence of cross-agglutinins in the different hyperimmune sera was established by a slide agglutination reaction (Table 1).
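As a quick arithmetic check of the MBC macrodilution scheme described above, the following minimal sketch computes the successive twofold dilutions of the 29 mg/L working solution down to the stated ~0.45 mg/L endpoint (concentrations are taken from the text; the loop itself is only illustrative):

```python
# Twofold macrodilution series for the PVA/AgNps working solution
# (concentrations from Materials and Methods; purely an arithmetic check).
stock = 174.0            # mg/L silver in the PVA/AgNps solution (by ICP)
working = stock / 6      # 1 : 6 dilution with water for injection -> 29 mg/L

conc = working
step = 0
while round(conc, 2) > 0.45:     # stated endpoint of the series
    conc /= 2
    step += 1
    print(f"dilution step {step}: {conc:.3f} mg/L silver")
# the final step prints 0.453 mg/L, i.e., the ~0.45 mg/L endpoint in the text
```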
Table 1

Content of cross-agglutinins in anti-Salmonella sera diluted to 1 : 50.

| Serum | S. enterica serovar Enteritidis ATCC 13076 (1,9,12; gm; —) | S. enterica serovar Enteritidis (1,9,12; gm; —) | 79a S. enterica serovar Newport-Puerto Rico (6,8 [20]; —; 1,2) | S. enterica serovar Typhimurium (1,4,[5],12; i; 1,5) |
| --- | --- | --- | --- | --- |
| Anti-S. Enteritidis ATCC 13076 serum | ++++ | + | − | ++ |
| Anti-S. Enteritidis serum | − | ++++ | − | +++ |
| Anti-S. Newport-Puerto Rico serum | − | − | ++++ | ++ |
| Anti-S. Typhimurium serum | − | − | +++ | ++++ |
++++: very good visible agglutinates in clear liquid; +++: good visible agglutinates in almost clear liquid; ++: visible agglutinates in turbid liquid; +: slightly visible agglutinates in turbid liquid.

Cell viability after exposure to the material was determined by a modified MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay [9] on a mouse fibroblast cell line (L20B). This gives information on whether the hybrid material can exert its bactericidal effect without affecting the metabolism of host cells.
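For context, MTT results are conventionally expressed as percent viability relative to an untreated control; a minimal sketch of that standard calculation is given below (the absorbance values are invented placeholders, not data from this study):

```python
# Conventional MTT viability calculation (illustrative values only).
def percent_viability(od_treated: float, od_control: float,
                      od_blank: float = 0.0) -> float:
    """Viability relative to the untreated control, blank-corrected."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical absorbance readings (e.g., at 570 nm):
print(f"{percent_viability(od_treated=0.42, od_control=0.80, od_blank=0.05):.1f}% viable")
```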
## 3. Results and Discussion
Initially, the Minimal Bactericidal Concentration (MBC) of the PVA/AgNps hybrid material was determined for Salmonella enterica serovar Typhi London, Salmonella enterica serovar Paratyphi B, and Salmonella enterica serovar Nairobi, which were added to the reaction mixture at 10⁵ to 10⁶ CFU by a validated methodology, according to the requirements of CLSI M26-A [10]. The MBC was established at a silver concentration of 0.054 mg/L in all cases.

The MBC was additionally determined for the strains intended for the experimental “ghost” Salmonella vaccine: Salmonella enterica serovar Typhimurium, Salmonella enterica serovar Newport-Puerto Rico, Salmonella enterica serovar Enteritidis, and Salmonella enterica serovar Enteritidis ATCC 13076. Before performing the test, the MBC for the strains S. enterica serovar Enteritidis and S. enterica serovar Typhimurium had been established as lower than 0.027 mg/L. Only for S. Newport-Puerto Rico was the MBC 0.108 mg/L (≈0.11 mg/L) (Figure 1). It can therefore be assumed that the tested Salmonella strains were sensitive to silver, as tests with the same hybrid material showed that MBC values equal to or greater than 1.1 mg/L indicate silver resistance [11]. Evidence of widespread resistance of Salmonella to silver, resulting from a plasmid encoding genes for resistance to heavy metals, has long been cited in the literature [12].

Figure 1
MBC of PVA/AgNps determined by the macrodilution method for (a) S. Newport-Puerto Rico, (b) S. Enteritidis ATCC 13076, (c) S. Enteritidis, and (d) S. Typhimurium.
The maximal nontoxic concentration (MNC) was defined as 0.007 mg/L, while the concentration required to inhibit cell viability by 50% (CD50) was determined as 0.53 mg/L, with a dose-dependent effect (Figure 2). As the MBC of the respective strains was determined at a bacterial load of 10⁵ to 10⁶ CFU, a silver concentration of 30 mg/L was applied to inactivate the far denser bacterial suspension (on the order of 1 × 10⁹ cells) used as antigen.

Figure 2
Cytotoxic effect of PVA/AgNps on the viability of the mouse fibroblast (L20B) cell line at 24 h and 48 h.

Sera were tested by slide agglutination, diluted 1 : 50 with TRIS saline buffer, for the presence of cross-agglutinins against the other strains used in the experiment (Table 1). With reference to the White-Kauffmann scheme [13], common second-phase H antigens of S. enterica serovar Newport-Puerto Rico and S. enterica serovar Typhimurium were found, which explains the coagglutination in anti-S. Newport-Puerto Rico and anti-S. Typhimurium sera. Common O antigens explain the coagglutination between anti-S. Enteritidis and anti-S. Typhimurium sera.

The specific titer of all rabbit antisera obtained after immunization was determined by Gruber's agglutination reaction. Anti-S. Enteritidis ATCC 13076, anti-S. Newport-Puerto Rico, and anti-S. Typhimurium sera had an O-titer of 1 : 6400; only the anti-S. Enteritidis serum had an O-titer of 1 : 1600. A significant difference in the activity of the sera obtained from the two S. Enteritidis strains was found; it was therefore considered appropriate to incorporate both of them in the ongoing prospective studies on the composition of a polyvalent Salmonella “ghost” vaccine for veterinary use.

TEM analysis of one of the antigens used in the immunization attempts was performed a month after completion of the immunization (Figure 3). It was found that the prolonged presence of PVA/AgNps in the immunization antigen results in complete lysis of the bacterial cells following cell death. Therefore, an additional step of washing the antigen with saline after inactivation with PVA/AgNps is necessary in order to preserve the inactivated bacterial cells in the form of “ghost” cells.

Figure 3
TEM analysis of the Salmonella antigen inactivated with PVA/AgNps.
## 4. Conclusions
The PVA/AgNps hybrid material was applied to obtain “ghost” cells with preserved integrity of the cell surface by inactivating different Salmonella strains. Initially, the MBC for the different Salmonella strains was determined by the macrodilution method; the maximal nontoxic concentration of PVA/AgNps and the CD50 were established as well. The specific titer of all rabbit antisera obtained after immunization was determined by Gruber's agglutination reaction, and the strains S. Enteritidis, S. Newport-Puerto Rico, and S. Typhimurium were chosen as appropriate candidates for incorporation into a polyvalent Salmonella “ghost” vaccine for veterinary use.

The addition of further strains to the vaccine will expand the range of causative agents it covers, and their inactivation by PVA/AgNps will allow retention of the full range of antigenic determinants, thus providing complete protection.
---
*Source: 101464-2015-07-22.xml* | 2015 |
# Oxidative Stress Induced by MnSOD-p53 Interaction: Pro- or Anti-Tumorigenic?
**Authors:** Delira Robbins; Yunfeng Zhao
**Journal:** Journal of Signal Transduction
(2012)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2012/101465
---
## Abstract
The formation of reactive oxygen species (ROS) is a result of incomplete reduction of molecular oxygen during cellular metabolism. Although ROS has been shown to act as signaling molecules, it is known that these reactive molecules can act as prooxidants causing damage to DNA, proteins, and lipids, which over time can lead to disease propagation and ultimately cell death. Thus, restoring the protective antioxidant capacity of the cell has become an important target in therapeutic intervention. In addition, a clearer understanding of the disease stage and molecular events that contribute to ROS generation during tumor promotion can lead to novel approaches to enhance target specificity in cancer progression. This paper will focus on not only the traditional routes of ROS generation, but also on new mechanisms via the tumor suppressor p53 and the interaction between p53 and MnSOD, the primary antioxidant enzyme in mitochondria. In addition, the potential consequences of the p53-MnSOD interaction have also been discussed. Lastly, we have highlighted clinical implications of targeting the p53-MnSOD interaction and discussed recent therapeutic mechanisms utilized to modulate both p53 and MnSOD as a method of tumor suppression.
---
## Body
## 1. Introduction
Oxidative stress has been defined as the cellular imbalance of prooxidants versus antioxidants that overwhelms the cell's capacity to scavenge the oxidative load and contributes to the pathogenesis of various diseases. Reactive oxygen species (ROS) are free radicals derived from molecular oxygen that play a key role in promoting oxidative stress. These radicals result from the incomplete reduction of oxygen, mainly during mitochondrial respiration. There are several products of oxygen metabolism, both nonradicals and radicals, that form ROS, such as hydrogen peroxide (H2O2) and the superoxide anion (O2•−). Contributors of ROS can modify the intracellular redox status through unfavorable interactions with endogenous regulators of oxidative stress. Superoxide radicals can interact with mitochondrial nitric oxide to form peroxynitrite, which can alter enzymes such as aconitase and the mitochondrial complexes of the electron transport chain [1]. On the other hand, the presence of oxidative stress can alter normal cellular homeostasis by modifying proteins involved in DNA repair, activating signal transduction pathways involved in cell survival and inflammation, and inducing cellular apoptotic pathways that are detrimental to the cell. For many years, scientists have tried to combat free radical generation and superoxide production through exogenous antioxidant supplementation, such as ascorbate, vitamin E, and linoleic acid. However, many of these trials have failed, showing no significant decrease in cancer incidence, death, or major cardiovascular events [2]. Herein, we will focus on several novel signaling pathways affecting ROS generation, such as p53 signaling and the interaction between p53 and manganese superoxide dismutase (MnSOD), and on how to potentially target these pathways for cancer therapy.
## 2. Oxidative Stress
Oxidative stress has been repeatedly shown to contribute to the progression of multiple diseases, such as cancer [3], diabetes [4], ulcerative colitis [5], cardiovascular disease [6], and pulmonary disease [7], as well as neurodegenerative diseases [8]. Nevertheless, the biological significance of oxidative stress can be beneficial or detrimental depending on certain parameters such as concentration, duration of action, the cell type exposed, the type of free radicals and reactive metabolites involved, and the activities of the associated signal transduction pathways.

The mitochondrial electron transport chain remains one of the main sources of intracellular oxidative stress [9]. During mitochondrial respiration, electrons flow through four integral membrane protein complexes to finally reduce molecular oxygen to water. However, approximately 1-2% of molecular oxygen undergoes incomplete reduction, resulting in the formation of superoxide anions and mitochondria-mediated ROS generation [10]. Though mainly produced during mitochondrial respiration, superoxide anions can be detoxified by endogenous antioxidant enzymes such as manganese superoxide dismutase (MnSOD) to hydrogen peroxide, which is further converted to water through the enzymatic actions of various antioxidant enzymes, including glutathione reductases, peroxiredoxins, glutathione transferases, and catalase, all of which function in the removal of hydrogen peroxide.

Nevertheless, it is common for cells to enhance ROS generation in response to stress. Oxidoreductases are enzymes that are often activated during the cellular stress response and catalyze the transfer of electrons from an electron donor (reductant) to an electron acceptor (oxidant) [11], with the associated formation of superoxide anions and ROS as byproducts. Several enzymes act as oxidoreductases and contribute to intracellular ROS generation, such as cyclooxygenase [12], lipoxygenase [12, 13], cytochrome P450 enzymes [14], nitric-oxide synthase [15], xanthine oxidase [16], and mitochondrial NADH: ubiquinone oxidoreductase (complex I) [17]. NADPH oxidases of the Nox family are also oxidoreductases that produce superoxide anions as a primary product and are one of the key sources of intracellular ROS formation. NADPH oxidases (Nox) are endogenous enzymatic heteromeric complexes that reduce molecular oxygen to superoxide, in conjunction with NADPH oxidation, which can be converted to various ROS. Nox can be activated by a myriad of cellular stress stimuli such as heavy metals [18, 19], organic solvents [20], and UV and ionizing irradiation [21, 22]. Once the cellular stress response is initiated, the cytosolic regulatory subunits of Nox, p47phox and p67phox, together with the small G protein Rac, translocate to the membrane and associate with cytochrome b558 (consisting of the two subunits gp91phox (Nox2) and p22phox), which acts as a central docking site for complex formation [23]. Emerging evidence has linked Nox enzymes to oxidative stress that may contribute to disease progression [11, 17, 24, 25]. The radicals generated by Nox activation are capable of modulating various redox-sensitive signaling pathways involved in the activation of mitogen-activated protein kinases (MAPKs) and transcription factors (NF-κB) [26–28], making the regulation of Nox activation complex.

Oxidative stress can be generated endogenously, as well as promoted exogenously by multiple environmental factors. Ultraviolet irradiation (UV) is an environmental promoter of oxidative stress.
UV is known to damage DNA and intracellular proteins through direct and indirect mechanisms. UV exists in three forms: UVA (400–320 nm), UVB (320–290 nm), and UVC (290–100 nm). UVA and UVB are the most biologically significant, with UVC being mostly absorbed by ozone [29]. UV directly induces the cross-linking of neighboring pyrimidines to form pyrimidine dimers in DNA that result in mutagenic DNA lesions [30–35]. In addition, UV promotes ROS generation that can damage a large number of intracellular proteins and can indirectly damage DNA.

Associated with oxidative damage is lipid peroxidation. High levels of ROS are detrimental and can cause damage to various biomolecules, including the fatty acid side chains of membrane lipids, forming reactive organic products such as malondialdehyde and 4-hydroxynonenal, both of which can generate DNA adducts and point mutations [36]. Lipid peroxidation not only affects DNA stability but can also alter lipid membrane proteins involved in signal transduction pathways, promoting constitutive activation and downstream cellular proliferation. Furthermore, previous studies have shown that products of lipid peroxidation serve as intermediates in the activation of signaling pathways involving phospholipase A2 and the MAPK pathway, both associated with UV-induced carcinogenesis [37–39].

Although there are various sources of endogenous oxidative stress, mitochondria are the major cellular organelles contributing to intracellular ROS generation. Mitochondria consume approximately 80–90% of the cell's oxygen for ATP synthesis via oxidative phosphorylation. In the early 1920s, Otto Warburg and colleagues theorized that defective oxidative phosphorylation during cancer progression caused tumor cells to undergo a metabolic shift requiring high rates of glycolysis that promoted lactate production in the presence of oxygen. This phenomenon became known as aerobic glycolysis and was later coined “The Warburg Effect.” Some of the metabolic enzymes that are altered during cancer progression are involved in the mitochondrial electron transport chain [40, 41]. The electron transport chain consists of a constant flow of electrons through mitochondrial intermembrane complexes, with molecular oxygen being the ultimate electron acceptor. This electron flow is used to pump protons across the mitochondrial inner membrane, creating an electrochemical gradient that is coupled to ATP synthesis. However, leaking electrons contribute to the incomplete reduction of molecular oxygen, resulting in superoxide anion formation. Mitochondria are readily susceptible to oxidative damage for various reasons: (1) lack of effective base excision repair mechanisms; (2) the close proximity of mitochondrial DNA to sites of ROS generation; (3) lack of protective histones on mitochondrial DNA [42]. Therefore, alterations in mitochondrial ROS generation and in protection via antioxidant expression are key to the detrimental effects of disease progression.
## 3. Manganese Superoxide Dismutase (MnSOD)
Maintaining a balance between free radicals and antioxidants is required for cellular homeostasis. However, when this balance is shifted in favor of free radical generation, normal physiology is altered and the pathogenesis of disease is promoted. Antioxidants are endogenous defense mechanisms utilized by the cell to counter fluctuations in free radical generation and include both enzymatic and nonenzymatic contributors. Ascorbic acid (vitamin C) and α-tocopherol (vitamin E) are nonenzymatic antioxidants that have previously been shown to effectively scavenge free radicals. On the other hand, antioxidants such as glutathione peroxidase and superoxide dismutase are enzymatic antioxidants that catalyze the neutralization of free radicals into products that are nontoxic to the cell. Superoxide dismutase catalyzes the dismutation of superoxide anions, leading to the formation of hydrogen peroxide and molecular oxygen; hydrogen peroxide is further detoxified to water by catalase and other endogenous antioxidant enzymes. The superoxide dismutase family consists of metalloenzymes. Currently, there are three major superoxide dismutase enzymes within the human cell: manganese superoxide dismutase (MnSOD), copper-zinc superoxide dismutase (Cu,ZnSOD), and extracellular superoxide dismutase (ECSOD). MnSOD is localized in the mitochondrial matrix [43]; Cu,ZnSOD is found primarily in the cytosol [44] and can be detected in the mitochondrial intermembrane space [45]; and ECSOD is a homotetrameric glycosylated form of Cu,ZnSOD found in the extracellular space [46].

MnSOD is ubiquitously found in both prokaryotes and eukaryotes, and its increased activity is often associated with cytoprotection against oxidants. MnSOD can be induced by various mediators of oxidative stress such as tumor necrosis factor, lipopolysaccharide, and interleukin-1 [47]. This antioxidant enzyme is nuclear encoded by a gene localized to chromosome 6q25 [48], a region often lost in cancers such as melanoma [49]. MnSOD is synthesized in the cytosol as a larger precursor with a transit peptide on the N-terminus and is imported to the mitochondrial matrix, where proteolytic processing yields the mature form [50]. Most cancer cells and in vitro transformed cell lines have diminished MnSOD activity compared to their normal counterparts [51]. In addition, deficiencies in MnSOD may contribute to oxidative stress generation that promotes neoplastic transformation and/or maintenance of the malignant phenotype. Regarding the correlation between MnSOD expression and cancer progression, mutations within the MnSOD gene and its regulatory sequence have been observed in several types of human cancers [52, 53]. Antioxidants, however, can suppress carcinogenesis, particularly during the promotion phase. In addition, our laboratory, as well as others, has shown that overexpression of MnSOD reduces tumor multiplicity, incidence, and metastatic ability in various in vitro and in vivo models [54–57].
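The detoxification chemistry referred to above can be written out explicitly. These are the textbook dismutation and catalase reactions rather than equations quoted from this paper:

```latex
% Dismutation of superoxide by superoxide dismutase (SOD), followed by
% removal of hydrogen peroxide by catalase:
\[
  2\,\mathrm{O_2^{\bullet -}} + 2\,\mathrm{H^+}
    \xrightarrow{\ \text{SOD}\ }
  \mathrm{H_2O_2} + \mathrm{O_2}
\]
\[
  2\,\mathrm{H_2O_2}
    \xrightarrow{\ \text{catalase}\ }
  2\,\mathrm{H_2O} + \mathrm{O_2}
\]
```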
## 4. The Tumor Suppressor p53
p53 is a well-characterized transcription factor known to exert its tumor suppressor activity by activating genes that play a role in cell cycle arrest, such as p21CIP1 and GADD45. These genes, once activated, arrest the cell cycle to allow adequate DNA repair and restore normal cell proliferation. However, if the cell becomes overwhelmed by the stressor or the DNA damage cannot be repaired, p53 can ultimately induce apoptosis. The tumor suppressive activities of p53 can also be defined by the induction of senescence. Senescence is characterized by irreversible loss of proliferative potential, acquisition of a characteristic morphology, and expression of specific biomarkers such as senescence-associated β-galactosidase [58]. Nevertheless, how p53 regulates senescence is often contradictory and dependent on ROS generation. p53 can mediate cellular senescence via the transactivation of p21CIP1. Nonetheless, emerging evidence suggests ROS as a common mediator of senescence, with the involvement of superoxide dismutase and p53. Blander et al. reported that RNAi-mediated knockdown of SOD1 in primary human fibroblasts induced cellular senescence mediated by p53, whereas senescence was not induced in p53-deficient human fibroblasts [59]. Furthermore, overexpression of MnSOD induced growth arrest in the human colorectal cancer cell line HCT116 and increased senescence, which required the induction of p53 [60]. On the contrary, p53 can suppress senescence through the inhibition of the mTOR pathway via multiple mechanisms [61–63]. Nevertheless, this diverse biological spectrum of p53 regulation of cellular function remains complex and is dependent on the source of activation and the cell type.

There are various sources of p53 activators, including nucleotide depletion, hypoxia, ultraviolet radiation, and ionizing radiation; many chemotherapeutic drugs (e.g., doxorubicin) can also act as activators of p53. In normal cells, p53 remains at a low level and is under strict control by its negative regulator Mdm2. p53 induces autoregulation via Mdm2: as a transcription factor, p53 can bind to the promoter region of the mdm2 gene to promote transcription of Mdm2 mRNA [64, 65]. Following proper translation into a functional protein, Mdm2 acts as an E3 ligase during p53 activation. Mdm2 can polyubiquitinate p53, leading to proteasomal degradation [66]. However, Mdm2 can also monoubiquitinate p53, leading to intracellular trafficking [67]. The decisive role of p53 in inducing cell cycle arrest, senescence, or apoptosis involves intricate posttranslational, as well as transcription-dependent and transcription-independent, mechanisms. The tumor suppressor p53 is known to induce the transactivation of proapoptotic genes such as Bax, Puma, Noxa, and Bid and to repress the transcription of antiapoptotic genes such as Bcl-2, Bcl-xL, and survivin [68, 69]. Nevertheless, p53 can also induce apoptosis independent of its transcriptional activity. Many of the transcription-independent mechanisms of p53 were discovered through the use of inhibitors of transcription/translation, as well as truncated p53 mutants with altered subcellular localization, DNA binding, and cofactor recruitment.
The p53 monomer consists of various multifunctional domains, including the N-terminal transactivation domain (residues 1–73), a proline-rich region (residues 63–97), the highly conserved DNA-binding core domain (residues 94–312), a tetramerization domain located within the C-terminus (residues 324–355), and an unstructured basic domain (residues 360–393) [70] (Figure 1). Multiple polymorphisms occur within the TP53 gene that may enhance or alter p53 functionality. Dumont et al. discovered functional differences in polymorphic variants that enhanced p53-mediated apoptosis independent of its transactivation abilities [71]. A common sequence polymorphism within the proline-rich domain encoding arginine at position 72 exhibited a fivefold increase in inducing apoptosis compared to the common proline (Pro72) variant. These results suggested two mechanisms of Arg72 apoptotic enhancement: (1) increased mitochondrial localization; (2) enhanced binding of the Arg72 variant to the negative p53 regulator and E3 ligase, Mdm2. Although increased binding to Mdm2 did not augment p53 degradation, it was suggested that the altered conformation of the p53 Arg72 variant enhanced the binding ability and facilitated greater nuclear export [71]. This suggests the importance of understanding the regulation of structure-activity relationships in polymorphic forms of p53 in transcription-independent apoptosis.

Figure 1
p53 multifunctional domains. The p53 monomer consists of various multifunctional domains, including the N-terminal transactivation domain (residues 1–73), a proline-rich region (residues 63–97), the highly conserved DNA-binding core domain (residues 94–312), a tetramerization domain located within the C-terminus (residues 324–355), and an unstructured basic domain (residues 360–393).

During p53-mediated apoptosis, a distinct cytoplasmic pool of p53 translocates to the mitochondria. To promote mitochondrial translocation, the E3 ligase Mdm2 monoubiquitinates p53 [72]. Since the p53 protein lacks a mitochondrial localization sequence, p53 interacts with Bcl-2 family proteins via Bcl-2 homology (BH) domains. The presence of the BH domain allows proteins to regulate and interact with other Bcl-2 members that consist of multiple BH domains [73]. Once p53 arrives at the mitochondrial outer membrane, it binds to Bak, inducing a conformational change and Bak homo-oligomerization that results in mitochondrial outer membrane permeabilization (MOMP). MOMP allows the release of proapoptotic signaling molecules from the outer and inner mitochondrial membranes into the cytosol, triggering the intrinsic apoptotic signaling cascade. ROS generation has been suggested as an alternative p53 apoptotic target independent of cytochrome c release. Li et al. found that ROS generation regulated the mitochondrial membrane potential (Δψ), which was found to be a key constituent in the induction of p53-mediated apoptosis [74]. Interestingly, during ROS generation, apoptosis occurred in the absence of Bax mitochondrial translocation, Bid activation, and cytochrome c release. Several studies have suggested that the downstream effects of p53-mediated apoptosis are regulated by Bax expression. It has been shown that the introduction of recombinant Bax protein into isolated mitochondria induces cytochrome c release, and the ability of Bax to initiate pore formation in synthetic membranes has been shown to regulate cytochrome c release, resulting in the induction of apoptosis [75, 76]. However, discrepancies exist, with in vivo studies showing Bax localized in the cytosol, rather than within the mitochondrial membrane, under physiological conditions [77].

Herein, we show how p53 plays a dual role in early- versus late-stage cancer progression. During the process of carcinogenesis, mutations can occur both upstream and downstream of p53 activation. For example, loss of upstream activators of p53, for example, ATM and Chk2, can prevent p53 activation, contributing to unregulated cell cycling and promoting tumorigenesis [78]. In addition, mutations within the p53 protein can alter the conformational changes and DNA binding properties needed for efficient p53 activation. Lastly, many of these mutations lead to loss of downstream genes such as Bax or NOXA, which are proapoptotic and necessary for the regulation of cellular proliferation and death signaling.

The process of tumor formation is a multistage process that involves both the activation of protooncogenes and the inactivation of tumor suppressor genes, such as PTEN and p53. The multistage carcinogenesis paradigm consists of three well-characterized stages: initiation, promotion, and progression. During the initiation stage, mutations are induced within critical target genes of stem cells, for example, H-ras; however, in the skin carcinogenesis model, the epidermal layer remains phenotypically normal.
During the tumor promotion stage, a noncarcinogenic agent such as a phorbol ester can be used to induce the clonal expansion of the initiated stem cells through epigenetic mechanisms. This stage is often used by investigators to identify potential therapeutic targets because of its reversibility. During the tumor progression stage, malignancy takes place, characterized by enhanced invasiveness via the activation of proteases, metastasis via tumor cells entering the lymphatics, and loss of tumor suppressor activity (e.g., p53).

The two-stage skin carcinogenesis mouse model has been well characterized and used in numerous studies to screen anticancer agents. An initiator, such as dimethylbenz[a]anthracene (DMBA), is applied to the skin to initiate DNA damage within skin cells. Following DMBA treatment, a tumor promoter such as 12-O-tetradecanoylphorbol-13-acetate (TPA) is applied topically to the same area repeatedly for the duration of the study to promote the clonal expansion of mutated cells during the promotion stage. Interestingly, during the early stages of DMBA/TPA-mediated tumor promotion, both oncogenes and tumor suppressor genes are activated, resulting in increased cell proliferation accompanied by increased cell death [79] (Figure 2). Both processes exist throughout skin tumor formation, and, not surprisingly, these two opposing events are closely related.

Figure 2
Mechanisms of carcinogens in early-stage carcinogenesis. During the early stages of tumor promotion, both oncogenes and tumor suppressor genes are activated, resulting in increased cell proliferation accompanied by increased cell death.

Many of the tumor-promoting mechanisms utilized by phorbol esters are directly linked to the involvement of cell surface membranes [80, 81]. TPA can mediate its pleiotropic actions by intercalating into the cellular membrane and inducing the activation of the Ca2+-activated, phospholipid-dependent protein kinase, protein kinase C (PKC), both in vitro and in vivo. TPA can directly activate PKC via molecular mimicry by substituting for diacylglycerol, the endogenous substrate, increasing the affinity of PKC for Ca2+, which leads to the activation of numerous downstream signaling pathways involved in a variety of cellular functions, including proliferation and neoplastic transformation [82]. In addition, it is known that a direct correlation exists between phorbol ester-mediated tumor promotion and enzymatic activation of PKC [82, 83]. The PKC family consists of various highly conserved serine/threonine kinases. PKCs are involved in numerous cellular processes, including cell differentiation, tumorigenesis, cell death, aging, and neurodegeneration [84]; however, the induction of the signaling pathway is determined by the intracellular redox status and the isoform that is activated. The PKC family consists of a myriad of isoforms that have been divided into three classes: (a) classical or conventional PKCs (cPKC: α, βI, βII, and γ); (b) novel PKCs (nPKC: δ, ε, η, and θ); (c) atypical PKCs (aPKC: λ, ι, and ζ), which are classified based on sensitivity to Ca2+ and diacylglycerol (DAG) [84]. In various types of cancers, PKCε has been shown to be upregulated while PKCα and PKCδ are downregulated. Interestingly, TPA activates the PKCε isoform in mouse skin tissues [85]. Furthermore, overexpression of PKCε has been shown to enhance the formation of skin carcinomas [86]. Moreover, TPA treatment leads to the concomitant activation of the redox-sensitive transcription factor activator protein-1 (AP-1) [85]. The AP-1 complex consists of both Jun and Fos oncoproteins. There are 3 jun isoforms (c-jun, jun-B, and jun-D) and 4 fos family members (c-fos, fra-1, fra-2, and fos-B) [87], whose activation is modulated by oxidants such as superoxide and hydrogen peroxide, while their DNA binding activities are modulated by the intracellular redox status [88–90]. Kiningham and St. Clair reported a reduction in tumorigenicity and AP-1 DNA binding activity following overexpression of MnSOD in transfected fibrosarcoma cells [91]; the protein expression of Bcl-xL, an antiapoptotic AP-1 target gene, was decreased as well. In addition, PKCε activation was reduced in MnSOD transgenic mice treated with DMBA/TPA compared to their nontransgenic counterparts [85]. These results suggest a mechanistic linkage between MnSOD expression, mitogenic activation, and AP-1 binding activity.
## 5. MnSOD-p53 Mitochondrial Interaction
Another signaling pathway shown to be activated following DMBA/TPA treatment is the Ras-Rac1-NADPH oxidase pathway, which leads to p53 mitochondrial translocation and apoptosis [92]. The catalytic subunit of NADPH oxidase (gp91phox) forms a stable heterodimer with the membrane protein p22phox, which serves as a docking site for the SH3 domain-containing regulatory proteins p47phox, p67phox, and p40phox. Upon TPA treatment, Rac, a small GTPase, binds to p67phox, which induces NADPH oxidase activation [11] and superoxide production. Mitochondrial p53 has been shown to interact with MnSOD, resulting in decreased enzymatic activity and promoting the propagation of oxidative stress [93].

The primary role of MnSOD is to protect mitochondria from oxidative damage (its catalytic reaction is recalled in the scheme below). In 2005, Zhao et al. found that TPA treatment, both in vitro and in vivo, can induce p53 mitochondrial translocation [93]. In addition, p53 not only came in contact with the outer mitochondrial membrane but was able to localize to the mitochondrial matrix. Interestingly, following p53 mitochondrial translocation and matrix localization, p53 interacted with the mitochondrial antioxidant enzyme MnSOD, resulting in a reduction in MnSOD activity and propagation of oxidative stress [93]. However, the question remains: does mitochondrial p53 contribute to or suppress tumor promotion during the early stages of skin carcinogenesis? We addressed this question by utilizing JB6 mouse skin epidermal cells. JB6 cells were originally derived from a primary BALB/c mouse epidermal cell culture [94]. Through nonselective cloning, it was discovered that clonal variants existed within the JB6 cell lineage that were either stably sensitive (P+) or resistant (P−) to tumor promoter-induced neoplastic transformation [95–97]. In addition, JB6 cells remain the only well-characterized skin keratinocytes for studying tumor promotion and screening anticancer agents. In 2010, we utilized the JB6 P+ and P− clonal variants to determine whether a relationship existed between tumor promotion and early-stage TPA-induced p53 activation [98]. Surprisingly, we found that p53 was induced only in promotion-sensitive (P+) cells and not in promotion-resistant (P−) cells, suggesting that p53 expression is highly associated with early-stage tumor promotion. We then assessed Bax protein expression levels, as a marker for p53 transcriptional activity, and found that Bax expression is induced only in JB6 P+ cells and not in P− cells, suggesting that p53 expression, as well as transcriptional activity, is highly associated with early-stage tumor promotion following TPA treatment. MnSOD expression was also measured in both JB6 P+ and P− cells and was found to be more highly expressed in promotion-resistant P− cells than in promotable P+ cells. TPA-mediated ROS generation was measured in P+ and P− cells (unpublished data), and promotion-resistant cells contained significantly lower levels of ROS following TPA treatment compared with their promotable counterparts. It is known that reduced MnSOD expression contributes to increased DNA damage, cancer incidence, and radical-caused diseases [99, 100]. Consistent with that, an increase in several markers of oxidative damage, such as 4-HNE, 8-oxo-dG, and lipid peroxidation, has been seen in both in vitro and in vivo studies following TPA treatment [57, 85, 101, 102], suggesting the involvement of oxidative stress in the promotion of tumorigenesis. These results imply the importance of redox regulation in modulating cellular functions during the early stage of tumor promotion.
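For reference, the chemistry at stake when p53 blunts MnSOD activity is the dismutation of superoxide, followed by removal of the resulting peroxide by catalase and related enzymes:

```latex
% Superoxide dismutation by MnSOD and downstream peroxide removal (requires amsmath)
\begin{align*}
2\,\mathrm{O_2^{\cdot -}} + 2\,\mathrm{H^{+}}
  &\xrightarrow{\ \text{MnSOD}\ } \mathrm{H_2O_2} + \mathrm{O_2} \\
2\,\mathrm{H_2O_2}
  &\xrightarrow{\ \text{catalase}\ } 2\,\mathrm{H_2O} + \mathrm{O_2}
\end{align*}
```

Less MnSOD activity therefore means superoxide persists longer, which is the "propagation of oxidative stress" referred to above.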
We questioned whether the ROS generated as a result of the MnSOD-p53 mitochondrial interaction were sufficient to promote tumorigenicity. To address this question, we utilized promotion-resistant JB6 P− cells, which exhibited no p53 protein expression or transactivation following TPA treatment. Interestingly, we found that when these promotion-resistant cells were transfected with wild-type p53, they were able to transform and form colonies in soft agar, in contrast to their control counterparts [98]. These results suggest a dual role for p53-mediated ROS generation during the early stages of skin carcinogenesis and indicate that the presence of p53 is necessary for tumor promotion in skin (Figure 3).
Figure 3: Mechanism involving the p53-MnSOD interaction during the early stage of tumor promotion. Following exposure to a carcinogen, the Ras-Rac1-NADPH oxidase pathway is activated, which leads to p53 mitochondrial translocation. Mitochondrial p53 has been shown to interact with MnSOD, resulting in decreased enzymatic activity and promoting oxidative stress propagation, contributing to the early stage of skin tumorigenicity. Elevated levels of MnSOD reduced oxidative stress propagation, suppressed p53 mitochondrial translocation, and decreased downstream skin tumor formation. Reduced levels of MnSOD have been shown to contribute to oxidative stress propagation and to promote early-stage skin tumorigenicity.

The contradictory role of p53 in promoting cell survival or death results from its ability to regulate the expression of both pro- and antioxidant genes. For example, p53 can promote the generation of ROS through the induction of genes involved in mitochondrial injury and cell death, which include Bax, Puma, and p66SHC, and of ROS-generating enzymes such as quinone oxidoreductase (NQO1) and proline oxidase [103]. However, p53 can also upregulate the expression of various antioxidant enzymes to modulate ROS levels and promote cell survival, such as aldehyde dehydrogenase 4 and the mammalian sestrin homologues, which encode peroxiredoxins, and GPX1, major enzymatic removers of peroxide [103].

Dhar et al. suggested that p53 exerts "bidirectional" regulation of the antioxidant MnSOD gene. Previous reports suggest the presence of p53 binding regions at 328 bp and 2032 bp upstream of the transcriptional start site of the MnSOD gene [104, 105]. Others suggest that p53 represses MnSOD gene expression by interfering with transcription initiation [106], by inhibiting gene activators at the promoter level through formation of an inhibitory complex that suppresses gene transcription [107], and through protein-protein interactions [108]. Nevertheless, p53 can also induce MnSOD gene expression [104]. p53-mediated MnSOD expression is regulated in conjunction with other cell-proliferative transcription factors such as NF-κB. Kiningham and St. Clair demonstrated the presence of an NF-κB binding site within the intronic enhancer element of the MnSOD gene [91]. It was later shown that mutation of the NF-κB site within the enhancer element abrogated p53-induced MnSOD gene transcription. In addition, knockdown of p65 via siRNA likewise reduced p53-mediated MnSOD gene transcription. Overall, the effects of p53 on MnSOD gene expression have been suggested to be concentration dependent, with low concentrations of p53 increasing MnSOD expression via corroborative NF-κB binding, promoting cell survival, and high concentrations of p53 suppressing MnSOD expression by interfering with important transcriptional binding elements such as SP1; a toy model of this biphasic behavior is sketched below.
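The concentration dependence described above can be caricatured as a biphasic dose-response. The toy model below is purely conceptual: it assumes a saturating activation term (NF-κB-assisted enhancer binding) multiplied by a declining repression term (interference with SP1). The functional forms and parameter values are our assumptions, not fitted to any data.

```python
def mnsod_response(p53: float,
                   k_act: float = 1.0,   # hypothetical half-maximal p53 level for activation
                   k_rep: float = 5.0,   # hypothetical half-maximal p53 level for repression
                   basal: float = 1.0) -> float:
    """Toy biphasic model: relative MnSOD expression as a function of p53 level.

    Activation term: saturating curve (NF-kB co-binding at the intronic enhancer).
    Repression term: declining curve (interference with SP1 at higher p53 levels).
    All parameters are illustrative, not experimentally derived.
    """
    activation = p53 / (k_act + p53)
    repression = k_rep / (k_rep + p53)
    return basal * (1.0 + activation) * repression

for level in (0.0, 0.5, 2.0, 10.0, 50.0):
    print(f"p53 = {level:5.1f} -> relative MnSOD expression = {mnsod_response(level):.2f}")
```

With these placeholder parameters, relative MnSOD expression rises above basal at low p53 input (about 1.2 at p53 = 0.5) and falls well below it at high input (about 0.2 at p53 = 50), reproducing the qualitative "bidirectional" behavior.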
## 6. Clinical Implications of the MnSOD-p53 Interaction
p53 is mutated in approximately 50% of human cancers. However, the remaining human tumors contain wild-type p53 with defects in downstream p53-mediated signaling pathways. This, in turn, opens novel areas of discovery in the stabilization and restoration of wild-type p53 activity. Currently, many drug companies are focused on utilizing p53 interactions as targets for pharmacological intervention [78]. Various protein-protein interactions occur within the cell that positively or negatively regulate p53 expression and function. For example, Mdm2 is an E3 ligase of p53 that polyubiquitinates p53, priming the tumor suppressor for proteasomal degradation. Many groups have found that blocking this interaction with peptides or transcriptional inhibitors results in longer durations of p53 activation. Some of the therapeutic strategies currently being pursued are peptides that increase p53 activation through inhibition of Mdm2 function [109]. Three-dimensional structural models [110] of the hdm2-p53 interaction, along with biochemical data [111, 112], have identified three residues that are important to this interaction: Phe19, Trp23, and Leu26 [111, 112]. From these data, an 8-mer peptide was generated [113] that showed promising results in inducing apoptosis in tumor cells overexpressing hdm2 [112]. However, these binding properties proved difficult to reproduce in a smaller molecule, rendering this peptide approach therapeutically inefficient. Nutlins have also been utilized to disrupt the mdm2-p53 interaction, resulting in reactivation of the p53 response [114, 115]. Others have used antisense and transcription inhibitors to prevent the expression of Mdm2 [116].

Gene replacement therapy is another therapeutic modality that has been explored for treating tumors lacking p53 or containing mutant p53. This technique utilizes adenoviruses, as well as retroviruses, to achieve high expression of p53 in tumor cells. Promising results have been seen with retroviral vectors in patients with non-small cell lung cancers [117]. On the other hand, although we have seen enhancement of tumorigenicity in our in vitro p53 transfection studies [98], we have not tested stably transfected cells in in vivo xenograft mouse models, nor have we tried other tissue types. Therefore, the reintroduction of the p53 gene into tumors may have contradictory outcomes depending on the cell type and tissue microenvironment. This concern has echoed through various studies, persuading investigators to combine gene therapy with chemotherapy and radiotherapy [118–121].

For decades, p53 was thought to function only as a tumor suppressor, and p53-mediated ROS generation was considered limited to the induction of apoptosis. Currently, the ability of wild-type p53 to contribute to tumor promotion has received considerable attention. We have shown that the p53-MnSOD interaction contributes to the early stage of tumor promotion. In addition, it has been consistently shown that MnSOD activity is altered in human tumors. Therefore, diagnostic tools designed to assess MnSOD activity, as well as p53 activation, could be used to design effective, individualized treatments for cancer patients. For example, following chemotherapeutic treatment, patients who have higher levels of p53 expression and exhibit lower levels of MnSOD could receive an SOD mimetic to restore MnSOD-like activity, or synthetic compounds that downregulate p53 activity, in order to decrease ROS-mediated apoptosis and the potential for relapse; a purely conceptual version of this stratification logic is sketched below.
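To make the stratification idea concrete, the sketch below encodes it as a toy decision rule. Everything here, the field names, the thresholds, and the suggested actions, is hypothetical and for illustration only; it is not a clinical algorithm.

```python
from dataclasses import dataclass

@dataclass
class TumorProfile:
    """Hypothetical readouts; names, units, and thresholds are illustrative only."""
    p53_expression: float   # e.g., a normalized immunoblot or IHC score
    mnsod_activity: float   # e.g., units/mg protein from an SOD activity assay

def suggest_strategy(profile: TumorProfile,
                     p53_high: float = 1.5,
                     mnsod_low: float = 0.5) -> str:
    """Caricature of the stratification logic discussed in the text: high p53
    plus low MnSOD suggests ROS-driven promotion that an SOD mimetic (or a
    p53 modulator) might counteract. Not a clinical algorithm."""
    if profile.p53_expression >= p53_high and profile.mnsod_activity <= mnsod_low:
        return "consider an SOD mimetic to restore MnSOD-like activity"
    if profile.p53_expression >= p53_high:
        return "consider modulating p53 activity to limit ROS-mediated damage"
    return "standard regimen; monitor oxidative-stress markers"

print(suggest_strategy(TumorProfile(p53_expression=2.0, mnsod_activity=0.3)))
```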
Gene therapy has also been utilized to modulate MnSOD activity during cancer progression. Overexpression of MnSOD through gene therapy, introducing genetically engineered DNA/liposomes containing the human MnSOD transgene into preclinical and clinical models, has been shown to protect normal tissues against ionizing irradiation. The final product (VLTS-582) is a DNA/liposome formulation consisting of a double-stranded DNA bacterial plasmid containing human MnSOD cDNA in conjunction with two lipids, cholesterol and DOTIM (1-[2-[9-(2)-octadecenoyloxy]]-2-[8-(2)-heptadecenyl]-3-[hydroxyethyl]imidazolinium chloride) [122]. Recent studies suggest that this formulation has been successful in murine models and has been administered orally to patients concurrently with a weekly chemotherapy regimen, exhibiting no dose-limiting toxicities. Although proven therapeutically efficacious, more studies are needed to improve (1) delivery of the transgene to the targeted tissue, (2) reduction of the rapid elimination of the transgene, and (3) control of transgene expression within targeted tissues.

On the other hand, topical application of an SOD mimetic has also been described [123]. The Mn(III) porphyrin MnTE-2-PyP5+ possesses highly potent SOD activity, facilitated by the redox properties of the metal center and the positive charges on the ortho N-ethylpyridyl nitrogens [124]; its catalytic cycle is outlined in the scheme below. MnTE-2-PyP5+ has proven effective in vitro and in models of various human diseases such as stroke [125, 126], diabetes [127, 128], and cancer and radiation-related treatment [129–132]. In preclinical animal models, topical application of MnTE-2-PyP5+ prior to TPA treatment was shown to reduce levels of oxidative damage and to reduce cell proliferation without interfering with p53-mediated apoptosis [129]. These data support the concept that boosting MnSOD (or MnSOD-like) activity, whether in conjunction with standard chemotherapeutics or during the tumor promotion stage, is protective in both preclinical and clinical models.

Nevertheless, both p53 and MnSOD have been shown to exhibit reduced activity and/or to be mutated in most human diseases, including cancer. Therefore, further therapeutic efforts are needed to detect and restore both MnSOD and wild-type p53 activity. Future optimization strategies should have minimal nonspecific drug-related toxicities and should be based on the stage of cancer progression, which may reveal a therapeutic window for treatment intervention.
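The SOD-like catalysis of such cationic Mn porphyrins is generally described as two alternating one-electron steps at the manganese center, a schematic ping-pong cycle; in the sketch below, P denotes the porphyrin ligand, with its own charges omitted for clarity:

```latex
% Schematic redox cycling of an Mn(III) porphyrin SOD mimetic (requires amsmath)
\begin{align*}
\mathrm{Mn^{III}P} + \mathrm{O_2^{\cdot -}}
  &\longrightarrow \mathrm{Mn^{II}P} + \mathrm{O_2} \\
\mathrm{Mn^{II}P} + \mathrm{O_2^{\cdot -}} + 2\,\mathrm{H^{+}}
  &\longrightarrow \mathrm{Mn^{III}P} + \mathrm{H_2O_2}
\end{align*}
```

The net reaction is the same dismutation that MnSOD catalyzes, which is why such compounds can substitute for the enzyme when its activity is reduced.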
## 7. Concluding Remarks
In summary, reactive oxygen species have been implicated in the pathogenesis of various hyperproliferative and inflammatory diseases [133]. In addition, the tumor suppressor p53 has been shown to be activated during the early stage of skin carcinogenesis and to contribute to the propagation of oxidative stress. Recent studies demonstrate a novel role for mitochondrial p53 activation: once in the mitochondria, p53 physically interacts with MnSOD, and this interaction reduces the free radical scavenging ability of MnSOD, promoting enhanced ROS generation, which has been shown to act as a tumorigenic stimulus during cancer progression. This suggests that wild-type p53 may play a direct role in promoting oxidative stress and contributing to ROS-mediated tumor-proliferative stimuli. In addition, others have shown that mutant p53 can, in fact, translocate to the mitochondria and interact with MnSOD [134]. However, Lontz et al. observed that, following doxorubicin treatment of lymphoma cell lines with varying wild-type or mutant p53 levels, mitochondrial function, as evidenced by complex I/II activities, was compromised only in lymphoma cells expressing wild-type, and not mutant, p53 [134]. Therefore, continuing to decipher the mechanistic differences between tumors containing wild-type or mutant p53 can lead to the development of therapeutic p53-mediated interventions and a clearer understanding of chemoresistance in both wild-type and mutant p53 human tumors.

Several studies have suggested that MnSOD may play a primary protective role against tissue injury. MnSOD has been found to be depleted in a variety of tumor cells, as well as in in vitro transformed cell lines, suggesting that MnSOD may act as a novel tumor suppressor, protecting cells from oxidant-induced carcinogenesis [135]. Accordingly, overexpression of MnSOD decreases the pathogenesis of human diseases such as cancer. Consistent with that, accumulating evidence suggests that a number of antioxidants, or drugs with antioxidant properties, can reduce mediators of tumor promotion [136]. St. Clair et al. showed that transfecting mouse 10T1/2 cells with human MnSOD cDNA promoted differentiation upon 5-azacytidine treatment and protected against neoplastic transformation [137]. In addition, transfecting human MnSOD cDNA into MCF-7 breast cancer cells and UACC-903 melanoma cells suppressed their malignant phenotype and suppressed growth in nude mice [54, 138]. We have shown that the cumulative induction of endogenous antioxidant enzymes (i.e., catalase, total SOD, and MnSOD) is efficient in reducing tumor incidence and multiplicity [57]. In addition, the induction of endogenous antioxidant enzymes via dietary administration can suppress p53 mitochondrial translocation [98]. TPA can induce p53 mitochondrial translocation; however, this phorbol ester also decreases the mitochondrial membrane potential, as well as mitochondrial complex activities and respiration. Other studies have shown that MnSOD overexpression in mice protects complex I from adriamycin-induced deactivation in cardiac tissue [139]. These results suggest that antioxidant expression protects against fluctuations in mitochondrial function, suppressing p53 mitochondrial translocation, p53-mediated ROS generation, and both downstream apoptotic and cell-proliferation signaling pathways.
In contrast, Connor et al. suggest that overexpression of MnSOD in HT-1080 fibrosarcoma cells and 253J bladder tumor cells enhanced the migratory ability and invasiveness of tumor cells through the upregulation of matrix metalloproteinases [140]. Although some tumors express higher levels of MnSOD, the downstream effects of enhanced antioxidant expression depend on the tumor type and its susceptibility to oxidative damage, the underlying oncogenic mutations, and the stage of disease progression [140]. Nevertheless, these investigators stressed the need for refined regulation of H2O2 production. Therefore, the question remains: are the effects of the p53-MnSOD interaction protumorigenic or antitumorigenic? To answer this question definitively, further investigation of this interaction is needed. Several factors must be considered in determining the outcome of the p53-MnSOD interaction, including the stage of disease progression and the tumor microenvironment. It has been shown that p53 activation is required for tumor promotion and can mediate ROS generation. However, the duration of enhanced ROS generation, the severity of oxidative damage, and the status of the cellular antioxidant capacity can all contribute to the proliferative/apoptotic switch that occurs during the response to cellular stress. Overall, further studies are needed to clearly assess the status of MnSOD during the various stages of carcinogenesis in order to enhance the efficacy of the standard treatment regimens currently in use.

Consistent with that, defining the downstream effects of p53-MnSOD complex formation can expand our knowledge of the molecular mechanisms that contribute to the early stage of tumorigenesis and of how they may be altered during cancer progression. With further knowledge, modulators of MnSOD, p53, and their associated regulators may prove therapeutically useful in the treatment of cancer at various stages of tumor progression.
---
*Source: 101465-2011-10-05.xml*
During the tumor promotion stage, a noncarcinogenic agent such as a phorbol ester can be used to induce the clonal expansion of the initiated stem cells through epigenetic mechanisms. This stage is often used by investigators to identify potential therapeutic targets due to its reversibility. During the tumor progression stage, malignancy takes place, being characterized by enhanced invasiveness via the activation of proteases, and metastasizes via tumor cells entering into the lymphatics and loss of tumor suppressor activity (e.g., p53).The two-stage skin carcinogenesis mouse model has been well characterized and used in numerous studies to screen anti cancer agents. An initiator, such as dimethylbenz[a]anthracene (DMBA), is applied to the skin to initiate DNA damage within skin cells. Following DMBA treatment, a tumor promoter such as 12-O-tetradecanoylphorbol-13-acetate (TPA) is applied topically to the same area repeatedly for the duration of the study to promote the clonal expansion of mutated cells during the promotion stage. Interestingly, during the early stages of DMBA/TPA-mediated tumor promotion both oncogenes and tumor suppressor genes are activated, resulting in increased cell proliferation being accompanied by increased cell death [79] (Figure 2). Both processes exist throughout skin tumor formation. Not surprising, these two opposing events are closely related.Figure 2
Mechanisms of carcinogens in early stage carcinogenesis. During the early stages of tumor promotion both oncogenes and tumor suppressor genes are activated, resulting in increased cell proliferation being accompanied by increased cell death.Many of the tumor-promoting mechanisms utilized by phorbol esters are directly linked to the involvement of cell surface membranes [80, 81]. TPA can mediate its pleiotropic actions through intercalating into the cellular membrane and inducing the activation of the Ca2+-activated phospholipid-dependent protein kinase, protein kinase C (PKC) both in vitroandin vivo. TPA can directly activate PKC via molecular mimicry by substituting for diacylglycerol, the endogenous substrate, increasing the affinity of PKC for Ca2+ which leads to the activation of numerous downstream signaling pathways involved in a variety of cellular functions including proliferation and neoplastic transformation [82]. In addition, it is known that a direct correlation exists between phorbol ester-mediated tumor promotion and enzymatic activation of PKC [82, 83]. The PKC family consists of various highly conserved serine/threonine kinases. PKCs are involved in numerous cellular processes including cell differentiation, tumorigenesis, cell death, aging, and neurodegeneration [84]; however the induction of the signaling pathway is determined by the intracellular redox status and the isoform that is activated. The PKC family consists of a myriad of isoforms that have been divided into three classes: (a) classical or conventional PKCs (cPKC: α, βI, βII, and γ); (b) novel PKCs (nPKC: δ, ε, η, and θ); (c) the atypical PKCs (aPKC: λ, ι, and ζ) which are classified based on sensitivity to Ca2+ and diacylglycerol (DAG) [84]. In various types of cancers PKCε has been shown to be upregulated while PKCα and PKCδ are downregulated. Interestingly, TPA activates the PKCε isoform in mouse skin tissues [85]. Furthermore, overexpression of PKCε has been shown to enhance the formation of skin carcinomas [86]. Moreover, TPA treatment leads to the concomitant activation of the redox-sensitive transcription factor activator protein-1 (AP-1) [85]. The AP-1 complex consists of both Jun and Fos oncoproteins. There are 3 jun isoforms (c-jun, jun-B, and jun-D) and 4 fos family members (c-fos, fra-1, fra-2, and fos-B) [87] whose activation is modulated by oxidants such as superoxide and hydrogen peroxide, while DNA binding activities are modulated by the intracellular redox status [88–90]. Kiningham and Clair reported a reduction in tumorigenicity and AP-1 DNA binding activity following overexpression of MnSOD in transfected fibrosarcoma cells [91]. Furthermore, the protein expression of Bcl-xl, an antiapoptotic AP-1 target gene, was decreased, as well. In addition, PKCε activation was reduced in MnSOD transgenic mice treated with DMBA/TPA compared to their nontransgenic counterparts [85]. These results suggest a mechanistic linkage between MnSOD expression, mitogenic activation, and AP-1 binding activity.
## 5. MnSOD-p53 Mitochondrial Interaction
Another activated signaling pathway that has been defined following DMBA/TPA treatment is the Ras-Rac1-NADPH oxidase pathway, which leads to p53 mitochondrial translocation and apoptosis [92]. NADPH oxidase forms a stable heterodimer with the membrane protein p22phox, which serves as a docking site for the SH3 domain-containing regulatory proteins p47phox, p67phox, and p40phox. Upon TPA treatment, Rac, a small GTPase, binds to p67phox which induces NADPH oxidase activation [11] and superoxide production. Mitochondrial p53 has been shown to interact with MnSOD, resulting in decreased enzymatic activity and promoting oxidative stress propagation [93].The primary role of MnSOD is to protect mitochondria from oxidative damage. In 2005, Zhao et al. found that TPA treatment, bothin vitro and in vivo, can induce p53 mitochondrial translocation [93]. In addition, p53 not only came in contact with the outer mitochondrial membrane but was able to localize to the mitochondrial matrix. Interestingly, following p53 mitochondrial translocation and matrix localization, p53 interacted with the mitochondrial antioxidant enzyme MnSOD that resulted in a reduction in MnSOD activity and propagation of oxidative stress [93]. However, the question remains: does mitochondrial p53 contribute to or suppress tumor promotion during the early stages of skin carcinogenesis? We addressed this question by utilizing the JB6 mouse skin epidermal cells. JB6 cells were originally derived from primary BALB/c mouse epidermal cell culture [94]. Through nonselective cloning, it was discovered that clonal variants existed within the JB6 cell lineage that were either stably sensitive (P+) or resistant (P−) to tumor promoter-induced neoplastic transformation [95–97]. In addition, JB6 cells remain the only well-characterized skin keratinocytes for studying tumor promotion and screening anti-cancer agents. In 2010, we utilized the JB6 P+ and P− clonal variants to determine if a relationship existed between tumor promotion and early-stage TPA-induced p53 activation [98]. Surprisingly, we found that p53 was only induced in promotion-sensitive P+ cells and not promotion resistant (P−) cells, therefore suggesting that p53 expression is highly associated with early stage tumor promotion. We then assessed Bax protein expression levels, as a marker for p53 transcriptional activity, and found that Bax expression is only induced in JB6 P+ cells and not P− cells, suggesting that p53 expression, as well as transcriptional activity, is highly associated with early-stage tumor promotion following TPA treatment. MnSOD expression was also measured in both JB6 P+ and P− cells and was found to be highly expressed in promotion-resistant P− cells compared to promotable P+ cells. TPA-mediated ROS generation was measured in P+ and P− cells (unpublished data), and promotion resistant cells contained significantly lower levels of ROS following TPA treatment when compared to their promotable counterparts. It is known that reduced MnSOD expression contributes to increased DNA damage, cancer incidence, and radical-caused diseases [99, 100]. Consistent with that, an increase of several markers of oxidative damage such as 4-HNE, 8-oxoDG, and lipid peroxidation has been seen in both in vitroand in vivo studies following TPA treatment [57, 85, 101, 102], suggesting the involvement of oxidative stress in the promotion of tumorigenesis. These results imply the importance of redox regulation in modulating cellular functions during the early stage of tumor promotion. 
We questioned whether the ROS generated from the MnSOD-p53 mitochondrial interaction was sufficient to promote tumorigenicity. Therefore, we utilized promotion-resistant JB6 P− cells that exhibited no p53 protein expression or transactivation following TPA treatment to address this question. Interestingly, we found that when JB6 promotion-resistant cells were transfected with wild-type p53, these cells were able to transform and form colonies in soft agar, in comparison to their control counterparts [98]. These results suggest a dual role of p53-mediated ROS generation during the early stages of skin carcinogenesis and how the presence of p53 is necessary for tumor promotion in skin (Figure 3).Figure 3
Mechanism involving the p53-MnSOD interaction during the early stage of tumor promotion. Following exposure to a carcinogen the Ras-Rac1-NADPH oxidase pathway is activated, which leads to p53 mitochondrial translocation. Mitochondrial p53 has been shown to interact with MnSOD, resulting in decreased enzymatic activity and promoting oxidative stress propagation contributing to the early stage of skin tumorigenicity. Elevated levels of MnSOD reduced oxidative stress propagation, suppressed p53 mitochondrial translocation, and decreased downstream skin tumor formation. Reduced levels of MnSOD have been shown to contribute to oxidative stress propagation and promote early-stage skin tumorigenicity.The contradictory role of p53 in promoting cell survival or death is the result of the ability to regulate the expression of both pro- and antioxidant genes. For example, p53 can promote the generation of ROS through the induction of genes involved in mitochondrial injury and cell death which include Bax, Puma, andp66SHC and ROS-generating enzymes such as quinine oxidoreductase (NQO1) and proline oxidase [103]. However, p53 can upregulate the expression of various antioxidant enzymes to modulate ROS levels and promote cell survival such as aldehyde dehydrogenase 4 and mammalian sestrin homologues that encode peroxiredoxins and GPX1, which are major enzymatic removers of peroxide [103].Dhar et al. suggested that p53 possessed “bidirectional” regulation of the antioxidant MnSOD gene. Previous reports suggest the presence of a p53 binding region at 328 bp and 2032 bp upstream of the transcriptional start site of the MnSOD gene [104, 105]. Others suggest that p53 represses MnSOD gene expression by interfering with transcription initiation [106], inhibiting gene activators at the promoter level by forming an inhibitory complex suppressing gene transcription [107] and protein-protein interactions [108]. Nevertheless, p53 can induce the gene expression of MnSOD [104]. p53-mediated MnSOD expression is regulated in conjunction with other cell proliferative transcription factors such as NF-κB. Kiningham and Clair demonstrated the presence of an NF-κB binding site within the intronic enhancer element of the MnSOD gene [91]. It was later shown that mutation of the NF-κB site within the enhancer element abrogated p53 induced MnSOD gene transcription. In addition, knockdown of p65 via siRNA reduced MnSOD gene transcription via p53 as well. Overall the effects of p53 on MnSOD gene expression have been suggested to be concentration dependent, with low concentrations of p53 increasing MnSOD expression via corroborative NF-κB binding promoting cell survival and high concentrations of p53 suppressing MnSOD expression by interfering with important transcriptional binding elements such as SP1.
## 6. Clinical Implications of the MnSOD-p53 Interaction
p53 is mutated in 50% of human cancers. However, the remaining human tumors contain wild-type p53 with defects in the downstream mediated p53-signaling pathways. This, in turn, provides novel areas of discovery in stabilization and restoration of wild-type p53 activity. Currently, many drug companies are focused on utilizing p53 interactions as targets for pharmacological intervention [78]. There are various protein-protein interactions that occur within the cell that positively or negatively regulate p53 expression and function. For example, Mdm2 is an E3 ligase of p53 that polyubiquitinates p53, priming the tumor suppressor for proteasomal degradation. Many have found that, by blocking this interaction through peptides or transcriptional inhibitors, longer durations of p53 activation have resulted. Some of the therapeutic strategies that are currently being utilized are peptides that increase p53 activation through inhibition of Mdm2 function [109]. Three-dimensional structural models [110] of the hdm2-p53 interaction along with biochemical data [111, 112] have identified three residues that are important to this interaction, Phe19, Trp23, and Leu26 [111, 112]. From this data, an 8-mer peptide was generated [113] and showed promising results in inducing apoptosis in tumor cells that overexpressed hdm2 [112]. However, these conditions were difficult to optimize with a smaller molecule therefore causing this peptide to be therapeutically inefficient. Also nutlins have been utilized to disrupt the mdm2-p53 interaction resulting in reactivation of the p53 response [114, 115]. Others have used antisense and transcription inhibitors to prevent the expression of Mdm2 [116].Gene replacement therapy is another therapeutic modality that has been explored in treating tumors lacking or containing mutant p53. This technique utilizes adenoviruses, as well as retroviruses to achieve high expression of p53 in tumor cells. Promising results have been seen with retroviral vectors in patients with nonsmall cell lung cancers [117]. On the contrary, although we have seen the enhancement of tumorigenicity in our in vitro p53 transfection studies [98], we have not tested stably transfected cells in in vivo xenograft mouse models, nor have we tried other tissue types. Therefore, the reintroduction of the p53 gene into tumors may have contradictory outcomes depending on the cell type and tissue microenvironment. This concern has echoed through various studies, persuading investigators to opt to combine gene therapy with chemotherapy and radiotherapy [118–121].For decades, it has been shown that p53 functions only as a tumor suppressor. In addition, p53-mediated ROS generation has been limited to the induction of apoptosis. Currently, the ability of wild-type p53 to contribute to tumor promotion has received considerable attention. We have shown that the p53-MnSOD interaction contributes to the early stage of tumor promotion. In addition, it has been consistently shown that MnSOD activity is altered in human tumors. Therefore, designing diagnostic tools to assess MnSOD activity, as well as p53 activation, can be used to effectively design individualized treatments for cancer patients. 
For example, following chemotherapeutic treatment, patients that have higher levels of p53 expression and exhibit lower levels of MnSOD can receive an SOD mimetic that can upregulate MnSOD or synthetic compounds that can downregulate p53 activity to decrease ROS-mediated apoptosis and potential relapse within these patients.Gene therapy has also been utilized to modulate MnSOD activity during cancer progression. Overexpression of MnSOD through gene therapy introducing genetically engineered DNA/liposomes containing the human MnSOD transgene into preclinical and clinical models has been shown to be protective in normal tissues against ionizing irradiation. The final product (VLTS-582) is a DNA/liposome formulation that consists of a double-stranded DNA bacterial plasmid containing human MnSOD cDNA in conjunction with two lipids {cholesterol and DOTIM (1-[2-[9-(2)-octadecenoyloxy]]-2-[8-(2)-heptadecenyl]-3-[hydroxyethyl] imidazolinium chloride)} [122]. Recent studies suggest that this formulation has been successful in murine models and has been administered orally to patients concurrently with a weekly chemotherapy regime exhibiting no dose-limiting toxicities. Although proven therapeutically efficacious, more studies are needed to improve (1) delivery of the transgene to the targeted tissue; (2) reducing rapid elimination of the transgene; (3) control of the expression of the transgene within targeted tissues.On the other hand, a topical application of an SOD mimetic has also been described [123]. The Mn (III) porphyrin MnIII TE-2-Pyp5+ possesses highly potent SOD activity as facilitated by the redox properties of the metal center and the positive charge to the ortho-N-ethylpyridyl nitrogens [124]. MnIII TE-2-Pyp5+ has been proven effective in vitro and in various human diseases such as stroke [125, 126], diabetes [127, 128], and cancer and radiation-related treatment [129–132]. In preclinical animal models, topical application of MnIII TE-2-Pyp5+ was shown to reduce levels of oxidative damage and reduced cell proliferation without interfering with p53-mediated apoptosis when applied prior to TPA treatment [129]. These data support the concept that overexpression of MnSOD when applied in conjunction with standard chemotherapeutics or during the tumor promotion stage is protective in both preclinical and clinical models.Nevertheless, both p53 and MnSOD have been shown to posses reduced activity and/or mutated in most human diseases including cancer. Therefore, more therapeutic quests are needed to detect and restore both MnSOD and wild-type p53 activity. However, future therapeutic optimization strategies should have minimal nonspecific drug-related toxicities and be based on the stage of cancer progression which may reveal a therapeutic window for treatment intervention.
## 7. Concluding Remarks
In summary, reactive oxygen species have been implicated in the pathogenesis of various hyperproliferative and inflammatory diseases [133]. In addition, the tumor suppressor p53 has been shown to be activated during the early stage of skin carcinogenesis and to contribute to the propagation of oxidative stress. Recent studies demonstrate a novel role for mitochondrial p53 activation. Once in the mitochondria, p53 physically interacts with MnSOD. This interaction reduces the free radical scavenging ability of MnSOD, promoting enhanced ROS generation, which has been shown to act as a tumorigenic stimulus during cancer progression. This suggests that wild-type p53 may play a direct role in promoting oxidative stress and in contributing to ROS-mediated tumor-proliferative stimuli. In addition, others have shown that mutant p53 can, in fact, translocate to the mitochondria and interact with MnSOD [134]. However, Lontz et al. observed that, following doxorubicin treatment of lymphoma cell lines with varying wild-type or mutant p53 levels, mitochondrial function, as evidenced by Complex I/II activities, was compromised only in lymphoma cells expressing wild-type, not mutant, p53 [134]. Therefore, continuing to decipher the mechanistic differences between tumors containing wild-type or mutant p53 can lead to the development of therapeutic p53-mediated interventions and a clearer understanding of chemoresistance in both wild-type and mutant p53 human tumors.

Several studies have suggested that MnSOD may play a primary protective role against tissue injury. MnSOD has been found to be depleted in a variety of tumor cells, as well as in vitro transformed cell lines, suggesting that MnSOD may act as a novel tumor suppressor, protecting cells from oxidant-induced carcinogenesis [135]. Consistent with this, overexpression of MnSOD decreases the pathogenesis of human diseases such as cancer, and accumulating evidence suggests that a number of antioxidants or drugs with antioxidant properties can reduce mediators of tumor promotion [136]. St. Clair et al. showed that transfecting mouse 10T 1/2 cells with human MnSOD cDNA promoted differentiation upon 5-azacytidine treatment and protected against neoplastic transformation [137]. In addition, transfecting human MnSOD cDNA into MCF-7 breast cancer cells and UACC-903 melanoma cells suppressed their malignant phenotype and suppressed growth in nude mice [54, 138]. We have shown that the cumulative induction of endogenous antioxidant enzymes (i.e., catalase, total SOD, and MnSOD) is efficient in reducing tumor incidence and multiplicity [57]. In addition, the induction of endogenous antioxidant enzymes via dietary administration can suppress p53 mitochondrial translocation [98]. TPA can induce p53 mitochondrial translocation; however, this phorbol ester also decreases the mitochondrial membrane potential, as well as mitochondrial complex activities and respiration. Other studies have shown that MnSOD overexpression in mice protects complex I from adriamycin-induced deactivation in cardiac tissue [139]. These results suggest that antioxidant expression protects against fluctuations in mitochondrial function, suppressing p53 mitochondrial translocation, p53-mediated ROS, and both downstream apoptotic and cell proliferation signaling pathways. In contrast, Connor et al. suggested that overexpression of MnSOD in HT-1080 fibrosarcoma cells and 253J bladder tumor cells enhanced the migratory ability and invasiveness of tumor cells through the upregulation of matrix metalloproteinases [140]. Although some tumors express higher levels of MnSOD, the downstream effects of enhanced antioxidant expression depend on the tumor type and susceptibility to oxidative damage, the underlying oncogenic mutations, and the stage of disease progression [140]. Nevertheless, these investigators stressed the need for refined regulation of H2O2 production.

Therefore, the question remains: are the effects of the p53-MnSOD interaction protumorigenic or antitumorigenic? To answer this question definitively, further investigation of this interaction is needed. Several factors must be considered in determining the fate of the p53-MnSOD interaction, including the stage of disease progression and the tumor microenvironment. It has been shown that p53 activation is required in tumor promotion and can mediate ROS generation. However, the duration of enhanced ROS generation, the severity of oxidative damage, and the status of the cellular antioxidant capacity can all contribute to the proliferative/apoptotic switch that occurs during the response to cellular stress. Overall, further studies are needed to clearly assess the status of MnSOD during the various stages of carcinogenesis in order to enhance the efficacy of standard treatment regimens. Defining the downstream effects of p53-MnSOD complex formation can likewise expand our knowledge of the molecular mechanisms that contribute to the early stage of tumorigenesis and of how they may be altered during cancer progression. With further knowledge, modulators of MnSOD, p53, and their associated regulators may prove therapeutically useful in the treatment of cancer at its various stages of progression.
---
*Source: 101465-2011-10-05.xml* | 2012 |
# Carnitine Deficiency and Pregnancy
**Authors:** Anouk de Bruyn; Yves Jacquemyn; Kristof Kinget; François Eyskens
**Journal:** Case Reports in Obstetrics and Gynecology
(2015)
**Publisher:** Hindawi Publishing Corporation
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2015/101468
---
## Abstract
We present two cases of carnitine deficiency in pregnancy. In the first case, systematic screening revealed L-carnitine deficiency in the firstborn of an asymptomatic mother; in the course of her second pregnancy, maternal carnitine levels showed a deficiency as well. In the second case, a mother with known carnitine deficiency on supplementation was followed throughout her pregnancy. Both pregnancies had an uneventful outcome. Because carnitine deficiency can have serious complications, supplementation with carnitine is advised. This supplementation should be continued throughout pregnancy and adjusted according to plasma concentrations.
---
## Body
## 1. Introduction
Carnitine (β-hydroxy-γ-N-trimethylammonium butyrate) is necessary for the transport of long-chain fatty acids across the inner mitochondrial membrane, where they are used as substrates for the beta-oxidation cycle. Beta-oxidation is the most important source of energy during exercise of low and mild intensity and during fasting. Levocarnitine is the biologically active form. Endogenous production of the compound from lysine and methionine takes place primarily in the liver, kidneys, and brain. Exogenous supply comes from red meat and milk products [1].

Carnitine deficiency can have multiple causes. One of them is a condition called systemic primary carnitine deficiency (SPCD). SPCD (OMIM 212140) is a rare autosomal recessive disorder caused by homozygous or compound heterozygous mutation of the SLC22A5 gene. The consequence is a dysfunctional transporter, the organic cation transporter novel 2 (OCTN2). This transporter is located at the apical plasma membrane, particularly in heart, muscle, and kidney, and normally transfers carnitine into the cell. Its dysfunction leads to high renal loss of carnitine and lower concentrations of carnitine in blood and tissues; thus, less carnitine is available for the transfer of long-chain fatty acids [2, 3]. The presentation of SPCD varies widely, from asymptomatic, through metabolic crises at a young age (sudden infant death syndrome, episodes of hypoglycemia, and Reye-like syndrome), to progressive cardiomyopathy [4].

Carnitine can cross the placenta. Hence a low carnitine level in the neonate can reflect both neonatal deficiency and maternal deficiency [5–7].
## 2. Case Presentation
Case 1. The first patient delivered a healthy term girl after an uneventful pregnancy. Blood results obtained in the context of the Flemish neonatal screening program showed low free carnitine, and supplementation with L-carnitine was started for the baby (200 mg/day, or 50 mg/kg body weight/day). The mother initially declined a blood test herself and presented three years later, at the age of 29 years, to our prenatal unit at 7 weeks of gestational age, at which point she agreed to a blood test. Results were as follows: free carnitine 6 μmol/L (normal range 32–60 μmol/L); total carnitine 7 μmol/L (normal values 41–70 μmol/L); acylcarnitine 1 μmol/L (normal values 6–15 μmol/L); acyl/total ratio 0.14 (normal range 0.12–0.30). She was started on oral L-carnitine supplementation, 500 mg three times a day, and plasma concentrations were monitored. At 3 months of gestation, free carnitine showed a severe deficiency (3.58 μmol/L), attributable to medication nonadherence.

Because of the carnitine deficiency, she was considered a high-risk patient, and close follow-up with ultrasound was established. Fetal growth was normal at the 25th percentile (Figure 1).
Figure 1
Fetal growth percentiles of case 1.
The course of her pregnancy was uneventful until the 39th week, when she presented with decreased fetal movements and oligohydramnios on ultrasound. Labour was induced with vaginal prostaglandins, and she gave birth to a son with a birth weight of 2710 grams, length of 48 cm, head circumference of 33 cm, and Apgar scores of 9, 9, and 10. Blood analysis of the umbilical cord showed normal pH values (arterial pH = 7.35 and venous pH = 7.42). Neonatal screening of her newborn son showed low-normal carnitine levels (7.15 μmol/L). Supplementation was not started at that moment, and the patient did not present for further follow-up.

Case 2. The second patient had been diagnosed with carnitine deficiency during the investigation of bilateral blepharoptosis and muscle pain during exercise. As she had been followed in another hospital during her first pregnancy, no details could be obtained other than that a caesarean section had been performed for breech presentation. She gave birth to a son who was diagnosed with carnitine deficiency as well. At the age of 34 years, she presented at the 5th week of amenorrhea to the prenatal clinic (G2P1). Her L-carnitine supplementation was increased during pregnancy from 4 to 8 g per day. Except for chronic hypertension, which was controlled with oral methyldopa, no other problems during pregnancy were reported. Close monitoring of the carnitine levels revealed (near) normal values throughout the pregnancy, with a minimum of 20.6 μmol/L and a maximum of 32.1 μmol/L.

A repeat caesarean section was performed at 38 weeks and 5 days. A healthy girl was born with a birth weight of 3606 grams, length of 50.5 cm, head circumference of 35.5 cm, and Apgar scores of 9, 9, and 10. Initial blood results of the newborn girl showed normal carnitine levels (36 μmol/L). However, three months postnatally, a decreased serum carnitine (17 μmol/L) was seen, so supplementation with 600 mg/day of carnitine was started. After her second pregnancy, further investigations were performed on the mother. Renal excretion of L-carnitine in the urine exceeded 80% of the amount filtered within 24 hours (normal <5%). DNA analysis revealed a heterozygous mutation in the SLC22A5 gene, with no mutation found in the other allele, so a diagnosis of SPCD could not be made.
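As an aside to clarify the excretion figure quoted in case 2: the fraction of filtered carnitine lost in the urine is conventionally estimated with the standard fractional excretion formula, with creatinine clearance serving as a proxy for the filtered load. The report quotes only the result, so the formula below is our gloss rather than the authors' stated method:

$$\mathrm{FE}_{\text{carnitine}} = \frac{U_{\text{carnitine}} \times P_{\text{creatinine}}}{P_{\text{carnitine}} \times U_{\text{creatinine}}} \times 100\%,$$

where U and P denote urinary and plasma concentrations, respectively. A value above 80%, against a normal below 5%, indicates that nearly all filtered carnitine escapes tubular reabsorption, consistent with a dysfunctional carnitine transporter.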
## 3. Discussion
Neonatal screening is one way of detecting SPCD. Among others, Lee, Schimmenti, and El-Hattab described cases in which both neonatal and maternal SPCD were detected after neonatal screening [5–7]. Primary presentation with maternal symptoms during pregnancy is another way SPCD may come to attention, a consequence of the higher metabolic demands of pregnancy. Causes of carnitine deficiency other than mutations in the SLC22A5 gene are classified as secondary carnitine deficiency and include other hereditary metabolic diseases (e.g., fatty acid oxidation defects), medication (valproic acid, cyclosporine, and pivampicillin), malnutrition, hemodialysis and renal tubular dysfunction (Fanconi nephropathy), and prematurity (lower placental transfer). Lower carnitine levels are also seen in vegetarians (20–30% lower concentrations).

To differentiate SPCD and other mutations in the SLC22A5 gene (as seen in case 2) from alternative causes of carnitine deficiency, a thorough anamnesis and supplementary tests are suggested. Blood analysis will reveal low carnitine plasma concentrations with a normal acyl/total carnitine ratio. A 24-hour urinary excretion of L-carnitine under L-carnitine supplementation will typically reveal very high rates of carnitine excretion in SPCD, but without the abnormal organic acids seen in secondary carnitine deficiency. The final diagnosis can be made through genetic analysis (to prove the mutation(s)) or through measurement of carnitine transport in fibroblasts isolated from a skin biopsy, which will be reduced in SPCD (<10% of controls) [1].

Little is known about the treatment of asymptomatic carnitine-deficient patients [8]. Since disorders of fatty acid oxidation can lead to sudden death, current recommendations are to treat asymptomatic patients as well with L-carnitine supplementation [9], in order to prevent potential decompensation with serious consequences. Drug information on carnitine proposes target plasma concentrations between 24 and 48 μmol/L. Schoderbeck et al. [10] observed a significant decrease in the plasma concentrations of total and free carnitine during pregnancy in healthy women. Because fetal and maternal carnitine concentrations are correlated, maintaining sufficient maternal carnitine concentrations is important for the neonate as well. In patients who receive supplementation, as in patients with known SPCD, close follow-up is therefore recommended so that supplementation can be adjusted when necessary. Outside pregnancy, carnitine concentrations in people with a known deficiency should be checked annually. To our knowledge, no studies have been performed to determine the ideal dose of carnitine supplementation and the target values in pregnancy.
## Learning Points/Take Home Messages
(i)
Systematic neonatal screening can lead to diagnosis of metabolic disorders in the mother.
(ii)
Carnitine supplementation should be continued during every (following) pregnancy (according to plasma concentrations).
(iii)
Every case of SPCD should be reported to enlarge our knowledge about this pathology.
---
*Source: 101468-2015-05-28.xml* | 2015 |
# Geometric Theorems and Application of the DOF Analysis of the Workpiece Based on the Constraint Normal Line
**Authors:** Guojun Luo; Xiaohui Wang; Xianguo Yan
**Journal:** Advances in Materials Science and Engineering
(2021)
**Publisher:** Hindawi
**License:** http://creativecommons.org/licenses/by/4.0/
**DOI:** 10.1155/2021/1014708
---
## Abstract
Judging the degree of freedom (DOF) of a workpiece in a fixture by experience makes it difficult to analyze the DOF of some singular workpieces. Treating the workpiece and the fixture as rigid bodies, and allowing the workpiece to move in the plane or in space under the constraints of the fixture positioning points, a set of geometric theorems for judging the DOF and overconstraint of the workpiece can be derived according to the position of the instantaneous center of the workpiece velocity. The judgment of the DOF and overconstraint of the workpiece is abstracted into rules of universal applicability, which effectively overcomes the limitations of existing methods. The research results show that (1) the DOF and overconstraint of the workpiece in the fixture depend entirely on the number of positioning normal lines of the workpiece and their geometric relationships; (2) the necessary and sufficient condition for limiting the rotational DOF of the workpiece around a given axis is that the workpiece has a pair of parallel normal lines in a plane perpendicular to that axis. Using geometric theorems to judge the DOF of the workpiece is more rigorous, simple, and intuitive. It lends itself to computer-aided judgment and to the reasonable layout of the positioning points of the workpiece, which can effectively avoid misjudgment of the DOF and unnecessary overpositioning when a complex workpiece is positioned on a combination of different surfaces. Several examples are used to verify the accuracy of the method and to correct unreasonable positioning schemes.
---
## Body
## 1. Introduction
The DOF of the workpiece in the fixture affects the machining accuracy and determines whether the fixture meets the working requirements. The calculation of the DOF is therefore a basic task in fixture design, and many scholars at home and abroad have conducted in-depth research on it [1–4]. Wu [5] described the DOF limits of the workpiece positioning reference in four cases: translation between surfaces, translation along a line direction, rotation between surfaces, and rotation about a line direction, and presented a method for automatically extracting constraint DOF information from process requirements. Parag et al. [6] used the unified manufacturing resource model of the STEP-NC standard to represent the connection with the NC machining center; the main contribution is to allow universal modeling of fixtures and loading equipment, as well as machining workpiece and process modeling, to analyze the DOF of workpieces on the fixture. Paulraj and Kumar [7] used a genetic algorithm (GA) to improve the positioning of the active clamps and workpieces in the fixture system and carried out a parametric design (APDL) based on the Ansys finite element analysis software; the contact force of the clamps on the workpiece is predicted with an elastic contact model. Hunter et al. [8] proposed a functional approach to formalize the fixture design process: when information such as part geometry, contact parameters, and position points is input, a relatively reasonable positioning scheme can be generated, achieving the aim of formalizing the fixture design process, and the relationship between processing requirements, required DOF, and actual DOF can be analyzed. Huang et al. [9], based on the product master model idea, proposed a process-model-driven variant design method for aeroengine part machining fixtures. By designing a clamping feature mapping algorithm for isomorphic parts, design parameters are linked between the process model and the main model of the fixture variant design, which satisfies the DOF requirements of the fixture and improves the response speed of fixture design to design changes of aeroengine parts. Qin et al. [10, 11] argued that positioning accuracy should be consistent with the DOF requirements of the process, obtained as the solution of the workpiece DOF equations, and specified the overall number of conditions needed to accurately determine the DOF of the workpiece. In practical applications, the local factors that meet component positioning accuracy are obtained, and on this basis an optimization planning algorithm for the positioning scheme is derived. They then used homogeneous linear equations, combined with the constraint fixed points in the fixture, to establish the parameters and analyze the optimal position characteristics of the workpiece. Lu et al. [12] improved the decomposition-based multiobjective evolutionary algorithm (MOEA/D) with Gaussian-mutation migration behavior, which handles the large number of positioning point combinations in fixture layout; the algorithm has a strong search capability and makes the fixture's positioning of the workpiece more reasonable. Based on the “N‐2‐1” and “3‐2‐1” positioning rules, Xing et al. [13] established two deviation analysis models for overpositioning and full-release patterns according to the difference in the release modes of the anchor points. Li et al.
[14] proposed modeling the workpiece surface as composed of anisotropic sections, with the fixture components contacting the workpiece through point contacts, and simulated a calculation model in which the DOF of the workpiece is determined by the rank of a matrix. Since the surface of the workpiece does not lie entirely in one plane, the constraint form of the fixed positioning device and a formula for calculating the DOF of the workpiece after fixture positioning are established; by calculating the rank of the model matrix, the DOF of the workpiece is analyzed and judged. Wu et al. [15, 16] established a constraint model that can quantitatively analyze the DOF of the workpiece using rigid-body DOF analysis theory, and they developed a fixture-constraint workpiece DOF analysis system using UG software. A positioning performance reference coefficient can be used to evaluate how well the fixture constrains the position of the workpiece [17, 18] and to find the optimal position of each positioning point based on that coefficient. Research combining genetic algorithms [19, 20] found an optimal workpiece positioning method based on a genetic material library and improved the surface quality of the clamped workpiece. Song and Rong [21] established a matrix of the position parameters and solved for its rank to analyze the positioning constraints of the workpiece and obtain the DOF properties of the constraints, which can accurately yield a scheme that meets the machining requirements of the workpiece. Liang et al. [22] proposed a new analytical method for workpiece positioning, which first analyzes the DOF constrained by a single positioning element, then analyzes the DOF constrained by combinations of positioning elements, and finally comprehensively analyzes the constraint state of the workpiece under all positioning elements. Yang et al. [23], based on a support vector regression (SVR) surrogate model and the elitist nondominated sorting genetic algorithm (NSGA-II), proposed a new multiobjective optimization method for SMP fixture positioning layout. Using the ABAQUS™ Python scripting interface, a parameterized FEA model can be established; taking the fixture positioning layout as the design variable, the best positioning plan for the workpiece is obtained through the optimization of multiple objective functions. Qin et al. [24–26] established a kinematic model of the positioning scheme, based on the relationship between the workpiece position offset and the positioning source error, and proposed optimal design criteria for positioning schemes: the fixture design process is complete if and only if the fixture achieves good fixturing performance.

The aforementioned research on the DOF of the workpiece plays an important role in the construction and optimization of workpiece positioning schemes. However, these methods require a deep theoretical basis and have limitations. It is still difficult to adapt them to the difficult positioning problems often encountered in machining, such as the positioning of complex surfaces and the DOF of workpieces positioned on combinations of different surfaces [27–29]. Two problems remain unsolved in current methods.
Firstly, there is no accurate formula for analyzing and calculating the DOF of a workpiece with virtual (redundant) constraints, which complicates fixture positioning design [18]. Secondly, the positioning of irregular workpieces is common in engineering; frequently, the direction in which the fixture constrains the workpiece is not aligned with a coordinate axis but points in an arbitrary direction in space [30]. Existing methods can barely accommodate this situation. Figure 1 shows the positioning of an oblique axis on two short V-blocks. The two V-blocks limit the DOFs of the workpiece, yet current positioning theory can barely solve this problem.
Figure 1
Oblique axis positioned on the V-block. 1, V-block; 2, workpiece.
Figure 2(a) shows a workpiece positioning device. The workpiece is a cylinder, and three positioning points are set on its left, middle, and right sections, respectively; any two of the three points are 120° apart in the circumferential direction. In this case, the translational freedom of the workpiece in the YOZ plane is overpositioned, while the rotational DOF around any axis is not constrained. If, starting from Figure 2(a), another positioning point is added to the right section (or any other section) as shown in Figure 2(b), the overpositioning of the translational constraint in the plane is eliminated, and the rotational freedom of the workpiece about the Y- and Z-directions is constrained. Why does this happen? It is difficult to explain with current positioning theory.
Figure 2
Three-section positioning of a cylinder. (a) Positioning point on each of the three sections. (b) Add another positioning point to the right section. 1, Workpiece; 2, supporting bolt; 3, pedestal.
(a)(b)
The study found that the DOF of a workpiece fixed on the machining table depends on the number of positioning normal lines generated at the positioning points and on the geometric relationships of those normal lines. A simple, intuitive, and universal analysis method is proposed for various combined plane positionings, which can accurately calculate the DOF of the workpiece in the fixture. Based on the positioning constraints of the fixture on the workpiece, this paper introduces the concepts of the constraint positioning normal line and the constraint positioning plane and proposes a set of geometric theorems for accurately calculating the DOF of the workpiece. This method enables accurate calculation of the DOF of the workpiece and provides a basis for rationally arranging its positioning points.
Figure 3
Positioning normal line of the workpiece. 1, Workpiece; 2, fixture.Figure 4
Several distributions of the normal line of workpieces. (a) Intersection and equivalence of two normal lines. (b) Two parallel normal lines. (c) Three normal lines intersect with one another. 1, Workpiece; 2, fixture; 3, supporting bolt; 4, V-block.
(a)(b)(c)
## 2. Analysis and Verification of the Normal Line Geometry Theorem of Workpiece Freedom
### 2.1. Position Normal Line and Normal Plane
(1)
Position constraint normal line: the normal line at a fixture positioning point, pointing from the constraint point in the direction in which the workpiece is constrained. In Figure 3, the workpiece is positioned in the fixture, and the position constraint normal line Z at position O is the normal line generated by the constraint point on the workpiece.(2)
Notation for positioning normal lines: normal lines are denoted by uppercase letters. If there are multiple normal lines of the same type in the same direction, different subscripts are added to distinguish them. For example, F1 and F2 are geometrically parallel. A normal line points from the position of the constraint point in the constraint direction of the workpiece. Because the workpiece has six degrees of freedom in space, it can be acted on by at most six independent normal lines.(3)
Constraint normal plane: if several normal lines of the workpiece act in the same plane in space, that plane is called the constraint normal plane.(4)
Representation of the normal plane: normal planes are denoted by Greek letters; parallel normal planes are denoted by the same letter with different subscripts. For example, β and β1 represent two parallel normal planes. In Figure 4(a), the plane formed by the intersecting constraint normal lines M and N on the workpiece can be represented by β(MN). In Figure 4(b), the plane composed of the parallel constraint normal lines L and L1 on workpiece 1 can be expressed as α(LL1). In Figure 4(c), three constraint normal lines L, M, and N jointly act on workpiece 1; they lie at different positions in the same plane and have at least two points of intersection with one another. γ(LMN) can be used to represent the normal plane containing the normal lines L, M, and N. When the workpiece is positioned normally, the distribution of its normal lines in a plane can only be one of these three cases.
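To make the notation above concrete, a positioning constraint normal line can be represented computationally as an application point plus a unit direction. The following minimal Python sketch is our illustration, not part of the original paper; the class name `Normal` and all coordinates are hypothetical.

```python
import numpy as np

class Normal:
    """A positioning constraint normal line: the contact point where the
    fixture touches the workpiece and the unit direction along which the
    workpiece is constrained at that point."""
    def __init__(self, point, direction):
        self.point = np.asarray(point, dtype=float)
        d = np.asarray(direction, dtype=float)
        self.direction = d / np.linalg.norm(d)  # store a unit vector

# Two parallel but noncollinear normals, like L and L1 in Figure 4(b):
# the same direction applied at two different points.
L = Normal([0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
L1 = Normal([2.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```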
### 2.2. Geometric Relation Theory for Determining the Workpiece Constraint and Its Demonstration
Theorem 1.
A workpiece constrained by a single normal line has its freedom of movement along the normal line direction constrained.Proof.
In Figure 3, the workpiece is installed in the fixture, and the contact point O can be simplified as a constraint normal line Z. The workpiece retains freedom of movement away from the plane of the fixture; assume the velocity of the workpiece relative to the fixture at contact point O is Vr. Because the contact cannot be penetrated, the projection of the relative velocity Vr onto the normal line is zero, indicating that the velocity component of the workpiece relative to the fixture along the constraint normal line, the Z-direction, is zero. Therefore, the translational freedom of the workpiece along the Z-direction of the constraint normal line is constrained, which can be expressed as Z⟶, but the workpiece can still rotate around any point in the normal plane.Theorem 2.
If two constraint normal lines acting at different constraint points intersect, the translational freedom of the workpiece in the normal plane is limited, and the pair can be replaced by any two intersecting constraint normal lines through the intersection point.Proof.
In Figure 4(a), the workpiece is constrained at points a and b in the fixture, and the two equivalent constraint normal lines M and N in the constraint normal plane α(MN) intersect at point O. The workpiece and fixture are regarded as rigid bodies, and the workpiece can move in the positioning normal plane α(MN). Assume the relative velocities of the workpiece at the two contact points a and b are Vr1 and Vr2, respectively. Since the instantaneous center of a rigid body is the intersection of the perpendiculars to the velocities at two points, the rotation center of the workpiece motion is the intersection O of the constraint normal lines M and N; the constrained workpiece can only rotate around the instantaneous center O and cannot translate in any direction in the plane relative to the intersection O. Hence, (1) the intersecting normal lines constrain the translational freedom of the workpiece in every direction in the plane; (2) two intersecting constraint normal lines can be replaced by any two nonoverlapping intersecting normal lines through the intersection point O, such as normal lines P and T.Theorem 3.
If the two constraint normals acting on the workpiece are parallel but not collinear, the freedom of movement and rotation of the workpiece in the constraint plane are limited.Proof.
In Figure 4(b), the two constraint normal lines L and L1 generated at the fixed positions m and n of the workpiece are parallel; this positioning mode is a special case of the one shown in Figure 4(a), in which the two normal lines intersect at infinity. Two results follow from Theorem 2: (1) the translational freedom of the workpiece is limited in the direction of the normal L; (2) the instantaneous center of the workpiece lies at infinity, so there is no rotation center in the constraint plane; that is, the rotational freedom of the workpiece in the constraint positioning plane α(LL1) is constrained. This can be expressed as α⌢(LL1).Theorem 4.
If three constraint normal lines, not all in one plane, intersect at a point O, the translational freedom of the workpiece in three-dimensional space is constrained; the system is also equivalent to any three noncoplanar intersecting normals through point O.Proof.
The workpiece is fixed in the fixture, and the movement of the workpiece is constrained by three constraint normal lines, not all in one plane, intersecting at a point O. Given that the rotation center of a rigid body must lie on the perpendicular (normal line) to each point velocity, the rotation center of the workpiece must be the intersection point O of the three normal lines. The workpiece can then only rotate around the fixed point O and cannot translate in any direction relative to point O. Therefore, (1) the three noncoplanar constraint normal lines acting on the workpiece constrain its freedom of translation in space; (2) the constraint effect of three noncoplanar constraint normal lines through intersection O is the same as that of any three noncoplanar constraint normal lines through O.Theorem 5.
The DOF of the workpiece equals its total DOF minus the constraints generated by all the normal lines.
Each constraint normal line can restrict only one DOF, and sets of normal lines in different geometric relationships restrict different DOFs of the workpiece. As shown in Figure 4(c), the three constraint points of the workpiece produce the normal lines L, M, and N, which do not intersect at a single point, forming the constraint normal plane α(LMN). It can be seen from Theorems 2 and 3 that, as shown in Figure 5, the normal lines L1 and N intersecting in the normal plane jointly limit the translational freedom of the workpiece in any two directions within the constraint normal plane, while under the action of the normal line L and the equivalent normal L1, the rotational freedom of the workpiece in the normal plane is constrained. Therefore, in a positioning normal plane, if three normal lines have at least two points of intersection, then the translational freedom of the workpiece in any two directions and its rotational freedom in the constraint normal plane are constrained; that is, all three freedoms in the constraint normal plane are constrained.
Figure 5
Equivalence and a set of normal lines. 1, Supporting bolt; 2, workpiece; 3, V-block.
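Theorem 5's counting rule is what the matrix-rank formulations cited in the introduction [14, 21] compute. Treating each normal line as a bilateral constraint, a normal with unit direction n applied at point p contributes one wrench row [n, p × n]; the number of independently constrained DOFs is the rank of the stacked matrix, and the remaining DOF is 6 minus that rank. The sketch below builds on the hypothetical `Normal` class introduced earlier and is our illustration of the idea, not the authors' code.

```python
def wrench_matrix(normals):
    """Stack one row [n, p x n] per constraint normal line: the
    direction n and its moment p x n about the coordinate origin."""
    return np.array([np.concatenate((nl.direction,
                                     np.cross(nl.point, nl.direction)))
                     for nl in normals])

def dof(normals):
    """Remaining DOF = 6 - number of independent constraint normals,
    i.e., 6 minus the rank of the wrench matrix (Theorem 5)."""
    return 6 - np.linalg.matrix_rank(wrench_matrix(normals))

# The parallel pair L, L1 from the earlier sketch removes two DOFs:
# translation along Z and rotation in the plane of the two lines.
print(dof([L, L1]))  # -> 4
```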
### 2.3. Criteria for Determining Overpositioning of the Workpiece
Each normal line acting on the workpiece forms one constraint, restricting one freedom of movement or rotation of the workpiece. According to the normal-set theorem, excessive positioning causes the number of constraint normal lines on the workpiece to exceed the limit set by its DOF, so that the same DOF of the workpiece is constrained repeatedly. When two or more normal lines restrict the same DOF, the workpiece is overpositioned. The judgment of overpositioning directly affects the accurate analysis of the DOF of the workpiece in the fixture. In fixture positioning, if the total DOF of the workpiece in the positioning space is less than the number of constraint normal lines generated on the workpiece, the workpiece must be overpositioned. The limit on independent normal lines is 1 along a straight line, 3 in a plane, and 6 in space. If the redundant normal line is an intersecting normal line, a translational DOF is overpositioned; if it is a parallel normal line, the rotational DOF in the normal plane is overpositioned. Three criteria follow; a computational check is sketched after the list.(1)
Determination of overpositioning on a straight line: if n (n > 1) constraint normals on a workpiece coincide with a straight line, the workpiece has (n − 1) overconstraints(2)
Determination of overpositioning on a plane: if the workpiece has n (n ≥ 3) normals in the constraint plane and the n normal lines intersect at one point in the plane or are parallel to each other, then the workpiece has (n − 2) overconstraints(3)
Determination of overpositioning in space: if the number of constraint normal lines on the workpiece is greater than six, or if four or more normal lines intersect at one point, or if four or more normal lines are parallel to one another, or if four or more normal lines neither intersect one another nor are any two of them coplanar, then the workpiece is overconstrained
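As noted before the list, these criteria can be checked mechanically: the number of overconstraints equals the number of constraint normal lines minus the number of independent ones, that is, minus the rank of the wrench matrix from the earlier sketch. Again a hypothetical illustration rather than the paper's own procedure:

```python
def overconstraints(normals):
    """Overconstraint count = total normals - independent normals."""
    return len(normals) - np.linalg.matrix_rank(wrench_matrix(normals))

# Criterion (1): two coincident normals on one line -> 2 - 1 = 1.
Z = Normal([0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
Z1 = Normal([0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(overconstraints([Z, Z1]))  # -> 1
```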
### 2.4. Necessary and Sufficient Conditions of Constraining the Rotational DOF of the Workpiece
Theorem 3 proves that parallelism of the normal lines generated when the workpiece is positioned in the fixture is a sufficient condition for limiting the rotational freedom of the workpiece in the constraint normal plane. The following shows that it is also a necessary condition. Theorem 1 proves that the normal line generated by a single constraint can only limit the translational freedom of the workpiece along the normal line direction. Hence, to constrain the rotational freedom of the workpiece in a plane, two normal lines must act jointly on the workpiece, and the geometric relationship between the two normal lines can only be one of the following four scenarios, whose impact on the DOF of the workpiece is as follows:(1)
Two normal lines coincide: the two constraint normal lines have the same constraint effect on the workpiece, and they are overconstrained along the normal line direction(2)
Two normal lines intersect: the intersection normal line can be replaced by two intersecting normal lines in any direction, which can restrict the translation freedom of the workpiece in any two directions in the plane, but the rotational freedom is not restricted(3)
Two normal lines are parallel: a pair of parallel normal lines in the constrained normal plane can constrain the translation freedom of the workpiece in the normal direction and the rotation freedom in the normal plane(4)
Two normal lines are noncoplanar: according to the normal-set theorem, two noncoplanar normal lines restrict two translational degrees of freedom of the workpiece along their respective normal directions in space. In summary, a single normal line constrains the freedom of movement along its normal direction, intersecting normal lines constrain the freedom of movement in their plane, and spatially noncoplanar normal lines constrain the freedom of movement along their normal directions in space. Only when parallel constraint normal lines appear in a plane or in space can the rotational freedom of the workpiece be constrained. Therefore, the presence of two parallel constraint normal lines in the constraint normal plane is a necessary condition for limiting the workpiece's rotational freedom.
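The necessary-and-sufficient condition can also be tested directly from the instantaneous-velocity argument used in the proofs: a rotation about a given axis is blocked exactly when some constraint normal line sees a nonzero velocity component along its own direction. A small sketch under the same bilateral-constraint assumption, reusing the hypothetical `Normal` class and the parallel pair L, L1 defined earlier (function name and tolerance are ours):

```python
def rotation_constrained(normals, axis_point, axis_dir, tol=1e-9):
    """A pure rotation about the axis (axis_point, axis_dir) gives each
    contact point p the velocity w x (p - axis_point); the rotation is
    blocked iff that velocity has a component along some normal line."""
    w = np.asarray(axis_dir, dtype=float)
    q = np.asarray(axis_point, dtype=float)
    return any(abs(np.dot(nl.direction, np.cross(w, nl.point - q))) > tol
               for nl in normals)

# Parallel pair: rotation in the plane of L and L1 (about Y) is blocked.
print(rotation_constrained([L, L1], [0, 0, 0], [0, 1, 0]))  # -> True

# Two normals whose lines intersect at the origin (cf. Figure 4(a))
# never block a rotation about an axis through that intersection.
M = Normal([0.0, 0.0, 2.0], [0.0, 0.0, 1.0])  # line through the origin
N = Normal([0.0, 3.0, 0.0], [0.0, 1.0, 0.0])  # line through the origin
print(rotation_constrained([M, N], [0, 0, 0], [1, 0, 0]))  # -> False
```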
## 2.1. Position Normal Line and Normal Plane
(1)
Position constraint normal line: it is the normal line that indicates the workpiece pointing to the constraint direction at the fixture position point. In Figure3, the workpiece is positioned in the fixture, and the position constraint normal line Z at position O is the normal line generated by the constraint point of the workpiece.(2)
Fixed position normal line: the normal lines are specified in uppercase letters. If there are multiple normal lines of the same type in the same direction, we can add a different subscript to the alphabet to make a distinction.For example,F1 and F2 are parallel in geometric relations. The normal line points to the constraint direction of the workpiece from the position of the constraint point. Because the workpiece has six degrees of freedom according to the space, it is affected by six normal lines at most.(3)
Constraint normal plane: if several normals of the workpiece act on the same plane in the space, it is called the constraint normal plane.(4)
Representation of the normal plane: the normal plane is represented by Greek letters. If two normal planes are parallel, they are written in the same letter. For example,β and β1 represent two parallel normal planes. In Figure 4(a), the plane formed by the intersection point of the constraint normal lines M and N with the workpiece can be represented by β(MN). In Figure 4(b), the plane composed of the parallel constraint normals L and L1 on workpiece 1 can be expressed as α(LL1). In Figure 4(c), there are three constraint normal lines represented by L, M, and N, respectively, which jointly act on workpiece 1. They are in different positions in the same plane and have at least two intersections with each other. γ(LMN) can be used to represent the normal plane where the normal lines are L, M, and N.When the workpiece is positioned normally, the distribution of its normal lines in a plane can only be one of the three cases.
## 2.2. Geometric Relation Theory for Determining the Workpiece Constraint and Its Demonstration
Theorem 1.
The workpiece is constrained by a single normal line, and the freedom of movement towards the normal line direction is constrained.Proof.
In Figure3, the workpiece is installed in the fixture, and the contact point O can be simplified as a constraint normal line Z. The workpiece has the freedom of movement away from the plane of the fixture, and we assume that the relative velocity of the workpiece in the fixture at contact point O is Vr. If the relative velocity Vr is projected on the dividing velocity of the normal line, the projection is zero, indicating that the dividing velocity of the workpiece relative to the fixture in the constraint normal line direction is zero, which is the normal line Z direction. Therefore, the translation freedom of the workpiece along the Z-direction of the constraint normal line is constrained, it can be expressed as Z⟶, but the workpiece can rotate around any point in the normal plane.Theorem 2.
If the two constraint normal lines acting on different constraint points intersect, the translation freedom of the workpiece in the normal plane is limited and can be replaced by any two intersecting constraint normal lines at the intersection.Proof.
In Figure4(a), the workpiece is constrained at points a and b in the fixture, and the two equivalent constraint normal lines M and N on the constraint normal plane α(MN) intersect at point O. The workpiece and fixture are regarded as a rigid body, and the workpiece can move in the positioning normal plane α(MN). We assume that the relative velocity of the workpiece at the two contacting points, a and b, is Vr1 and Vr2, respectively. Considering that the instantaneous center of the rigid body is the intersection of two vertical lines of two-point velocities, the rotation center of the workpiece motion is the intersection O of the constrained normal lines M and N in the space, the constrained workpiece can only rotate around the instantaneous center O, and the workpiece cannot move in any direction on the plane relative to the intersection O. Hence, (1) the intersection normal line constrains the motion freedom of the workpiece in any direction. (2) The function of two intersecting constraint normal lines can be expressed as any two nonoverlapping intersecting normal lines at the intersection point O, such as normal lines P and T.Theorem 3.
If the two constraint normals acting on the workpiece are parallel but not collinear, the freedom of movement and rotation of the workpiece in the constraint plane are limited.Proof.
In Figure4(b), the two constraint normal lines L and L1 generated at the fixed positions m and n of the workpiece are parallel, and the positioning mode is a special one, as shown in Figure 4(a). The two normal lines intersect at infinity. Two results can be concluded from Theorem 2. They are as follows: (1) the translation freedom of the workpiece is limited in the normal L direction; (2) the instantaneous center of the workpiece is at infinity in this case. Thus, there is no rotation center in the constraint plane; that is, the rotational freedom of the workpiece in the constraint positioning plane α(LL1) is constrained. It can be expressed as α⌢(LL1).Theorem 4.
If three noncoplanar constraint normal lines intersect at a pointO, the translation freedom of the workpiece in the three-dimensional space is constrained and can also be equivalent to three arbitrary noncoplanar intersection normals at point O.Proof.
The workpiece is fixed on the fixture, and the movement of the workpiece in the fixture is constrained by the intersection of three constraint normals which are not on the same plane at a pointO. Given that the rotation center of the rigid body must be in the same direction with the perpendicular line (normal line) of the velocity, the rotation center of the workpiece must be the point of intersection O of the three normal lines. At this moment, the workpiece can only rotate around the fixed point O, and the workpiece cannot move in any direction relative to point O. Therefore, 1) the workpiece is subject to three noncoplanar constraint normal lines, which constraint its freedom of translation in space 2). The constraint effect of three noncoplanar constraint normal lines at intersection O is the same as that of any three noncoplanar constraint normal lines at intersection O.Theorem 5.
The DOF of the workpiece is the constraint generated by the total DOF minus all normal lines.
Each constraint normal line can only restrict one DOF, and the normal line sets of different geometric relations between the workpieces restrict the DOF. As shown in Figure4(c), the three constraint points of the workpiece produce the normal lines L, M, and N that are not intersected at a point, forming the constraint normal plane α(LMN). It can be seen from Theorem 3 that, as shown in Figure 5, the normal lines L1 and N intersecting in the normal plane jointly limit the translation freedom of the workpiece in any two directions along the constraint normal plane. Under the action of normal line L and equivalent normal L1, the rotational freedom (LL1N) of the workpiece in the normal plane is constrained. Therefore, in a positioning normal plane, if three normal lines intersect at least two intersection points, the translation freedom of the workpiece in any two directions and the rotation freedom of the workpiece in the constrained normal plane are constrained, and the three freedoms in the constrained normal plane are constrained.Figure 5
Equivalence and a set of normal lines. 1, Supporting bolt; 2, workpiece; 3, V-block.
## 2.3. Criteria for Determining Overpositioning of the Workpiece
Each normal line acting on the workpiece forms a constraint, which restricts a freedom of movement or rotation of the workpiece. According to the normal lines’ set theorem, excessive positioning causes the number of constraint normals on the workpiece to exceed the limit value of its DOF, so that the same DOF of the workpiece is constrained repeatedly. When two or more normal lines of the workpiece restrict the same DOF, the workpiece is overpositioned. Judging overpositioning correctly directly affects the accurate analysis of the DOF of the workpiece on the fixture. In fixture positioning, if the total DOF of the workpiece in the positioning space is less than the number of constraint normal lines generated on the workpiece, then the workpiece must be excessively positioned. For example, the number of independent normal lines on a straight line is limited to 1, on a plane to 3, and in space to 6. If the overconstraining normal is an intersecting normal, the translational DOF is overpositioned; if it is a parallel normal, the rotational DOF in the normal plane is overpositioned. The rules, of which (1) and (2) are checked numerically in the sketch after this list, are as follows:
(1) Determination of overpositioning on a straight line: if n (n > 1) constraint normals on a workpiece coincide with one straight line, the workpiece has (n − 1) overconstraints.
(2) Determination of overpositioning on a plane: if the workpiece has n (n ≥ 3) normals on the constraint plane and these n normal lines intersect at one point in the plane or are parallel to one another, then the workpiece has (n − 2) overconstraints.
(3) Determination of overpositioning in space: if the number of constraint normal lines on the workpiece is greater than six, if four or more normal lines intersect at one point, if four or more normal lines are parallel to one another, or if four or more normal lines neither intersect one another nor are parallel (any two of them being noncoplanar), then the workpiece is overconstrained.
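Rules (1) and (2) are both instances of the count n − rank(W). A quick check with the `wrench` and `dof_report` helpers from the Theorem 5 sketch above (again with illustrative coordinates of our choosing) reproduces the stated overconstraint numbers:

```python
# Rule (1): n coincident normals on one straight line -> (n - 1) overconstraints.
line = (((0, 0, 1), (0, 0, 0)),) * 3
print(dof_report(line))        # (5, 2): one effective constraint, two redundant

# Rule (2), concurrent case: n >= 3 planar normals through one point -> (n - 2).
concurrent = [((1, 0, 0), (0, 0, 0)),
              ((0, 1, 0), (0, 0, 0)),
              ((1, 1, 0), (0, 0, 0))]
print(dof_report(concurrent))  # (4, 1)

# Rule (2), parallel case: n >= 3 parallel planar normals -> (n - 2).
parallel = [((0, 1, 0), (0, 0, 0)),
            ((0, 1, 0), (1, 0, 0)),
            ((0, 1, 0), (2, 0, 0))]
print(dof_report(parallel))    # (4, 1)
```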
## 2.4. Necessary and Sufficient Conditions for Constraining the Rotational DOF of the Workpiece
Theorem 3 proves that parallel normal lines generated when the workpiece is positioned in the fixture are sufficient to limit the rotational freedom of the workpiece in the constraint normal plane. The following proves that this condition is also necessary. Theorem 1 proves that the normal line generated by a single constraint can only limit the translation freedom of the workpiece along the normal line direction. Hence, to constrain the rotational freedom of the workpiece in a plane, two normal lines must act jointly on the workpiece, and the geometric relationship between the two normal lines can only be one of the following four scenarios (compared numerically in the sketch at the end of this subsection). Their impact on the DOF of the workpiece is as follows:
(1) Two normal lines coincide: the two constraint normal lines have the same constraint effect on the workpiece, and the workpiece is overconstrained along the normal line direction.
(2) Two normal lines intersect: the intersecting pair can be replaced by two intersecting normal lines in any directions, which restricts the translation freedom of the workpiece in any two directions in the plane, but the rotational freedom is not restricted.
(3) Two normal lines are parallel: a pair of parallel normal lines in the constraint normal plane constrains the translation freedom of the workpiece in the normal direction and its rotation freedom in the normal plane.
(4) Two normal lines are noncoplanar: according to the set theorem, two noncoplanar normal lines restrict two translational degrees of freedom of the workpiece along the normal line directions in space.
A single normal line constrains the freedom of movement in its normal direction, intersecting normal lines constrain freedoms of movement in their plane, and noncoplanar normal lines constrain freedoms of movement along their normal directions in space. Only when parallel constraint normals appear in a plane or in space can the rotational freedom of the workpiece be constrained. Therefore, the presence of two parallel constraint normal lines in the constraint normal plane is a necessary condition for limiting the rotational freedom of the workpiece; together with Theorem 3, it is both necessary and sufficient.
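The four scenarios can be compared side by side with the same rank bookkeeping (illustrative coordinates of our choosing, reusing `wrench` and `dof_report` from the Theorem 5 sketch). As expected, only the parallel pair yields a rotational constraint, which is the substance of the necessity argument:

```python
# The four possible relations between two constraint normal lines
# (illustrative coordinates; wrench/dof_report as defined above).
cases = {
    "coincident":   [((0, 0, 1), (0, 0, 0)), ((0, 0, 1), (0, 0, 0))],
    "intersecting": [((1, 0, 0), (0, 0, 0)), ((0, 1, 0), (0, 0, 0))],
    "parallel":     [((0, 0, 1), (0, 0, 0)), ((0, 0, 1), (1, 0, 0))],
    "noncoplanar":  [((1, 0, 0), (0, 0, 0)), ((0, 1, 0), (0, 0, 1))],
}
for name, pair in cases.items():
    print(name, dof_report(pair))
# coincident   (5, 1): the same constraint twice -> overconstrained
# intersecting (4, 0): two translations constrained, rotation still free
# parallel     (4, 0): translation along the normals plus the rotation
#                      in their plane are constrained
# noncoplanar  (4, 0): two translational (generalized) constraints only
```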
## 3. Results and Discussion
### 3.1. Application Analysis
#### 3.1.1. Positioning Analysis of the Oblique Axis in the V-Block
Figure 1 shows the positioning of the oblique axis on the fixture. Four normal lines are generated by the action of the two V-shaped blocks, and they intersect at two points, O1 and O2. Combined with Theorem 2, the constraint normal lines can be replaced by the four equivalent normal lines Z1, Y1, Z2, and P2. They constrain four DOFs of the workpiece, namely, the freedoms of movement along the normal directions Z⟶, Y⟶, and P⟶ and the freedom of rotation in the normal plane α⌢(Z1Z2). If another V-block is added, as shown in Figure 6, then the two remaining rotational DOFs are also constrained, and all DOFs of the workpiece in space are constrained.
Figure 6: Positioning analysis of the oblique axis with three V-blocks’ bearing. 1, V-block; 2, workpiece.
#### 3.1.2. Positioning Analysis of the Cylindrical Section
The positioning analysis of Figure 2(a) is shown in Figure 7(a). The normal lines M, N, and L, which are parallel to the YOZ plane, can only constrain the translational DOFs parallel to the YOZ plane. However, there are only two translational DOFs in a plane parallel to the YOZ plane. Therefore, one of the normal lines must produce displacement overpositioning of the workpiece.
Figure 7: Positioning analysis of the cylindrical section. (a) Three-point positioning. (b) Four-point positioning. 1, Workpiece; 2, supporting bolt; 3, pedestal.
Compared with Figure 7(a), a bearing is added to the right section in Figure 7(b), generating the normal line P. The normal lines M and P of the right section, which intersect at point O, can be replaced by two equivalent normal lines L1 and N1 parallel to the normal lines L and N, respectively, so that the normal lines constrain four DOFs: Z⟶, Y⟶, α⌢(LL1), and β⌢(NN1).
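The four-point arrangement can be checked with the same helpers, modeling the right section directly by its equivalent normals L1 and N1 from the text (illustrative coordinates of our choosing; sections placed at x = 0, 50, and 100 along the axis):

```python
# Four-point positioning of the cylindrical section (Figure 7(b)), using the
# equivalent normals L1 and N1 at the right-section intersection O.
cyl = [((0, 0, 1), (0, 0, 0)),    # L  on the left section
       ((0, 1, 0), (50, 0, 0)),   # N  on the middle section
       ((0, 0, 1), (100, 0, 0)),  # L1 at O, parallel to L
       ((0, 1, 0), (100, 0, 0))]  # N1 at O, parallel to N
print(dof_report(cyl))
# (2, 0): four DOFs constrained with no redundancy -- the translations Z->
# and Y-> and the rotations in the planes alpha(L L1) and beta(N N1);
# the axial translation and the spin about the axis remain free.
```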
#### 3.1.3. Positioning Analysis of the Noncentering Shaft
The noncentering shaft in Figure 8 is positioned on its two top holes, and its positioning is analyzed as follows. The fixture generates three intersecting constraint normal lines at each of the two positioning points O and O1. Combined with Theorem 2, the normal lines can be represented as X, Y, and Z at position O and as X1, Y1, and Z1 at position O1. These normal lines form three positioning normal planes: α(XX1), β(YY1), and α(ZZ1).
Figure 8: Positioning analysis of the noncentering shaft. 1, Left top hole; 2, eccentric shaft; 3, right top hole.
Here, α(XX1) constrains X⟶ and α⌢(XX1); β(YY1) constrains Y⟶ and β⌢(YY1); and α(ZZ1) constrains Z⟶ and α⌢(ZZ1). It seems that the workpiece is completely positioned, but the two positioning normal planes β(YY1) and α(ZZ1) lie in the same plane YOZ (or Y1O1Z1), so they restrict the same rotational freedom; hence, the rotational freedom in the plane YOZ is repeatedly constrained. Replacing the right top with a revolving top removes the normal line Y1, and the overpositioning is then eliminated. Moreover, the workpiece retains rotational freedom around OO1.
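The rank bookkeeping from the Theorem 5 sketch confirms this diagnosis. In the check below (illustrative coordinates of our choosing, reusing `wrench` and `dof_report`), the six normals leave exactly one freedom, the rotation about OO1, and carry one redundant constraint; which normal line is bookkept as the redundant one is a modeling choice:

```python
# Noncentering shaft between two top holes (Figure 8), with illustrative
# coordinates: centers O = (0, 0, 0) and O1 = (100, 0, 0), so the line OO1
# is the x-axis.  Each center contributes three concurrent constraint normals.
O, O1 = (0, 0, 0), (100, 0, 0)
shaft = [((1, 0, 0), O),  ((0, 1, 0), O),  ((0, 0, 1), O),   # X, Y, Z
         ((1, 0, 0), O1), ((0, 1, 0), O1), ((0, 0, 1), O1)]  # X1, Y1, Z1
print(dof_report(shaft))
# (1, 1): one freedom remains (rotation about OO1) and one constraint is
# redundant, confirming that the six normals overposition the workpiece.
```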
#### 3.1.4. Positioning Analysis of the Oblique-Incision Connecting Rod
In Figure 9, the oblique-incision connecting rod is positioned on a supporting plate and two V-blocks, and both sides are machined. The three normal lines of the supporting plate that supports the workpiece are L1, L2, and L3; the normal lines of V-block 1 can be taken as the equivalent normals Y and Z, and those of V-block 2 as Y1 and Z1. The positioning normal planes are α(ZZ1), α(L1L3), β(YY1), and γ(L1L2). Furthermore, α(ZZ1) constrains Z⟶ and α⌢(ZZ1); β(YY1) constrains Y⟶ and β⌢(YY1); α(L1L3) constrains L⟶ and α⌢(L1L3); and γ(L1L2) constrains γ⌢(L1L2). As the positioning normal planes α(L1L3) and α(ZZ1) are parallel and constrain the same rotational DOF, they produce rotational overpositioning of the workpiece. In theory, removing any one of the four normal lines L1, L3, Z, and Z1 can eliminate the overpositioning.
Figure 9: Overpositioning of the connecting rod. 1, Supporting plate; 2, V-shaped block 1; 3, connecting rod; 4, V-shaped block 2.
However, to ensure that the milled plane is symmetrical about the center of the connecting rod, the constraint normal lines corresponding to the V-shaped blocks should not be removed. Instead, the normal lines L1 and L3 can be combined into one by adding a floating supporting block to the supporting plate, which eliminates the overpositioning, as shown in Figure 10.
Figure 10: Adding the floating support to eliminate the overpositioning of the connecting rod. 1, Supporting plate; 2, V-shaped block 1; 3, workpiece; 4, V-shaped block 2; 5, floating supporting block.
### 3.2. Discussion
The six-point positioning principle of the workpiece is the main theoretical basis of this article, but the principle does not state precisely what geometric conditions the six points must satisfy to fully position the workpiece, nor does it explore which attributes of the constraint points are related to the DOF; unsuitable geometric arrangements of the constraint points can then cause problems such as overpositioning. In response to these problems, this paper proposes five geometric theorems for judging the DOF of the workpiece, namely, the single normal line theorem, the two intersecting normal lines’ theorem, the two parallel normal lines’ theorem, the three intersecting normal lines’ theorem, and the normal lines’ set theorem, and it establishes criteria for judging workpiece overconstraint. These geometric theorems clarify the number and nature of the DOFs that can be constrained by each point or by a combination of several points and thereby refine the abstract six-point positioning principle. The guiding significance of the proposed “geometric theorems for judging the workpiece freedom constraint” for the analysis of the DOF of workpiece positioning is as follows: the DOF and overconstraint of the workpiece in the fixture depend entirely on the number of positioning normal lines of the workpiece and their geometric relationships. Analyzing the DOF of workpiece positioning by means of normal lines is simpler and more intuitive than optimization-based positioning methods such as the positioning matrix and the genetic algorithm. It is hoped that, as research deepens, this theory will show further advantages in both teaching and engineering practice.
## 4. Conclusions
(1) The number of positioning normal lines of the workpiece in the fixture and their geometric relationships determine the DOF and overconstraint of the workpiece in the fixture.
(2) A pair of parallel normal lines of the workpiece is a necessary and sufficient condition to limit the rotational DOF of the workpiece in their plane.
(3) The geometric theorems make it easy to analyze the DOF and overconstraint in singular positioning cases of the workpiece, and they can also provide a theoretical basis for the reasonable layout of the positioning points of the workpiece in the fixture.
(4) Using the geometric theorems to analyze and judge the DOF and overconstraint of the workpiece is simple, intuitive, and universal, and it is suitable for computer-aided judgment of the DOF of the workpiece.
---
*Source: 1014708-2021-09-01.xml* | 1014708-2021-09-01_1014708-2021-09-01.md | 55,274 | Geometric Theorems and Application of the DOF Analysis of the Workpiece Based on the Constraint Normal Line | Guojun Luo; Xiaohui Wang; Xianguo Yan | Advances in Materials Science and Engineering
(2021) | Engineering & Technology | Hindawi | CC BY 4.0 | http://creativecommons.org/licenses/by/4.0/ | 10.1155/2021/1014708 | 1014708-2021-09-01.xml | ---
## Abstract
Aiming at the problem of judging the degree of freedom (DOF) of the workpiece in the fixture by experience, it is difficult to adapt to the analysis of the DOF of some singular workpieces. The workpiece and the fixture are used as rigid bodies, and the workpiece is allowed to move in the plane or space under the constraints of the fixture positioning point, and a set of geometric theorems for judging the DOF and overconstraint of the workpiece can be derived according to the difference in the position of the instantaneous center of the workpiece speed. The judgment of the DOF and overconstraint of the workpiece is abstracted into rules with universal meaning, which effectively overcomes the limitations of existing methods. The research results show that (1) the DOF and overconstraint of the workpiece in the fixture depend entirely on the number of positioning normal lines of the workpiece and their geometric relationship; (2) the necessary and sufficient condition for limiting the DOF of rotation of the workpiece around a certain axis is that the workpiece has a pair of parallel normal lines in the vertical plane of the axis. Using geometric theorems to judge the DOF of the workpiece is more rigorous, simple, and intuitive, which is convenient for computer-aided judgment and the reasonable layout of the positioning points of the workpiece, which can effectively avoid the misjudgment of the DOF and unnecessary overpositioning when the complex workpiece is combined and positioned on different surfaces. Several examples are used to verify the accuracy of the method and correct unreasonable positioning schemes.
---
## Body
## 1. Introduction
The DOF of the workpiece in the fixture affects the machining accuracy and determines whether the fixture meets the working requirements. Based on this, the calculation of the DOF in the fixture design is the basic task, and many scholars at home and abroad have done in-depth research and exploration [1–4]. Wu [5] described the DOF limit of the workpiece positioning reference as the following four cases: translation between surfaces, translation inline direction, rotation between surfaces, and rotation inline direction, as well as presented a method for automatically extracting constraint DOF information from process requirements. Parag et al. [6] used the unified manufacturing resource model of the STEP-NC standard to represent the connection with the NC machining center. The main contribution is to allow universal modeling of fixtures and loading equipment, as well as machining workpiece and process modeling, to analyze the DOF of workpieces on the fixture. Paulraj and Kumar [7] used a genetic algorithm (GA) to improve the positioning of the active clamps and workpieces in the fixture system and carried out a parametric design (APDL) based on Ansys software of finite element analysis. A contact force for clamping workpiece clamps is predicted in an elastic contact model. Hunter et al. [8] proposed a functional approach to formalize the fixture design process. When some information, such as part information, contact parameters, and position points, was inputted, a relatively reasonable positioning scheme could be given out, the aim of formalization of the fixture design process would be achieved, and the relationship between processing requirements and processing freedom and actual freedom would be analyzed. Huang et al. [9], based on the product master model idea, proposed a process model-driven aeroengine parts’ machining and fixture variant design method. By designing the clamping feature mapping algorithm for isomorphic parts, the design parameter linkage between the process model and the main model of the fixture variant design is realized, which satisfies the freedom requirements of the fixture and improves the response speed of the machine-fit fixture design to the design changes of aeroengine parts. Qin et al. [10, 11] argued that the positioning accuracy is consistent with the DOF of the process, which is the solution of the calculation equation of the DOF of the workpiece, and specified the overall number of conditions to accurately determine the DOF of the workpiece. In practical applications, the local factors that meet the accuracy of component positioning are accurately obtained, and based on this, the optimization planning algorithm for the positioning scheme is obtained. Then, they used the calculation method of homogeneous linear equations, combined with the constraint fixed point in the fixture to establish the parameters, and analyzed the optimal position characteristics of the workpiece. Lu et al. [12] improved the decomposition-based multiobjective evolutionary algorithm (MOEAD) based on the migration behavior of the Gaussian mutation, which satisfies a large number of combinations of positioning points in the fixture layout, and the algorithm has a strong search capability and makes the fixture’s positioning of the workpiece more reasonable. Based on positioning rules of “N‐2‐1” and “3‐2‐1,” Xing et al. [13] established two deviation analysis models of over‐positioning and full release patterns by the difference in the release modes of the anchor point. Li et al. 
[14] proposed the assumption that the workpiece surface is composed of an anisotropic section, and the component of the workpiece is fixed to the workpiece through the point contact and simulated the calculation model of the DOF of the workpiece, which is determined by the value of the matrix rank. The surface of the workpiece is not completely on a plane. The constraint form of the fixed position device and the calculation formula for calculating the DOF of the workpiece after the fixture positioning are established. By calculating the rank of the established model matrix, the DOF of the workpiece is analyzed and judged. Wu et al. [15, 16] established the constraint model which can quantitatively analyze the DOF of the workpiece by rigid-body freedom analysis theory, and they developed a fixture constraint workpiece DOF analysis system using UG software. The positioning performance reference coefficient can be used to evaluate the performance of the fixture to constrain the position of the workpiece [17, 18] and to find the optimal position of each positioning point based on the coefficient. Based on the research of combined genetic algorithm [19, 20], the optimal positioning method of the workpiece based on the genetic material library is found, and the surface quality in the workpiece clamping is improved. Song and Rong [21] established a matrix for the position parameters, solved the rank of the matrix to analyze the positioning constraint of the workpiece, and obtained the DOF property of the constraint, which can accurately obtain the scheme to meet the machining requirements of the workpiece processing. Liang et al. [22] proposed a new analytical method for the positioning of the workpiece, which firstly analyzed the constraint DOF of the workpiece by a single positioning element, then analyzed the constraint DOF of the workpiece by the positioning element combination, and finally comprehensively analyzed the constraint state of the workpiece by positioning elements. Yang et al. [23], based on the support vector regression (SVR) surrogate model and the elite nondominated sorting genetic algorithm (NSGA-II), proposed a new SMP fixture positioning and layout multiobjective optimization method. By using the ABAQUS™ Python scripting interface, a parameterized FEA model can be established. Taking the fixture positioning layout as a design variable, through the optimization of multiple objective functions, the best positioning plan for the workpiece is obtained. Qin et al. [24–26] established a positioning scheme kinematic model, which was based on the relationship between the workpiece position offset and the positioning source error, and proposed the optimal design criteria of the positioning scheme. The design process of fixtures is over when and only when a fixture achieves good fixturing performance.The aforementioned research on the DOF of the workpiece plays an important role in the construction of the workpiece positioning plan and the optimization of the positioning plan. However, these methods require a deeper theoretical basis, and the methods have limitations. It is still difficult to adapt to the difficult positioning problems that are often encountered in machining, such as the positioning of complex surfaces and the DOF of the workpiece combined with different surfaces [27–29]. There are still two unsolved problems of the current method. 
Firstly, there is no accurate formula for the analysis and calculation of the DOF of the workpiece with virtual constraints, which brings difficulties to the fixture in the positioning design [18]. Secondly, the positioning of numerous irregular workpieces is common in engineering. Usually, the constraint DOF of the workpiece by the fixture in a certain direction is not in the same direction as the axis but any other direction in the space [30]. The existing method can barely adapt to this situation.Figure1 shows the positioning of an oblique axis on two short V-blocks. The two V-blocks limit the DOFs of the workpiece. Current positioning theory can barely solve this problem.Figure 1
Oblique axis positioned on the V-block. 1, V-block; 2, workpiece.Figure2(a) shows a workpiece positioning device. The workpiece is a cylinder, and three positioning points are set on its left, middle, and right sections, respectively. The difference between either two of the three points is 120° in the circumferential direction. In this case, the motion freedom of the workpiece on the YOZ plane is overpositioned, and the rotational DOF around any axis is not constrained. If, based on Figure 2(a), another positioning point is added to the right section (or any other section) as shown in Figure 2(b), then the overpositioning of the moving constraint in the plane is found by eliminating the constraint of the workpiece, and the rotational freedom of the workpiece in the Y- and Z-directions is constrained. Why is this happening? It is difficult to explain with the current positioning theory.Figure 2
Three-section positioning of a cylinder. (a) Positioning point on each of the three sections. (b) Add another positioning point to the right section. 1, Workpiece; 2, supporting bolt; 3, pedestal.
(a)(b)The study found that the DOF of the workpiece fixed on the processing table depends on the number of positioning normal lines generated at the positioning point and the geometric relationship of the normal lines. A simple, intuitive, and universal analysis method is proposed for various plane-combined positioning, which can accurately calculate the DOF of the workpiece in the fixture positioning. Based on the positioning constraints of the fixture to the workpiece, this paper proposes the concepts of the constraint positioning normal and the constraint positioning plane and proposes a set of geometric theorems for accurately calculating the DOF of the workpiece. This method can realize the accurate calculation of the DOF of the workpiece and provide a basis for rationally arranging the positioning points of the workpiece.Figure 3
Positioning normal line of the workpiece. 1, Workpiece; 2, fixture.Figure 4
Several distributions of the normal line of workpieces. (a) Intersection and equivalence of two normal lines. (b) Two parallel normal lines. (c) Three normal lines intersect with one another. 1, Workpiece; 2, fixture; 3, supporting bolt; 4, V-block.
(a)(b)(c)
## 2. Analysis and Verification of the Normal Line Geometry Theorem of Workpiece Freedom
### 2.1. Position Normal Line and Normal Plane
(1)
Position constraint normal line: it is the normal line that indicates the workpiece pointing to the constraint direction at the fixture position point. In Figure3, the workpiece is positioned in the fixture, and the position constraint normal line Z at position O is the normal line generated by the constraint point of the workpiece.(2)
Fixed position normal line: the normal lines are specified in uppercase letters. If there are multiple normal lines of the same type in the same direction, we can add a different subscript to the alphabet to make a distinction.For example,F1 and F2 are parallel in geometric relations. The normal line points to the constraint direction of the workpiece from the position of the constraint point. Because the workpiece has six degrees of freedom according to the space, it is affected by six normal lines at most.(3)
Constraint normal plane: if several normals of the workpiece act on the same plane in the space, it is called the constraint normal plane.(4)
Representation of the normal plane: the normal plane is represented by Greek letters. If two normal planes are parallel, they are written in the same letter. For example,β and β1 represent two parallel normal planes. In Figure 4(a), the plane formed by the intersection point of the constraint normal lines M and N with the workpiece can be represented by β(MN). In Figure 4(b), the plane composed of the parallel constraint normals L and L1 on workpiece 1 can be expressed as α(LL1). In Figure 4(c), there are three constraint normal lines represented by L, M, and N, respectively, which jointly act on workpiece 1. They are in different positions in the same plane and have at least two intersections with each other. γ(LMN) can be used to represent the normal plane where the normal lines are L, M, and N.When the workpiece is positioned normally, the distribution of its normal lines in a plane can only be one of the three cases.
### 2.2. Geometric Relation Theory for Determining the Workpiece Constraint and Its Demonstration
Theorem 1.
The workpiece is constrained by a single normal line, and the freedom of movement towards the normal line direction is constrained.Proof.
In Figure3, the workpiece is installed in the fixture, and the contact point O can be simplified as a constraint normal line Z. The workpiece has the freedom of movement away from the plane of the fixture, and we assume that the relative velocity of the workpiece in the fixture at contact point O is Vr. If the relative velocity Vr is projected on the dividing velocity of the normal line, the projection is zero, indicating that the dividing velocity of the workpiece relative to the fixture in the constraint normal line direction is zero, which is the normal line Z direction. Therefore, the translation freedom of the workpiece along the Z-direction of the constraint normal line is constrained, it can be expressed as Z⟶, but the workpiece can rotate around any point in the normal plane.Theorem 2.
If the two constraint normal lines acting on different constraint points intersect, the translation freedom of the workpiece in the normal plane is limited and can be replaced by any two intersecting constraint normal lines at the intersection.Proof.
In Figure4(a), the workpiece is constrained at points a and b in the fixture, and the two equivalent constraint normal lines M and N on the constraint normal plane α(MN) intersect at point O. The workpiece and fixture are regarded as a rigid body, and the workpiece can move in the positioning normal plane α(MN). We assume that the relative velocity of the workpiece at the two contacting points, a and b, is Vr1 and Vr2, respectively. Considering that the instantaneous center of the rigid body is the intersection of two vertical lines of two-point velocities, the rotation center of the workpiece motion is the intersection O of the constrained normal lines M and N in the space, the constrained workpiece can only rotate around the instantaneous center O, and the workpiece cannot move in any direction on the plane relative to the intersection O. Hence, (1) the intersection normal line constrains the motion freedom of the workpiece in any direction. (2) The function of two intersecting constraint normal lines can be expressed as any two nonoverlapping intersecting normal lines at the intersection point O, such as normal lines P and T.Theorem 3.
If the two constraint normals acting on the workpiece are parallel but not collinear, the freedom of movement and rotation of the workpiece in the constraint plane are limited.Proof.
In Figure4(b), the two constraint normal lines L and L1 generated at the fixed positions m and n of the workpiece are parallel, and the positioning mode is a special one, as shown in Figure 4(a). The two normal lines intersect at infinity. Two results can be concluded from Theorem 2. They are as follows: (1) the translation freedom of the workpiece is limited in the normal L direction; (2) the instantaneous center of the workpiece is at infinity in this case. Thus, there is no rotation center in the constraint plane; that is, the rotational freedom of the workpiece in the constraint positioning plane α(LL1) is constrained. It can be expressed as α⌢(LL1).Theorem 4.
If three noncoplanar constraint normal lines intersect at a pointO, the translation freedom of the workpiece in the three-dimensional space is constrained and can also be equivalent to three arbitrary noncoplanar intersection normals at point O.Proof.
The workpiece is fixed on the fixture, and the movement of the workpiece in the fixture is constrained by the intersection of three constraint normals which are not on the same plane at a pointO. Given that the rotation center of the rigid body must be in the same direction with the perpendicular line (normal line) of the velocity, the rotation center of the workpiece must be the point of intersection O of the three normal lines. At this moment, the workpiece can only rotate around the fixed point O, and the workpiece cannot move in any direction relative to point O. Therefore, 1) the workpiece is subject to three noncoplanar constraint normal lines, which constraint its freedom of translation in space 2). The constraint effect of three noncoplanar constraint normal lines at intersection O is the same as that of any three noncoplanar constraint normal lines at intersection O.Theorem 5.
The DOF of the workpiece is the constraint generated by the total DOF minus all normal lines.
Each constraint normal line can only restrict one DOF, and the normal line sets of different geometric relations between the workpieces restrict the DOF. As shown in Figure4(c), the three constraint points of the workpiece produce the normal lines L, M, and N that are not intersected at a point, forming the constraint normal plane α(LMN). It can be seen from Theorem 3 that, as shown in Figure 5, the normal lines L1 and N intersecting in the normal plane jointly limit the translation freedom of the workpiece in any two directions along the constraint normal plane. Under the action of normal line L and equivalent normal L1, the rotational freedom (LL1N) of the workpiece in the normal plane is constrained. Therefore, in a positioning normal plane, if three normal lines intersect at least two intersection points, the translation freedom of the workpiece in any two directions and the rotation freedom of the workpiece in the constrained normal plane are constrained, and the three freedoms in the constrained normal plane are constrained.Figure 5
Equivalence and a set of normal lines. 1, Supporting bolt; 2, workpiece; 3, V-block.
### 2.3. Criteria for Determining Overpositioning of the Workpiece
Each normal line acting on the workpiece will form a constraint, which will restrict the freedom of movement or rotation of the workpiece. According to the normal set theorem, too much positioning will cause the number of constrained normals on the workpiece to exceed the limit value of its DOF, which will cause the same DOF of the workpiece to be repeatedly constrained. When two or more normal lines of the workpiece restrict the same DOF, the workpiece is overpositioned. The judgment of overpositioning of the workpiece will directly affect the accurate analysis of the DOF of the workpiece on the fixture.In fixture positioning, if the total DOF of the workpiece in the positioning space is less than the number of constraint normal lines generated on the workpiece, then there must be excessive positioning on the constraint of the workpiece. For example, the normal line number in a straight line is limited to 1, the normal line number on a plane is limited to 3, and the normal line number in the space is limited to 6. If the overconstrained normal is an intersection normal, then the translation DOF is overpositioned. If the overconstrained normal line is a parallel normal line, the rotational DOF in the normal plane is overpositioned.(1)
Determination of overpositioning on a straight line: ifn (n > 1) constraint normals on a workpiece coincide with a straight line, the workpiece has (n− 1) overconstraints(2)
Determination of overpositioning on a plane: if the workpiece hasn (n ≥ 3) normals on the constraint plane and the n (n ≥ 3) normal lines intersect at a point in the plane or are parallel to each other, then the workpiece has (n − 2) overconstraints(3)
Determination of overpositioning in the space: if the number of constraint normal lines positioned on the workpiece is greater than six, four or more normal lines intersect at one point, four or more normal lines are parallel to one another, four or more normal lines do not intersect with one another, and either two are noncoplanar, then the workpiece is overconstrained
### 2.4. Necessary and Sufficient Conditions of Constraining the Rotational DOF of the Workpiece
Theorem2 proves that the normal line parallelism generated by the workpiece positioned in the fixture was the necessary and sufficient condition to limit the rotational freedom of the workpiece in the constraint normal plane. The following proves the necessary condition to constrain the rotational DOF of the workpiece.Theorem1 proves that the normal line generated by a single constraint can only limit the translation freedom of the workpiece moving along the normal line direction. Here, to constrain the rotational freedom of the workpiece in the plane, two normal lines must jointly act on the workpiece, and the geometric relationship between two normal lines can only be one of the following four scenarios. The impact on the DOF of the workpiece is as follows:(1)
Two normal lines coincide: the two constraint normal lines have the same constraint effect on the workpiece, and they are overconstrained along the normal line direction(2)
Two normal lines intersect: the intersection normal line can be replaced by two intersecting normal lines in any direction, which can restrict the translation freedom of the workpiece in any two directions in the plane, but the rotational freedom is not restricted(3)
Two normal lines are parallel: a pair of parallel normal lines in the constrained normal plane can constrain the translation freedom of the workpiece in the normal direction and the rotation freedom in the normal plane(4)
Two normal lines are noncoplanar: according to the set theorem, two noncoplanar normal lines restrict two translational degrees of freedom of the workpiece along the normal line direction in the spaceA single normal constrains the freedom of movement in the normal direction, the intersecting normal constrains the freedom of movement in the plane, and the spatial noncoplanar normal constrains the freedom of movement in the normal direction of the space. Only when parallel constrained normals appear in a plane or space can the rotational freedom of the workpiece be constrained. Therefore, in the constraint normal plane, the workpiece has two parallel constraint normal lines which are a necessary condition to limit its rotational freedom.
## 2.1. Position Normal Line and Normal Plane
(1)
Position constraint normal line: it is the normal line that indicates the workpiece pointing to the constraint direction at the fixture position point. In Figure3, the workpiece is positioned in the fixture, and the position constraint normal line Z at position O is the normal line generated by the constraint point of the workpiece.(2)
Fixed position normal line: the normal lines are specified in uppercase letters. If there are multiple normal lines of the same type in the same direction, we can add a different subscript to the alphabet to make a distinction.For example,F1 and F2 are parallel in geometric relations. The normal line points to the constraint direction of the workpiece from the position of the constraint point. Because the workpiece has six degrees of freedom according to the space, it is affected by six normal lines at most.(3)
Constraint normal plane: if several normals of the workpiece act on the same plane in the space, it is called the constraint normal plane.(4)
Representation of the normal plane: the normal plane is represented by Greek letters. If two normal planes are parallel, they are written in the same letter. For example,β and β1 represent two parallel normal planes. In Figure 4(a), the plane formed by the intersection point of the constraint normal lines M and N with the workpiece can be represented by β(MN). In Figure 4(b), the plane composed of the parallel constraint normals L and L1 on workpiece 1 can be expressed as α(LL1). In Figure 4(c), there are three constraint normal lines represented by L, M, and N, respectively, which jointly act on workpiece 1. They are in different positions in the same plane and have at least two intersections with each other. γ(LMN) can be used to represent the normal plane where the normal lines are L, M, and N.When the workpiece is positioned normally, the distribution of its normal lines in a plane can only be one of the three cases.
## 2.2. Geometric Relation Theory for Determining the Workpiece Constraint and Its Demonstration
Theorem 1.
The workpiece is constrained by a single normal line, and the freedom of movement towards the normal line direction is constrained.Proof.
In Figure3, the workpiece is installed in the fixture, and the contact point O can be simplified as a constraint normal line Z. The workpiece has the freedom of movement away from the plane of the fixture, and we assume that the relative velocity of the workpiece in the fixture at contact point O is Vr. If the relative velocity Vr is projected on the dividing velocity of the normal line, the projection is zero, indicating that the dividing velocity of the workpiece relative to the fixture in the constraint normal line direction is zero, which is the normal line Z direction. Therefore, the translation freedom of the workpiece along the Z-direction of the constraint normal line is constrained, it can be expressed as Z⟶, but the workpiece can rotate around any point in the normal plane.Theorem 2.
If the two constraint normal lines acting on different constraint points intersect, the translation freedom of the workpiece in the normal plane is limited and can be replaced by any two intersecting constraint normal lines at the intersection.Proof.
In Figure4(a), the workpiece is constrained at points a and b in the fixture, and the two equivalent constraint normal lines M and N on the constraint normal plane α(MN) intersect at point O. The workpiece and fixture are regarded as a rigid body, and the workpiece can move in the positioning normal plane α(MN). We assume that the relative velocity of the workpiece at the two contacting points, a and b, is Vr1 and Vr2, respectively. Considering that the instantaneous center of the rigid body is the intersection of two vertical lines of two-point velocities, the rotation center of the workpiece motion is the intersection O of the constrained normal lines M and N in the space, the constrained workpiece can only rotate around the instantaneous center O, and the workpiece cannot move in any direction on the plane relative to the intersection O. Hence, (1) the intersection normal line constrains the motion freedom of the workpiece in any direction. (2) The function of two intersecting constraint normal lines can be expressed as any two nonoverlapping intersecting normal lines at the intersection point O, such as normal lines P and T.Theorem 3.
If the two constraint normals acting on the workpiece are parallel but not collinear, the freedom of movement and rotation of the workpiece in the constraint plane are limited.Proof.
In Figure4(b), the two constraint normal lines L and L1 generated at the fixed positions m and n of the workpiece are parallel, and the positioning mode is a special one, as shown in Figure 4(a). The two normal lines intersect at infinity. Two results can be concluded from Theorem 2. They are as follows: (1) the translation freedom of the workpiece is limited in the normal L direction; (2) the instantaneous center of the workpiece is at infinity in this case. Thus, there is no rotation center in the constraint plane; that is, the rotational freedom of the workpiece in the constraint positioning plane α(LL1) is constrained. It can be expressed as α⌢(LL1).Theorem 4.
If three noncoplanar constraint normal lines intersect at a pointO, the translation freedom of the workpiece in the three-dimensional space is constrained and can also be equivalent to three arbitrary noncoplanar intersection normals at point O.Proof.
The workpiece is fixed on the fixture, and the movement of the workpiece in the fixture is constrained by the intersection of three constraint normals which are not on the same plane at a pointO. Given that the rotation center of the rigid body must be in the same direction with the perpendicular line (normal line) of the velocity, the rotation center of the workpiece must be the point of intersection O of the three normal lines. At this moment, the workpiece can only rotate around the fixed point O, and the workpiece cannot move in any direction relative to point O. Therefore, 1) the workpiece is subject to three noncoplanar constraint normal lines, which constraint its freedom of translation in space 2). The constraint effect of three noncoplanar constraint normal lines at intersection O is the same as that of any three noncoplanar constraint normal lines at intersection O.Theorem 5.
The DOF of the workpiece is the constraint generated by the total DOF minus all normal lines.
Each constraint normal line can only restrict one DOF, and the normal line sets of different geometric relations between the workpieces restrict the DOF. As shown in Figure4(c), the three constraint points of the workpiece produce the normal lines L, M, and N that are not intersected at a point, forming the constraint normal plane α(LMN). It can be seen from Theorem 3 that, as shown in Figure 5, the normal lines L1 and N intersecting in the normal plane jointly limit the translation freedom of the workpiece in any two directions along the constraint normal plane. Under the action of normal line L and equivalent normal L1, the rotational freedom (LL1N) of the workpiece in the normal plane is constrained. Therefore, in a positioning normal plane, if three normal lines intersect at least two intersection points, the translation freedom of the workpiece in any two directions and the rotation freedom of the workpiece in the constrained normal plane are constrained, and the three freedoms in the constrained normal plane are constrained.Figure 5
Equivalence and a set of normal lines. 1, Supporting bolt; 2, workpiece; 3, V-block.
## 2.3. Criteria for Determining Overpositioning of the Workpiece
Each normal line acting on the workpiece will form a constraint, which will restrict the freedom of movement or rotation of the workpiece. According to the normal set theorem, too much positioning will cause the number of constrained normals on the workpiece to exceed the limit value of its DOF, which will cause the same DOF of the workpiece to be repeatedly constrained. When two or more normal lines of the workpiece restrict the same DOF, the workpiece is overpositioned. The judgment of overpositioning of the workpiece will directly affect the accurate analysis of the DOF of the workpiece on the fixture.In fixture positioning, if the total DOF of the workpiece in the positioning space is less than the number of constraint normal lines generated on the workpiece, then there must be excessive positioning on the constraint of the workpiece. For example, the normal line number in a straight line is limited to 1, the normal line number on a plane is limited to 3, and the normal line number in the space is limited to 6. If the overconstrained normal is an intersection normal, then the translation DOF is overpositioned. If the overconstrained normal line is a parallel normal line, the rotational DOF in the normal plane is overpositioned.(1)
Determination of overpositioning on a straight line: ifn (n > 1) constraint normals on a workpiece coincide with a straight line, the workpiece has (n− 1) overconstraints(2)
Determination of overpositioning on a plane: if the workpiece hasn (n ≥ 3) normals on the constraint plane and the n (n ≥ 3) normal lines intersect at a point in the plane or are parallel to each other, then the workpiece has (n − 2) overconstraints(3)
Determination of overpositioning in the space: if the number of constraint normal lines positioned on the workpiece is greater than six, four or more normal lines intersect at one point, four or more normal lines are parallel to one another, four or more normal lines do not intersect with one another, and either two are noncoplanar, then the workpiece is overconstrained
## 2.4. Necessary and Sufficient Conditions of Constraining the Rotational DOF of the Workpiece
Theorem 2 proved that parallel normal lines generated by the workpiece positioned in the fixture are a sufficient condition to limit the rotational freedom of the workpiece in the constraint normal plane. The following proves that this is also a necessary condition. Theorem 1 proved that the normal line generated by a single constraint can only limit the translational freedom of the workpiece along the normal line direction. Hence, to constrain the rotational freedom of the workpiece in a plane, two normal lines must act jointly on the workpiece, and the geometric relationship between the two normal lines can only be one of the following four scenarios (a numerical check follows the list). Their impact on the DOF of the workpiece is as follows:

(1) Two normal lines coincide: the two constraint normal lines have the same constraint effect on the workpiece, and the workpiece is overconstrained along the normal line direction.

(2) Two normal lines intersect: the pair can be replaced by two intersecting normal lines in any directions, which restricts the translational freedom of the workpiece in any two directions in the plane, but the rotational freedom is not restricted.

(3) Two normal lines are parallel: a pair of parallel normal lines in the constraint normal plane constrains the translational freedom of the workpiece in the normal direction and the rotational freedom in the normal plane.

(4) Two normal lines are noncoplanar: according to the set theorem, two noncoplanar normal lines restrict two translational DOFs of the workpiece along the respective normal line directions in space.

A single normal constrains the freedom of movement in its normal direction, intersecting normals constrain freedoms of movement in their plane, and spatially noncoplanar normals constrain freedoms of movement along their normal directions. Only when parallel constraint normals appear in a plane or in space can the rotational freedom of the workpiece be constrained. Therefore, two parallel constraint normal lines in the constraint normal plane are a necessary condition to limit the rotational freedom of the workpiece in that plane.
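Scenarios (1)–(3) can be verified numerically, again in the planar wrench formulation with illustrative coordinates: the null space of the constraint matrix spans the remaining free planar twists (v_x, v_y, ω), and an in-plane rotation (about some center) remains free exactly when a null-space vector has a nonzero ω component. Scenario (4) would require the 6-component spatial rows instead. A minimal sketch:

```python
import numpy as np

def planar_wrench(point, direction):
    p = np.asarray(point, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    return np.array([n[0], n[1], p[0] * n[1] - p[1] * n[0]])

def rotation_free(rows, tol=1e-9):
    """True if some in-plane rotation (about any center) is still unconstrained."""
    C = np.vstack(rows)
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > tol))
    null_basis = Vt[rank:]           # basis of free planar twists (vx, vy, omega)
    return bool(np.any(np.abs(null_basis[:, 2]) > tol))

coincident   = [planar_wrench((0, 0), (0, 1)), planar_wrench((0, 0), (0, 1))]
intersecting = [planar_wrench((0, 0), (0, 1)), planar_wrench((0, 0), (1, 0))]
parallel     = [planar_wrench((0, 0), (0, 1)), planar_wrench((1, 0), (0, 1))]

print(rotation_free(coincident))    # True  (scenario 1: rotation not constrained)
print(rotation_free(intersecting))  # True  (scenario 2: rotation not constrained)
print(rotation_free(parallel))      # False (scenario 3: only the parallel pair constrains rotation)
```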
## 3. Results and Discussion
### 3.1. Application Analysis
#### 3.1.1. Positioning Analysis of the Oblique Axis in the V-Block
Figure 1 shows the positioning of the oblique axis on the fixture. Four normal lines are generated by the action of the two V-shaped blocks, and they intersect at the two points O1 and O2. Combined with Theorem 2, the constraint normal lines can be replaced by the four normal lines Z1, Y1, Z2, and P2. They constrain four DOFs of the workpiece, namely, the translational DOFs along the normal directions Z⟶, Y⟶, and P⟶ and the rotational DOF in the normal plane α⌢(Z1Z2). If another V-block is added, as shown in Figure 6, the two remaining rotational DOFs are also constrained, and all DOFs of the workpiece in space are constrained.

Figure 6
Positioning analysis of the oblique axis with three V-blocks’ bearing. 1, V-block; 2, workpiece.
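As a sanity check on the four-DOF count, the sketch below places a plain cylinder along the x-axis in two V-blocks (illustrative stand-in coordinates, since Figures 1 and 6 are not reproduced here): each V-block contributes two contact normals inclined at ±45°, and the rank of the spatial constraint matrix comes out as 4, leaving translation along the axis and rotation about it free.

```python
import numpy as np

def spatial_wrench(point, direction):
    p = np.asarray(point, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    return np.hstack([n, np.cross(p, n)])

# Cylinder axis along x; V-blocks at x = 0 and x = 2, each giving two
# contact normals inclined at +-45 degrees in the YZ plane.
normals = [((0, 0, 0), (0,  1, 1)),
           ((0, 0, 0), (0, -1, 1)),
           ((2, 0, 0), (0,  1, 1)),
           ((2, 0, 0), (0, -1, 1))]

C = np.vstack([spatial_wrench(p, n) for p, n in normals])
print(np.linalg.matrix_rank(C))  # 4: axial translation and rotation about the axis stay free
```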
#### 3.1.2. Positioning Analysis of the Cylindrical Section
The positioning analysis of Figure 2(a) is shown in Figure 7(a). The parallel normal lines M, N, and L in the YOZ plane can only constrain translational DOFs parallel to the YOZ plane. However, there are only two translational DOFs in any plane parallel to the YOZ plane. Therefore, one of the normal lines must result in displacement overpositioning of the workpiece.

Figure 7
Positioning analysis of the cylindrical section. (a) Three-point positioning. (b) Four-point positioning. 1, Workpiece; 2, supporting bolt; 3, pedestal.
Compared with Figure 7(a), a bearing is added to the right section in Figure 7(b), producing the normal line P. The normal lines M and P of the right section, which intersect at O, can be replaced by two equivalent normal lines L1 and N1 parallel to L and N, respectively. The four normal lines then constrain four DOFs: Z⟶, Y⟶, α⌢(LL1), and β⌢(NN1).
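A sketch of both configurations in the spatial wrench formulation (illustrative coordinates, since the figure is not reproduced here): three parallel coplanar normals are one constraint short of independent, while replacing one support with an intersecting pair at the right section raises the rank to 4, matching the four constrained DOFs named above.

```python
import numpy as np

def spatial_wrench(point, direction):
    p = np.asarray(point, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    return np.hstack([n, np.cross(p, n)])

def rank_of(normals):
    return np.linalg.matrix_rank(np.vstack([spatial_wrench(p, n) for p, n in normals]))

# Figure 7(a) analogue: three parallel normals lying in one plane (x = 0).
three_point = [((0, 0, 0), (0, 0, 1)),
               ((0, 1, 0), (0, 0, 1)),
               ((0, 2, 0), (0, 0, 1))]
print(rank_of(three_point))   # 2: three normals, rank 2 -> one is redundant

# Figure 7(b) analogue: the right-section support pair intersects at O.
four_point = [((0, 0, 0), (0, 0, 1)),
              ((0, 1, 0), (0, 0, 1)),
              ((2, 0, 0), (0, 0, 1)),
              ((2, 0, 0), (0, 1, 0))]
print(rank_of(four_point))    # 4: four DOFs constrained, none redundant
```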
#### 3.1.3. Positioning Analysis of the Noncentering Shaft
The noncentering shaft in Figure 8 is positioned on two top holes, and its positioning is analyzed below. The fixture generates three intersecting constraint normal lines at each of the two positioning points O and O1. Combined with Theorem 2, the normal lines can be represented as X, Y, and Z at point O and as X1, Y1, and Z1 at point O1. These normal lines form three positioning normal planes: α(XX1), β(YY1), and α(ZZ1).

Figure 8
Positioning analysis of the noncentering shaft. 1, Left top hole; 2, eccentric shaft; 3, right top hole.

Here, α(XX1) constrains X⟶ and α⌢(XX1); β(YY1) constrains Y⟶ and β⌢(YY1); and α(ZZ1) constrains Z⟶ and α⌢(ZZ1). It seems that the workpiece is completely positioned, but the two positioning normal planes β(YY1) and α(ZZ1) lie in the same plane YOZ (or Y1O1Z1) and therefore restrict the same rotational freedom. Hence, the rotational freedom in the plane YOZ is constrained repeatedly. Replacing the right top with a revolving top removes the normal line Y1, and the overpositioning is eliminated. Moreover, the workpiece then retains rotational freedom around OO1.
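A sketch of the redundancy count (illustrative coordinates, with the line OO1 placed along the x-axis; the paper's axis labels differ): each rigid top constrains the three translations of its contact point, giving six constraint rows whose rank is only 5, so exactly one constraint is redundant, and the free twist that remains is the rotation about OO1, consistent with the text above.

```python
import numpy as np

def spatial_wrench(point, direction):
    p = np.asarray(point, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    return np.hstack([n, np.cross(p, n)])

O, O1 = (0, 0, 0), (2, 0, 0)   # the two top-hole contact points, OO1 along x
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
C = np.vstack([spatial_wrench(p, n) for p in (O, O1) for n in axes])

print(C.shape[0] - np.linalg.matrix_rank(C))  # 1: one redundant constraint (overpositioning)

# The surviving free motion: the null space is spanned by the twist
# (v, omega) = (0, 0, 0, 1, 0, 0), i.e., pure rotation about the line OO1.
_, s, Vt = np.linalg.svd(C)
print(Vt[np.sum(s > 1e-9):])
```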
#### 3.1.4. Positioning Analysis of the Oblique Incision Connecting the Rod
In Figure 9, the oblique incision connecting rod is positioned on a supporting plate and two V-blocks, and both sides are machined. The three normal lines of the supporting plate that supports the workpiece are L1, L2, and L3; the normal lines of V-block 1 can be replaced by the equivalents Y and Z, and those of V-block 2 by Y1 and Z1. The positioning normal planes are α(ZZ1), α(L1L3), β(YY1), and γ(L1L2). Furthermore, α(ZZ1) constrains Z⟶ and α⌢(ZZ1); β(YY1) constrains Y⟶ and β⌢(YY1); α(L1L3) constrains L⟶ and α⌢(L1L3); and γ(L1L2) constrains γ⌢(L1L2). Because the positioning normal planes α(L1L3) and α(ZZ1) are parallel and constrain the same rotational DOF, they cause rotational overpositioning of the workpiece. In theory, removing any one of the four normal lines L1, L3, Z, and Z1 can eliminate this overpositioning, as the check after Figure 10 confirms.

Figure 9
Overpositioning of the connecting rod. 1, Supporting plate; 2, V-shaped block 1; 3, connecting rod; 4, V-shaped block 2.

However, to ensure that the milled plane is symmetrical about the center of the connecting rod, the constraint normal lines corresponding to the V-shaped blocks should not be removed. Instead, the normal lines L1 and L3 can be merged into one by adding a floating supporting block to the supporting plate, which eliminates the overpositioning, as shown in Figure 10.

Figure 10
Adding the floating support to eliminate the overpositioning of the connecting rod. 1, Supporting plate; 2, V-shaped block 1; 3, workpiece; 4, V-shaped block 2; 5, floating supporting block.
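A sketch of the claim that any one of L1, L3, Z, and Z1 is removable (illustrative coordinates: rod axis along x, with the plate normals and the V-block Z-equivalents all vertical): the four rows for these normals have rank 3, i.e., exactly one redundant constraint, so dropping any single one of them restores an independent set.

```python
import numpy as np

def spatial_wrench(point, direction):
    p = np.asarray(point, dtype=float)
    n = np.asarray(direction, dtype=float)
    n = n / np.linalg.norm(n)
    return np.hstack([n, np.cross(p, n)])

# Vertical normals: Z and Z1 from the V-blocks, L1 and L3 from the plate.
subset = {"Z":  ((0, 0, 1), (0, 0, 1)),
          "Z1": ((3, 0, 1), (0, 0, 1)),
          "L1": ((0, -1, 0), (0, 0, 1)),
          "L3": ((3, -1, 0), (0, 0, 1))}

C = np.vstack([spatial_wrench(p, n) for p, n in subset.values()])
print(C.shape[0] - np.linalg.matrix_rank(C))  # 1: exactly one redundant constraint

# Removing any single normal line leaves an independent set (prints 0 four times).
for name in subset:
    rest = [v for k, v in subset.items() if k != name]
    R = np.vstack([spatial_wrench(p, n) for p, n in rest])
    print(name, R.shape[0] - np.linalg.matrix_rank(R))
```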
### 3.2. Discussion
The six-point positioning principle of the workpiece is the main theoretical basis of this article, but the principle does not specify what geometric conditions the six points must satisfy to fully position the workpiece, nor does it explore which attributes of the constraint points are related to the DOFs; improper geometric arrangement of the constraint points can therefore cause problems such as overpositioning. In response to these problems, this paper proposes five geometric theorems for judging the DOFs of the workpiece, namely, the single normal line theorem, the two intersecting normal lines theorem, the two parallel normal lines theorem, the three intersecting normal lines theorem, and the normal line set theorem, and establishes geometric criteria for judging workpiece overconstraint. These theorems clarify the number and nature of the DOFs that can be constrained by each point or by a combination of several points and thereby refine the abstract six-point positioning principle. The guiding significance of the proposed geometric theorems for the analysis of workpiece positioning DOFs is as follows: the DOFs and overconstraints of the workpiece in the fixture depend entirely on the number of positioning normal lines of the workpiece and their geometric relationships. Analyzing the DOFs of workpiece positioning through normal lines is simpler and more intuitive than optimization-based positioning methods such as the positioning matrix or genetic algorithms. We hope that, as future research deepens, this theory will prove useful in both teaching and practical application.
## 4. Conclusions
(1) The number of positioning normal lines of the workpiece in the fixture and their geometric relationships determine the DOFs and overconstraints of the workpiece in the fixture.

(2) A pair of parallel constraint normal lines of the workpiece is a necessary and sufficient condition to limit the rotational DOF of the workpiece in their plane.

(3) The geometric theorems make it straightforward to analyze the DOFs and overconstraints in singular positioning of the workpiece, and they also provide a theoretical basis for the reasonable layout of positioning points in the fixture.

(4) Using the geometric theorems to analyze and judge the DOFs and overconstraints of the workpiece is simple, intuitive, and broadly applicable, and it is suitable for computer-aided judgment of workpiece DOFs.
---
*Source: 1014708-2021-09-01.xml* (2021)